http://mathoverflow.net/revisions/59980/list | Every torsion-free abelian group of cardinality at most $2^\omega$ is isomorphic to a subgroup of the reals. (To see this, note that any such group can be embedded in a divisible torsion-free group of the same cardinality, i.e., a vector space over $\mathbb Q$, which can in turn be embedded in any other vector space over $\mathbb Q$ of the same or greater dimension.) Since already the structure of rank 2 abelian groups is hopelessly complicated, you are not going to find any sensible classification.
http://math.stackexchange.com/questions/114021/cardinality-of-the-power-set-of-natural-numbers | Cardinality of the power set of natural numbers
I was reading an article on infinite sets and I came across a proof that the power set of the natural numbers has a greater cardinality than the set of natural numbers. Given that proof and the proof of Cantor's theorem, I know it must be true. So, my question is: why does encoding each set of natural numbers in the power set using Gödel numbering not work?
Thanks.
How do you propose to assign Gödel numbers to arbitrary subsets of $\mathbb N$? The standard arithmetization schemes only handle finite subsets of $\mathbb N$. – Henning Makholm Feb 27 '12 at 15:36
Not all subsets of $\mathbb N$ are definable. – azarel Feb 27 '12 at 15:38
You can find encodings, though not fully satisfactory ones, of the recursively enumerable subsets of the natural numbers, via the index of a Turing machine that recognizes the set. But encoding all subsets is ruled out by cardinality considerations. – André Nicolas Feb 27 '12 at 16:39
1 Answer
You can find encodings in the natural numbers of certain collections of subsets of $\mathbb{N}$. For example, as was pointed out by Henning Makholm, there is a very nice encoding of the collection of finite subsets of $\mathbb{N}$.
More generally, let $A$ be a recursively enumerable subset of $\mathbb{N}$. Let $T_A$ be a Turing machine that, on input $n$, halts if $n\in A$, and does not halt otherwise. Then we can encode $A$ by using the usual index of the machine $T_A$.
This encoding is not fully satisfactory. There are infinitely many Turing machines that halt on input $n$ iff $n\in A$. Thus every recursively enumerable set has infinitely many encodings. Moreover, there is no algorithm for telling, given two natural numbers, whether they encode the same set.
For more modest collections than the collection of recursively enumerable sets, there are far more satisfactory encodings. As a small and not very interesting example, consider the collection of subsets of $\mathbb{N}$ that are either finite or cofinite (their complement is finite). A small modification of the encoding of finite subsets will take care of the finites and cofinites. Basically, just use the even natural numbers for the finites. If you have used $2k$ to encode a finite, use $2k+1$ to encode its complement.
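To make the two encodings above concrete, here is a small Python sketch (my own illustration, not part of the original answer): finite subsets of $\mathbb{N}$ are coded by sums of powers of 2, and the even/odd trick extends the coding to the finite-or-cofinite collection.

```python
def encode_finite(s):
    """Code a finite subset of N as the sum of 2**n over its elements."""
    return sum(2 ** n for n in s)

def decode_finite(code):
    """Recover the finite set from its code."""
    return {n for n in range(code.bit_length()) if code & (1 << n)}

def encode_finite_or_cofinite(s, cofinite=False):
    """Even codes are finite sets; odd codes are complements of finite sets."""
    return 2 * encode_finite(s) + (1 if cofinite else 0)

assert decode_finite(encode_finite({0, 2, 5})) == {0, 2, 5}
print(encode_finite({0, 2, 5}))                          # 37
print(encode_finite_or_cofinite({1, 3}, cofinite=True))  # 21, codes N \ {1, 3}
```

Every natural number is the code of exactly one finite-or-cofinite set here, which is exactly the kind of bookkeeping that cardinality forbids for the full power set.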
In addition, the elements of any countably infinite subset of the power set of $\mathbb{N}$ can, by the definition of countably infinite, be encoded using the natural numbers. However, by cardinality considerations, we cannot encode all the subsets of $\mathbb{N}$ using elements of $\mathbb{N}$.
http://mathoverflow.net/questions/15031/is-the-massive-calogero-moser-system-still-integrable/15335 | ## Is the ‘massive’ Calogero-Moser system still integrable?
### Background
The (rational) Calogero-Moser system is the dynamical system which describes the evolution of $n$ particles on the line $\mathbb{C}$ which repel each other with force inversely proportional to the cube of their distance. If the particles have (distinct!) positions $x_i$ and momenta $p_i$, then the Hamiltonian which describes this system is $$H=\sum_i p_i^2+\sum_{i\neq k}\frac{1}{(x_i-x_k)^2}$$ There are many interesting properties of this system, but one of the first is that it is 'completely integrable'. This means that solving it explicitly amounts to solving a series of straightforward integrals.
The integrability can most easily be shown by showing that the phase space for this system includes into a symplectic reduction of a certain matrix space, and then noticing that the above Hamiltonian is the restriction of an integrable Hamiltonian on the whole space. This is done by assigning to any ensemble of positions $x_i$ and momenta $p_i$ a pair of $n\times n$ matrices $X$ and $Y$, where $X$ is the diagonal matrix with the $x_i$ on the diagonal, while $Y$ is given by $$Y_{ii}=p_i, \; Y_{ik}=(x_i-x_k)^{-1}, \; i\neq k$$ This matrix assignment defines a map from the configuration space $CM_n$ of the CM system to the space of pairs of matrices. The space of pairs of matrices $(X,Y)$ is naturally a symplectic space with the bilinear form $(X,Y)\cdot (X',Y')=Tr(XY')-Tr(X'Y)$, and the action of $GL_n$ by simultaneous conjugation naturally has a moment map. Therefore, we symplectically reduce the space of pairs of matrices at a specific coadjoint orbit (not the origin) and get a new symplectic space $\overline{CM}_n$.
Composing the above matrix assignment with symplectic reduction, we get a map $CM_n\rightarrow \overline{CM}_n$. This map turns out to be a symplectic inclusion which has dense image. We also notice that the functions $Tr(Y^i)$, as $i$ goes from $1$ to $n$, descend to a Poisson-commuting family of functions on $\overline{CM}_n$, and because $\overline{CM}_n$ is $2n$ dimensional, each of the functions $Tr(Y^i)$ gives an integrable flow on $\overline{CM}_n$. Finally, we notice that $Tr(Y^2)$ restricts to $H$ on $CM_n$.
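As a small numerical illustration (my addition, not part of the question): the matrix assignment is easy to write down explicitly. Note that one common convention puts a factor of $i$ in the off-diagonal entries of $Y$; that is what makes $Tr(Y^2)$ come out equal to $H$ with the plus sign used above, and it is the convention used in this sketch.

```python
import numpy as np

def cm_matrices(x, p):
    """Build the Calogero-Moser pair (X, Y) from positions x and momenta p.

    Off-diagonal entries of Y carry a factor of i (one common normalization),
    so that Tr(Y^2) = sum_i p_i^2 + sum_{i != k} 1/(x_i - x_k)^2.
    """
    x = np.asarray(x, dtype=complex)
    p = np.asarray(p, dtype=complex)
    X = np.diag(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)        # placeholder to avoid dividing by zero
    Y = 1j / diff
    np.fill_diagonal(Y, p)
    return X, Y

x = [0.3, 1.7, -2.2]
p = [0.5, -1.1, 0.8]
X, Y = cm_matrices(x, p)

H = sum(pi**2 for pi in p) + sum(1.0 / (x[i] - x[k])**2
                                 for i in range(3) for k in range(3) if i != k)
print(np.allclose(np.trace(Y @ Y), H))   # True in this convention
```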
### The Massive Version of the CM System
Now, make the following change to the system. To every particle, assign a number $m_i$ (the mass), which can be in $\mathbb{C}$, but I am interested in the case where the $m_i$ are positive integers. Define the massive CM Hamiltonian as $$H_m=\sum_i\frac{p_i^2}{m_i}+\sum_{i\neq k}\frac{m_im_k}{(x_i-x_k)^2}$$ The physical meaning of this equation is that the force between two particles still falls off as the inverse cube of their distance, but it is now proportional to the product of their masses; also, particles resist acceleration in proportion to their mass. If the force were to drop off proportional to the inverse square of their distance, and attract instead of repel, this would model how massive particles move under the influence of gravity.
### Questions
1. Is this system integrable?
2. Can it be realized in a similar matrix form?
3. Does it have any interesting new behavior compared to the usual CM system?
### An Idea
It is almost possible to realize this Hamiltonian in a simple modification of the previous approach. Let $M$ denote the diagonal matrix with the $m_i$s on the diagonal. Then $$Tr(MYMY)=\sum_im_i^2p_i^2+\sum_{i\neq k}\frac{m_im_k}{(x_i-x_k)^2}$$ The functions $Tr( (MY)^i)$ should again be a Poisson commutative family. Replacing $p_i$ by $p_i/m_i^{3/2}$ gives the massive Hamiltonian $H_m$; however, this rescaling is not symplectic, and so it won't preserve the flows.
### Another Idea
In the case of integer $m_i$, one possibility is to work with $N\times N$ matrices rather than $n\times n$ matrices, where $N=\sum m_i$. Then it is possible to construct a matrix $X$ with eigenvalues $x_i$, each occurring with multiplicity $m_i$, as well as a matrix $Y$ such that $(X,Y)$ defines a point in $\overline{CM}_N$. The Hamiltonian $Tr(Y^2)$ even restricts to the correct 'massive' Hamiltonian $H_m$. However, the flow described by this Hamiltonian on $\overline{CM}_N$ will in almost all cases immediately separate eigenvalues that started together, which we don't want. If we restrict the Hamiltonian to the closed subspace where the eigenvalues are required to stay together, then this gives the desired flow. Unfortunately, restricting to a closed subvariety doesn't preserve integrability of a Hamiltonian.
Is there a Lax formulation for the massive model? – José Figueroa-O'Farrill Feb 11 2010 at 21:16
My colleague Harry Braden, who apparently has no time to answer himself, claims that the case of $N=3$ is still integrable, but not for higher $N$. I don't have a reference, whence I am writing this as a comment, since I don't think that "hearsay" should be considered an answer :) – José Figueroa-O'Farrill Feb 12 2010 at 14:24
An even more general case $H=\sum_i\frac{p_i^2}{m_i}+\sum_{i\neq k}\frac{g_{ik}}{(x_i-x_k)^2}$ also is apparently nonintegrable as stated at Scholarpedia (but there's no direct proof there either, that's why this is a comment): scholarpedia.org/article/Calogero-Moser_system – mathphysicist Feb 12 2010 at 16:10
Hmm, it's interesting that $N=3$ is a special case, since that was the number of CM particles I could get to stay together, in the manner I describe in the 'Another Idea' section. – Greg Muller Feb 12 2010 at 16:25
Since the particles lie in C, shouldn't the expression in the Hamiltonian be |x_i - x_j|² and not (x_i - x_j)²? – Tom LaGatta Feb 15 2010 at 21:53
## 2 Answers
The paper "Meromorphic Parametric Non-Integrability, the Inverse Square Potential" by E. J. Tosel, proves almost what was claimed in the comments. Except for Jacobi's theorem:
The 3-body problem on a line with arbitrary masses and inverse square potential is completely integrable with rational first integrals.
and Calogero-Moser’s Theorem:
The n-body problem with equal masses on a line with an inverse square potential is completely integrable. More precisely, there exists a complete family of commuting first integrals which are rational in $(Q,P)$.
all other cases are non-integrable. The main theorem is:
Theorem 3 (Non-integrability meromorphic in linear momenta and masses, rational in positions).
(i) For $n \geq 4$, the $n$-body problem on a line with an inverse square potential does not have a complete system of generically independent first integrals which are rational with respect to $Q$ and meromorphic with respect to $P$ and $(m _i) _{1\le i\le n}$
(ii) For $n = 3$ and $p \geq 2$, the $n$-body problem in $\mathbb{R} ^p$ with an inverse square potential does not have a complete system of generically independent first integrals which are rational with respect to $Q$ and meromorphic with respect to $P$ and $(m _i) _{1\le i\le n}$
A corollary of this theorem is that Calogero-Moser’s theorem deals with an exceptional case: there cannot exist a Lax pair $(L,B)$ which would depend meromorphically on the masses for $n$ bodies on a line ($n \geq 4$).
There is a remark there that the rationality condition in the case of the line need only be checked in $n-4$ positions.
Just to add to Gjergji Zaimi's answer, Harry Braden has sent me the expressions for the conserved charges responsible for the integrability of the $N=3$ model:
• The total momentum $P = p_1 + p_2 + p_3$
• The hamiltonian $H$
• $Q = 2 H \sum_{i=1}^3 m_i x_i^2 -(\sum_{i=1}^3 x_i p_i )^2$
Do you happen to know where the third integral of motion comes from? For my purposes, I don't really need a complete set of $n$ integrals of motion, I really only need the natural analog of the third integral of motion, after momentum and energy. My guess is it has the above form even when there are more particles. – Greg Muller Feb 15 2010 at 17:09
My guess -- I'll try to confirm -- is that this is probably the trace of the square of the Lax operator for this model. – José Figueroa-O'Farrill Feb 15 2010 at 17:57
http://physics.stackexchange.com/questions/8314/maximum-time-difference-between-clocks-in-a-gravity-field/8330 | # Maximum time difference between clocks in a gravity field
From Surely You're Joking, Mr. Feynman!
You blast off in a rocket which has a clock on board, and there's a clock on the ground. The idea is that you have to be back when the clock on the ground says one hour has passed. Now you want it so that when you come back, your clock is as far ahead as possible. According to Einstein, if you go very high, your clock will go faster, because the higher something is in a gravitational field, the faster its clock goes. But if you try to go too high, since you've only got an hour, you have to go so fast to get there that the speed slows your clock down. So you can't go too high. The question is, exactly what program of speed and height should you make so that you get the maximum time on your clock?
This assistant of Einstein worked on it for quite a bit before he realized that the answer is the real motion of matter. If you shoot something up in a normal way, so that the time it takes the shell to go up and come down is an hour, that's the correct motion. It's the fundamental principle of Einstein's gravity--that is, what's called the "proper time" is at a maximum for the actual curve.
How is this result proven?
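Here is a quick numerical check of the claim (my own addition, with made-up numbers). In the weak-field, slow-motion limit the extra proper time relative to the ground clock is approximately $\int (gh/c^2 - v^2/2c^2)\,dt$; among height profiles $h(t)$ with $h(0)=h(T)=0$, the free-fall parabola maximizes this integral. The constant-$g$, flat-Earth approximation is badly strained for an hour-long flight, but the variational comparison is the point.

```python
import numpy as np

g, c = 9.8, 3.0e8            # SI units; weak-field, constant-g approximation
T = 3600.0                   # one hour on the ground clock, as in the problem
t = np.linspace(0.0, T, 200_001)
dt = t[1] - t[0]

def extra_proper_time(h):
    """Proper time gained over the ground clock for a height profile h(t),
    using dtau ~ (1 + g*h/c**2 - v**2/(2*c**2)) dt."""
    v = np.gradient(h, t)
    return np.sum(g * h / c**2 - v**2 / (2 * c**2)) * dt

ballistic = 0.5 * g * t * (T - t)                      # free-fall parabola
triangle  = (g * T / 4) * (T / 2 - np.abs(t - T / 2))  # constant-speed up and down
sine      = (g * T**2 / 8) * np.sin(np.pi * t / T)     # another smooth round trip

for name, h in [("ballistic", ballistic), ("triangle", triangle), ("sine", sine)]:
    print(name, extra_proper_time(h))
# the ballistic (free-fall) profile gives the largest proper-time gain
```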
– Raskolnikov Apr 9 '11 at 11:25
## 3 Answers
To elaborate on Marek's (correct) answer, since it seems that math is the issue that @Casebash is having:
Start with an integral representing the time elapsed on the moving observer's clock in terms of the stationary observer's coordinates (I'm suppressing G and c below. Feel free to replace all $t$'s with $c\,t$'s, and all $M$'s with $\frac{GM}{c^{2}}$'s):
$$\begin{equation} \Delta \tau = \int d\tau \sqrt{{\dot t}^{2}(1-\frac{2M}{r})- \frac{{\dot r}^{2}}{1-\frac{2M}{r}}- r^{2} {\dot \theta}^{2} - r^{2}\sin^{2}\theta {\dot \phi}^{2}} \end{equation}$$
Where $(t,r,\theta,\phi)$ are all considered to be functions of $\tau$ denoting the moving observer's position at time $\tau$. Our problem amounts to looking for the path beginning at $(t_{0},R,\Theta,\Phi)$ and ending at $(t_{f},R,\Theta,\Phi)$ that maximizes this integral, subject to the constraint that the quantity under the square root will be equal to $1$ when the calculation is complete. To make our calculations easier, note that the square root is a monotonic function over all its domain, so we might as well maximize
$$\begin{equation} \frac{1}{2}\int d\tau \left({\dot t}^{2}(1-\frac{2M}{r})- \frac{{\dot r}^{2}}{1-\frac{2M}{r}}- r^{2} {\dot \theta}^{2} - r^{2}\sin^{2}\theta {\dot \phi}^{2}\right) \end{equation}$$
which is considerably simpler. It would take a whole page to carefully vary (meaning that we basically, but not quite, take the derivative of the function with respect to) this with respect to the four independent functions. So, I'm going to just give you a taste of the task by varying with respect to $\theta$. The path of maximum "moving clock" time (henceforth called 'proper time') will be the one for which these variations are zero. The other three functions follow just as easily. The variation of this integral with respect to $\theta$ gives us
$$\begin{align} \delta \Delta\tau |{}_{\theta}&=\int d\tau\left(-r^{2}{\dot \theta}\delta {\dot \theta}-r^{2}\sin\theta \cos \theta {\dot \phi}^{2}\delta \theta\right)\\ 0&=\int d \tau \left(-\frac{d}{d\tau}\left({\dot \theta} r^{2}\delta \theta\right) + \left({\ddot \theta}r^{2} + 2 r {\dot r}{\dot \theta}-r^{2}\sin\theta \cos \theta {\dot \phi}^{2}\right)\delta \theta\right)\\ &=\left(\dot \theta(t_{0})r(t_{0})^{2}\delta \theta (t_{0})-\dot \theta(t_{f})r(t_{f})^{2}\delta \theta (t_{f})\right)\\ &\quad + \int d\tau\; r^{2}\left({\ddot \theta} + \frac{1}{r}2{\dot r} {\dot \theta}- \sin \theta \cos \theta {\dot \phi}^{2}\right)\delta \theta \end{align}$$
The first term, which we obtained by integrating the total derivative, vanishes since the variation of $\theta$ is zero at the endpoints $t_{0}$ and $t_{f}$ (our space of states is paths beginning and ending at a fixed value of $\theta$). If the variation is to be zero, then the remaining stuff under the integral must itself be zero for an arbitrary fixed-endpoint variation (if the integrand isn't zero, just imagine making the variation a hundred kajillion only at the point where it is nonzero--clearly this isn't an extremum). Therefore, the $\theta$ variable of the maximum proper time path must satisfy ${\ddot \theta} + \frac{2}{r}{\dot r} {\dot \theta}- \sin \theta \cos \theta {\dot \phi}^{2}=0$. It turns out that this equation is exactly the same equation you get for geodesic motion of a particle, which is the path followed by an observer being acted upon only by gravity. Comparing the value of the original integral for any other path (it will come out lower) then shows that this is, in fact, the path of maximum proper time.
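If you'd rather not grind through the variation by hand, here is a short SymPy sketch (my addition) that reproduces the $\theta$ equation above; the other coordinates can be read off the same list of Euler-Lagrange equations.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

tau, M = sp.symbols('tau M', positive=True)
t, r, th, ph = (sp.Function(name)(tau) for name in ('t', 'r', 'theta', 'phi'))

f = 1 - 2 * M / r
# the simplified ("squared") Lagrangian from the answer above
L = sp.Rational(1, 2) * (f * t.diff(tau)**2 - r.diff(tau)**2 / f
                         - r**2 * th.diff(tau)**2
                         - r**2 * sp.sin(th)**2 * ph.diff(tau)**2)

eqs = euler_equations(L, [t, r, th, ph], tau)
theta_eq = sp.expand(eqs[2].lhs / r**2)   # divide by r^2 to match the quoted form
print(theta_eq)
# theta'' + (2/r) r' theta' - sin(theta) cos(theta) phi'^2   (set equal to zero)
```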
In the last line, how did the multiple of {\delta \theta} change from only affecting the {2 r \dot r \dot \theta} to affecting everything inside the brackets? – Casebash Apr 10 '11 at 1:41
Ah, okay, the issue is actually with the line before that. The other variables are missing a `{\delta \theta}`. BTW, how did you know to rewrite the equation to involve the derivative of {\dot \theta r^2 \delta \theta}? – Casebash Apr 10 '11 at 1:49
@Casebash: this is standard procedure in calculus of variations. The integral Jerry started with is known as Lagrangian and equations he obtains are called Euler-Lagrange equations. As to how did he know that: it's a standard trick. You want to remove every $\delta \dot \theta$ because you are not perturbing in these variables. That's why you need integration by parts to produce just $\delta \theta$. Then the whole integrand will be linear in $\delta \theta$ and you can "factor it out", obtaining local equations. – Marek Apr 10 '11 at 6:19
As Raskolnikov says, it's the principle of least action (more precisely extremal action but let's put that off till later). Consider the task of finding the shortest path between two points in Euclidean space. We call such a special path geodesic and it's obvious that here it is just plain old line segment between the two points. But to prove this statement is not so trivial (one needs calculus of variations at hand), so let's concentrate on just the intuitive picture.
If you try to deform the line a little to make the total path shorter, you'll find it's not possible. The reason is that, infinitesimally, each such deformation corresponds to a triangle whose longest side $c$ is the part of the line, with $a$ and $b$ the sides comprising the perturbed path. As per basic Euclidean geometry we have $c < a + b$.
In Minkowski space-time, the picture is very similar. Straight lines are again geodesics, although now they extremize space-time intervals, or equivalently proper time along the path (at least if the particle is massive). Again, if you repeat the above argument you will obtain an inequality but with the sign reversed, because the time component has a different signature than the space components. That means that time will be maximized along the geodesic.
The above can be generalized to curved spaces and space-times. E.g. on a sphere you'll find the shortest path between two points are parts of great circles. And in fact, our argument will work too because infinitesimally sphere looks like a plane (as I am sure you know from your day to day experience living on the surface of Earth) so our arguments holds. And similarly for space-times and proper times.
A note on the word extremal is in order. If you inspect the sketches of the proofs I made, you'll see that we always make only local (infinitesimal) perturbations around the extremal paths to see if there is something shorter/longer around. That means that we only find local extrema. Also, these paths need not exist or be unique between every pair of points (consider the sphere again and all the geodesics between north and south poles). This is good to keep in mind but I won't delve into general mathematical theory as it's messy and not needed usually.
1. What do you mean by "proper time along the time"? 2. How do you extremize a space-time interval? For example, what units is it measured in? – Casebash Apr 9 '11 at 12:05
Also, what do you mean by the time component having "a different signature"? – Casebash Apr 9 '11 at 12:09
@Casebash: thanks, that was just a typo. There are probably few more lurking about. As for the signature, special (and general) relativity are based on the concept of metric which (in contrast with usual geometry) is not positive definite. The space-time interval is written as $ds^2 = -c^2dt^2 + dx^2 + dy^2 + dz^2$. This is just a special relativity version of Pythagorean theorem. But I am not going to explain to you all of special (not to mention general) relativity, sorry. If you want to delve deeper into these matters ask a separate question. – Marek Apr 9 '11 at 12:14
There's a way to see the answer in the OP logically, no math. In SR, clocks that "feel" acceleration, like the ground clock does, run slower than floating clocks (ones in free fall) do. Therefore the clock that floats during the entire experiment will elapse the most time. Only a clock "shot up in a normal way", to use your words, floats during the entire experiment. Search for the Non-Mathematical Proof of Gravitational Time Dilation for an elaboration. The falling clock in that experiment is one half of a symmetrical experiment involving a clock "shot up in a normal way".
The clock on the ground is analogous to the traveling twin that thrusts its rocket engines in the twin paradox. The clock "shot up in the normal way" is analogous to the stay-at-home twin that floats "stationary" in space, waiting for the traveling twin to return. Any clock shot up, but not in the normal way, will be a bit less like the stay-at-home twin and a bit more like the traveling twin.
BTW, the reason that a floating clock elapses more time than a clock that "feels" acceleration, when they start and end at the same location, is also given by SR: The clocks move relative to each other, completing trips between when they start and end at the same location. The trip travel distance of the clock that "feels" acceleration is shortened by length contraction predicted by SR. (A ruler that freely falls downward and near the clock on the ground shortens as it falls, as measured in that clock's frame, because the ruler's speed is increasing as measured in that frame. The ruler measures a portion of the remaining distance the clock has to travel.) Therefore that clock travels less distance at the same average speed relative to the other clock, elapsing less time than the floating clock does.
Oops, should be "the reason that a floating clock elapses more time than a clock that "feels" acceleration" – finbot Apr 12 '11 at 20:25
http://mathhelpforum.com/pre-calculus/44096-tangent-circle.html | Tangent of a Circle
1. ## Tangent of a Circle
The question is
Find the equation of the tangent and normal to the circle x^2 + y^2 = 13 at the point t(-2,3)
I was thinking about this question and would the tangent be perpendicular to the straight line joining the (-2,3) and Origin?
2. Originally Posted by immunofort
The question is
Find the equation of the tangent and normal to the circle x^2 + y^2 = 13 at the point t(-2,3)
I was thinking about this question and would the tangent be perpendicular to the straight line joining the (-2,3) and Origin?
I don't think so. From what I see, the lines don't pass through the origin.
To find the slope, we can use implicit differentiation:
$\frac{d}{dx}\bigg[x^2+y^2\bigg]=\frac{d}{dx}\bigg[13\bigg]$
$\implies 2x+2y\frac{dy}{dx}=0$
$\implies \frac{dy}{dx}=-\frac{x}{y}$
We can find the value of the slope of the tangent line at the point $(-2,3)$:
$\bigg.\frac{dy}{dx}\bigg|_{x=-2, \ y=3}=-\frac{-2}{3}=\frac{2}{3}$
The slope of the normal line would be $-\frac{3}{2}$.
Thus, the tangent line has the form $y=\frac{2}{3}(x-x_0)+y_0$
At the point $(-2,3)$, the equation of our tangent line is $y=\frac{2}{3}(x+2)+3\implies \color{red}\boxed{y=\frac{2}{3}x+\frac{13}{3}}$
The normal line has the form $y=-\frac{3}{2}(x-x_0)+y_0$
At the point $(-2,3)$, the equation of our normal line is $y=-\frac{3}{2}(x+2)+3\implies \color{red}\boxed{y=-\frac{3}{2}x}$
I don't think so. From what I see, the lines don't pass through the origin.
I take that back. The tangent line is perpendicular to the normal line [and the normal line happens to pass through the origin].
I hope that this makes sense!
--Chris
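For anyone who wants to double-check the algebra, here is a short SymPy sketch (my addition, not from the thread) that reproduces both lines:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 + y**2 - 13                      # the circle, written as F(x, y) = 0

# implicit differentiation: dy/dx = -Fx/Fy = -x/y
dydx = -sp.diff(F, x) / sp.diff(F, y)
m_tan = dydx.subs({x: -2, y: 3})          # 2/3
m_norm = -1 / m_tan                       # -3/2

tangent = sp.Eq(y, m_tan * (x + 2) + 3)   # y = 2*x/3 + 13/3
normal  = sp.Eq(y, m_norm * (x + 2) + 3)  # y = -3*x/2
print(tangent, normal)

# sanity check: the point (-2, 3) lies on the circle
assert F.subs({x: -2, y: 3}) == 0
```

Both results agree with the posts above: 3y - 2x = 13 for the tangent and y = -(3/2)x for the normal.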
3. Originally Posted by immunofort
The question is
Find the equation of the tangent and normal to the circle x^2 + y^2 = 13 at the point t(-2,3)
I was thinking about this question and would the tangent be perpendicular to the straight line joining the (-2,3) and Origin?
Yes. It definitely will. Find your slope from the origin to (-2,3), find a perpendicular slope, and put that slope through (-2,3).
That is absolutely how you do this problem.
4. ## A few points!
Just a few points to add!
From your circle equation you should be able to recognise a few things. Firstly, we know the center of the circle is the origin, since it has not been translated by anything; if it had, it would look like (x - 2)^2 + (y - 5)^2 = 13 or similar.
Secondly any tangent of a circle will ALWAYS make a right angle with the radius at the point where it touches the circle. With that in mind the equation of the normal is simply the equation of the radius at that point.
So we have the coords for the center of the circle (0, 0) and we have the coords for the point on the circumference (-2, 3), so with these we can find the gradient of the radius: m = (3 - 0)/(-2 - 0), i.e. the change in y divided by the change in x, and we get m = -3/2. Since the normal is just the radius line through (-2, 3), its gradient is -3/2 and its equation is y = -(3/2)x. To find the equation of the tangent we need its gradient, which we get from the rule
m1 x m2 = -1. That is, if two lines are at right angles then the product of their gradients is -1. So -3/2 x m2 = -1
m2 = -1/(-3/2)
m2 = 2/3
So the gradient of the tangent is 2/3, and the equation of the tangent is worked out thus:
(2/3)x + 4/3 = y – 3
2x + 4 = 3y – 9
2x + 13 = 3y
I would leave it as 3y – 2x = 13
Hope that adds to the above post!
5. Originally Posted by gtbiyb
You're correct.
--Chris
http://mathoverflow.net/questions/19107?sort=oldest | ## a question about flatness
In the book "étale cohomology" by Milne, proposition 2.5 at p.9, it said :
Let $B$ be a flat $A-$algebra where $A$ and $B$ are noetherian rings, and consider $b \in B$. If the image of $b$ in $B/mB$ is not a zero-divisor for any maximal ideal $m$ of $A$, then $B/(b)$ is a flat $A-$ algebra.
At the beginning of the proof, he says we can reduce to the case where $\phi : A \rightarrow B$ is a local homomorphism of local noetherian rings. The proof in this case uses the fact that it's a local homomorphism.
But I think that in order to reduce the general case to the local case, we need the following condition, which I can't get from the original condition.
For any maximal ideal $n$ of $B$, the image of $b$ in $B/pB$ is not a zero-divisor, where $p = \phi^{-1} (n)$.
What do you think?
you are exactly right. cf. the erratum on milne's home page: jmilne.org/math/Books/add/addendaB.pdf – LRG Mar 23 2010 at 21:52
## 1 Answer
I am not sure about Milne's reduction, but your fix is too strong. First of all, I don't understand why you write $B/p B$ with $p = \phi^{-1}(n)$. I assume $\phi$ is the map $A \to B$. But then $\phi^{-1}(n)$ is an ideal of $A$, not $B$, and $b$ is an element of $B$. I am going to assume you meant "$b$ is not a zero divisor in $B/nB$."
But requiring this for every maximal ideal of $B$ implies that $b$ is a unit! We surely don't want to impose that.
Let me add that, in my opinion, it is rude to use the name of a living person as a pseudonym. I do not think that Grothendieck would appreciate other people posting under his name. If you want to honor him, why not name yourself for one of his theorems or definitions, as fpqc does? (Of course, if your name is Grothendieck, I apologize.)
Yes, $\phi : A \rightarrow B$ is the ring homomorphism which defines the $A$-algebra structure of $B$. $p = \phi^{-1} (n)$ is a prime ideal of $A$, but $pB$ is the ideal of $B$ generated by $\phi (p) = \phi \circ \phi^{-1} (n)$, which doesn't equal $n$ in general. Hence the condition is not as strong as you think. The reason I impose the condition is that "flatness is a local property". About the name, I am sorry; is there any way to change it? – Rothendieck Mar 23 2010 at 13:58
If you were a registered user, you'd go to your profile (click your name where it is displayed at the top of the screen) and click edit where it says "Registered User edit | add openid". I'm not sure if there is a way for an unregistered user to change names. – David Speyer Mar 23 2010 at 14:38
And now I understand where you're coming from mathematically, I'll think about it. – David Speyer Mar 23 2010 at 14:39
@Rothendieck: I've slightly changed your username on the assumption that that's what you wanted to do. Please email me if you have any trouble with your account. – Anton Geraschenko♦ Mar 23 2010 at 20:10
http://math.stackexchange.com/questions/201205/inclusion-exclusion-and-lcm/206228 | # Inclusion Exclusion and lcm
I would like to show that for any positive integers $d_1, \dots, d_r$ one has $$\sum_{i=1}^r (-1)^{i+1}\biggl( \sum_{1\leq k_1 < \dots < k_i \leq r} \gcd(d_{k_1}, \dots , d_{k_i})\biggr) ~\leq~ \prod_{i=1}^r\biggl( \prod_{1\leq k_1 < \dots < k_i \leq r} \gcd(d_{k_1}, \dots , d_{k_i}) \biggl)^{(-1)^{i+1}}.$$ Note that the rhs of the upper inequality is exactly $\operatorname{lcm}(d_1,\dots,d_r)$. Also note that if we denote the lhs of the upper equation by $L(d_1, \dots, d_r)$, then one has that $$L(d_1, \dots, d_r) = L(d_1, \dots, d_{r-2}, d_{r-1}) + L(d_1, \dots, d_{r-2}, d_{r}) - L(d_1, \dots, d_{r-2}, \text{gcd}(d_{r-1},d_r)).$$
Thanks for the help!
Your formula is not clear: over which index is the first sum taken? Why are there $k_i,\ldots,k_i$ as indices in the second sum, while you use $k_i,\ldots,k_r$ inside the argument of the sum? What does $1=1,\ldots,r$ mean? – enzotib Sep 23 '12 at 16:56
Thank you @enzotib, I have corrected it. – florek Sep 23 '12 at 19:02
I compute {1,1,2} as a counter-example. It looks like you want strict inequality $k_1<k_2<\cdots<k_i$. In this case, however, {1,2} is a counter-example to the claim "equality only if $d_1=\cdots=d_r$" (but the inequality doesn't seem to have a small counter-example). – Douglas S. Stones Sep 25 '12 at 8:19
## 1 Answer
For any $r\geq 1$ and any positive integers $d_1, \dots d_r\in \mathbb Z_{>0}$ define $L(d_1,\dots , d_r)$ (think logarithmic lcm) to be \begin{equation*} L(d_1,\dots , d_r) ~:=~ \sum_{i=1}^r (-1)^{i+1}\Big(\sum_{1\leq k_1 < ... < k_i\leq r} \text{gcd}(d_{k_1}, \dots, d_{k_i}) \Big). \end{equation*} It is straightforward to check that $L$ is symmetric, homogeneous of degree 1 and that
\begin{equation*} (i) ~~~~~ L(d_1,\dots , d_r) = L(d_1,\dots , d_{r-1}) + L(d_1,\dots , d_{r-2}, d_r) - L(d_1,\dots , d_{r-2}, \text{gcd}( d_{r-1},d_r)). \end{equation*} Furthermore it follows directly from symmetry and property $(i)$ that \begin{equation*} (ii) ~~~~\text{if} ~~ d_r \big \vert d_i~~ \text{for some}~ 1\leq i \leq r-1 \text{, then} ~~L(d_1,\dots , d_r) = L(d_1,\dots , d_{r-1}). \end{equation*} The third property we want to establish is that \begin{eqnarray*} (iii) ~~~L(d_1,\dots , d_r) &\leq& \prod_{i=1}^r \Big(\prod_{1\leq k_1 < ... < k_i\leq r} \text{gcd}(d_{k_1}, \dots, d_{k_i}) \Big)^{(-1)^{i+1}} = \text{lcm}( d_1 ,\dots, d_r), \end{eqnarray*} with equality if and only if for some $1\leq i \leq r$, $L(d_1,\dots , d_r)$ can be reduced to $L(d_i)$ in the sense of property $(ii)$, i.e. $d_j \big \vert d_i$ for all $j\neq i$.
To see this, let $G$ be the cyclic group of order $\text{lcm}( d_1 ,\dots, d_r)$ and pick elements $c_1, \dots, c_r\in G$ of order $d_1, \dots, d_r$ respectively. One obviously has $$\# \Big ( \bigcup_{i=1}^r ~ \langle c_i \rangle \Big )~\leq ~ \# G. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(\star)$$ Since $G$ is cyclic one has $\# \big( \langle c_{k_1} \rangle \cap \dots \cap \langle c_{k_i} \rangle \big) = \text{gcd}(d_{k_1}, \dots, d_{k_i})$. Hence by the inclusion-exclusion principle the left hand side of equation ($\star$) is equal to $L(d_1,\dots, d_r)$ and the right hand side is equal to $\text{lcm}( d_1 ,\dots, d_r)$. Observe that equality holds iff one of the $c_i$ has order $\text{lcm}( d_1 ,\dots, d_r)$, which proves the claim.
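As a quick sanity check of inequality $(iii)$ (my addition, not part of the answer), one can compare $L$ with the lcm by brute force on small tuples; this sketch uses Python 3.9+ for the multi-argument gcd and lcm.

```python
from math import gcd, lcm
from itertools import combinations, product

def L(ds):
    """Alternating inclusion-exclusion sum of gcds over nonempty subsets of ds."""
    return sum((-1) ** (i + 1) * sum(gcd(*sub) for sub in combinations(ds, i))
               for i in range(1, len(ds) + 1))

# verify L(d) <= lcm(d) on all triples with entries up to 8
for ds in product(range(1, 9), repeat=3):
    assert L(ds) <= lcm(*ds), ds

print(L((4, 6)), lcm(4, 6))        # 8 12
print(L((2, 3, 4)), lcm(2, 3, 4))  # 6 12
```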
http://johncarlosbaez.wordpress.com/2011/12/22/quantropy/ | Azimuth
Quantropy (Part 1)
I wish you all happy holidays! My wife Lisa and I are going to Bangkok on Christmas Eve, and thence to Luang Prabang, a town in Laos where the Nam Khan river joins the Mekong. We’ll return to Singapore on the 30th. See you then! And in the meantime, here’s a little present—something to mull over.
Statistical mechanics versus quantum mechanics
There’s a famous analogy between statistical mechanics and quantum mechanics. In statistical mechanics, a system can be in any state, but its probability of being in a state with energy $E$ is proportional to
$\exp(-E/T)$
where $T$ is the temperature in units where Boltzmann’s constant is 1. In quantum mechanics, a system can move along any path, but its amplitude for moving along a path with action $S$ is proportional to
$\exp(i S/\hbar)$
where $\hbar$ is Planck’s constant. So, we have an analogy where Planck’s constant is like an imaginary temperature:
Statistical Mechanics | Quantum Mechanics
---|---
probabilities | amplitudes
energy | action
temperature | Planck’s constant times i
In other words, making the replacements
$E \mapsto S$
$T \mapsto i \hbar$
formally turns the probabilities for states in statistical mechanics into the amplitudes for paths, or ‘histories’, in quantum mechanics.
But the probabilities $\exp(-E/T)$ arise naturally from maximizing entropy subject to a constraint on the expected energy. So what about the amplitudes $\exp(i S/\hbar)$?
Following the analogy without thinking too hard, we’d guess it arises from minimizing something subject to a constraint on the expected action.
But now we’re dealing with complex numbers, so ‘minimizing’ doesn’t sound right. It’s better talk about finding a ‘stationary point’: a place where the derivative of something is zero.
More importantly, what is this something? We’ll have to see—indeed, we’ll have to see if this whole idea makes sense! But for now, let’s just call it ‘quantropy’. This is a goofy word whose only virtue is that it quickly gets the idea across: just as the main ideas in statistical mechanics follow from the idea of maximizing entropy, we’d like the main ideas in quantum mechanics to follow from maximizing… err, well, finding a stationary point… of ‘quantropy’.
I don’t know how well this idea works, but there’s no way to know except by trying, so I’ll try it here. I got this idea thanks to a nudge from Uwe Stroinski and WebHubTel, who started talking about the principle of least action and the principle of maximum entropy at a moment when I was thinking hard about probabilities versus amplitudes.
Of course, if this idea makes sense, someone probably had it already. If you know where, please tell me.
Here’s the story…
Statics
Static systems at temperature zero obey the principle of minimum energy. Energy is typically the sum of kinetic and potential energy:
$E = K + V$
where the potential energy $V$ depends only on the system’s position, while the kinetic energy $K$ also depends on its velocity. The kinetic energy is often (but not always) a quadratic function of velocity with a minimum at velocity zero. In classical physics this lets our system minimize energy in a two-step way. First it will minimize kinetic energy, $K,$ by staying still. Then it will go on to minimize potential energy, $V,$ by choosing the right place to stay still.
This is actually somewhat surprising: usually minimizing the sum of two things involves an interesting tradeoff. But sometimes it doesn’t!
In quantum physics, a tradeoff is required, thanks to the uncertainty principle. We can’t know the position and velocity of a particle simultaneously, so we can’t simultaneously minimize potential and kinetic energy. This makes minimizing their sum much more interesting, as you’ll know if you’ve ever worked out the lowest-energy state of a harmonic oscillator or hydrogen atom.
But in classical physics, minimizing energy often forces us into ‘statics’: the boring part of physics, the part that studies things that don’t move. And people usually say statics at temperature zero is governed by the principle of minimum potential energy.
Next let’s turn up the heat. What about static systems at nonzero temperature? This is what people study in the subject called ‘thermostatics’, or more often, ‘equilibrium thermodynamics’.
In classical or quantum thermostatics at any fixed temperature, a closed system will obey the principle of minimum free energy. Now it will minimize
$F = E - T S$
where $T$ is the temperature and $S$ is the entropy. Note that this principle reduces to the principle of minimum energy when $T = 0.$ But as $T$ gets bigger, the second term in the above formula becomes more important, so the system gets more interested in having lots of entropy. That’s why water forms orderly ice crystals at low temperatures (more or less minimizing energy despite low entropy) and a wild random gas at high temperatures (more or less maximizing entropy despite high energy).
But where does the principle of minimum free energy come from?
One nice way to understand it uses probability theory. Suppose for simplicity that our system has a finite set of states, say $X,$ and the energy of the state $x \in X$ is $E_x.$ Instead of our system occupying a single definite state, let’s suppose it can be in any state, with a probability $p_x$ of being in the state $x.$ Then its entropy is, by definition:
$\displaystyle{ S = - \sum_x p_x \ln(p_x) }$
The expected value of the energy is
$\displaystyle{ E = \sum_x p_x E_x }$
Now suppose our system maximizes entropy subject to a constraint on the expected value of energy. Thanks to the Lagrange multiplier trick, this is the same as maximizing
$S - \beta E$
where $\beta$ is a Lagrange multiplier. When we go ahead and maximize this, we see the system chooses a Boltzmann distribution:
$\displaystyle{ p_x = \frac{\exp(-\beta E_x)}{\sum_x \exp(-\beta E_x)}}$
This is just a calculation; you must do it for yourself someday, and I will not rob you of that joy.
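If you want a numerical cross-check rather than the pencil-and-paper derivation, here is a small sketch (my addition, with made-up energy levels): maximize the entropy subject to the two constraints and compare with the closed-form Boltzmann distribution.

```python
import numpy as np
from scipy.optimize import minimize

E = np.array([0.0, 1.0, 2.0, 5.0])     # toy energy levels (an assumption)
beta = 1.3                              # coolness, beta = 1/T

p_boltz = np.exp(-beta * E)
p_boltz /= p_boltz.sum()                # closed-form Boltzmann distribution
E_target = p_boltz @ E                  # constrain the expected energy to this value

def neg_entropy(p):
    return np.sum(p * np.log(p))        # minimizing this maximizes the entropy

constraints = [{'type': 'eq', 'fun': lambda p: p.sum() - 1},
               {'type': 'eq', 'fun': lambda p: p @ E - E_target}]
res = minimize(neg_entropy, x0=np.full(len(E), 1 / len(E)),
               bounds=[(1e-9, 1)] * len(E), constraints=constraints)

print(np.round(res.x, 4))
print(np.round(p_boltz, 4))             # the two distributions should agree
```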
But what does this mean? We could call $\beta$ the coolness, since its inverse is the temperature, $T,$ at least in units where Boltzmann’s constant is set to 1. So, when the temperature is positive, maximizing $S - \beta E$ is the same as minimizing the free energy:
$F = E - T S$
(For negative temperatures, maximizing $S - \beta E$ would amount to maximizing free energy.)
So, every minimum or maximum principle described so far can be seen as a special case or limiting case of the principle of maximum entropy, as long as we admit that sometimes we need to maximize entropy subject to constraints.
Why ‘limiting case’? Because the principle of least energy only shows up as the low-temperature limit, or $\beta \to \infty$ limit, of the idea of maximizing entropy subject to a constraint on expected energy. But that’s good enough for me.
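A two-line numerical illustration of this limit (my addition, with the same kind of toy energy levels): as $\beta$ grows, the Boltzmann distribution piles up on the minimum-energy state.

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0, 5.0])          # toy energy levels (an assumption)
for beta in (0.1, 1.0, 10.0, 100.0):
    p = np.exp(-beta * E)
    print(beta, np.round(p / p.sum(), 3))   # -> (1, 0, 0, 0) as beta -> infinity
```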
Dynamics
Now suppose things are changing as time passes, so we’re doing ‘dynamics’ instead of mere ‘statics’. In classical mechanics we can imagine a system tracing out a path $\gamma(t)$ as time passes from one time to another, for example from $t = t_0$ to $t = t_1.$ The action of this path is typically the integral of the kinetic minus potential energy:
$A(\gamma) = \displaystyle{ \int_{t_0}^{t_1} (K(t) - V(t)) \, dt }$
where $K(t)$ and $V(t)$ depend on the path $\gamma.$ Note that now I’m calling action $A$ instead of the more usual $S,$ since we’re already using $S$ for entropy and I don’t want things to get any more confusing than necessary.
The principle of least action says that if we fix the endpoints of this path, that is the points $\gamma(t_0)$ and $\gamma(t_1),$ the system will follow the path that minimizes the action subject to these constraints.
Why is there a minus sign in the definition of action? How did people come up with principle of least action? How is it related to the principle of least energy in statics? These are all fascinating questions. But I have a half-written book that tackles these questions, so I won’t delve into them here:
• John Baez and Derek Wise, Lectures on Classical Mechanics.
Instead, let’s go straight to dynamics in quantum mechanics. Here Feynman proposed that instead of our following a single definite path, it can follow any path, with an amplitude $a(\gamma)$ of following the path $\gamma.$ And he proposed this prescription for the amplitude:
$\displaystyle{ a(\gamma) = \frac{\exp(i A(\gamma)/\hbar)}{\int \exp(i A(\gamma)/\hbar) \, d \gamma}}$
where $\hbar$ is Planck’s constant. He also gave a heuristic argument showing that as $\hbar \to 0$, this prescription reduces to the principle of least action!
Unfortunately the integral over all paths—called a ‘path integral’—is hard to make rigorous except in certain special cases. And it’s a bit of a distraction for what I’m talking about now. So let’s talk more abstractly about ‘histories’ instead of paths with fixed endpoints, and consider a system whose possible ‘histories’ form a finite set, say $X.$ Systems of this sort frequently show up as discrete approximations to continuous ones, but they also show up in other contexts, like quantum cellular automata and topological quantum field theories. Don’t worry if you don’t know what those things are. I’d just prefer to write sums instead of integrals now, to make everything easier.
Suppose the action of the history $x \in X$ is $A_x.$ Then Feynman’s sum over histories formulation of quantum mechanics says the amplitude of the history $x$ is:
$\displaystyle{ a_x = \frac{\exp(i A_x /\hbar)}{\sum_x \exp(i A_x /\hbar) }}$
This looks very much like the Boltzmann distribution:
$\displaystyle{ p_x = \frac{\exp(-E_x/T)}{\sum_x \exp(- E_x/T)}}$
Indeed, the only serious difference is that we’re taking the exponential of an imaginary quantity instead of a real one.
So far everything has been a review of very standard stuff. Now comes something weird and new—at least, new to me.
Quantropy
I’ve described statics and dynamics, and a famous analogy between them, but there are some missing items in the analogy, which would be good to fill in:
Statics | Dynamics
---|---
statistical mechanics | quantum mechanics
probabilities | amplitudes
Boltzmann distribution | Feynman sum over histories
energy | action
temperature | Planck’s constant times i
entropy | ???
Since the Boltzmann distribution
$\displaystyle{ p_x = \frac{\exp(-E_x/T)}{\sum_x \exp(- E_x/T)}}$
comes from the principle of maximum entropy, you might hope Feynman’s sum over histories formulation of quantum mechanics:
$\displaystyle{ a_x = \frac{\exp(i A_x /\hbar)}{\sum_x \exp(i A_x /\hbar) }}$
comes from a maximum principle too!
Unfortunately Feynman’s sum over histories involves complex numbers, and it doesn’t make sense to maximize a complex function. However, when we say nature likes to minimize or maximize something, it often behaves like a bad freshman who applies the first derivative test and quits there: it just finds a stationary point, where the first derivative is zero. For example, in statics we have ‘stable’ equilibria, which are local minima of the energy, but also ‘unstable’ equilibria, which are still stationary points of the energy, but not local minima. This is good for us, because stationary points still make sense for complex functions.
So let’s try to derive Feynman’s prescription from some sort of ‘principle of stationary quantropy’.
Suppose we have a finite set of histories, $X,$ and each history $x \in X$ has a complex amplitude $a_x \in \mathbb{C}.$ We’ll assume these amplitudes are normalized so that
$\sum_x a_x = 1$
since that’s what Feynman’s normalization actually achieves. We can try to define the quantropy of $a$ by:
$\displaystyle{ Q = - \sum_x a_x \ln(a_x) }$
You might fear this is ill-defined when $a_x = 0,$ but that’s not the worst problem; in the study of entropy we typically set
$0 \ln 0 = 0$
and everything works fine. The worst problem is that the logarithm has different branches: we can add any multiple of $2 \pi i$ to our logarithm and get another equally good logarithm. For now suppose we’ve chosen a specific logarithm for each number $a_x,$ and suppose that when we vary them they don’t go through zero, so we can smoothly change the logarithm as we move them. This should let us march ahead for now, but clearly it’s a disturbing issue which we should revisit someday.
Next, suppose each history $x$ has an action $A_x \in \mathbb{R}.$ Let’s seek amplitudes $a_x$ that give a stationary point of the quantropy $Q$ subject to a constraint on the expected action:
$\displaystyle{ A = \sum_x a_x A_x }$
The term ‘expected action’ is a bit odd, since the numbers $a_x$ are amplitudes rather than probabilities. While I could try to justify it from how expected values are computed in Feynman’s formalism, I’m mainly using this term because $A$ is analogous to the expected value of the energy, which we saw earlier. We can worry later what all this stuff really means; right now I’m just trying to push forwards with an analogy and do a calculation.
So, let’s look for a stationary point of $Q$ subject to a constraint on $A.$ To do this, I’d be inclined to use Lagrange multipliers and look for a stationary point of
$Q - \lambda A$
But there’s another constraint, too, namely
$\sum_x a_x = 1$
So let’s write
$B = \sum_x a_x$
and look for stationary points of $Q$ subject to the constraints
$A = \alpha , \qquad B = 1$
To do this, the Lagrange multiplier recipe says we should find stationary points of
$Q - \lambda A - \mu B$
where $\lambda$ and $\mu$ are Lagrange multipliers. The Lagrange multiplier $\lambda$ is really interesting. It’s analogous to ‘coolness’, $\beta = 1/T,$ so our analogy chart suggests that
$\lambda = 1/i\hbar$
This says that when $\lambda$ gets big our system becomes close to classical. So, we could call $\lambda$ the classicality of our system. The Lagrange multiplier $\mu$ is less interesting—or at least I haven’t thought about it much.
So, we’ll follow the usual Lagrange multiplier recipe and look for amplitudes for which
$0 = \displaystyle{ \frac{\partial}{\partial a_x} \left(Q - \lambda A - \mu B \right) }$
holds, along with the constraint equations. We begin by computing the derivatives we need:
$\begin{array}{cclcl} \displaystyle{ \frac{\partial}{\partial a_x} Q } &=& - \displaystyle{ \frac{\partial}{\partial a_x} \; a_x \ln(a_x)} &=& - \ln(a_x) - 1 \\ \\ \displaystyle{ \frac{\partial}{\partial a_x}\; A } &=& \displaystyle{ \frac{\partial}{\partial a_x} a_x A_x} &=& A_x \\ \\ \displaystyle{ \frac{\partial}{\partial a_x} B } &=& \displaystyle{ \frac{\partial}{\partial a_x}\; a_x } &=& 1 \end{array}$
Thus, we need
$0 = \displaystyle{ \frac{\partial}{\partial a_x} \left(Q - \lambda A - \mu B \right) = -\ln(a_x) - 1- \lambda A_x - \mu }$
or
$\displaystyle{ a_x = \frac{\exp(-\lambda A_x)}{\exp(\mu + 1)} }$
The constraint
$\sum_x a_x = 1$
then forces us to choose:
$\displaystyle{ \exp(\mu + 1) = \sum_x \exp(-\lambda A_x) }$
so we have
$\displaystyle{ a_x = \frac{\exp(-\lambda A_x)}{\sum_x \exp(-\lambda A_x)} }$
Hurrah! This is precisely Feynman’s sum over histories formulation of quantum mechanics if
$\lambda = 1/i\hbar$
We could go further with the calculation, but this is the punchline, so I’ll stop here. I’ll just note that the final answer:
$\displaystyle{ a_x = \frac{\exp(iA_x/\hbar)}{\sum_x \exp(iA_x/\hbar)} }$
does two equivalent things in one blow:
• It gives a stationary point of quantropy subject to the constraints that the amplitudes sum to 1 and the expected action takes some fixed value.
• It gives a stationary point of the free action:
$A - i \hbar Q$
subject to the constraint that the amplitudes sum to 1.
In case the second point is puzzling, note that the ‘free action’ is the quantum analogue of ‘free energy’, $E - T S.$ It’s also just $Q - \lambda A$ times $-i \hbar,$ and we already saw that finding stationary points of $Q - \lambda A$ is another way of finding stationary points of quantropy with a constraint on the expected action.
Note also that when $\hbar \to 0$, free action reduces to action, so we recover the principle of least action—or at least stationary action—in classical mechanics.
Summary. We recover Feynman’s sum over histories formulation of quantum mechanics from assuming that all histories have complex amplitudes, that these amplitudes sum to one, and that the amplitudes give a stationary point of quantropy subject to a constraint on the expected action. Alternatively, we can assume the amplitudes sum to one and that they give a stationary point of free action.
That’s sort of nice! So, here’s our analogy chart, all filled in:
Statics ↔ Dynamics
statistical mechanics ↔ quantum mechanics
probabilities ↔ amplitudes
Boltzmann distribution ↔ Feynman sum over histories
energy ↔ action
temperature ↔ Planck’s constant times i
entropy ↔ quantropy
101 Responses to Quantropy (Part 1)
1. [...] Baez introduces a notion of “quantropy” which is supposed to be a quantum-dynamical analogue to entropy in statistical [...]
2. Theo says:
The weirdest part of this story for me is not the notion of “quantropy”. Rather, it’s that in statistical mechanics, one sometimes treats the temperature T as a dynamical variable itself. I don’t know of any context in quantum mechanics / field theory where $\hbar$ is a dynamical variable. A variable, sure, but not one that varies with other dynamical variables.
Of course, I’d probably need an extra time dimension.
• John Baez says:
I agree, one of the peculiar things about this analogy is that temperature is something we can control, but not Planck’s constant… except for mathematical physicists, who casually use their superhuman powers to “set Planck’s constant to one” or “let Planck’s constant go to zero”.
There are some rather strange papers that treat Planck’s constant as a variable and even quantize it, but I can’t find them now—all I can find are some crackpot websites that discuss the quantization of Planck’s constant. The difference between ‘strange papers’ and ‘crackpot websites’ is that the former do mathematically valid things without making grandiose claims about their physical significance, while the latter make grandiose claims without any real calculations to back them up. Anyway, all this is too weird for me, at least today.
Somewhat less weird, but still mysterious to me, is the analogy between canonically conjugate variables in classical mechanics, and thermodynamically conjugate variables in thermodynamics. Both are defined using Legendre transforms, but I want to figure out more deeply what’s going on here. I mention this only because it might shed light on the idea of temperature as a dynamical variable.
• Mark Hopkins says:
Consider the following sequence of steps: (0) conjugate pair (q,p); (1) canonical 1-form p dq; (2) “kinematic law” v = dq/dt; (3) “dynamic law” f = dp/dt; (4) Lagrangian form as Lie derivative dL = Lie_{d/dt} (p dq) = f dq + p dv; (5) select out a subset (F, Q) of (f, q) coordinates; (6) lump the average of the remaining (f,q)’s and all the (p,v)’s into T dS to get the thermodynamic form T dS + F dQ; (8) for the p-V systems Q: {V}, F: {-p} this reduces to T dS – p dV.
For the Legendre transform (9) take the canonical 2-form dq dp (wedge products denoted by juxtaposition); (10) contract d/dt with this to obtain v dp – dq f = dH … the Hamiltonian form; (11) the formula for the Lie derivative is one and the same as the Legendre transform; (12) to apply this directly to the reduction done in (5) would require a time integral U for the temperature T, if treating S as one of the Q’s. Then the analogue of the canonical 1-form would be U dS + P dQ, with “dynamic law” dU/dt = T, dP/dt = F.
3. Nick says:
Slightly off topic but those notes on classical mechanics are fantastic. Thanks! I wish I had seen explanations that clear the first, or 2nd or 3rd, times I was taught Hamiltonian/Lagrangian Mechanics.
• John Baez says:
Thanks a million!
I wish I had seen explanations that clear the first, or 2nd or 3rd, times I was taught Hamiltonian/Lagrangian Mechanics.
Me too!
It’s taken me decades to understand this stuff. I guess I really should finish writing this book.
• Florifulgurator says:
Yeah. I’ve almost forgotten about these yummy lectures – They are in my huge pile of undone homework, much of it inspired by grandmaster John… It looks like I’ll need to live to be 100.
4. wolfgang says:
I am surprised you did not mention Wick rotations.
5. Suresh Venkat says:
In quantum information theory, there’s already a notion of “quantum” entropy, aka the von Neumann entropy, defined as the “entropy function” applied to the set of eigenvalues of a density matrix. How does that compare to what you describe here ?
• John Baez says:
Yes, there’s a perfectly fine concept of entropy for quantum systems, the von Neumann entropy, which is utterly different from ‘quantropy’. Quantropy is not entropy for quantum systems!
In my analogy charts I’m comparing
• statics at nonzero temperature and zero Planck’s constant (‘classical equilibrium thermodynamics’)
to
• dynamics at zero temperature and nonzero Planck’s constant (‘quantum mechanics’)
Entropy has a starring role in the first subject, and quantropy seems to be its analogue in the second.
Von Neumann entropy shows up as soon as we study
• statics at nonzero temperature and nonzero Planck’s constant (‘quantum equilibrium thermodynamics’)
Just as a classical system in equilibrium at nonzero temperature maximizes entropy subject to a constraint on the expected energy, so too a quantum system in equilibrium at nonzero temperature maximizes von Neumann entropy subject to a constraint on the expected energy.
So, one interesting question is how the analogy I described might fit in a bigger picture that also includes
• dynamics at nonzero temperature and nonzero Planck’s constant (‘quantum nonequilibrium thermodynamics’)
But, I don’t know the answer!
One small clue is that my formula for the Boltzmann distribution
$\displaystyle{ p_x = \frac{\exp(-E_x/T)}{\sum_{x \in X} \exp(- E_x/T)}}$
while phrased in terms of classical mechanics, also works in quantum mechanics if $X$ is the set of energy eigenstates and $E_x$ are the energy eigenvalues. The probabilities $p_x$ are then the diagonal entries of a density matrix, and its von Neumann entropy is just what I wrote down:
$\displaystyle{ S = - \sum_x p_x \ln(p_x) }$
So, the first column in my analogy chart, which concerns classical equilibrium thermodynamics, already contains some of the necessary math to handle quantum equilibrium thermodynamics.
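For concreteness, here’s a tiny sketch of that remark with made-up energy eigenvalues: the same Boltzmann formula gives the diagonal entries of a thermal density matrix in the energy basis, and the entropy formula applied to them is its von Neumann entropy.

```python
# Sketch: Boltzmann probabilities from hypothetical energy eigenvalues, and the
# von Neumann entropy of the resulting diagonal density matrix.
import numpy as np

E = np.array([0.0, 1.0, 2.5])                 # hypothetical energy eigenvalues
T = 1.0                                       # temperature (with k_B = 1)

p = np.exp(-E / T)
p /= p.sum()                                  # Boltzmann distribution over eigenstates

rho = np.diag(p)                              # thermal density matrix in the energy basis
ln_rho = np.diag(np.log(p))                   # ln(rho), easy here since rho is diagonal
S = -np.trace(rho @ ln_rho)                   # von Neumann entropy -tr(rho ln rho)
print(p, S, -np.sum(p * np.log(p)))           # the two entropy formulas agree
```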
If you think this is a bit confusing, well, so do I. I don’t think we’ve quite gotten to the bottom of this business yet.
• Suresh Venkat says:
Ah I see. Clearly I didn’t understand the original post, and your comment helps clarify the differences very nicely.
6. Hal Swyers says:
Great post, I am sure it will spark some discussion on the merits and history of quantropy and related thoughts over the holidays, so the timing is great. The anology between temperature and planck’s constant is a fun one to play with, bringing up thoughts of equilibrium conditions. A lot of fun thought to be had on this one.
7. gowers says:
Just noticed a typo on page 3 of the Lectures on Classical Mechanics: near the bottom it says $q(t_0)=b$ when it means $q(t_1)=b$.
• John Baez says:
Thanks, I fixed that! If anyone else spots typos or other mistakes, please let me know and I’ll fix them.
By the way, the tone of voice of this book is one thing I want to work on in future draft, since while it’s based on notes from my lectures, most of the actual sentences were written by Blair Smith, who LaTeXed it. It doesn’t sound like me—so sometime I’ll need to change it so it does.
• Garrett says:
The $F_{\mu \nu}$ in (3.4) should be an $F_{ij}$. (Nice book BTW!)
• John Baez says:
Damn, now I gotta TeX the whole thing again! I just did it 10 minutes ago.
But thanks!
8. Peter Morgan says:
Very nice. I’ve pointed you to my http://arxiv.org/abs/quant-ph/0411156v2, doi:10.1016/j.physleta.2005.02.019 before (http://johncarlosbaez.wordpress.com/2010/11/02/information-geometry-part-5/#comment-2316), where section 4 establishes that, for the free KG field, we can think of Planck’s constant as a measure of the amplitude of Lorentz invariant fluctuations — in contrast to the temperature, which we can think of as a measure of the amplitude of Aristotle group invariant fluctuations (of the Aristotle Group, see your comments at the link).
So, quantropy, which is a nice coining, is a measure of Lorentz invariant fluctuations, where entropy is a measure of Aristotle group invariant fluctuations (which is a nicely abstract enough definition to encourage me to hope that the free field case will extend to the interacting case). However, in my thinking it has been hard to see the relationship between quantropy and entropy as straightforward because of the appearance of the factor $\tanh{(\hbar\omega/kT)}$ in a presentation of the thermal state of a free quantum field; whereas I could see your extremization approach yielding a more natural relationship through the relationship between two group structures.
Although Feynman’s path integral approach has ruled QFT for so long, it can be understood to be no more than a way to construct a generating function for VEVs, which are more-or-less closely related to observables. Nothing says that a generating function has to be complex, even though there are certainly advantages to taking that step. My feeling is that if we use some other type of transform than the one introduced by Feynman (the Feynman VEV transform?), your relationship would look different. In particular, we could hope that we could write $T\rightarrow\hbar$ instead of $T\rightarrow\mathrm{i}\hbar$.
9. Garrett says:
This analogy works perfectly, provided one is willing to swallow complex probabilities for paths — which requires a lot of chewing. I think the most interesting aspects are how the wavefunction arises as the square root of that probability, due to time reversibility of the action, and the fact that you can explicitly write down the probability distribution over paths, and not just the partition function, and use it to calculate expectation values.
I wrote up a description in 2006, and nobody, including me, has talked about it much:
http://arxiv.org/abs/physics/0605068
• John Baez says:
Thanks, Garrett! I hadn’t known about that paper—it looks more like what I’m talking about than anything I’ve ever seen! If I ever publish this stuff I’ll definitely cite it. I see nice phrases like ‘expected path action’ and:
The resulting Lagrange multiplier value, $\alpha = \frac{1}{i \hbar}$, is an intrinsic quantum variable directly related to the average path action, $S$, of the universal reservoir. Planck’s constant is analogous to the thermodynamic temperature of a canonical ensemble, $i \hbar \leftrightarrow k_BT$.
My own attitude is that it’s more useful to treat amplitudes as analogous to probabilities than one would at first think (since probabilities are normally computed as the squares of absolute values of amplitudes), and that this is yet another bit of evidence for that. After my recent talk about this analogy people asked:
• What are some ways you can use your analogy to take ideas from quantum mechanics and turn them into really new ideas in stochastic mechanics?
and
• What are some ways you can use it in reverse, and take ideas from stochastic mechanics and turn them into really new ideas in quantum mechanics?
and I think this ‘quantropy’ business is an example of the second thing.
• Garrett says:
Thanks, John, I’d be delighted if you get something out of these ideas. You’d be the first to cite that paper of mine. I consider it to be based on kind of a crazy idea, but maybe the kind of crazy that’s true.
The biggest weirdness is allowing probabilities (in this case, of paths) to be complex. Once you do that, and allow your system paths to be in an action bath, described by Planck’s constant in the same way that a canonical ensemble is in a temperature bath, then everything follows from extremizing the entropy subject to constraints.
I had the same exciting idea: lots of interesting stat mech techniques could be brought to bear on questions in quantum mechanics. And I still think that’s true. I would have worked on this more, but got distracted with particle physics unification stuff. The most exciting thing, I think, is having a direct expression for the probability of a path (eq 2 in the paper), and not just having to deal with the usual path integral partition function.
There’s a lot of neat stuff here. I hadn’t even thought of classical, $h \to 0$ physics as being analogous to the zero temperature limit. Cool.
But, although I believe our thinking on this is based on the same basic analogy, we seem to be departing on our interpretation of what the quantum amplitude (wavefunction) is and where it is coming from. For me, I’m extremizing the usual entropy as a functional of the probability of paths to get the (complex, bizarrely) probability distribution. This is not the usual quantum amplitude, but the actual probability distribution. When one tries to use this to calculate an expectation value, or the probability of a physical outcome, one gets a real number. And when one looks at a system with time independence, the probability of an event breaks up into the amplitude of incoming paths and outgoing paths, multiplied. So that is the usual quantum amplitude (wavefunction) squared to get probability.
So… I guess we differ in that I think the only really weird thing one needs to do is accept the idea of complex probabilities of paths, and then use entropy extremization in the usual way to determine the probability distribution (finding the probability distribution compatible with our ignorance), rather than defining quantropy to determine amplitudes. It’s currently too late here in Maui for me to figure out to what degree quantropy will give equivalent results… but I suspect only for time independent Lagrangians, if those. Also, quantropy and amplitudes require some new rules for calculating things, whereas we know how to use a probability distribution to calculate. In any case though, whichever approach is correct, I agree this is a fascinating analogy that warrants more attention.
• Jim says:
Hi Garret,
In your paper, in the 5th equation on p.3, if the lower limit of the first integral is $-\infty$, then the upper limit of the second integral should also be $-\infty$. Similarly in the product of integrals in the 6th equation, it seems both the lower limit of the first and the upper limit of the second should be $q(-\infty)$. But this seems to conflict with your interpretation of the second integral being associated with paths for $t>t'$. Is this why you require the system to be time-symmetric?
• Garrett says:
Jim, ironically enough, there’s no reply button beneath your comment, so this reply appears time reversed. Yes, for this to work, $L(q,\dot{q})$ must be time independent. Then the action of paths coming in to some point, $q(t')$, is equal to the negative of the action of paths leaving it.
• Jim says:
This reminds me of a reformulation of the path integral formulation given by Sinha and Sorkin
(www.phy.syr.edu/~sorkin/some.papers/63.eprb.ps, eq.(2.4) and preceding text). They rewrite the absolute square of the sum over paths, which gives the total probability for some position measurement, as a sum of products of amplitudes with complex-conjugated amplitudes. They then interpret the complex conjugates as being associated with time-reversed, incoming paths, as opposed to your time-forward, outgoing paths; but both interpretations should be equally valid for a time-independent Lagrangian. Their amplitudes also seem more properly interpreted as probabilities, albeit complex, with their products representing conjunction.
• Garrett says:
Jim, yes, it does appear to be compatible with the forward and backwards histories approach in Sinha and Sorkin’s paper. Thanks for the link.
• Jim says:
I wonder whether the concept of complex probability can be made rigorous.
• John Baez says:
Jim wrote:
I wonder whether the concept of complex probability can be made rigorous.
Everything I’m doing in my blog article is perfectly rigorous, and it involves a bunch of complex numbers $a_x$ that sum to one. But I prefer not to call them ‘probabilities’, because probability theory is an established subject, and we’d be stretching the word in a drastic way.
But the terminology matters less than the actual math. A lot of new problems show up. For example, quantropy is not well-defined until we choose a branch for the logarithm function in this expression:
$\displaystyle{ - \sum_x a_x \ln a_x }$
After we do this, everything in this blog article works fine, but it’s still unnerving, and I’m not quite sure what the best way to proceed is. One possibility is to decree from the start that $s_x = \ln a_x$ rather than $a_x$ is the fundamentally important quantity, and then define quantropy by
$\displaystyle{ - \sum_x e^{s_x} s_x }$
This amounts to picking a logarithm for each number $a_x$ once and for all from the very start. To handle the possibility that $a_x = 0$, we have to say that $s_x = -\infty$ is allowed.
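As a tiny numerical illustration of the branch issue (the amplitudes and the $2\pi i$ shift below are entirely made up): shifting one $s_x$ by $2\pi i$ leaves the amplitudes untouched but changes the value of the quantropy.

```python
# Sketch of the branch ambiguity: same amplitudes, different quantropy.
import numpy as np

a_x = np.array([0.4 + 0.3j, 0.6 - 0.3j])   # toy amplitudes summing to 1
s_x = np.log(a_x)                           # one possible choice of logarithms

Q1 = -np.sum(np.exp(s_x) * s_x)
s_x[0] += 2j * np.pi                        # pick a different branch for the first history
Q2 = -np.sum(np.exp(s_x) * s_x)

print(np.exp(s_x).sum())                    # still 1: the amplitudes haven't changed
print(Q1, Q2)                               # but the two quantropies differ by -2*pi*i*a_0
```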
• Jim says:
I guess I was really wondering whether we could consider this a complex generalization of conventional probability theory. Another paper suggests this is possible:
http://www.bidabad.com/doc/complex-prob.pdf
They define complex probability in the context of a classical Markov chain. Their complex probabilities also sum to 1.
• John Baez says:
Hmm – thanks for that reference, Jim! I’ve seen work on ‘quantum random walks’ but not on ‘complex random walks’ where the complex probabilities sum to 1!
• Garrett says:
Jim, some things to consider: John’s and my descriptions differ slightly. I use the usual entropy, in terms of a (weird) complex probability over paths, in the presence of an h background. John instead defines a new thing, quantropy, in terms of amplitudes. I don’t know how rigorous one can make complex probabilities. Good question. I find it somewhat reassuring that when calculating the probability of any physical event from these complex probabilities, the result is real.
• reperiendi says:
There’s also Scott Aaronson’s great article on various reasons why complex numbers show up in QM.
http://www.scottaaronson.com/democritus/lec9.html
• John Baez says:
Btw, I think it’s a bit suboptimal for you to post comments as “reperiendi” instead of Mike Stay, especially comments that would help build the “Mike Stay” brand (knowledgeable about quantum theory, etc.).
Best, jb
• Mike Stay says:
Thanks—I just discovered that I can change the “display name” on my WordPress account so it shows my name instead of my username.
• Jon Rowlands says:
This question is pure crackpottery, but it’s not like I was fooling anyone anyway so here goes.
If time’s arrow is also the arrow of thermodynamics, and if the second law is routinely “violated” at small scales subject to the fluctuation theorem, doesn’t that practically beg that causality can also be violated at those scales? It makes me wonder whether these complex probabilities actually represent the combined real probabilities of causal and anti-causal paths. In this case the difference between stochastic and quantum mechanics would be whether to consider such paths.
One day I should learn math. Thanks for the blog.
10. David Corfield says:
Do you see the account here fitting in with the matrix mechanics over a rig we used to talk about?
• David Corfield says:
Perhaps here is the best place to see that conversation.
• John Baez says:
All my recent work on probabilities versus amplitudes is about comparing matrix mechanics over the ring of complex numbers to matrix mechanics over the rig of nonnegative real numbers. The first is roughly quantum mechanics, the second roughly stochastic mechanics—but this only becomes true when we let our matrices act as linear transformations of Hilbert spaces in the first case and $L^1$ spaces in the second. In other words, what matters is not just the rig but the extra structure with which we equip the modules over this rig.
I’ve been spending a lot of time taking ideas from quantum mechanics and transferring them to stochastic mechanics. But now, with this ‘quantropy’ business, I’m going the other way.
Thinking of the principle of least action in terms of matrix mechanics over the tropical rig, which has + as its ‘multiplication’ and min as its ‘addition’—that’s another part of the picture. Maybe that’s what you’re actually asking about. But as you know, the tropical rig only covers the $T \to 0$ limit of equilibrium thermodynamics. Here I’m trying to think about the $T > 0$ case and also the imaginary-$T$ case all in terms of ‘minimum principles’, or at least ‘stationary principles’.
I suppose more focused questions might elicit more coherent answers!
• David Corfield says:
Something I’m a little unclear on is how you view the relationship between statistical mechanics and stochastic mechanics. Are they just synonyms?
And then there’s the need for two parameters. Remember once you encouraged me to think of temperatures living on the Riemann sphere.
• John Baez says:
David wrote:
Something I’m a little unclear on is how you view the relationship between statistical mechanics and stochastic mechanics. Are they just synonyms?
No, not for me.
I use ‘statistical mechanics’ as most physicists do: it’s the use of probability theory to study classical or quantum systems for which one has incomplete knowledge of the state.
So, for example, if one has a classical system whose phase space is a symplectic manifold $X$, we use a point in $X$ to describe the system’s state when we have complete knowledge of it—but when we don’t, we resort to statistical mechanics and use a probability distribution on $X$, typically the probability distribution that maximizes entropy subject to the constraints provided by whatever we know. A typical example would be a box of gas, where instead of knowing the positions and velocities of all the atoms, we only know a few quantities that are easy to measure. The dynamics is fundamentally deterministic: if the system is in some state $x \in X$ at some initial time, it’ll be in some state $f_t(x) \in X$ at time $t$, where $f_t: X \to X$ is a function from $X$ to $X$. But if we only know a probability distribution to start with, that’s the best we can hope to know later.
There is also quantum version of the last paragraph: statistical mechanics comes in classical and quantum versions, and the latter is what we need when we get really serious about understanding matter as made of zillions of atoms, or radiation as made of zillions of photons.
Stochastic mechanics, on the other hand, is a term I use to describe systems where time evolution is fundamentally nondeterministic. More precisely, in stochastic mechanics time evolution is described by a Markov chain (if we think of time as coming in discrete steps) or Markov process (if we think of time as a continuum). So, the space of states can be any measure space $X$, and if we start the system in a state $x \in X$ at some initial time, the state will be described by a probability measure on $X$.
I introduced the term stochastic mechanics in my network theory course because I wanted to spend a lot of time discussing a certain analogy between quantum mechanics and stochastic mechanics—so I wanted similar-sounding names for both subjects. Other people may talk about ‘stochastic mechanics’, but I don’t take any responsibility for knowing what they mean by that phrase.
Since they both involve probability theory, statistical mechanics and stochastic mechanics are related in certain ways (which I haven’t tried very hard to formalize). But I think of them as different subjects.
• David Corfield says:
But why then up above do you say that quantropy is an example of an answer to
What are some ways you can use it in reverse, and take ideas from stochastic mechanics and turn them into really new ideas in quantum mechanics?
when the whole post was on an analogy between statistical mechanics and quantum mechanics?
Jaynesian/de Finettians would not see much of a difference, since for them probabilities only emerge due to our ignorance. In the stochastic mechanics case, when you specify a state, that’s really a macrostate covering a huge number of microstates. In that different microstates will evolve into non-equivalent microstates, there’s your nondeterministic evolution. But presumably in statistical mechanics, microstates of the same macrostate can diverge into different macrostates too.
• John Baez says:
David wrote:
But why then up above do you say that quantropy is an example of an answer to
What are some ways you can use it in reverse, and take ideas from stochastic mechanics and turn them into really new ideas in quantum mechanics?
when the whole post was on an analogy between statistical mechanics and quantum mechanics?
You’re right, I could have phrased this discussion in terms of stochastic mechanics. I guess I should try it! But in this blog post I preferred to talk about statistical mechanics.
What’s the difference?
In this blog post you’ll see there’s no mention of dynamics, i.e., time evolution, in my discussion of the left side of the chart: the statistical mechanics side. I am doing statics on the left side of the chart, but dynamics on the right side of the chart: the quantum side. We’re seeing an analogy between statics at nonzero temperature and zero Planck’s constant, and dynamics at nonzero Planck’s constant and zero temperature.
On the other hand, my analogy between stochastic mechanics and quantum mechanics always involves comparing stochastic dynamics, namely Markov processes, to quantum dynamics, namely one-parameter unitary groups.
So I think of these as different stories. But your words are making me ashamed of not trying to unify them into a single bigger story. And indeed this must be possible.
One clue, which you mentioned already, is that we need to allow both temperature and Planck’s constant be nonzero to see the full story.
There are lots of other clues.
• David Corfield says:
I guess statistical mechanics is the kind of dynamics where because of a good choice of equivalence relation, change is largely confined to movement within one class, hence it appears to be a statics. Your stochastic dynamics doesn’t typically respect the equivalence classes of a certain number of rabbits and wolves being alive.
• John Baez says:
David wrote:
I guess statistical mechanics is the kind of dynamics where because of a good choice of equivalence relation, change is largely confined to movement within one class, hence it appears to be a statics.
I wouldn’t say that. I don’t want to say what I would say, because it’d be long. But:
1) For a certain class of stochastic dynamical systems, entropy increases as time runs, and the state approaches a ‘Gibbs state’: a state that has maximum entropy subject to the constraints provided by the expected values of the conserved quantities. Gibbs states are a big subject in statistical mechanics, and the Boltzmann distribution I’m discussing here is a Gibbs state where the only conserved quantity involved is energy.
2) On the other hand, statistical mechanics often studies Gibbs states, not for stochastic dynamical systems, but for deterministic ones, like classical mechanics.
11. Uwe Stroinski says:
In your first box you mention the analogy between energy (statistical mechanics) and action (quantum theory). At first glance (and as a wild guess), that looks like some sort of Legendre transform. Can one get from one to the other by a certain Legendre transform? That would be nice.
Doing a quick search I see that you mention Legendre transforms in response to Theo’s comment. So maybe I am not too far off. On the other hand you might have considered and discarded that already.
• John Baez says:
12. Linas Vepstas says:
Merry Christmas!
OK, so I read this, and thought, “oh John is jumping to conclusions, what he should have done is this: normalize with $1 = \sum_x |a_x|^2$ and take $A= \sum_x A_x |a_x|^2$ just like usual in QM, and then he should derive Q…” and so I sat down to do this myself, and quickly realized, to my chagrin, that Feynman’s path amplitude doesn’t obey that sum-of-squares normalization. Which I found very irritating, as I always took it for granted, and now suddenly it feels like a very strange-looking beast.
Any wise words to explicate this? Clearly, what I tried to do fails because I’m mixing metaphors from first & second quantization. But why? Seems like these metaphors should have been more compatible. I’m not even sure why I’m bothering to ask this question…
• John Baez says:
Merry Christmas, Linas! I successfully made it to Luang Prabang.
I don’t have any wise words to explicate why the Feynman path integral is normalized so that the amplitudes of histories sum to 1:
$\sum_x A_x = 1$
instead of having
$\sum_x |A_x|^2 = 1$
I just know that this is how it works, and this is how it has always worked. But I agree that it seems weird, and I want to understand it better. It’s yet another interesting example of how sometimes it makes sense to treat amplitudes as analogous to probabilities, without the absolute value squared getting involved. This is a theme I’ve been pursuing lately, but mainly to take ideas from quantum mechanics and apply them to probability theory. This time, with ‘quantropy’, I’m going the other way—and at some point I realized that the path integral approach is perfectly set up for this.
Clearly, what I tried to do fails because I’m mixing metaphors from first & second quantization.
I wouldn’t say that. I might say you’re mixing metaphors from the Hamiltonian (Hilbert space) approach to quantization and the Lagrangian (path integral) approach. Both can be applied to first quantization, e.g. the quantization of particle on a line! But somehow states like to have amplitudes whose absolute values squared sum to one, while histories like to have amplitudes that sum to one.
• Florifulgurator says:
the path integral approach is perfectly set up for this.
Now I dare ask about that little distraction: Doing the path integral in general. I found this quite a fascinating problem in a former life, but never made it to any closer inspection. It smells like quite a fundamental thing.
• Linas Vepstas says:
Wandering off-topic a bit further, I’d like to mention that probabilities & amplitudes generalize to geometric values (points in symmetric spaces) in general. Some years ago, I had fun drafting the Wikipedia article http://en.wikipedia.org/wiki/Quantum_finite_automata when a certain set of connections gelled (bear with me here). A well-known theorem from undergrad comp-sci courses is that deterministic finite automata (DFA) and probabilistic finite automata (PFA) are completely isomorphic. In a certain sense, the PFA is more-or-less a set of Markov chains. What’s a Markov chain? Well, a certain class of matrices that act on probabilities; err, a vector of numbers totaling to one, err, a simplex, viz an N-dimensional space such that
$\sum_{i=1}^N p_i=1$
Some decades ago, someone clever noticed that you could just replace the simplex by $\mathrm{CP}^n$ and the Markov matrix by elements taken from $\mathrm{SU}(n)$ while leaving the rest of the theory untouched, and voila, one has a “quantum finite automaton” (QFA). This generalizes obviously: replace probabilities by some symmetric space in general, and replace the matrices by automorphisms of that space (the “geometric FA” or GFA). Armed with this generalization, one may now ask the general question: how do the usual laws & equations of stat-mech and QM and QFT generalize to this setting?
A few more quick remarks: what are the elements remaining in common across the PFA/QFA/GFA? Well, one picks an initial state vector from the space, and one picks out a hand-full of specific automorphisms from the automorphism group. Label each automorphism by a symbol (i.e. index). Then one iterates on these (a la the Barnsley fractal stuff!) There’s also a “final state vector”. If the initial state vector, after being iterated on by a finite number of these xforms, matches the final vector, then the automaton “stops”, and the string of symbols belongs to the “recognized language” of the automaton. (The Barnsley IFS stuff has a ‘picture’ as the final state, and the recognized language is completely free in the N symbols: all possible sequences of the iterated matrices are allowed/possible in IFS).
You also wrote about graph theory/network theory (which I haven’t yet read) but I should mention that one may visualize some of the above via graphs/networks, with edges being automorphisms, etc. And then there are connections to model theory… Anyway, I find this stuff all very fascinating, wish I had more time to fiddle with it. I’m mentioning this cause it seems to overlap with some of your recent posts.
OH, and BTW, as far as I can tell, this is an almost completely unexplored territory; there are very few results. I think that crossing over tricks from physics and geometry to such settings can ‘solve’ various unsolved problems, e.g. by converting iterated sequences into products of operators and back, and looking for the invariants/conserved quantities associated with the self-similarity/translation-invariance. Neat stuff, I think…
• Jesse C. McKeown says:
Oh! It’s a guess, but probably the difference arises because what matters in a state is the measurements you can subject it to, but when taking sum over histories, we’re applying linear operators and not measuring anything until all the interactions are turned off.
• John Baez says:
If you want to learn about path integrals, Florifulgurator, I suggest Barry Simon’s book Functional Integration and Quantum Physics. I wouldn’t suggest this for most people, but I get the impression you like analysis and like stochastic processes! This features both. And it’s well-written, too, though it assumes the reader has taken some graduate-level courses on real analysis and functional analysis. It focuses on what we can do rigorously with path integrals, which is just a microscopic part of the subject, but still very interesting. The rest is ‘mathemagical’ technology that I hope will be made rigorous sometime in this century.
13. Mark Hopkins says:
There is a vivid geometric realization of complex time that enters in through the back door by way of considering the question of how relativity and non-relativistic theory are related to one another.
The question is not merely academic. The FRW metric, for instance, has the form $dt^2 - \alpha dr^2$ where $\alpha$ approaches 0 as we approach the Big Bang singularity. This is nothing less than a cosmological realization of the Galilean limit. Thus, all three issues are intertwined: complex time, the Big Bang and the Galilean limit.
So, consider the relativistic mass shell invariant $\alpha E^2 - P^2 = (mc)^2$. Replace the total energy $E$ and invariant mass \$m\$ by the kinetic energy $H = E - mc^2$ and relativistic mass $M = \alpha E$. Then the invariant becomes $\lambda = P^2 - 2MH + \alpha H^2$ and the mass shell constraint reduces to the form $\lambda = 0$. This is a member of the family
[...]
invariants parametrized by $\alpha$; where $\alpha = (1/c)^2 > 0$ for relativity, and $\alpha = 0$ for non-relativistic theory and $\alpha 0$, Galilean when $\alpha = 0$ and locally Euclidean when $\alpha < 0$), while $t$ shadows the flow of absolute time on the 4-D manifold itself. For instance, a 5-D worldline is projected onto each 4-D layer as an ordinary worldline. But there is one additional feature: the intersection of the projected worldline with the actual worldline singles out a single instant in time: a "now".
• Mark Hopkins says:
Sorry, the reply got cut off in mid-section and restitched, with most of the body lost. I’ll try this again later.
• John Baez says:
The above comment can also be read here, nicely formatted:
http://www.docstoc.com/docs/109661975/ThermoTime
Everyone please remember: on all WordPress blogs, LaTeX is done like this:
\$latex E = mc^2\$
with the word ‘latex’ directly following the first dollar sign, no space. Double dollar signs and other fancy stuff don’t work here, either!
• Mark Hopkins says:
Thanks for your help. It’s probably best to go to the web link. The sentence starting out “This is a member” in the reply above is chopped off and ends with a fragment that comes from the end of the reply, with the middle 6-7 paragraphs lost. It may be a coincidence that the Frankenedited sentence almost makes sense — or it may be the blog-compiler is starting to understand language.
• John Baez says:
I’ve inserted a
[...]
in your post, Mark, to make it obvious that it’s not supposed to make sense around there. If you email the TeX I’ll be happy to fix the darn thing, since I like having the conversation here rather than dispersed across the web, and I like having comments that make sense!
(Perhaps emboldened by your fractured comment, but more likely just by the silly word ‘quantropy’ and the grand themes we’re discussing here, I’ve gotten a few comments that were so visionary and ahead of their time I’ve had to reject them.)
Your actual comment seems quite neat, but it’s raising a tangential puzzle in my mind, which is completely preventing me from understanding what you’re saying.
You start by pointing out that the speed of light essentially goes to infinity as we march back to the Big Bang, making special relativity reduce to Galilean physics. But ‘the speed of light’ here is a rather tricky coordinate-dependent concept: you’re defining it to be $c/a(t)$ in coordinates where the metric looks like this:
$-c^2 dt^2 + a(t)^2 (dx^2 + dy^2 + dz^2)$
Then, since $a(t) \to 0$ as $t \to 0$ in the usual Big Bang solutions, we get $c/a(t) \to +\infty$.
On the other hand, there’s a fascinating line of work going back to Belinskii, Khalatnikov and Lifshitz which seems to present an opposite picture: one in which each point of space becomes essentially ‘isolated’, decoupled from all the rest, as we march backwards in time to the Big Bang and the fields at each point have had less time to interact with the rest. I’ll just quote a bit of this:
• Axel Kleinschmidt and Hermann Nicolai, Cosmological quantum billiards.
[...] in the celebrated analysis of Belinskii, Khalatnikov and Lifshitz (BKL) of the gravitational field equations in the vicinity of a generic space-like (cosmological) singularity [...] the causal decoupling of spatial points near the spacelike singularity effectively leads to a dimensional reduction whereby the equations of motion become ultralocal in space, and the dynamics should therefore be describable in terms of a (continuous) superposition of one-dimensional systems, one for each spatial point.
In this paper it’s claimed that this BKL limit can be seen as a limit where the speed of light goes to zero:
• T. Damour, M. Henneaux and H. Nicolai, E10 and a “small tension expansion” of M theory.
So, I’m puzzled! They say the speed of light is going to zero; you’re saying it goes to infinity. Since this speed is a coordinate-dependent concept, there’s not necessarily a contradiction, but still I have trouble reconciling these two viewpoints.
I’ll add that the line of work Hermann Nicolai is engaged in here is quite fascinating. The idea is that if we consider a generic non-homogeneous cosmology and run it back to the big bang, the shape of the universe wiggles around faster and faster, and in the $t \to 0$ limit it becomes mathematically equivalent to a billiard ball bouncing chaotically within the walls of a certain geometrical shape called a ‘Weyl chamber’, which plays an important role in Lie theory.
For a less stressful introduction to these ideas, people can start here:
• Mixmaster universe, Wikipedia.
and then go here:
• BKL singularity, Wikipedia.
14. Linas Vepstas says:
The exp() map is well-known to convert infinitesimals to geodesics, e.g. elts of a Lie algebra into elts of a Lie group. Jürgen Jost has a nice book, Riemannian Geometry, wherein he shows how to turn Lie derivatives into geodesics using the exp map. What’s keen is he does it twice: once using the usual Lagrangian variational principles on a path, and then again using a Hamiltonian formulation. I thought it was neat, as it mixed together the standard mathematical notation for geometry (index-free notation) with the standard physics explanation / derivation / terminology, a mixture I’d never seen before. (It’s a highly readable book, if anyone is looking for a strong yet approachable treatment of the title topic — strongly recommended.)
Anyway… Seeing the exp() up above suggests that we are looking at a relationship between “infinitesimals” and “geodesics” on a “manifold”. What, then, is the underlying “manifold”? Conversely, in Riemannian geometry, one may talk about the “energy” of a geodesic. But what is the analogous “entropy” of a geodesic? If it’s not generalizable, why not?
I’m being lazy here; I could/should scurry off to work out the answer myself, but in the spirit of Erdös-style collaborative math, I’ll leave off with the question for now.
15. Uwe Stroinski says:
Quantum mechanics as an isothermal process at high imaginary temperature?
Maybe it is time to give an example, e.g. to compute the quantropy of hydrogen. Or is this too complicated because of ‘sum over histories’ issues?
16. Barry Adams says:
Spotted a nasty mistake in your normalization of the amplitudes.
It’s not
$\sum_x a_x = 1$
It’s
$\sum_x |a_x|^2 = \sum_x a^*_x a_x= 1$
And you seem to carry the amplitude on through the calculation like it’s a probability.
I would also like to see how temperature versus time comes into the calculation in general. I regularly see Wick rotations, swapping time for a spatial fourth dimension, $w = it$, and temperature swapped for time as (some sum) $e^{-ikt}$ = (some other sum) $e^{-kT}$, but never see the exact thermodynamics or maths of the trick.
• John Baez says:
Barry wrote:
Spotted a nasty mistake in your normalization of the amplitudes.
This is not a mistake! I know it looks weird, but if this stuff weren’t weird I wouldn’t bother talking about it. This is how amplitudes are actually normalized in the path integral formulation of quantum mechanics! I am not considering a wavefunction $\psi$ on some set $X$ of states; that clearly must be normalized to achieve
$\sum_{x \in X} |\psi_x|^2 = 1$
Instead, I’m considering a path integral, where $X$ is the set of histories. Here each history $x$ gets an amplitude $a_x$ that’s proportional to $\exp(i S(x) / \hbar)$ where $S(x)$ is the action of that history… but these amplitudes are normalized to sum to 1:
$\sum_{x \in X} a_x = 1$
To achieve this, we need to divide the phases $\exp(i S(x) / \hbar)$ by the so-called partition function:
$\sum_{x \in X} \exp(i S(x) / \hbar)$
Of course, I’m treating a baby example here: in full-fledged quantum field theory, we replace this sum by an integral over the space of paths. These integrals are difficult to make rigorous, and people usually proceed by doing a Wick rotation, which amounts to replacing $i /\hbar$ by a real number $-\beta$, and replacing time by imaginary time, so the action $S$ becomes a positive quantity. Then the amplitudes become probabilities… and this “explains” why I was treating the amplitudes like probabilities all along.
However, there are cases where you can make the path integral rigorous without going to imaginary time, and then we can see directly why we need to normalize the amplitudes for histories so they sum to 1. Namely, you can use a path integral to compute a vacuum-vacuum transition amplitude, and get the partition function, which therefore must equal 1.
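Here’s a minimal sketch, for a toy finite set of histories with made-up actions, of the two recipes side by side: in real time the normalized quantities are complex amplitudes summing to 1, and after the Wick rotation $i/\hbar \to -\beta$ the very same recipe produces ordinary probabilities.

```python
# Toy illustration of the Wick rotation mentioned above (all numbers hypothetical).
import numpy as np

S = np.array([0.3, 1.7, 2.2])        # made-up actions, one per history

# Real time: phases exp(i S / hbar), normalized so the amplitudes sum to 1.
hbar = 1.0
a = np.exp(1j * S / hbar)
a /= a.sum()
print(a.sum(), a)                    # sums to 1, but the entries are complex

# Imaginary time: replace i/hbar by -beta and the same recipe gives probabilities.
beta = 1.0
p = np.exp(-beta * S)
p /= p.sum()
print(p.sum(), (p >= 0).all())       # sums to 1, and every entry is a genuine probability
```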
17. Mike Stay says:
Your table shows that energy and action are analogous; this seems to be part of a bigger picture that includes at least entropy as analogous to both of those, too. I think that just about any quantity defined by an integral over a path would behave similarly.
I see four broad areas to consider, based on a temperature parameter:
1. $T = 0$: statics, or “least quantity”
2. Real $T > 0$: statistical mechanics
3. Imaginary $T$: a thermal ensemble gets replaced by a quantum superposition
4. Complex $T$: ensembles of quantum systems, like NMR
I’m not going to get into the last of these in what follows.
“Least quantity”
Lagrangian of a thrown particle
$K$ is kinetic energy, i.e. “action density” due to motion
$V$ is potential energy, i.e. “action density” due to position, e.g. gravitational
$\begin{array}{rcl}\int (K-V) \, d t & = & \int \left[m\left(\frac{d q(t)}{d t}\right)^2 - V(q(t))\right] d t \\ & = & A\, \mbox{(the action of the path)}\end{array}$
We get the principle of least action by setting $\partial A = 0$.
“Static” systems related by a Wick rotation
Substitute q(s = iz) for q(t) to get a “springy” static system.
In your homework A Spring in Imaginary Time, you guide students through a Wick-rotation-like process that transforms the Lagrangian above into the Hamiltonian of a springy system. (I say “springy” because it’s not exactly the Hamiltonian for a hanging spring: here each infinitesimal piece of the spring is at a fixed horizontal position and is free to move only vertically.)
$\kappa$ is the potential energy density due to stretching.
$\upsilon$ is the potential energy density due to position, e.g. gravitational.
$\displaystyle \begin{array}{rcl}\int(\kappa-\upsilon) dz & = & \int\left[k\left(\frac{dq(iz)}{dz}\right)^2 - \upsilon(q(iz))\right] dz\\ & = & -i\int\left[-k\left(\frac{dq(iz)}{diz}\right)^2 - \upsilon(q(iz))\right] diz\\ & = & i \int\left[k\left(\frac{dq(iz)}{diz}\right)^2 + \upsilon(q(iz))\right] diz\\ \mbox{Let }s = iz.\\ & = & i\int\left[k\left(\frac{dq(s)}{ds}\right)^2 + \upsilon(q(s))\right] ds\\ & = & iE\,\mbox{(the potential energy in the spring)}\end{array}$
We get the principle of least energy by setting $\partial E = 0$.
Substitute q(β = iz) for q(t) to get a thermometer system.
We can repeat the process above, but use inverse temperature, or “coolness”, instead of time. Note that this is still a statics problem at heart! We’ll introduce another temperature below when we allow for multiple possible q‘s.
$K$ is the potential energy due to rate of change of $q$ with respect to $\beta$. (This has to do with the thermal expansion coefficient: if we fix length of the thermometer and then cool it, we get “stretching” potential energy.)
$V$ is any extra potential energy due to $q.$
$\displaystyle \begin{array}{rcl}\int(K-V) dz & = & \int\left[k\left(\frac{dq(iz)}{dz}\right)^2 - V(q(iz))\right] dz\\ & = & -i\int\left[-k\left(\frac{dq(iz)}{diz}\right)^2 - V(q(iz))\right] diz\\ & = & i \int\left[k\left(\frac{dq(iz)}{diz}\right)^2 + V(q(iz))\right] diz\\ \mbox{Let }\beta = iz.\\ & = & i\int\left[k\left(\frac{dq(\beta)}{d\beta}\right)^2 + V(q(\beta))\right] d\beta\\ & = & iS_1\,\mbox{(the entropy lost as the thermometer is cooled)}\end{array}$
We get the principle of “least entropy lost” by setting $\partial S_1 = 0$.
Substitute q(T₁ = iz) for q(t).
We can repeat the process above, but use temperature instead of time. We get a system whose heat capacity is governed by a function $q(T)$ and its derivative. We’re trying to find the best function $q$, the most efficient way to raise the temperature of the system.
$C$ is the heat capacity (= entropy) proportional to $(dq/dT_1)^2$.
$V$ is the heat capacity due to $q.$
$\displaystyle \begin{array}{rcl}\int(C-V) dz & = & \int\left[k\left(\frac{dq(iz)}{dz}\right)^2 - V(q(iz))\right] dz\\ & = & -i\int\left[-k\left(\frac{dq(iz)}{diz}\right)^2 - V(q(iz))\right] diz\\ & = & i \int\left[k\left(\frac{dq(iz)}{diz}\right)^2 + V(q(iz))\right] diz\\ \mbox{Let }T_1 = iz.\\ & = & i\int\left[k\left(\frac{dq(T_1)}{dT_1}\right)^2 + V(q(T_1))\right] dT_1\\ & = & iE\,\mbox{(the energy required to raise the temperature)}\end{array}$
We again get the principle of least energy by setting $\partial E = 0.$
Statistical mechanics
Here we allow lots of possible $q$‘s, then maximize entropy subject to constraints using the Lagrange multiplier trick.
Thrown particle
For a thrown particle, we choose a real measure $a_x$ on the set of paths. For simplicity, we assume the set is finite.
Normalize so $\sum a_x = 1.$
Define entropy to be $S = - \sum a_x \ln a_x.$
Our problem is to choose $a_x$ to minimize the “free action” $F = A - \lambda S$, or, what’s equivalent, to maximize $S$ subject to a constraint on $A.$
To make units match, λ must have units of action, so it’s some multiple of ℏ. Replace λ by ℏλ so the free action is
$F = A - \hbar\lambda\, S.$
The distribution that minimizes the free action is the Gibbs distribution $a_x = \exp(-A/\hbar\lambda) / Z,$ where $Z$ is the usual partition function.
However, there are other observables of a path, like the position $q_{1/2}$ at the halfway point; given another constraint on the average value of $q_{1/2}$ over all paths, we get a distribution like
$a_x = \exp(-\left[A + pq_{1/2}\right]/\hbar\lambda) / Z.$
The conjugate variable to that position is a momentum: in order to get from the starting point to the given point in the allotted time, the particle has to have the corresponding momentum.
$dA = \hbar\lambda\, dS - p\, dq.$
Other examples from Wick rotation
Introduce a temperature T [Kelvins] that perturbs the spring.
We minimize the free energy $F = E - kT\, S,$ i.e. maximize the entropy $S$ subject to a constraint on the expected energy
$\langle E\rangle = \sum a_x E_x.$
We get the measure $a_x = \exp(-E_x/kT) / Z.$
Other observables about the spring’s path give conjugate variables whose product is energy. Given constraint on the average position of the spring at the halfway point, we get a conjugate force: pulling the spring out of equilibrium requires a force.
$dE = kT\, dS - F\, dq.$
Statistical ensemble of thermometers with ensemble temperature T₂ [unitless].
We minimize the “free entropy” $F = S_1 - T_2S_2$, i.e. we maximize the entropy $S_2$ subject to a constraint on the expected entropy lost
$\langle S_1\rangle = \sum a_x S_{1,x}.$
We get the measure $a_x = \exp(-S_{1,x}/T_2) / Z.$
Given a constraint on the average position at the halfway point, we get a conjugate inverse length $r$ that tells how much entropy is lost when the thermometer shrinks by $dq.$
$dS_1 = T_2\, dS_2 - r\, dq.$
Statistical ensemble of functions q with ensemble temperature T₂ [Kelvins].
We minimize the free energy $F = E - kT_2\, S,$ i.e. we maximize the entropy $S$ subject to a constraint on the expected energy
$\langle E\rangle = \sum a_x E_x.$
We get the measure $a_x = \exp(-E_x/kT_2) / Z.$
Again, a constraint on the position would give a conjugate force. It’s a little harder to see how here, but given a non-optimal function $q(T),$ we have an extra energy cost due to inefficiency that’s analogous to the stretching potential energy when pulling a spring out of equilibrium.
Thermo to quantum via Wick rotation of Lagrange multiplier
We allow a complex-valued measure $a$ as you did in the article above. We pick a logarithm for each $a_x$ and assume they don’t go through zero as we vary them. We also choose an imaginary Lagrange multiplier.
Normalize so $\sum a_x = 1.$
Define quantropy $Q = - \sum a_x \ln a_x.$
Minimize the free action $F = A - \hbar\lambda\, Q.$
We get $a_x = \exp(-A_x/\hbar\lambda).$ If $\lambda = -i,$ we get Feynman’s sum over histories. Surely something like the 2-slit experiment considers histories with a constraint on position at a particular time, and we get a conjugate momentum?
von Neumann Entropy
Again allow complex-valued $a_x.$ However, this time we normalize so $\sum |a_x|^2 = 1.$
Define von Neumann entropy $S = - \sum |a_x|^2 \ln |a_x|^2.$
Allow quantum superposition of perturbed springs.
$\langle E\rangle = \sum |a_x|^2 E_x.$ Get $a_x = \exp(-E_x/kT) / Z.$ If $T = -i\hbar/tk,$ we get the evolution of the quantum state $|q\rangle$ under the given Hamiltonian for a time $t.$
Allow quantum superpositions of thermometers.
$\langle S_1\rangle = \sum |a_x|^2 S_{1,x}.$ Get $a_x = \exp(-S_{1,x}/T_2) / Z.$ If $T_2 = -i,$ we get something like a sum over histories, but with a different normalization condition that converges because our set of paths is finite.
Allow quantum superposition of systems.
$\langle E \rangle = \sum |a_x|^2 E_x.$ Get $a_x = \exp(-E_x/kT_2) / Z.$ If $T_2 = -i\hbar/tk,$ we get the result of “Measure E, then heat the superposition T₁ degrees in a time much less than t seconds, then wait t seconds.” Different functions q in the superposition change the heat capacity differently and thus the systems end up at different energies.
So to sum up, there’s at least a three-way analogy between action, energy, and entropy depending on what you’re integrating over. You get a kind of “statics” if you extremize the integral by varying the path; by allowing multiple paths and constraints on observables, you get conjugate variables and “free” quantities that you want to minimize; and by taking the temperature to be imaginary, you get quantum systems.
• John Baez says:
I’ll make a little comment on this before I try hard to understand what you’re actually doing: your definition of ‘von Neumann entropy’ here looks wrong, or at least odd:
Again allow complex-valued $a_x$. However, this time we normalize so $\sum |a_x|^2 = 1$.
Define von Neumann entropy $S = - \sum |a_x|^2 \ln |a_x|^2$.
In quantum mechanics a mixed state—that is, a state in which we may have ignorance about the system—is described by a density matrix. This is a bounded linear operator $\rho$ on a Hilbert space that’s nonnegative and has
$\mathrm{tr}(\rho) = 1$
Here the trace is the sum of the diagonal entries in any orthonormal basis. The von Neumann entropy or simply entropy of such a mixed state is given by
$- \mathrm{tr}(\rho \, \ln \rho)$
We can find a basis in which $\rho$ is diagonal, and then $\rho_{i i}$ is the probability of the mixed state being in the $i$th pure state, and
$- \mathrm{tr}(\rho \, \ln \rho) = - \sum_i \rho_{i i} \ln(\rho_{i i})$
is given in terms of these probabilities in a way that closely resembles classical entropy.
When $\psi \in L^2(X)$ is a pure state, the corresponding density matrix $\rho$ is the projection onto the vector $\psi$, given by
$\rho \phi = \langle \psi, \phi \rangle \, \psi$
If we diagonalize this $\rho$ we get a matrix with one 1 on the diagonal and the other entries zero. So, the von Neumann entropy of a pure state is zero! not something like
$-\sum_x |\psi_x|^2 \, \ln |\psi_x|^2$
It makes sense that it’s zero, since we know as much as can be known about the system when it’s in a pure state!
On the other hand, if we take the pure state $\psi$ and ‘collapse’ it with respect to the standard basis of $L^2$, we get a mixed state whose von Neumann entropy is
$-\sum_x |\psi_x|^2 \, \ln |\psi_x|^2$
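A quick numerical check of both statements, for a made-up two-component pure state:

```python
# Sketch: von Neumann entropy of a pure state is zero; 'collapsing' it gives
# -sum |psi_x|^2 ln |psi_x|^2. The state below is made up.
import numpy as np

psi = np.array([0.6, 0.8j])                    # normalized: 0.36 + 0.64 = 1
rho = np.outer(psi, psi.conj())                # density matrix of the pure state

eigvals = np.linalg.eigvalsh(rho)              # eigenvalues: 1 and 0
S_pure = -sum(p * np.log(p) for p in eigvals if p > 1e-12)
print(S_pure)                                  # ~ 0

p = np.abs(psi) ** 2                           # diagonal entries after 'collapse'
S_collapsed = -np.sum(p * np.log(p))
print(S_collapsed)                             # ~ 0.653, the classical-looking entropy
```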
• Uwe Stroinski says:
Three of your formulas do not parse (at least on my system).
• John Baez says:
Whoops, I forgot that LaTeX has a command \det but not a command \tr!
18. Uwe Stroinski says:
To get some intuition about quantropy we could try a ‘divide and conquer’ strategy. That means to investigate how the quantropy of a ‘larger’ system comes from the quantropy of its ‘parts’, without being precise about what ‘large’ and ‘part’ mean at this point of the argument.
For entropy the situation is well-known. The entropy $S$ of two independent systems $X$ and $Y$ satisfies
$S(X\otimes Y) = S(X)+S(Y)$.
Independence is crucial and the proof follows from the definition of entropy $S(X):=-\sum_x p_x \log p_x$ and the observation that the combined system is in a state $x\otimes y$ with probability $p_x p_y$ where $p_x$ (resp. $p_y$) denotes the probability that $X$ (resp. $Y$) is in state $x$ (resp. $y$).
To derive a quantropy counterpart we remember that we are in a context of histories. Simply tensoring two systems does not seem adequate. We rather have to ‘glue’ them together. If we do this in an appropriate way (and my memory serves me well) the amplitude $a_{x+y}$ of a combined history then satisfies
$a_{x+y}=a_x a_y$.
Formally we can proceed as in the case of entropies to obtain
$Q(X\times Y)=Q(X)+Q(Y)$.
Thus we have encountered another entry in your analogy chart.
What I find remarkable is that the above equation for quantropy (contrary to the one for entropy) is indexed by histories. Thus one might be able to get some time evolution equation for quantropy (at least in the above case of independent histories) and thereby get rid of your finiteness assumptions on $X$.
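Here’s a small numerical check of that additivity, using toy amplitudes and the principal branch of the logarithm throughout (so a branch cut could in principle spoil the identity; it doesn’t for these made-up numbers):

```python
# Sketch: quantropy adds for independent systems when the combined amplitudes
# are the products a_x * b_y. All amplitudes below are made up.
import numpy as np

def quantropy(a):
    return -np.sum(a * np.log(a))

a = np.array([0.4 + 0.3j, 0.6 - 0.3j])    # sums to 1
b = np.array([0.7 + 0.1j, 0.3 - 0.1j])    # sums to 1

ab = np.outer(a, b).ravel()                # amplitudes of the combined histories
print(ab.sum())                            # still 1

print(quantropy(ab))                       # equals quantropy(a) + quantropy(b) ...
print(quantropy(a) + quantropy(b))         # ... as long as no branch cut is crossed
```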
• John Baez says:
My intuition tells me that quantropy should add both for ‘tensoring’ histories (i.e. setting two systems side by side and considering a history of the joint system made from a history of each part) and also for ‘composing’ histories (i.e. letting a system carry out a history for some interval of time and then another history after that).
My finiteness assumption on $X$ was mainly to sidestep the difficulties people always face with real-time path integrals (and secondarily to simplify the problem of choosing a branch for the logarithm when defining quantropy). I would like to try some examples where it’s not finite.
Gotta run!
19. [...] The table in John’s post on quantropy shows that energy and action are analogous [...]
20. John Baez says:
At the Universiti Putra Malaysia, Saeid Molladavoudi pointed me to this interesting paper, which claims to derive first the classical Hamilton–Jacobi equation and then Schrödinger’s equation from variational principles, where the action for the latter is obtained from the action for the former by adding a term proportional to a certain Fisher information:
• Marcel Reginatto, Derivation of the equations of nonrelativistic quantum mechanics using the principle of minimum Fisher information.
I’ll have to check the calculations to see if they’re right! Then I can worry about what they actually mean, and if they’re related to the ‘principle of stationary quantropy’.
21. [...] my post on quantropy I explained how the first three principles fit into a single framework if we treat Planck’s constant as an imaginary temperature [...]
22. AzimuthReader says:
From Marco Frasca at The Gauge Connection on 1/25/2012:
Quantum mechanics and the square root of Brownian motion
23. amarashiki says:
John, an off-topic question. What LaTeX editor do you use in your blog? Any nice free alternative? My blog on Physics, Mathematics and more is to be launched soon, but I need suggestions on how to implement nice LaTeX code here.
Turning to your quantropy issue… the thermodynamics analogy can be something else. Indeed, the work related to entropic gravity, the rôle of entropy in General Relativity, and classical/quantum information theory strongly points in that direction. Moreover, could $k_B$, Boltzmann's constant, play a deeper fundamental role in the foundations of Quantum Physics than Planck's constant itself? Recently, a group has also suggested that Quantum Mechanics is “emergent”. The question I would ask next is: what are the most general entropy/quantropy functions/functionals that are mathematically and physically allowed? I tend to think of Tsallis and other non-extensive entropies as a big hint about the essential nature of entropy in physical theories, maybe in quantum gravity too, whatever that is. A. Zeilinger himself once said that the key word to understanding QM and quantization itself is that information is quantized.
• Mike Stay says:
Latex math support is turned on by default for the free blogs on WordPress. http://en.support.wordpress.com/latex/
• John Baez says:
… and that’s what I use.
24. Richard Kleeman says:
This is an interesting idea. I was thinking while I was reading it that it would be nice to have some kind of more intuitive understanding of what “quantropy” might be. One place to look for this might be Shannon’s information theory axioms.
There are three of these and they allow one to derive the functional form of entropy up to a multiplicative constant (or logarithm base). The meaty axiom is the second, which just states that if one subdivides an outcome of a random variable into suboutcomes then the entropy increases by the new subsystem entropy weighted by the outcome probability. My intuitive view of this is that it relates entropy to coarse/fine graining, which of course is central to what it means physically.
It might be interesting to start with Shannon’s axioms as applied to a “complex probability” i.e. quantum amplitude and see whether the functional form is essentially determined in the manner you are suggesting i.e. taking an appropriate branch of the complex logarithm. I started looking at Shannon’s original proof and it may need significant work to do this. You would also need to make some assumption about how a complex probability might work conditionally.
I wonder then, if that works, whether there is a relation between this axiom and the superposition principle in quantum mechanics….
Anyway enough idle speculation….
25. Uwe Stroinski says:
These are some sloppy thoughts towards a definition of quantropy based on your ideas so far. Under the assumption that quantropy is stationary we know that there are Lagrange multipliers $\lambda, \mu\in\mathbb{C}$ such that
$\log a_x = -\lambda A_x - \mu -1$
and thus
$a_x= \exp\left(-\lambda A_x-\mu-1\right).$
We plug these two equations into the formal definition of quantropy
$Q=-\sum_X a_x \log a_x$
and together with the constraint $\sum_{X}a_x=1$ this yields
$Q=\mu + 1 + \lambda \sum_X A_x\exp\left(-\lambda A_x-\mu-1\right)$
with
$\mu+1=\log\left(\sum_X \exp\left(-\lambda A_x\right)\right).$
In the stationary situation $\mu$ can, at least formally, be interpreted as a zero-point quantropy. Albeit the zero-point (ground state) is not physical with its classicality 0 ($\mu=1+\lim_{\lambda\rightarrow 0} Q$).
Let now $A_x$ be the classical action associated with a history $x$
$A_x=\int_0^t\frac{m}{2}\left(\frac{d x}{d s}\right)^2-V(x(s))ds$
where $m$ is the mass of a particle and $V$ its potential energy. Feynman’s heuristic expression for the transition amplitude of the particle then is
$\psi(0,t,u,v) = K \int_{C^{0,t}_{u,v}}\exp\left(\frac{i}{\hbar}A_x\right){\cal D}x.$
It is tempting to define the transition amplitude of quantropy in the stationary situation as
$\psi(0,t,u,v) = L - L K \int_{C^{0,t}_{u,v}}A_x\exp\left( K A_x\right){\cal D}x.$
for some suitable constants $L,K \in \mathbb{C}$. This is not completely satisfactory from a foundational perspective; it might however be helpful in delivering some first examples. There are essentially two strategies to make sense of the above path integrals. One can apply the Trotter-Kato product formula or (due to Kac) do a Wick rotation and analytically extend a Wiener integral to the imaginary axis. Both ways are cluttered with technicalities and thus, as a first approach, one could use a heuristic originally due to Feynman. He approximates continuous paths by polygonal paths with finitely many edges and uses a limit argument. As far as I can see one might encounter some difficulties with the domain of the action that are not present in Feynman's situation; however, that seems to be more manageable than trying to define a logarithm of the amplitudes as requested in the formal definition of quantropy.
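As a complement to the formal manipulations above, here is a minimal numerical sketch of the finite-$X$ case. It is an added illustration, not part of the thread: the action values are made up, $\hbar$ is set to $1$, the amplitudes are taken to be $a_x = e^{-\lambda A_x}/Z$ with $\lambda = 1/(i\hbar)$, and the principal branch of the complex logarithm is used, which is one concrete way of handling the branch choice mentioned earlier.

```python
import numpy as np

hbar = 1.0                            # work in units with hbar = 1
lam = 1.0 / (1j * hbar)               # Lagrange multiplier lambda = 1/(i*hbar)

A = np.array([0.0, 0.7, 1.3, 2.1])    # made-up actions for four histories
weights = np.exp(-lam * A)
Z = weights.sum()                     # 'partition function'
a = weights / Z                       # complex amplitudes with sum(a) = 1

Q = -(a * np.log(a)).sum()            # quantropy, using the principal branch of log
print("sum of amplitudes:", a.sum())  # 1 (up to rounding)
print("quantropy Q:", Q)              # a complex number in general
```

Since $Q$ comes out complex, 'stationary point' rather than 'maximum' is the appropriate notion, as discussed elsewhere in this thread.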
• John Baez says:
Uwe wrote:
… one could use a heuristic originally due to Feynman. He approximates continuous paths by polygonal paths with finitely many edges and uses a limit argument.
Interesting that you say that: I’ve done a calculation like this, and I’ll present it in two blog posts here!
There are certainly lots of technical issues of mathematical rigor to consider. However, I think it’s even more important at this stage to get some physical intuition for quantropy. If Feynman had worried a lot about rigor we might never have gotten Feynman path integrals.
• Uwe Stroinski says:
Excellent. I will certainly read this upcoming blog article.
26. In my first post in this series, we saw that rounding off a well-known analogy between statistical mechanics and quantum mechanics requires a new concept: ‘quantropy’. To get some feeling for this concept, we should look at some examples. But to do that, we need to develop some tools to compute quantropy. That’s what we’ll do today.
27. [...] Go check out John Baez on the remarkable analogies between statistical mechanics and quantum mechanics… and the idea of quantropy. [...]
28. Garrett says:
John, if you’d like to probe the depths of complex ignorance, I can help, or unhelp, depending on point of view.
• John Baez says:
Okay. I guess my main point is that I don’t see any difference between my ‘quantropy’ and your ‘entropy of a complex probability distribution’—except for words and perhaps motivation. As far as I can tell, they’re equal. I talk about finding a ‘critical point’ of quantropy given a constraint on the expected action, while you seem to talk about ‘maximizing complex ignorance’. I don’t know what it means to ‘maximize’ a complex-valued function; ‘critical point’ seems like the mathematically correct term here—but in terms of what you actually do, it seems to be the same thing I do.
But maybe I’m wrong. Can we throw out the words for a bit and focus on the math, and see what if any difference there is between our procedures?
• amarashiki says:
Dear John. Maybe you and your readers could be interested in my post on Entropy in my blog. Comments, possible mistakes and suggestions are welcome:
http://thespectrumofriemannium.wordpress.com/2012/02/07/log003-entropy/
• Garrett says:
John,
There is an important difference, and not just with words. You are primarily dealing with the amplitude, $a$, while I am primarily dealing with the probability distribution, $p$, which I allow to be complex. You are inventing a new functional, quantropy, $\int a ln(a)$, while I am extremizing the usual entropy, or Ignorance, $\int p ln(p)$, extended for complex $p$. One should raise an eyebrow at a complex probability. But under usual circumstances (a time independent Lagrangian), one gets $p=\psi^* \psi$ for some amplitude, $\psi$. The probabilities are real for observable outcomes. I'm not sure yet precisely how our two formulations are related, though they're quite close. My formulation follows the principle of extremized entropy directly, while yours gives a more direct route to the amplitude, so I'm not sure which is better. I wonder if there's a way to differentiate the two formulations as matching up with known physics or not.
• John Baez says:
Garrett wrote:
There is an important difference, and not just with words. You are primarily dealing with the amplitude, $a$, while I am primarily dealing with the probability distribution, $p$, which I allow to be complex. You are inventing a new functional, quantropy, $\int a ln(a)$, while I am extremizing the usual entropy, or Ignorance, $\int p ln(p)$, extended for complex $p$.
As far as I can tell, a lot of these differences are just words. Let me use slightly different words to say the same thing:
I am primarily dealing with a complex-valued function $p$ satisfying
$\int p = 1$
while you are primarily dealing with a complex-valued function $p$ satisfying
$\int p = 1$
I am finding critical points of a function I call ‘quantropy’
$\int p ln(p)$
while you are finding critical points of the function you call the ‘usual entropy extended for complex $p$‘:
$\int p ln(p)$
These seem like suspiciously similar activities, no?
However, there seems to be some difference, because when I find my critical point, $p$ is not real-valued! So, maybe we’re finding critical points subject to different constraints, or something.
(By the way, I refuse to talk about ‘extremizing’ a complex-valued quantity, because ‘extremizing’ means ‘maximizing or minimizing’, and this is customarily used only for real-valued quantities. However the concept of ‘finding a critical point’—finding a place where the derivative of some quantity is zero—still makes sense for complex-valued quantities, and I believe that’s what you’re doing. But if you want to call this ‘extremizing’, I don’t really care too much, as long as I know what you’re doing.)
• Garrett says:
Heh. They are the same up to where you apparently stopped reading! But I’m calculating a probability, which happens to be equal to an amplitude times its conjugate under usual circumstances, $p = \psi^* \psi$. This is NOT the case for what you’re calculating, which is an amplitude, $a$, that can be used to calculate probabilities. Maybe we call it criticalizing?
• John Baez says:
So, I guess I need to figure out what you're ‘criticizing’ (that's my own jokey phrase for it), and what constraints you're imposing, which gives an answer of the form
$p = \psi^* \psi$
It seems we’re both criticizing the exact same thing:
$\int p \ln p$
where $p$ is a complex function on the set of paths constrained to have
$\int p = 1$
But I’m imposing a further constraint on the expected action:
$\int p A = \mathrm{const}$
where $A$ is the action of a path. And, I’m considering solutions where the Lagrange multiplier for this constraint is
$\lambda = 1/i \hbar$
This does not give me a real answer for $p$. It gives me the usual path integral prescription
$\displaystyle{ p = \frac{\exp(i A/ \hbar)}{Z} }$
where $Z$ is a number chosen to ensure $\int p = 1$.
What assumptions are you using, to get $p$ real? I know, I should reread your paper!
• Garrett says:
I also impose the constraint $\int p S = \mathrm{const}$, and obtain the same expression as you for $p$. Since my $p$ is a probability, this constraint has the physical interpretation of a universal action reservoir. This probability, $p$, of a path is in general complex, but when we calculate a physical probability we find a real result. An example is the probability of a particle being seen at point $q'$ at time $t'$. If the Lagrangian is time independent then the action of a path coming to this point from the past will be the negative of the action of a path leaving this point into the future, so the probability factors into two multiplied parts, $\psi$ and its conjugate. In this way, the amplitude of paths from the past converging on $q'$ at $t'$ is defined as
$\psi(q',t') = \frac{1}{\sqrt{Z}} \int_{q(t')=q'} Dq \, e^{-\alpha S^{t'}}$
If you like, you can think of this as saying that the probability of seeing a particle at a point is the amplitude of paths coming to that point from the past times the amplitude of paths leaving that point into the future, which is the conjugate provided the dynamics is time reversible.
• Garrett says:
Ah, lovely, perhaps this will be more parsimonious:
$\psi(q',t') = \frac{1}{\sqrt{Z}} \int_{q(t')=q'} Dq \, e^{-\alpha S^{t'}}$
• John Baez says:
Okay, thanks Garrett. I’m now convinced that you’re doing exactly what I’m doing, except:
1) you’re assuming the action of each path coming into a point from the past is the negative of the action of some path leaving this point into the future—or more precisely, the integral of the action over all paths going through that point is zero.
2) you’re using different words to describe what you’re doing,
3) you did it first.
• Garrett says:
Hmm, OK, but what I'd like to convince you of is that the probability, $p$, is a different animal, and directly criticizing the complex ignorance is different from criticizing the quantropy. Also, I don't need to assume that the probability factors as $p=\psi^* \psi$, but it's nice that it does in usual cases. Also, it would be neat if someone could figure out the relationship between your path amplitude and my probability, and between the complex ignorance and quantropy, as I'm not sure precisely how they're related.
• John Baez says:
That’s what I’m trying to figure out: how they’re related. So far it seems that mathematically they are identical except that at some point you impose the further assumption that ‘the action of a path coming to this point from the past will be negative of the action of a path leaving this point into the future’. This is why I’m trying to strip away the verbiage and look at just the math. I don’t always do that, but right now I’m trying to spot a mathematical difference, and I haven’t seen one.
• Garrett says:
That assumption is needed to show that, in that case, the probability can be written as the product of an amplitude and its conjugate. The probability derivation is fine without that assumption though.
• Garrett says:
I was reflecting this morning on what I think is the crux of the matter: "Why is the probability of a measured event equal to a squared quantum amplitude?" In the usual approach, one constructs or derives (as you do) the quantum amplitude, and then blithely squares it to get the probability. What I've tried to do is start with the fact that we're dealing with a probability distribution, used MaxEnt to derive what it should be, and then show it's the square of an amplitude. Although the two approaches are mathematically similar, I like being able to answer the question of why $p = \psi^* \psi$.
29. Jim says:
One tangible advantage of calling the numbers at issue “probabilities”, as opposed to “amplitudes”, may be that the former opens up the possibility of deriving the Born rule (as Garrett seems to do in the 2nd to last equation of his paper arXiv:physics/0605068v1), instead of having to postulate it.
• Garrett says:
Thanks Jim, that’s right. And it’s not just a difference of what we call things, but what we do with them. John needs to square his amplitude to get a probability, whereas my probability happens to factor into an amplitude squared.
30. daniel tung says:
Interesting… I suspect quantum mechanics and statistical mechanics have a deeper analogy besides the mathematical one. I wrote an article a few years back: http://arxiv.org/abs/0712.1634
The idea: Quantum observables are analogous to Thermodynamic quantities
31. Scott says:
Hi There,
I don’t know if this article is still active (given its been almost a year). But, I found your analysis of quantropy quite interesting. It seems to have a context in Schwinger’s variational principle and the associated quantum effective action. For example, in a following post you express the quantropy exclusively in terms of $\ln Z$ where $Z$ is the partition function. Well, the quantum effective action is similarly expressed in terms of $\ln Z$ but as the Legendre transform wrt the external source field.
So, have you had a chance to look at the quantropy in the context of Schwinger’s variational principle and/or the quantum effective action (which relates to the Schwinger Dyson equations)? It seems these formulations are related.
Cheers,
Scott
• John Baez says:
I haven’t had a chance to look at quantropy since my last post here on the subject. But now that I’m back at U. C. Riverside accruing grad students, I’d like to write a paper on it… and your idea sounds very very helpful. I’m not familiar with Schwinger’s variational principle, but I’ve certainly seen quantum field theory calculations that use $\ln Z$ and take derivatives with respect to external source fields. So, I should expand my horizons a bit and connect this quantropy idea with those other ideas. They’re all part of a package of ideas that work both for quantum theory and statistical mechanics.
• Garrett says:
Good to hear that these ideas were not dead but only sleeping.
32. Scott says:
I was thinking a bit more about all of this and had a couple additional thoughts:
1. Supposedly, Schwinger was motivated to formulate his variational principle ($\delta \langle b|a\rangle = i \langle b|\,\delta S\,|a\rangle$, which describes the variation of the transition amplitude between the states $|a\rangle$ and $|b\rangle$ in terms of the classical action $S$) as the dynamical principle of quantum mechanics, inspired by Feynman's path integral. Both Schwinger's variational principle and Feynman's path integral can be used to derive Schrödinger's equation, so they are alternate formulations of quantum mechanics that use the classical action. Bryce DeWitt advocated that in fact Feynman's path integral is the $\it solution$ of Schwinger's variation (which itself was expressive of the Peierls bracket). When I saw your derivation of Feynman's path integral from a stationary principle it reminded me of Schwinger's variational principle, because I know that that principle allows a sort of reconstruction of the path integral from the stationary principle. Hence the variation of the quantropy should somehow be related to Schwinger's formulation. It doesn't seem to be a trivial relation since the formulations, though similar, are quite different.
2. I was also reminded how in his ‘Statistical Physics’ text Feynman endorses the partition function and describes how in Stat Mech everything ‘builds up to or descends from it’. I suppose because the quantum path integral has its own partition function interpretation, similar arguments are applicable?
Very interesting stuff! Looking forward to thinking and learning more about all of this!
Cheers,
Scott
• Scott says:
Sorry, having trouble with iPad. Should read $\delta \langle b\|a\rangle= i \langle b \| \delta S \| a \rangle$ for Schwinger's variation
http://mathhelpforum.com/calculus/86803-need-some-help-method.html | # Thread:
1. ## Need some help with the method
I've got a curve: y = 4x^3 - 24x^2 + 45x - 23
A tangent is drawn at the point P(1, 2).
dy/dx = 12x^2 - 48x +45
At x = 1: f'(1) = 12 - 48 + 45 = 9
2 = 9(1) + c, so c = -7
y = 9x - 7
Show that the x coordinate of point q is 4.
9x-7 = 4x^3 - 24x^2 + 45 x - 23
0 = 4x^3 - 24x^2 + 36 x - 16
I found x = 4 using polyrootfinder on my gdc and was wondering if there is any other way of coming to that answer?
I imagine $q$ is the point where the tangent crosses the curve again, isn't it?
Your method is good, but you can solve the equation if you notice that $x=1$ is a solution, then factor out $(x-1)$.
That is correct. I see what you mean, however I'm quite unfamiliar with factorisation of 3rd degree polynomials (I'm still a middle school student :S). Would you care to elaborate?
4. Of course !
$P(x)=2x^3-12x^2+18x-8$
You have to test some values such as 1, 2, 3, -1, -2, -3.
Indeed here $P(1)=0$, so we can factor out $(x-1)$:
$P(x)=(x-1)(ax^2+bx+c)$
which we expand: $P(x)=ax^3+(b-a)x^2+(c-b)x-c$
And we identify each coefficient $\left\{\begin{array}{l}a=2\\b-a=-12\\c-b=18\\c=8\end{array}\right.$
After resolution we get : $\left\{\begin{array}{l}a=2\\b=-10\\c=8\end{array}\right.$
Hence $P(x)=(x-1)(2x^2-10x+8)$, and you know how to factorise 2nd degree polynomials.
Is my english correct ?
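For readers who want to check the factorisation by machine, here is a short SymPy sketch (added for illustration, not part of the original thread):

```python
from sympy import symbols, factor, solve, expand

x = symbols('x')
curve = 4*x**3 - 24*x**2 + 45*x - 23
tangent = 9*x - 7

difference = expand(curve - tangent)   # 4*x**3 - 24*x**2 + 36*x - 16
print(factor(difference))              # 4*(x - 4)*(x - 1)**2
print(solve(difference, x))            # [1, 4]
```

The double root at $x=1$ is the point of tangency $P$, and the remaining simple root $x=4$ is the $x$-coordinate of $q$.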
http://en.wikipedia.org/wiki/Scalar_potential | # Scalar potential
This article is about a general description of a function used in mathematics and physics to describe conservative fields. For the scalar potential of electromagnetism, see electric potential. For all other uses, see potential.
Scalar potential, simply stated, describes the situation where the difference in the potential energies of an object in two different positions depends only on the positions, not upon the path taken by the object in traveling from one position to the other. It is a scalar field in three-space: a directionless value (scalar) that depends only on its location. A familiar example is potential energy due to gravity.
gravitational potential well of an increasing mass where $\mathbf{F} = -\nabla P$
A scalar potential is a fundamental concept in vector analysis and physics (the adjective scalar is frequently omitted if there is no danger of confusion with vector potential). The scalar potential is an example of a scalar field. Given a vector field F, the scalar potential P is defined such that:
$\mathbf{F} = -\nabla P = - \left( \frac{\partial P}{\partial x}, \frac{\partial P}{\partial y}, \frac{\partial P}{\partial z} \right),$[1]
where ∇P is the gradient of P and the second part of the equation is minus the gradient for a function of the Cartesian coordinates x,y,z.[2] In some cases, mathematicians may use a positive sign in front of the gradient to define the potential.[3] Because of this definition of P in terms of the gradient, the direction of F at any point is the direction of the steepest decrease of P at that point, its magnitude is the rate of that decrease per unit length.
In order for F to be described in terms of a scalar potential only, the following have to be true:
1. $-\int_a^b \mathbf{F}\cdot d\mathbf{l} = P(\mathbf{b})-P(\mathbf{a})$, where the integration is over a Jordan arc passing from location a to location b and P(b) is P evaluated at location b .
2. $\oint \mathbf{F}\cdot d\mathbf{l}=0$, where the integral is over any simple closed path, otherwise known as a Jordan curve.
3. ${\nabla}\times{\mathbf{F}} =0.$
The first of these conditions represents the fundamental theorem of the gradient and is true for any vector field that is a gradient of a differentiable single valued scalar field P. The second condition is a requirement of F so that it can be expressed as the gradient of a scalar function. The third condition re-expresses the second condition in terms of the curl of F using the fundamental theorem of the curl. A vector field F that satisfies these conditions is said to be irrotational (Conservative).
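As a concrete check of the third condition, the following SymPy sketch (an added example; the potential $P$ is chosen arbitrarily) verifies symbolically that the curl of a gradient field vanishes:

```python
from sympy import symbols, diff, sin, exp, simplify

x, y, z = symbols('x y z')
P = x**2 * y + sin(z) * exp(y)      # an arbitrary scalar potential

# F = -grad P
Fx, Fy, Fz = -diff(P, x), -diff(P, y), -diff(P, z)

# curl F, component by component
curl = (diff(Fz, y) - diff(Fy, z),
        diff(Fx, z) - diff(Fz, x),
        diff(Fy, x) - diff(Fx, y))

print([simplify(c) for c in curl])   # [0, 0, 0]
```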
Scalar potentials play a prominent role in many areas of physics and engineering. The gravity potential is the scalar potential associated with the gravity per unit mass, i.e., the acceleration due to the field, as a function of position. The gravity potential is the gravitational potential energy per unit mass. In electrostatics the electric potential is the scalar potential associated with the electric field, i.e., with the electrostatic force per unit charge. The electric potential is in this case the electrostatic potential energy per unit charge. In fluid dynamics, irrotational lamellar fields have a scalar potential only in the special case of a Laplacian field. Certain aspects of the nuclear force can be described by a Yukawa potential. The potential plays a prominent role in the Lagrangian and Hamiltonian formulations of classical mechanics. Further, the scalar potential is the fundamental quantity in quantum mechanics.
Not every vector field has a scalar potential. Those that do are called conservative, corresponding to the notion of conservative force in physics. Examples of non-conservative forces include frictional forces, magnetic forces, and, in fluid mechanics, a solenoidal velocity field. By the Helmholtz decomposition theorem, however, all vector fields can be described in terms of a scalar potential and a corresponding vector potential. In electrodynamics the electromagnetic scalar and vector potentials are known together as the electromagnetic four-potential.
## Integrability conditions
If F is a conservative vector field (also called irrotational, curl-free, or potential), and its components have continuous partial derivatives, the potential of F with respect to a reference point $\mathbf r_0$ is defined in terms of the line integral:
$V(\mathbf r) = -\int_C \mathbf{F}(\mathbf{r})\cdot\,d\mathbf{r} = -\int_a^b \mathbf{F}(\mathbf{r}(t))\cdot\mathbf{r}'(t)\,dt,$
where C is a parametrized path from $\mathbf r_0$ to $\mathbf r,$
$\mathbf{r}(t), a\leq t\leq b, \mathbf{r}(a)=\mathbf{r_0}, \mathbf{r}(b)=\mathbf{r}.$
The fact that the line integral depends on the path C only through its terminal points $\mathbf r_0$ and $\mathbf r$ is, in essence, the path independence property of a conservative vector field. The fundamental theorem of calculus for line integrals implies that if V is defined in this way, then $\mathbf{F}= -\nabla V,$ so that V is a scalar potential of the conservative vector field F. Scalar potential is not determined by the vector field alone: indeed, the gradient of a function is unaffected if a constant is added to it. If V is defined in terms of the line integral, the ambiguity of V reflects the freedom in the choice of the reference point $\mathbf r_0.$
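The path-independence property can also be checked numerically. The sketch below is an added example: the potential $V$ and the two paths are arbitrary choices, and the line integral of $\mathbf{F} = -\nabla V$ along both paths comes out equal to $V(\mathbf{a}) - V(\mathbf{b})$ for the endpoints $\mathbf{a}$, $\mathbf{b}$, as the formulas above require.

```python
import numpy as np

def V(r):
    x, y, z = r
    return x**2 * y + z                # example potential

def F(r):                              # F = -grad V, written out by hand
    x, y, z = r
    return np.array([-2*x*y, -x**2, -1.0])

def line_integral(path, n=2000):
    """Midpoint-rule approximation of the line integral of F along path(t), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([path(ti) for ti in t])
    dr = np.diff(pts, axis=0)
    mids = 0.5 * (pts[:-1] + pts[1:])
    return float(sum(np.dot(F(m), d) for m, d in zip(mids, dr)))

a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 3.0])
straight = lambda t: (1 - t) * a + t * b
wiggly   = lambda t: (1 - t) * a + t * b + np.array([0.0, np.sin(np.pi * t), 0.0])

print(line_integral(straight), line_integral(wiggly))   # both ~ V(a) - V(b) = -5
```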
## Altitude as gravitational potential energy
uniform gravitational field near the Earth's surface
Plot of a two-dimensional slice of the gravitational potential in and around a uniform spherical body. The inflection points of the cross-section are at the surface of the body.
An example is the (nearly) uniform gravitational field near the Earth's surface. It has a potential energy
$U = m g h$
where U is the gravitational potential energy and h is the height above the surface. This means that gravitational potential energy on a contour map is proportional to altitude. On a contour map, the two-dimensional negative gradient of the altitude is a two-dimensional vector field, whose vectors are always perpendicular to the contours and also perpendicular to the direction of gravity. But on the hilly region represented by the contour map, the three-dimensional negative gradient of U always points straight downwards in the direction of gravity; this is the force F. However, a ball rolling down a hill cannot move directly downwards due to the normal force of the hill's surface, which cancels out the component of gravity perpendicular to the hill's surface. The component of gravity that remains to move the ball is parallel to the surface:
$F_S = - m g \ \sin \theta$
where θ is the angle of inclination, and the component of FS perpendicular to gravity is
$F_P = - m g \ \sin \theta \ \cos \theta = - {1 \over 2} m g \sin 2 \theta.$
This force FP, parallel to the ground, is greatest when θ is 45 degrees.
Let Δh be the uniform interval of altitude between contours on the contour map, and let Δx be the distance between two contours. Then
$\theta = \tan^{-1}\frac{\Delta h}{\Delta x}$
so that
$F_P = - m g { \Delta x \, \Delta h \over \Delta x^2 + \Delta h^2 }.$
However, on a contour map, the gradient is inversely proportional to Δx, which is not similar to force FP: altitude on a contour map is not exactly a two-dimensional potential field. The magnitudes of forces are different, but the directions of the forces are the same on a contour map as well as on the hilly region of the Earth's surface represented by the contour map.
## Pressure as buoyant potential
In fluid mechanics, a fluid in equilibrium, but in the presence of a uniform gravitational field is permeated by a uniform buoyant force that cancels out the gravitational force: that is how the fluid maintains its equilibrium. This buoyant force is the negative gradient of pressure:
$\mathbf{f_B} = - \nabla p. \,$
Since buoyant force points upwards, in the direction opposite to gravity, then pressure in the fluid increases downwards. Pressure in a static body of water increases proportionally to the depth below the surface of the water. The surfaces of constant pressure are planes parallel to the ground. The surface of the water can be characterized as a plane with zero pressure.
If the liquid has a vertical vortex (whose axis of rotation is perpendicular to the ground), then the vortex causes a depression in the pressure field. The surfaces of constant pressure are parallel to the ground far away from the vortex, but near and inside the vortex the surfaces of constant pressure are pulled downwards, closer to the ground. This also happens to the surface of zero pressure. Therefore, inside the vortex, the top surface of the liquid is pulled downwards into a depression, or even into a tube (a solenoid).
The buoyant force due to a fluid on a solid object immersed and surrounded by that fluid can be obtained by integrating the negative pressure gradient along the surface of the object:
$F_B = - \oint_S \nabla p \cdot \, d\mathbf{S}.$
A moving airplane wing makes the air pressure above it decrease relative to the air pressure below it. This creates enough buoyant force to counteract gravity.
## Calculating the scalar potential
Given a vector field E, its scalar potential Φ can be calculated to be
$\phi(\mathbf{R_0}) = {1 \over 4 \pi} \int_\tau {\nabla \cdot \mathbf{E}(\tau) \over \| \mathbf{R}(\tau) - \mathbf{R_0} \|} \, d\tau$
where τ is volume. Then, if E is irrotational (Conservative),
$\mathbf{E} = -\nabla \phi = - {1 \over 4 \pi} \nabla \int_\tau {\nabla \cdot \mathbf{E}(\tau) \over \| \mathbf{R}(\tau) - \mathbf{R_0} \|} \, d\tau.$
This formula is known to be correct if E is continuous and vanishes asymptotically to zero towards infinity, decaying faster than 1/r and if the divergence of E likewise vanishes towards infinity, decaying faster than 1/r2.
## References
1. Herbert Goldstein. Classical Mechanics (2 ed.). pp. 3–4. ISBN 978-0-201-02918-5.
2. See [1] for an example where the potential is defined without a negative. Other references such as Louis Leithold, The Calculus with Analytic Geometry (5 ed.), p. 1199 avoid using the term potential when solving for a function from its gradient.
http://physics.stackexchange.com/questions/44541/thermal-expansion-of-sphere | # Thermal expansion of Sphere
How would one go about writing an expression of the expansion of the volume of a sphere of a given material? I noticed a few sources give it as
$\Delta V= 3\gamma V\Delta T$
where V is the initial volume; $\gamma$ is the expansivity coefficient and $\Delta T$ is change in temperature of sphere.
Other texts leave out the 3, but with everything else the same.
Any suggestions?
## 1 Answer
Other texts leave out the 3, but with everything else the same.
Presumably because ...
"For exactly isotropic materials, and for small expansions, the linear thermal expansion coefficient is one third the volumetric coefficient."
How small are we talking? I'm looking at the change in sea level by thermal expansion and so the change in V is kinda big. The section you linked to talks about using it for a cube, not a sphere. Does the shape make a difference? – kuantumbro Nov 18 '12 at 22:46
@kuantumbro No, the shape doesn't matter. The link given here explains where the factor of 3 emerges from. If you understand the mathematics behind that, you should be able to resolve the discrepancy between books that you ran into (look at the coefficient's definition, units, so on). You could even use that link's equations to give numerical metrics for "how small" of a change the approximation is valid within your desired tolerance. – AlanSE Nov 19 '12 at 0:57
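To put a number on "how small", here is a quick added sketch (the coefficient below is just an illustrative value, not a property of any particular material) comparing the exact fractional volume change of an isotropic body with the linearized one-third rule:

```python
# Exact fractional volume change (1 + a*dT)**3 - 1 versus the linearized 3*a*dT,
# where a is the *linear* expansion coefficient.
a = 2e-5   # illustrative linear expansion coefficient, in 1/K (assumed value)

for dT in (1, 10, 100, 1000):
    exact = (1 + a * dT) ** 3 - 1
    approx = 3 * a * dT
    print(f"dT = {dT:4d} K   exact = {exact:.6e}   3*a*dT = {approx:.6e}   "
          f"relative error = {(approx - exact) / exact:+.2%}")
```

The relative error is of order $a\,\Delta T$, so for ordinary solid or liquid expansions the factor-of-3 relation is excellent; for large fractional volume changes the full volumetric coefficient should be used directly.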
http://mathhelpforum.com/discrete-math/53130-solved-i-need-help-proving-cross-product-2-countable-sets-countable.html | # Thread:
1. ## [SOLVED] I need help proving that the cross product of 2 countable sets is countable.
Statement to prove:
If A and B are countable sets, prove A x B is countable.
My work so far:
I've thought of 2 ways to approach proving this.
(1) I read that "Every subset of a countable set is again countable."
So my first choice of proving this would be to state that it's given A and B are countable sets. Okay, so now A and B are subsets of A x B (would I have to prove that, or is it known already?). And then from there I can say that since A and B are countable and subsets of A x B, then A x B itself is countable by that statement that every subset of a countable set is again countable.
(2) The other way I thought of proving this was:
Let A and B be countable sets. This means A ~ N and B ~ N (N the set of naturals). A ~ N implies there is a bijection from A to N, and B ~ N implies there is a bijection from B to N. So now I have to show there is a bijection from A x B to N, right?
This is where I got stuck. There is a theorem that says N x N is countable, but I didn't think that helped for this particular proof since sets A and B are not specifically defined except that they are countable.
I did find this proof online, but I found it hard to understand with the notations and some of the wording.
Any help is greatly appreciated. Thank you for your time!
2. Originally Posted by ilikedmath
Statement to prove:
If A and B are countable sets, prove A x B is countable.
My work so far:
I've thought of 2 ways to approach proving this.
(1) I read that "Every subset of a countable set is again countable."
So my first choice of proving this would be to state that it's given A and B are countable sets. Okay, so now A and B are subsets of A x B (would I have to prove that, or is it known already?). And then from there I can say that since A and B are countable and subsets of A x B, then A x B itself is countable by that statement that every subset of a countable set is again countable.
no, this is wrong. it is like you are using the converse of an implication that is not true. for example, this proof could show that the real numbers are countable! since the naturals are a subset of the reals that are countable. be careful of how you read theorems, they are not always true going backwards. the theorem says if you are given a countable set, then all its subsets will be countable, not if given any set that is countable, then any set containing it will be as well.
(2) The other way I thought of proving this was:
Let A and B be countable sets. This means A ~ N and B ~ N (N the set of naturals). A ~ N implies there is a bijection from A to N, and B ~ N implies there is a bijection from B to N. So now I have to show there is a bijection from A x B to N, right?
right.
it suffices to show that $\mathbb{N} \times \mathbb{N}$ is countable (why?)
Now, construct the function $f : \mathbb{N} \times \mathbb{N} \mapsto \mathbb{N}$ given by
$f(m,n) = 2^m \cdot (2n + 1) - 1$
where $(m,n) \in \mathbb{N} \times \mathbb{N}$
now show that this function is a bijection (one-to-one and onto)
an alternate way would be to draw a grid (which is usually a nice way to visualize a cross-product of countable sets). where the elements of one set are the first column and the elements of the other form the first row, then the cells in between are the ordered pairs formed from the elements of their respective rows and columns. find a way to traverse this grid to come up with a function
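A quick brute-force check of that map (an added illustration, taking $\mathbb{N}$ to include $0$ so that the formula is onto as well as one-to-one):

```python
# f(m, n) = 2**m * (2n + 1) - 1: every positive integer factors uniquely as
# (a power of 2) * (an odd number), so f is a bijection from N x N onto N.
vals = [2**m * (2*n + 1) - 1 for m in range(64) for n in range(64)]

print(len(set(vals)) == 64 * 64)                               # True: no collisions on this grid
print(sorted(v for v in vals if v < 100) == list(range(100)))  # True: every value 0..99 is hit
```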
3. Originally Posted by Jhevon
no, this is wrong. it is like you are using the converse of an implication that is not true. for example, this proof could show that the real numbers are countable! since the naturals are a subset of the reals that are countable. be careful of how you read theorems, they are not always true going backwards. the theorem says if you are given a countable set, then all its subsets will be countable, not if given any set that is countable, then any set containing it will be as well.
right.
it suffices to show that $\mathbb{N} \times \mathbb{N}$ is countable (why?)
Now, construct the function $f : \mathbb{N} \times \mathbb{N} \mapsto \mathbb{N}$ given by
$f(m,n) = 2^m \cdot (2n + 1) - 1$
where $(m,n) \in \mathbb{N} \times \mathbb{N}$
now show that this function is a bijection (one-to-one and onto)
Okay, thanks for pointing out my error in reading the converse of the theorem. I need to be more careful.
So since "N x N is countable" was already proven as a theorem, I don't need to prove it again,right? I can use that statement to justify that A x B ~ N since A x B goes to N because A goes to N and B goes to N?
4. Originally Posted by ilikedmath
So since "N x N is countable" was already proven as a theorem, I don't need to prove it again,right?
well, it would be reinventing the wheel if you did. i guess what you have to do, is show how the theorem relates to this problem and why you can use it. once you show that the proof must essentially be the same, then you can just quote the theorem and move on
I can use that statement to justify that A x B ~ N since A x B goes to N because A goes to N and B goes to N?
hmm, not exactly. i was thinking more along the lines of A and N as well as B and N are the same "size". so because the proof doesn't say anything about the particular elements of the sets, but rather how many there are, we can replace the sets with N, since as far as the number of elements go, there is no difference between the sets. or you can think of enumerating the elements of each of the sets with the naturals.
Suppose that $f:A \leftrightarrow \mathbb{Z}^ +$ counts $A$ and $g:B \leftrightarrow \mathbb{Z}^ +$ counts $B$.
Define $\phi : (A \times B) \mapsto \mathbb{Z}^ + ,\;\phi (a,b) = \left( {2^{f(a)} } \right)\left( {3^{g(b)} } \right)$.
By showing that $\phi$ is an injection, you have completed the problem.
It does not have to be a bijection.
If we can map any set injectively into a countable set then that set is countable.
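The same kind of brute-force check works here (an added illustration): since $2$ and $3$ are distinct primes, $2^{i}3^{j}$ determines the pair $(i,j)$ uniquely, which is why $\phi$ is an injection.

```python
# phi(a, b) = 2**f(a) * 3**g(b): distinct exponent pairs give distinct values
# by uniqueness of prime factorization, so phi maps A x B injectively into Z+.
M = 40
codes = [2**i * 3**j for i in range(1, M + 1) for j in range(1, M + 1)]
print(len(set(codes)) == M * M)   # True: no two pairs (i, j) collide
```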
6. ## Rethinking my approach
Thanks for the feedback. I could also go the other direction, right?
Since A and B are countable, that also means N ~ A and N ~ B.
Which means for functions f and g where f: N → A and g: N → B, the functions are both 1-1 and onto. There are bijections from N to A and N to B. So I can show there is a bijection from N to A x B, right? This seems 'easier' in a way rather than the original way I was approaching the proof.
7. Originally Posted by ilikedmath
Thanks for the feedback. I could also go the other direction, right?
Since A and B are countable, that also means N ~ A and N ~ B.
Which means for functions f and g where f: N → A and g: N → B, the functions are both 1-1 and onto. There are bijections from N to A and N to B. So I can show there is a bijection from N to A x B, right? This seems 'easier' in a way rather than the original way I was approaching the proof.
if you can find the bijection, yes. i gave you a bijection and Plato gave you an injection already. your approach would present some unique challenges, i think. it is not impossible though, but i do not think it would be as "easy" as you think. i would follow my grid approach to come up with such a function
8. Originally Posted by ilikedmath
Thanks for the feedback. I could also go the other direction, right? Since A and B are countable, that also means N ~ A and N ~ B. Which means for functions f and g where f: N → A and g: N → B, the functions are both 1-1 and onto. There are bijections from N to A and N to B. So I can show there is a bijection from N to A x B, right? This seems 'easier' in a way rather than the original way I was approaching the proof.
I would like to see how you would define such a bijection.
I think it would be difficult.
Take another at my suggestion.
9. Originally Posted by Jhevon
if you can find the bijection, yes. i gave you a bijection and Plato gave you an injection already. your approach would present some unique challenges, i think. it is not impossible though, but i do not think it would be as "easy" as you think. i would follow my grid approach to come up with such a function
Okay, totally my bad. I got help from another source which told me
that I could use:
f: N to A and g: N to B are bijections so consider h: N to A x B where
h(n) = (f(n), g(n)). And show that h is a bijection from N to A x B.
Is that a possible route to go to approaching this proof?
10. Originally Posted by ilikedmath
Okay, totally my bad. I got help from another source which told me
that I could use:
f: N to A and g: N to B are bijections so consider h: N to A x B where
h(n) = (f(n), g(n)). And show that h is a bijection from N to A x B.
Is that a possible route to go to approaching this proof?
yes, that would work. a very nice approach!
i don't suppose you have problems showing that you have a bijection here? there is a little snag about showing it is onto that's bothering me, but it's probably nothing
11. Originally Posted by Jhevon
yes, that would work. a very nice approach!
i don't suppose you have problems showing that you have a bijection here? there is a little snag about showing it is onto that's bothering me, but it's probably nothing
We did something similar to this in class except the claim was:
If A ~ X and B ~ Y, then (A x B) ~ (X ~ Y).
A ~ X implied g: A to X is a bijection.
B ~ Y implied h: B to Y is a bijection.
We needed (A x B) ~ (X ~ Y) which implied f: (A x B) to (X ~ Y) a bijection.
Consider f(a, b) = (g(a), h(b))
1-1: Assume f(a1, b1) = f(a2, b2).
That implies (g(a1), h(b1)) = (g(a2), h(b2)). For ordered pairs to be equal the 1st elements must be equal and the 2nd elements are equal. That is:
g(a1) = g(a2) and h(b1) = h(b2)
Since g is 1-1 then a1 = a2. Since h is 1-1 then b1 = b2.
So (a1, b1) = (a2, b2) and f is 1-1.
Onto: Let (x, y) be in X x Y. g is onto which implies there is an a in A such that g(a) = x. h is onto which implies there is a b in B such that h(b) = y. Therefore (a, b) in A x B and f(a, b) = (g(a), h(b)) = (x, y). So f is onto.
Thus f is a bijection and (A x B) ~ (X ~ Y).
---
That was the proof we did in class. So now I'm trying to "see" how I can use that for this proof.
12. Originally Posted by ilikedmath
We did something similar to this in class except the claim was:
If A ~ X and B ~ Y, then (A x B) ~ (X ~ Y).
what does that mean? (A x B) ~ (X ~ Y) ?? what is X ~ Y describing?
1-1: Assume f(a1, b1) = f(a2, b2).
That implies (g(a1), h(b1)) = (g(a2), h(b2)). For ordered pairs to be equal the 1st elements must be equal and the 2nd elements are equal. That is:
g(a1) = g(a2) and h(b1) = h(b2)
Since g is 1-1 then a1 = a2. Since h is 1-1 then b1 = b2.
So (a1, b1) = (a2, b2) and f is 1-1.
yup, proving injectivity for your problem is pretty much identical to this proof. so no worries here.
Onto: Let (x, y) be in X x Y. g is onto which implies there is an a in A such that g(a) = x. h is onto which implies there is a b in B such that h(b) = y. Therefore (a, b) in A x B and f(a, b) = (g(a), h(b)) = (x, y). So f is onto.
this is the proof i thought of to prove surjectivity. but as you can see, there is a snag. a might not be equal to b in this proof. so our function h(n) = (f(n), g(n)) might not work.
and i don't think defining $h : \mathbb{N} \mapsto A \times B$ by $h(n) = (f(n), g(m))$ for some $m \in \mathbb{N}$ works either
hmmm
EDIT: hey, maybe $h(n) = (f(n + j), g(n + k))$ for some $j,k \in \mathbb{Z}$ works! i think so. yeah, i think that does it
of course, you would have to say specifically how you would pick j and k. but i think you can figure that out
http://math.stackexchange.com/questions/312107/find-point-x-such-that-line-through-plane-e-and-sphere-s-meet-at-0-0-1 | # Find point $X$ such that line through plane $E$ and sphere $S$ meet at $(0,0,1)$ (stereographic projection)
Find the point $X$ such that the line going through the plane $E$ and sphere $S$ meet at the point $(0,0,1)$ (stereographic projection).
Let $S$ denote the unit sphere
$$S = \{(x,y,z) \in \mathbb{R}^3 \mid x^2 + y^2 + z^2 = 1\}$$
and $E$ denote the plane in $\mathbb{R}^3$ given by $z = 0$
$$E = \{(x,y,z) \in \mathbb{R}^3 \mid z = 0\}.$$
If $(u,v,0)$ is a point of $E$ then the line joining $(u,v,0)$ to $(0,0,1)$ meets $S$ in a point other than $(0,0,1)$. Denote this point by $X(u,v)$.
1) Compute the formula for $X$. [HINT: Any point on the line joining $(u,v,0)$ and $(0,0,1)$ is of the form $\lambda \cdot (u,v,0) + (1 - \lambda) \cdot (0,0,1)$ for some $\lambda \in \mathbb{R}$. We need to determine such $\lambda_0 \in \mathbb{R}$ that $X(u,v) = \lambda_0 \cdot (u,v,0) + (1 - \lambda_0) \cdot (0,0,1)$ lies on $S$.]
For this bit, I said using the fact that
$$\pmatrix{x\\y\\x} = \lambda_0 \cdot \pmatrix{u \\ v \\ 0} + (1 - \lambda_0)\cdot \pmatrix{0 \\ 0 \\ 1},$$
we get
$$x = \lambda_0 \cdot u$$ $$y = \lambda_0 \cdot v$$ $$z = 1 - \lambda_ 0.$$
Putting it in $x^2 + y^2 + z^2 = 1$ and solving gives us $\lambda_0 = 0, \frac{2}{1 + u^2 + v^2}$. When $\lambda_0 = 0$, we get the point $(0,0,1)$, and according to the question the point that meets $S$ but isn't $(0,0,1)$ and so we pick $\lambda_0 = \frac{2}{1 + u^2 + v^2}$. So we end up getting
$$X(u,v) = \left( \frac{2u}{1 + u^2 + v^2}, \frac{2v}{1 + u^2 + v^2}, 1 - \frac{2}{1 + u^2 + v^2}\right).$$
2) Show that the map $X: \mathbb{R}^2 \rightarrow \mathbb{R}^3$ determines a regular surface patch. [HINT: Prove that $X_u \circ X_v = 0$, then show that $a \circ b = 0$ implies that the vectors $a$ and $b$ are linearly independent]
Here $a \circ b$ is the dot product between $a$ and $b$ and $X_u$ is the partial derivative of $X$ with respect to $u$. So the first thing to do was the partial derivatives and I got them to be
$$X_u = \left(\frac{2}{1 + u^2 + v^2} - \frac{4u^2}{(1 + u^2 + v^2)^2}, -\frac{4uv}{(1 + u^2 + v^2)^2}, \frac{4u}{(1 + u^2 + v^2)^2} \right)$$ $$X_v = \left( -\frac{4uv}{(1 + u^2 + v^2)^2}, \frac{2}{(1 + u^2 + v^2)} - \frac{4v^2}{(1 + u^2 + v^2)^2}, \frac{4v}{(1 + u^2 + v^2)^2} \right)$$
but when I then do the dot product, I don't get them to be $0$. I get it to be
$$\left(\frac{2}{1 + u^2 + v^2} - \frac{4u^2}{(1 + u^2 + v^2)^2} \right) \cdot \left( -\frac{4uv}{(1 + u^2 + v^2)^2} \right) + \left(-\frac{4uv}{(1 + u^2 + v^2)^2} \right) \cdot \left(\frac{2}{(1 + u^2 + v^2)} - \frac{4v^2}{(1 + u^2 + v^2)^2} \right) + \left(\frac{4u}{(1 + u^2 + v^2)^2} \right) \cdot \left(\frac{4v}{(1 + u^2 + v^2)^2} \right),$$
which gives me
$$\frac{16u^3v}{(1 + u^2 + v^2)^4} - \frac{8uv}{(1 + u^2 + v^2)^3} + \frac{16v^3u}{(1 + u^2 + v^2)^4} - \frac{8uv}{(1 + u^2 + v^2)^3} + \frac{16uv}{(1 + u^2 + v^2)^4}$$ $$= \frac{16u^3v}{(1 + u^2 + v^2)^4}+ \frac{16v^3u}{(1 + u^2 + v^2)^4} - \frac{16uv}{(1 + u^2 + v^2)^3} + \frac{16uv}{(1 + u^2 + v^2)^4} \neq 0$$
Where am I making my mistake?
3) How much of the sphere is covered by the parametrization $X$?
I also haven't got a clue how to do this bit. Maybe once I sort the first two parts out, I might get an idea.
EDIT: Also, I'm thinking that my $X$ is wrong, as when I do $x^2 + y^2 + z^2$, I don't get it to equal $1$. I'm not sure if it should or not. If it did, then it would lie on the surface of the sphere and not inside it. The question doesn't necessarily say that it needs to lie on the surface, but I thought if I used the constraint $x^2 + y^2 + z^2 = 1$ then I've moved to the point on the line that does lie on the surface, and so I should get $x^2 + y^2 + z^2 = 1$ for my $x,y,z$ coordinates for $X$, right?
## 1 Answer
I do calculate zero: \begin{align} &\frac{16u^3v}{(1 + u^2 + v^2)^4}+ \frac{16v^3u}{(1 + u^2 + v^2)^4} - \frac{16uv}{(1 + u^2 + v^2)^3} + \frac{16uv}{(1 + u^2 + v^2)^4} \\ =&\frac{16u^3v +16uv^3 + 16uv}{(1 + u^2 + v^2)^4} - \frac{16uv(1 + u^2 + v^2)}{(1 + u^2 + v^2)^4} \\ =&\frac{16u^3v +16uv^3 +16uv - 16uv - 16u^3v - 16uv^3}{(1 + u^2 + v^2)^4}\\ =&\frac{0}{(1 + u^2 + v^2)^4}\\ =& 0 \end{align}
Calculating the norm squared of $X(u,v)$: \begin{align} \Vert X(u,v) \Vert^2 &= \frac{4u^2}{(1 + u^2 + v^2)^2} + \frac{4v^2}{(1 + u^2 + v^2)^2}+ \left(\frac{1 + u^2 + v^2}{1 + u^2 + v^2} - \frac{2}{1 + u^2 + v^2}\right)^2 \\ &= \frac{4u^2}{(1 + u^2 + v^2)^2} + \frac{4v^2}{(1 + u^2 + v^2)^2} + \frac{(-1 + u^2 + v^2)^2}{(1 + u^2 + v^2)^2} \\ &= \frac{4n_2 +(- 1 +n_2)^2}{(1 + n_2)^2}\quad\text{(Let $n_2 = u^2 + v^2$)} \\ &= \frac{4n_2 + 1 - 2n_2 + n_2^2}{(1 + n_2)^2} \\ &= \frac{2n_2 + 1 + n_2^2}{1+2n_2 + n_2^2} \\ &= 1 \\ \end{align}
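For a machine sanity check of both computations (an added aside; SymPy is just one convenient tool for it):

```python
from sympy import symbols, Matrix, simplify

u, v = symbols('u v', real=True)
D = 1 + u**2 + v**2
X = Matrix([2*u/D, 2*v/D, 1 - 2/D])

print(simplify(X.dot(X)))        # 1 : X(u, v) lies on the unit sphere
Xu, Xv = X.diff(u), X.diff(v)
print(simplify(Xu.dot(Xv)))      # 0 : the coordinate vectors are orthogonal
```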
Oh dang! I just assumed it would've all cancelled out really easily and didn't think about it properly, rushing too much :( Thanks for the help :) Do you know anything about the third question? – Kaish Feb 23 at 19:37
$x^2 + y^2 + z^2 = 1$ should indeed be true as you say. It should be as it is on the sphere. I imagine it is nice and complicated thus easy to make a mistake, but I will give it a shot in an edit. As far as the parameterization goes, it covers the entire sphere, except for $(0,0,1)$ unless you allow for either $u$ or $v$ to go to infinity. Proving it though, I don't know off-hand, though it should be easy enough. – adam W Feb 23 at 21:19
Edit finished, I do calculate $1$ as well... the key that avoided many possible mistakes there was to substitute $n_2= u^2 + v^2$ so that there were not too many terms to manipulate. – adam W Feb 23 at 21:33
http://mathhelpforum.com/statistics/99691-probability-rolling-die.html | Thread:
1. Probability--Rolling a die
This probably should be very easy but i am not good at probability.
Given a fair die, what is the probability of rolling at least 2 sixes if the die is rolled 12 times.
I have tried a little bit, and this is what I have.
First of all, I know the universe (sample space) has 6^12 equally likely outcomes. And the hint is that it is easier to compute the probability of the event not happening.
If A is the event of getting at least 2 sixes, B is getting 1 or no sixes.
P(B)=?? Therefore, P(A)=1-P(B)
Thanks for any help.
2. Hello, dude15129!
You say you're not good at probability,
. . but you have a good understanding of it.
Your game plan is absolutely correct!
Probability of getting no 6's: . $\left(\frac{5}{6}\right)^{12}$
Probability of getting exactly one 6: . ${12\choose1}\left(\frac{1}{6}\right)\left(\frac{5}{6}\right)^{11}$
Hence: . $P(\text{no 6 or one 6}) \:=\:\left(\frac{5}{6}\right)^{12} + 12\left(\frac{1}{6}\right)\left(\frac{5}{6}\right)^{11} \;=\;\frac{830,\!078,\!125}{2,\!176,\!782,\!336}$
Therefore: . $P(\text{two or more 6s}) \;\;=\;\;1 - \frac{830,\!078,\!125}{2,\!176,\!782,\!336} \;\;=\;\;\frac{1,\!346,\!704,\!211}{2,\!176,\!782, \!336}$
but can you explain the way you got the number of ways to get 1 six? i don't understand the formula/(12 1) thing.
Also, the next part asks about the probability of rolling at least 3 sixes if the die is rolled 18 times. So, following what you did.... it would be....
P(rolling 0,1,2 sixes)= [(5/6)^18 + 18(1/6)(5/6)^17 + 18(1/6)(5/6)^16]/6^18
Thanks.
4. Originally Posted by dude15129
Also, the next part asks about the probability of rolling at least 3 sixes if the die is rolled 18 times. So, following what you did.... it would be....
P(rolling 0,1,2 sixes)= [(5/6)^18 + 18(1/6)(5/6)^17 + 18(1/6)(5/6)^16]/6^18
Are you sure that this is not fully explained in your text material?
If we roll a die 18 times, then the probability of getting exactly $k\text{ sixes },~0\le k \le 18$ is given by the binomial formula.
$\binom{18}{k}\left(\frac{1}{6}\right)^k\left(\frac{5}{6}\right)^{18-k}$
That must be somewhere in your notes. If not, then you are not ready for this problem.
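For readers who want to verify these numbers, here is a small script (my addition, not from the thread) that evaluates the binomial formula exactly for both parts of the problem.

```python
# Check the answers above: P(exactly k sixes in n rolls) = C(n,k) (1/6)^k (5/6)^(n-k).
from fractions import Fraction
from math import comb

def p_at_least(n, m):
    """Probability of at least m sixes in n rolls of a fair die."""
    p_less = sum(Fraction(comb(n, k) * 5**(n - k), 6**n) for k in range(m))
    return 1 - p_less

print(p_at_least(12, 2))   # 1346704211/2176782336, matching post #2
print(p_at_least(18, 3))   # the "at least 3 sixes in 18 rolls" part
```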
5. haha....well actually we haven't had any notes really and our textbook isn't like a real textbook.
but thanks for the help.
http://mathhelpforum.com/calculus/204787-calculate-volume-using-method-cylindrical-shells-urgent.html
# Thread:
1. ## Calculate Volume using the Method of Cylindrical Shells - Urgent
Here are some things I have done. I already found the volume since the other part of the question said to use the method of discs/washers. It is 54.0633 cubic units (correct answer).
I found by trial and error (since this is an online assignment) that a = 9/sqrt(82) and c = 9.
What do I do? Why should there be two integrals, I don't understand.
I have tried the following.
y = 9^x becomes x = lny/ln9
Formula: h = lny/ln9 and r = y.
So I got an integral from 9/sqrt(82) to 9 of [integrand not shown] dy
But that's just one integral and perhaps even incorrect. Please show me what to do.
Thanks
2. ## Re: Calculate Volume using the Method of Cylindrical Shells - Urgent
the sketch of the graph is misleading ...
note that $\frac{9}{\sqrt{82}} < 1$ , so there is a very small region between the curve $y = \frac{9}{\sqrt{x^2+81}}$ and $x=1$ from $y = \frac{9}{\sqrt{82}}$ to $y = 1$.
3. ## Re: Calculate Volume using the Method of Cylindrical Shells - Urgent
To evaluate the volume of your region, you need to do
$\displaystyle \begin{align*} \int_1^9{ \pi \cdot 1^2 \, dy } - \int_1^9{ \pi \cdot \left( \log_9{y} \right)^2 \,dy } + \int_{ \frac{9}{ \sqrt{82} } }^1{ \pi \cdot 1^2 \, dy } - \int_{ \frac{9}{ \sqrt{82} } }^1{ \pi \cdot \left[ \sqrt{ \left( \frac{y}{9} \right)^{ -2 } - 81 } \right]^2 \,dy } &= \pi \int_1^9{ 1 \, dy } - \frac{\pi}{\left( \ln{9} \right)^2} \int_1^9{ \left( \ln{y} \right)^2 \,dy } + \pi \int_{ \frac{9}{\sqrt{82}} }^1{ 1\, dy } - 81\pi \int_{\frac{9}{\sqrt{82}}}^1{ y^{-2} - 1 \, dy } \end{align*}$
The only one of those integrals which may cause you any trouble is the second, so do
$\displaystyle \begin{align*} \int{\left( \ln{y} \right)^2 \, dy} &= \int{ \frac{y\left( \ln{y} \right)^2}{y} \, dy } \\ &= \int{ u^2 \, e^u \, du } \textrm{ after making the substitution } u = \ln{y} \end{align*}$
which you would then evaluate using integration by parts twice. See how you go
4. ## Re: Calculate Volume using the Method of Cylindrical Shells - Urgent
Is this the method of cylindrical shells? I am obliged to use cylindrical shells. Can you use r (radius), h(height)? And why will my answer be in the form of a sum of two integrals (see the question -top of image)?
5. ## Re: Calculate Volume using the Method of Cylindrical Shells - Urgent
using the method of cylindrical shells ...
$V = 2\pi \int_{\frac{9}{\sqrt{82}}}^1 y \left(1 - \frac{9\sqrt{1-y^2}}{y} \right) \, dy + 2\pi \int_1^9 y \left(1 - \frac{\ln{y}}{\ln{9}}\right) \, dy$
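A quick numerical evaluation of these two shell integrals (my own check, not part of the thread, assuming SciPy is available) should reproduce the roughly 54.06 cubic units that the OP reports for the washer method.

```python
# Numerically evaluate the two cylindrical-shell integrals above.
import numpy as np
from scipy.integrate import quad

f1 = lambda y: y * (1 - 9*np.sqrt(1 - y**2) / y)   # small piece, 9/sqrt(82) <= y <= 1
f2 = lambda y: y * (1 - np.log(y) / np.log(9))     # main piece, 1 <= y <= 9

V = 2*np.pi * (quad(f1, 9/np.sqrt(82), 1)[0] + quad(f2, 1, 9)[0])
print(V)   # roughly 54.06, agreeing with the washer answer quoted above
```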
It worked.
http://physics.stackexchange.com/questions/4862/interference-of-polarized-light | # Interference of polarized light
Does polarized light interfere?
-
Yes, it does. Could you be more specific, why do you think it would not? – gigacyan Feb 9 '11 at 14:11
Question is unclear but can be useful. Don't think need to close -- just downvote. – Kostya Feb 10 '11 at 9:36
@Kostya: the problem is, if you down-vote, will you actually return to unvote if the question is improved? I think a closed question is more likely to become "rehabilitated" – Tobias Kienzler Feb 10 '11 at 10:53
## 5 Answers
Let's do some math in order not to be unsubstantiated.
## 1. Perpendicular polarizations.
First wave $E_{1x} = E_0\,\cos\omega t$, second wave $E_{2y} = E_0\,\cos(\omega t+\Delta)$.
where $\Delta$ is the phase difference between the waves.
Total field: $\vec{E} = E_0\left(\vec{i}\cos\omega t+\vec{j}\cos(\omega t+\Delta)\right)$.
Intensity: $I_\perp\sim \langle|\vec{E}|^2\rangle = E_0^2\langle\cos^2\omega t+\cos^2(\omega t+\Delta)\rangle$
where average means: $\langle f(t)\rangle = \frac{1}{T}\int_0^{T}\,dt\,f(t)$, so that $\langle cos^2(\omega t+\Delta)\rangle = \frac{1}{2}$ for any $\Delta$
Finally we get $I_\perp\sim E_0^2$, which is independent of the phase difference between the waves.
## 2. Parallel polarizations.
First wave $E_{1x} = E_0\,\cos\omega t$,second wave $E_{2x} = E_0\,\cos(\omega t+\Delta)$.
Total field: $\vec{E} = \vec{i}E_0\left(\cos\omega t+\cos(\omega t+\Delta)\right)$.
Intensity: $I_\parallel\sim E_0^2\langle\cos^2\omega t+2\cos\omega t\cos(\omega t+\Delta)+\cos^2(\omega t+\Delta)\rangle=E_0^2(1+\cos\Delta)$, which nicely depends on the phase shift between the waves.
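The same time averages are easy to reproduce numerically. The sketch below is my addition: it averages $|\vec{E}|^2$ over one period and shows that the parallel case follows $1+\cos\Delta$ while the perpendicular case does not depend on $\Delta$.

```python
# Time-averaged intensity for parallel vs. perpendicular polarizations.
import numpy as np

omega = 1.0
t = np.linspace(0, 2*np.pi/omega, 10001)   # one full period

for delta in (0.0, np.pi/2, np.pi):
    E1 = np.cos(omega*t)                    # first wave, E0 = 1
    E2 = np.cos(omega*t + delta)            # second wave
    I_par  = np.mean((E1 + E2)**2)          # same polarization: fields add
    I_perp = np.mean(E1**2 + E2**2)         # orthogonal: intensities add
    print(f"delta = {delta:.2f}:  I_parallel = {I_par:.3f},  I_perp = {I_perp:.3f}")
# I_parallel follows 1 + cos(delta); I_perp stays at 1, as derived above.
```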
-
Nice way of explaining this, definitely more thorough than my answer. – Colin K Feb 9 '11 at 17:19
Agreed, +1 for content and style. (tiny typo in the first line though: "no**t** to be unsubstantiated") – qftme Feb 9 '11 at 18:06
Thank you all for feedback! By my answer should be considered as a continuation of @Colin K. – Kostya Feb 9 '11 at 18:21
Kostya, when you add two perpendicular fields, do you think you consider an interference? Any interference is by definition made of the same polarization! You consider a one-arm interferometer! – Vladimir Kalitvianski Feb 9 '11 at 20:54
That is what he said, and he explained why it is the case as well. – Colin K Feb 10 '11 at 4:00
Yes. In fact, light will only interfere with light of the same polarization. If you take a Mach–Zehnder interferometer, for example, and put a polarization rotating optic (a waveplate) in one of the arms, the interference pattern will lose contrast. If the polarization is rotated 90 degrees, the pattern will vanish completely.
-
I can only add that quantum mechanically a photon interferes with itself, i.e., the resulting field is always of the same polarization. If you intervene with the polarization rotation optic, you break the main rule - not to intervene in the interferometer arms. Any such intervention (polarization, intensity, etc.) spoils the pattern to this or that extent. – Vladimir Kalitvianski Feb 9 '11 at 19:35
you should mention something like coherence and same-wavelength-interference-only – Tobias Kienzler Feb 10 '11 at 7:29
As others have noted, you will not get any intensity modulation from the interference of two linearly polarized light beams with orthogonal polarizations. It's worth noting, though, that this does not mean that beams with perpendicular polarizations don't affect each other. In fact, a counter-propagating pair of beams with orthogonal linear polarizations-- the so-called "lin-perp-lin" configuration-- is the best system for understanding the Sisyphus cooling effect, the explanation of which was a big part of the 1997 Nobel Prize in Physics.
The superposition of two counter-propagating linearly polarized beams with orthogonal polarizations doesn't give you any modulation of intensity, but does create a polarization gradient. For the lin-perp-lin configuration, you get alternating regions of left- and right-circular polarization, and combined with optical pumping this lets you set up a scenario where you can cool atomic vapors to extremely low temperatures. This makes laser cooling vastly more useful than it would be otherwise, and allows all sorts of cool technologies like atomic fountain clocks.
It's not interference in the sense that is usually meant, but it is a cool phenomenon that results from overlapping beams with different polarizations. So you shouldn't think that just because it doesn't produce a pattern of bright and dark spots it's not interesting.
-
Your question is quite vague, but in short, the answer is: yes, look it up on wikipedia. But let's be more precise:
As long as the light intensity is low enough to not obtain nonlinear effects, the superposition principle of linear optics is valid. That means, the amplitudes of two electromagnetic (EM) fields sum up, thus yielding interference.
However, since the amplitudes are vectors (while the intensity, being related to the absolute squares of the amplitudes, is a scalar), the interference depends on the relative polarization, the total intensity for two linearly polarized EM waves is
$I = I_1 + I_2 + 2\sqrt{I_1I_2}\cos(\Delta\varphi)$
where $\Delta\varphi$ denotes the angle between the two polarizations. You see that for perpendicular polarization the cosine term vanishes, the intensities just add up and you obtain no interference, while for antiparallel polarization ($\Delta\varphi = 180°$) you obtain destructive interference since the cosine becomes -1. In case you wonder about energy conservation (being proportional to the intensity) keep in mind that only the global energy is conserved while local fluctuations are ok.
One final note: The whole thing only works for a well-defined phase relation between two EM waves. That is, only spectral components of the same wavelength can interfere, and the coherence length and time need to be large enough - you won't obtain perfect interference if your light source flickers due to heat for example.
-
Yes, it does and this property is independent from a particular polarization. So non polarized light gives the same interference pattern.
-
-2 for what? Didn't I answer the question? – Vladimir Kalitvianski Feb 9 '11 at 18:59
because it's wrong or at least very misleading. While stating "interference always occurs" is kind of valid despite omitting the dependence on coherence and phase difference, your statement `non polarized light gives the **same** interference pattern` is really wrong – Tobias Kienzler Feb 10 '11 at 7:28
Ah yes, I implied the same frequency, of course, and interferometer conditions. It is especially evident when I speak of interference of photon with itself. – Vladimir Kalitvianski Feb 10 '11 at 11:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9035165309906006, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/45483?sort=oldest | ## Word maps on compact Lie groups
Let $w=w(a,b)$ be a non-trivial word in the free group $F_2 = \langle a,b \rangle$ and $w_G \colon G \times G \to G$ be the induced word map for some compact Lie group $G$.
Murray Gerstenhaber and Oskar Rothaus showed in 1962 (see M. Gerstenhaber and O.S. Rothaus, The solution of sets of equations in groups, Proc. Nat. Acad. Sci. U.S.A. 48 (1962), 1531–1533.) that $w_G$ is surjective in a very strong sense for $G=U(n)$ whenever the exponent-sum of $a$ or $b$ in $w$ is non-zero. Using degree-theory and the calculations of the homology of compact Lie groups due to Heinz Hopf, they showed that if the exponent sum of $a$ is non-zero, then $u \mapsto w_{U(n)}(u,v_0)$ is surjective for any fixed $v_0 \in U(n)$.
Earlier, in 1949, Morikuni Gotô (J. Math. Soc. Japan, 1949 vol. 1 pp. 270-272) already showed that the commutator $w=[a,b]$ induces a surjective map $w \colon G \times G \to G$ for any simple compact Lie group $G$. His proof proceeded as follows: Since the commutators are a conjugation invariant set, one can assume that $u$ is in a fixed maximal torus. We can now take a suitable $a$ in that torus and $b$ an element of the Weyl group $W_G$ of this torus, such that $b$ does not have $+1$ as an eigenvalue (when acting on the universal cover of the maximal torus). This implies that the action of $1-b$ on the universal covering is an isomorphism. Hence, every element in the maximal torus can be obtained as $a \cdot ba^{-1}b^{-1}$ for some $a$. This strategy can be used for a few other words but does not lead any further.
Question: Are there any other techniques to show that $w_{G}$ is surjective for some $w$ and $G$?
The easiest word for which I cannot answer this is $w= [[a,b],[a^2,b^2]]$.
Question: Is the word-map $w_{PSU(n)}\colon PSU(n) \times PSU(n) \to PSU(n)$ surjective for the word $$w = [[a,b],[a^2,b^2]]?$$
-
Second question yes for $PSU(2)$ because yes for $SU(2)$, but by a very ad hoc method: The image of this map is a connected subset of $SU(2)$ and closed under conjugation, so it must be the whole group if it contains $-I$ as well as $I$. It does, because in the interesting subgroup of order $24$ I can think of a pair $a,b$ of elements of order $6$ such that $[[a,b],[a^2,b^2]]$ is the element of order $2$. – Tom Goodwillie Nov 9 2010 at 22:44
That is a nice argument. – Andreas Thom Nov 10 2010 at 6:43
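Tom Goodwillie's comment can be checked directly. The sketch below is my own illustration (the particular pair $a$, $b$ is my choice, found by a short search, and is not taken from the thread): it works in the unit quaternions, i.e. $SU(2)$, and evaluates the word on two order-6 elements of the binary tetrahedral group, obtaining $-1$, i.e. $-I$.

```python
# Check w(a,b) = [[a,b],[a^2,b^2]] = -1 in SU(2) (unit quaternions) for an
# explicit pair of order-6 elements of the binary tetrahedral group.
def mul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def inv(q):                       # inverse of a unit quaternion = conjugate
    a, b, c, d = q
    return (a, -b, -c, -d)

def comm(x, y):                   # commutator x y x^-1 y^-1
    return mul(mul(x, y), mul(inv(x), inv(y)))

a = (0.5, 0.5, 0.5, 0.5)          # (1 + i + j + k)/2
b = (0.5, 0.5, -0.5, -0.5)        # (1 + i - j - k)/2

print(mul(mul(a, a), a))          # a^3 = -1, so a (and likewise b) has order 6
w = comm(comm(a, b), comm(mul(a, a), mul(b, b)))
print(w)                          # (-1.0, 0.0, 0.0, 0.0): the word attains -I
```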
## 1 Answer
In this paper of Borel it is shown that any non-trivial word is a dominant map from $G \times G$ to $G$ whenever $G$ is a compact connected semisimple Lie group. So the answer to your second question is "yes". I don't recall offhand exactly what techniques are used, but I vaguely recall that one starts with the $SL_2$ case (which contains free subgroups that one can play with) and builds up from there.
EDIT: Dominance would imply surjectivity (or at least that the image is Zariski dense) in an algebraically closed field, but I didn't realise that the question is over the reals, and so Borel's result does not fully resolve the question.
-
I do not understand, how this answers the second question. For any fixed $G$, there are words $w$ such that the image of $w_G$ in $G$ is small in the euclidean topology. – Andreas Thom Nov 10 2010 at 9:33
@Andreas : what would be an example ? – BS Nov 10 2010 at 10:17
This is Corollary 3.3 in my paper (arxiv.org/abs/1003.4093). For fixed $n$ and $\varepsilon>0$, there exists $w \in F_2$, such that $\|1 - w(u,v)\|< \varepsilon$ for all $u,v \in U(n)$. The word is some iterated commutator of powers of $a$ and $b$. However, for fixed $w$ and $n$ large enough, $w_{PSU(n)}$ or $w_{SU(n)}$ seems to be surjective in many cases. – Andreas Thom Nov 10 2010 at 14:06
Ah, I'm sorry, you're not working over an algebraically closed field. I guess one has to use either real algebraic geometry or homology then, neither of which I can help you with... – Terry Tao Nov 10 2010 at 17:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.931127667427063, "perplexity_flag": "head"} |
http://mathoverflow.net/revisions/121510/list | ## Return to Answer
The situation as I see it is as follows:
The first definition you give is the natural one. It implies that D-modules on the $X$ are given by sheaves of modules for the sheaf of algebras $f_\ast D_X$ on $Y$. However, I am curious if there are actually any interesting examples of such a morphism. Projective spaces and flag varieties don't live in interesting families... Is it then the case that any D-affine morphism is either affine or a product $X=Y\times Z$, where $Z$ is $D$-affine?
The second definition is far too weak. For example, under that definition, if a scheme $X$ is D-affine over a point then any D-module on $X$ is a local system. I think the only schemes that are D-affine in the second sense are finite collections of points.
As you remark, it is not true that D-affineness respects composition. For example, take $X$ to be the total space of $\mathcal O(1)$ living over $Y=\mathbb P^1$. Then $f:X\to Y$ is affine and $Y\to pt$ is D-affine. However, $f_\ast \mathcal O_X = \bigoplus _{n\geq 0} \mathcal O(-n)$, which has higher cohomologies. So $X$ is not D-affine (over a point).
To me D-affineness is a strange and mysterious thing. Flag varieties are D-affine for very different reasons than affine varieties. Perhaps it is not helpful to include both these things in the same definition. Being D-affine is somehow not a notion that is intrinsic to D-modules: I don't think it can be expressed just in terms of the de Rham stack $X_{dR}$ and D-module functors. It is defined in terms of the forgetful functor to $\mathcal O$-modules.
I hope this helps!
http://mathoverflow.net/revisions/21004/list | ## Return to Answer
Edit: The following is wrong; see the comments.
I don't think so.
Suppose $f$ to be surjective. Let $x\mapsto 0$ and $y\mapsto 1$. Now consider two distinct paths $\gamma,\eta:[0,1]\to\mathbb Q\times\mathbb Q$ from $x$ to $y$. Since $f$ is continuous it maps these paths surjectively onto $[0,1]$ (more exactly $[0,1]\subset f\gamma([0,1])$ and $[0,1]\subset f\eta([0,1])$). Thus, $f$ cannot be injective.
http://mathhelpforum.com/math-puzzles/156259-number-theory-puzzle.html | # Thread:
1. ## A number theory puzzle
I give you 7x - 17y = 1. Now give me back how many solution pairs (in natural numbers) there are for the equation? For example (5,2) and (22,9) are solutions (if you can't figure this one out in a week, I'll drop a hint and if you still can't solve it after another week, then I'll solve it for you).
2. Originally Posted by wonderboy1953
I give you 7x - 7y = 1. Now give me back how many solution pairs (in natural numbers) there are for the equation? For example (5,2) and (22,9) are solutions (if you can't figure this one out in a week, I'll drop a hint and if you still can't solve it after another week, then I'll solve it for you).
Typo? 7*5 - 7*2 most definitely does not equal 1.
In fact it is easily shown that 7x - 7y = 1 has no integer solutions; the LHS is a multiple of 7 while the RHS is not.
3. Originally Posted by undefined
Typo? 7*5 - 7*2 most definitely does not equal 1.
In fact it is easily shown that 7x - 7y = 1 has no integer solutions; the LHS is a multiple of 7 while the RHS is not.
Correction made.
4. Originally Posted by wonderboy1953
I give you 7x - 17y = 1. Now give me back how many solution pairs (in natural numbers) there are for the equation? For example (5,2) and (22,9) are solutions (if you can't figure this one out in a week, I'll drop a hint and if you still can't solve it after another week, then I'll solve it for you).
We have Bezout's identity with a sign change. There are infinite solutions, and a solution exists iff x > 4 and $x\equiv 5\pmod{17}$.
For $k \ge 0$ we solve for y
7*(17k + 5) - 17y = 1
7*17k + 35 - 17y = 1
17y = 7*17k + 34
y = 7k + 2
So the solution set is given by (x,y) = (5 + 17k, 2 + 7k) where k ranges over the non-negative integers.
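A brute-force check (my addition) confirms that this parametric family accounts for every natural-number solution in a sample range:

```python
# Enumerate natural-number solutions of 7x - 17y = 1 and compare with
# the parametric family (x, y) = (5 + 17k, 2 + 7k).
brute = [(x, y) for x in range(1, 200) for y in range(1, 200) if 7*x - 17*y == 1]
param = [(5 + 17*k, 2 + 7*k) for k in range(12)]
print(brute[:5])                  # [(5, 2), (22, 9), (39, 16), (56, 23), (73, 30)]
print(set(brute) <= set(param))   # True: every brute-force solution is in the family
```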
5. That's the solution using parametric equations.
6. Methinks Sir Diophantine just turned over in his grave.
7. Originally Posted by Wilmer
Methinks Sir Diophantine just turned over in his grave.
I realize my solving for y could be shortened realizing that
7(x + 17) - 17(y + 7) = 7x - 17y
but what exactly are you referring to?
8. Originally Posted by Wilmer
Methinks Sir Diophantine just turned over in his grave.
Certainly not English.
9. Hello, wonderboy1953!
I have a very elementary solution . . .
$\text{Given: }\;7x - 17y \:=\: 1$
$\text{How many solutions in natural numbers are there for the equation?}$
Answer: there are a brizillian solutions.
Solve for $x\!:\;\;x \:=\:\dfrac{17y+1}{7} \;=\;2y + \dfrac{3y+1}{7}$
Since $\,x$ is a natural number, $(3y+1)$ must be divisible by 7.
We find that: . $y \;=\;2,\,9,\,16,\,\hdots,\,7t-5\;\text{ for }t \in N$
Then: . $x \;=\;2(7t-5) + \dfrac{3(7t-5)+1}{7} \;=\;17t - 12$
Solutions: . $\begin{Bmatrix}x &=& 17t - 12 \\ y &=& 7t-5 \end{Bmatrix}\;\;\text{ for }t \in N$
Of course, this is basically undefined's solution.
10. Originally Posted by undefined
I realize my solving for y could be shortened realizing that
7(x + 17) - 17(y + 7) = 7x - 17y
but what exactly are you referring to?
Nothing in particular UnD; I had just skimmed over:
Diophantine Equations
and saw WB's equation...
http://mathematica.stackexchange.com/questions/tagged/summation?sort=unanswered | # Tagged Questions
The summation tag has no wiki summary.
I am trying to get something in the form of a $\Sigma (\dots) * \alpha_i = (\dots)$ from the output of the code below. The thing is that I cannot figure out how to tell Mathematica to "collapse" down ...
http://mathhelpforum.com/calculus/36136-isosceles-triangle-angle-area-triangle-inscribed-circle.html | # Thread:
1. ## Isosceles triangle, angle, area of triangle inscribed in a circle!
An isosceles triangle is inscribed in a circle of radius R. Determine the value of theta that maximizes the area of the triangle given that theta is the angle contained between the two equal sides.
2. Originally Posted by rawrzjaja
An isosceles triangle is inscribed in a circle of radius R. Determine the value of theta that maximizes the area of the triangle give that theta is the angle contained between the two equal sides.
Try thinking that the height of the triangle is equal to 2R... where would you go from there?
3. ## Hmmmmmmmmmmmmm
well, ive drawn a diagram of the scenario
with the height 2r... i was thinking i split the triangles in symmetrically, making it a right angle triangle... with a height 2r... and that's as far as i've gone T_T
4. I'm not sure what suggesting a height of 2r does for solving the problem, since no triangle inscribed in a circle of radius r could have that height. I would try writing the height and length of the base in terms of trigonometric functions and the radius r.
5. I just followed my own suggestion. The area of the triangle is given by $(r sin \theta)(r + r cos \theta)$. Can you figure it out from there?
6. ## Hmmmmmmmmmmmmm
Can you elaborate on how you got to that stage please... step by step please =o.. i want to understand >.<
7. Originally Posted by rawrzjaja
Can you elaborate on how you got to that stage please... step by step please =o.. i want to understand >.<
Ok, first let's establish a name for the triangle. I'll call the circumscribed triangle PQR, where Q has angle theta, and the sides PQ and QR are of the same length. The circle is O. So the lengths of OP, OQ, and OR are all equal to the radius r.
This means that OPQ, OQR, and OPR are all isosceles triangles, with OPQ and OQR congruent. The altitude of triangle PQR divides the angle PQR in half, so angles PQO, RQO, OPQ and ORQ are all equal to one-half theta. This means angles POQ and ROQ are equal to $\pi-\theta$.
With me so far?
8. Ready or not:
Call the point where the altitude of PQR from Q hits PR point S.
Then angles POS and ROS are both equal to theta. From this, the length of PS is $r sin \theta$ and the length of OS is $r cos \theta$. So the length of altitude QS is $r + r cos \theta$. That's where my area formula comes from. It's the length of PS times the length of QS.
9. so when the altitude at Q to the base (line segment PR) is equal to the line segment PR?
where is the point S located? i am a little confused
10. ## Hmmmmmmmmmmmmm
Im just having trouble trying to understand where the point O would be? would the point O be in the middle of the circle?
11. Originally Posted by rawrzjaja
Im just having trouble trying to understand where the point O would be? would the point O be in the middle of the circle?
Point O is the center of the circle and Point S is the end of the altitude from Q, so it is halfway between P and R on the line segment PR.
12. yeah it should be there as far as i can tell
13. Q contains one angle, and the other point R contains the other equal angle? is that what your saying? cause that's the only way i can say the radius being equal
14. so are you getting the max area to be r^2? as setting theta to be 90? or...
15. P, Q, and R are all on the circle
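The maximization itself never quite gets settled in the thread. Differentiating icemanfan's area formula symbolically (my addition, assuming SymPy) shows the maximum occurs at theta = 60 degrees, the equilateral triangle, not at 90 degrees.

```python
# Maximize A(theta) = (r sin(theta)) * (r + r cos(theta)) for 0 < theta < pi.
import sympy as sp

theta, r = sp.symbols('theta r', positive=True)
A = (r*sp.sin(theta)) * (r + r*sp.cos(theta))   # icemanfan's area formula

dA = sp.diff(A, theta)
print(sp.simplify(dA.subs(theta, sp.pi/3)))     # 0: theta = 60 degrees is a critical point
print(sp.simplify(A.subs(theta, sp.pi/3)))      # 3*sqrt(3)*r**2/4  (equilateral triangle)
print(sp.simplify(A.subs(theta, sp.pi/2)))      # r**2, which is smaller
```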
http://mathoverflow.net/questions/80367/primes-and-ackermanns-function/80445 | ## Primes and Ackermann’s function
If $A(m,n)$ is Ackermann's function, and $c$ is a fixed integer, are there any heuristics/conjectures/obvious things that can be said about primes of the form $A(m,n)+c$, $m \geq 4$,at all?
EDIT: I guess I should add that heuristically, for sufficiently large $m$, the set is so sparse that the expected number of primes of the form $A(m,n)+c$, for fixed $m$ and $c$, should be finite! So maybe I should ask if there is any reason why the number of primes of the form $A(m,n)+c$, for fixed $m$ and $c$, despite $m$ being large, might actually not be finite.
Thanks!
-
I can't imagine anything is known, as (1) the number-theoretic structure of Ackermann's function is not clear (2) the question does not seem amenable to experimentation (3) simpler questions are wide open (e.g. are there infinitely many primes of the form $x^2+1$). – BR Nov 8 2011 at 18:29
What definition of Ackermann's function are you using? There are several different versions, equivalent in terms of rates of growth, but they would differ for problems like the ones you are asking. – Andres Caicedo Nov 9 2011 at 3:34
I'm using the one which is called the Ackermann-P\'{e}ter function on Wikipedia (en.wikipedia.org/wiki/Ackermann_function). Thanks. – Timothy Foo Nov 10 2011 at 1:06
Anyway, thank you very much, BR and Andres, for helpful comments; I'm wondering what more can be asked or said about $\lim_{k \rightarrow \infty}2\uparrow\uparrow k \bmod N$ , since this limit seems to exist. It's a residue in $(\mathbb{Z}/N\mathbb{Z})^{*}$, and I'm now wondering about the fractions $\frac{\lim_{k \rightarrow \infty}2\uparrow\uparrow k \bmod N}{N}$. – Timothy Foo Nov 10 2011 at 4:24
Timothy, I think you'd enjoy looking at the paper Iterated exponents in number theory, by D.B. Shapiro and S.D. Shapiro: math.osu.edu/~shapiro/IE.pdf – Anonymous Dec 10 2011 at 5:31
## 1 Answer
Hartley (http://primes.utm.edu/curios/page.php/71.html) gives that 13 and 71 divide $A(m,n)$ for sufficiently large $m$.
Since $\{A(m+1,n): n \geq N\} \subset \{A(m,n): n\geq A(m+1,N-1)\}$, we need only consider smaller $m$.
The case $m=4$ definitely involves $2\uparrow\uparrow k \bmod p$. If, for every prime $p$, $2 \uparrow\uparrow a \equiv 2 \uparrow\uparrow b \bmod p$ for all sufficiently large $a,b$, then $A(m,n)+c$, for fixed $c$, will be composite for sufficiently large $m$: some prime $p$ will divide $A(4,n)+c$ for some large $n$, and it will then also divide $A(4,r)+c$ for all $r \geq n$.
But $2 \uparrow\uparrow a \equiv 2 \uparrow\uparrow b \bmod p$ if $2 \uparrow\uparrow (a-1) \equiv 2 \uparrow\uparrow (b-1) \bmod (p-1)$ which is true if $2 \uparrow\uparrow (a-1) \equiv 2 \uparrow\uparrow (b-1)$ mod each prime power, say $q$, dividing $p-1$. But then this reduces to considering $2 \uparrow\uparrow (a-2) \equiv 2 \uparrow\uparrow (b-2) \bmod \varphi(q)$. The point is that the modulus keeps shrinking and eventually we can check $2\uparrow\uparrow k$ modulo small primes.
For example, $2\uparrow\uparrow k \equiv 1 \bmod 3$ and $2\uparrow\uparrow k \equiv 1 \bmod 5$ for sufficiently large $k$.
It seems then that for sufficiently large $m$, and fixed $c$, $A(m,n)+c$ cannot be prime. Does this work?
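The stabilization of $2\uparrow\uparrow k \bmod N$ used in this argument is easy to compute. The sketch below (my addition) uses the standard recursion through Euler's totient, valid because the exponent $2\uparrow\uparrow(k-1)$ eventually exceeds $\log_2 N$; it reproduces the limits $1 \bmod 3$ and $1 \bmod 5$ quoted above.

```python
# Eventual value of 2^^k (a tower of 2s) mod N for large k, via the recursion
# 2^^k = 2^(2^^(k-1)) and the generalized Euler theorem:
# for E >= log2(N),  2^E = 2^((E mod phi(N)) + phi(N))  (mod N).
def phi(n):
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower2_mod(n):
    """Stable value of 2^^k mod n for all sufficiently large k."""
    if n == 1:
        return 0
    t = phi(n)
    return pow(2, tower2_mod(t) + t, n)

for n in (3, 5, 13, 71, 100):
    print(n, tower2_mod(n))
# 3 -> 1 and 5 -> 1 as claimed; 13 -> 3 and 71 -> 3, consistent with Hartley's
# divisibility observation, since A(4,n) = 2^^(n+3) - 3 for the Ackermann-Peter function.
```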
-
http://mathoverflow.net/questions/45936/modal-logic-satisfiability | ## Modal logic - satisfiability
Hi there,
Assuming X and Y are modal formulae and diamond X is satisfiable and diamond Y is satisfiable, how would one show that X AND Y is satisfiable?
I don't think it requires much effort?
I think you need to choose one world and one model where X AND Y is true and that would mean it is satisfiable?
So assuming I'm going about it correctly, any ideas what model and world I should select to show this X AND Y is satisfiable?
Any advice would be great,
Thank you.
P.S. NO appropriate tags for this type of post, maybe someone should create a modal logic one (I can't as I'm a new user)
-
## 2 Answers
Your question as originally written (which Henry correctly diagnosed as problematic in two ways) does not match the more reasonable aim reflected in your comments to Henry's answer. Specifically, your comments make it sound like you want to show that the satisfiability of both $\diamond X$ and $\diamond Y$ implies the satisfiability of $\diamond X \wedge \diamond Y$, not the satisfiability of $X\wedge Y$ as your original phrasing states.
If your notion of satisfiability of a formula $Z$ is simply that there is some Kripke model $\mathcal{M}$ (with no restrictions on its accessibility relation) and some world $w$ in it such that $\mathcal{M},w\models Z$, then this weaker form of the question isn't too difficult to answer.
Let $\mathcal{M},w\models\diamond X$ and $\mathcal{N},v\models\diamond Y$. In particular, there is a world $u$ in $\mathcal{N}$ which is accessible from $v$ and satisfies $Y$. Now just form a new model $\mathcal{P}$ whose set of worlds is the union of those of $\mathcal{M},\mathcal{N}$, and whose accessibility relation is the union of those of $\mathcal{M},\mathcal{N}$, plus we set $u$ to be accessible from $w$. Then $\mathcal{P},w\models\diamond X \wedge \diamond Y$.
Henry's point about underspecification is still pertinent. I'm not sure I've gotten at what you want, and if you were to be limited to special kinds of Kripke frames, for instance, then the argument would need to say a bit more (ensuring we end up with an appropriate $\mathcal{P}$). I hope this is helpful.
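The gluing argument can be made concrete in a few lines of code (my own illustration, not from the answer): build $\mathcal M$ and $\mathcal N$, take their disjoint union, add the single new edge from $w$ to $u$, and evaluate $\diamond X \wedge \diamond Y$ at $w$.

```python
# Minimal Kripke semantics: a model is (R, V) with R a set of (world, world)
# pairs and V mapping each world to the set of atoms true there.
def holds(R, V, w, f):
    kind = f[0]
    if kind == 'atom':
        return f[1] in V[w]
    if kind == 'and':
        return holds(R, V, w, f[1]) and holds(R, V, w, f[2])
    if kind == 'dia':                       # diamond: true at some accessible world
        return any(holds(R, V, t, f[1]) for (s, t) in R if s == w)
    raise ValueError(kind)

# M, w |= dia X   and   N, v |= dia Y
R_M, V_M = {('w', 'x1')}, {'w': set(), 'x1': {'X'}}
R_N, V_N = {('v', 'u')},  {'v': set(), 'u': {'Y'}}

# Glue: disjoint union of the two models plus one new edge w -> u.
R_P = R_M | R_N | {('w', 'u')}
V_P = {**V_M, **V_N}

f = ('and', ('dia', ('atom', 'X')), ('dia', ('atom', 'Y')))
print(holds(R_P, V_P, 'w', f))              # True
```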
-
Thanks Ed, I will reply properly and click solve when I've had time to think about this – vivid-colours Nov 13 2010 at 23:09
Your question is quite underspecified; it's not clear what language you're talking about (my guess is propositional logic plus box and diamond, which is the most common modal logic, but by no means the only one). Even then, there are many different proposed semantics, corresponding to different interpretations of diamond. Even if, by "satisfiable", you mean satisfiable in a possible world semantics (which later parts of your question imply), there are still questions to be decided about the kinds of relationships between the possible worlds allowed. However I think the statement is untrue in just about any system.
To see this, let $X$ be any propositional formula which is neither a tautology nor the negation of a tautology, and let $Y$ be $\neg X$. Take a model with just two worlds, interpret $\diamond$ as meaning "true in either world", and make $X$ true in one and false in the other. Then $\diamond X$ and $\diamond Y=\diamond \neg X$ are both true in every world of this model, but clearly $X\wedge Y=X\wedge\neg X$ is unsatisfiable.
-
http://mathoverflow.net/questions/95092/linear-characterization-of-inverse-of-stieltjes-matrix/95098 | ## Linear characterization of inverse of Stieltjes matrix
Is there a linear characterization of being the inverse of a Stieltjes matrix? In other words, if $A$ is an $n \times n$ matrix over the reals, is there a set of linear equations in the entries of $A$ such that $A$ is the inverse of a Stieltjes matrix if and only if these linear conditions are satisfied?
-
What is "Stieltjes matrix" ? – Alexander Chervov Apr 25 2012 at 18:55
## 2 Answers
As far as I know, an exact characterization of the form you want is unknown. A necessary condition, for an inverse M-matrix (weaker than inverse Stieltjes) is the so-called "path product condition" - see http://www.math.temple.edu/~abed/JS07.pdf.
Another necessary condition is that the principal minors be all positive (and they are multilinear in the entries): see for example: http://mathoverflow.net/questions/14987/inverse-m-matrix
ADDITION: There is an "if and only if" characterization of the sort you want for inverse M-matrices (well, almost, since it's multilinear, but maybe that's what you meant :). See Theorem 2.9.1 in the new survey by Johnson & Smith:
http://www.sciencedirect.com/science/article/pii/S0024379511001273 Charles R. Johnson & Ronald L. Smith, Inverse M-matrices, II, Linear Algebra and its Applications, Volume 435, Issue 5, 1 September 2011, Pages 953–983
Let $A \geq 0$. $A$ is an inverse $M$-matrix iff:
(a) $A$ has at least one diagonal positive entry
(b) all Schur complements of order 2 are nonnegative
(c) all Schur complements of order 1 are positive
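As a small numerical illustration (my addition) of the basic fact behind the question: the inverse of a Stieltjes matrix (symmetric positive definite with nonpositive off-diagonal entries) is entrywise nonnegative.

```python
# Build random Stieltjes matrices (symmetric, strictly diagonally dominant,
# nonpositive off-diagonal) and check that their inverses are >= 0 entrywise.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    n = 5
    B = -rng.random((n, n))                  # nonpositive off-diagonal candidates
    B = (B + B.T) / 2                        # symmetrize
    np.fill_diagonal(B, 0.0)
    A = B + np.diag(-B.sum(axis=1) + 1.0)    # strict diagonal dominance => positive definite
    assert np.all(np.linalg.inv(A) >= -1e-12)
print("all inverses entrywise nonnegative")
```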
-
A partial answer is given in this paper:
A linear algebra proof that the inverse of a strictly ultrametric matrix is a strictly diagonally dominant Stieltjes matrix (Nabben and Varga, SIAM Journal on Matrix Analysis and Applications, 1994).
-
http://mathoverflow.net/questions/83420?sort=oldest | ## algorithms for comparing two simplicial complexes
Given a set $A$ of subsets of $\{1, \ldots, n\}$ which is closed under taking subsets, let $X(A)$ be the corresponding simplicial complex (i.e. simplices of $X(A)$ are elements of the set $\bar A$, and gluing is induced by containment of subsets).
Consider the following computational problem
Input: a natural number $n$ and two sets $A$ and $B$ of subsets of $\{1,\ldots, n\}$, closed under taking subsets.
Problem: Are $X(A)$ and $X(B)$ isomorphic as simplicial complexes? (i.e. is there a bijection of $\{1,\ldots ,n\}$ which bijectively sends faces of $X(A)$ to faces of $X(B)$?)
Question: I'm interested to know what algorithms are known for this problem. I'm specifically interested in worst running times in terms of $n$ alone. Please note that the size of the input can be exponential in $n$.
In principle $A$ and $B$ might consist of $2^n$ subsets, so this is a lower bound for the problem, because the algorithm needs to read the input.
On the other hand the trivial algorithm of checking each permutation takes at most $\mathcal O(2^n\cdot n!)$ steps.
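For concreteness, the trivial algorithm mentioned above can be sketched as follows (my addition):

```python
# Brute-force isomorphism test for two simplicial complexes on {0, ..., n-1},
# each given as a set of faces (frozensets) closed under taking subsets.
from itertools import permutations

def isomorphic(n, A, B):
    if len(A) != len(B):
        return False
    for perm in permutations(range(n)):          # n! candidate bijections
        image = {frozenset(perm[i] for i in face) for face in A}
        if image == B:                           # up to 2^n face comparisons
            return True
    return False

# Example: two labelings of a path on three vertices.
A = {frozenset(), frozenset({0}), frozenset({1}), frozenset({2}),
     frozenset({0, 1}), frozenset({1, 2})}
B = {frozenset(), frozenset({0}), frozenset({1}), frozenset({2}),
     frozenset({0, 2}), frozenset({1, 2})}
print(isomorphic(3, A, B))   # True: swapping vertices 1 and 2 sends A's faces to B's
```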
-
## 2 Answers
An exp(O(n)) algorithm is given in "Hypergraph isomorphism and structural equivalence of boolean functions", Eugene M. Luks, STOC 1999.
-
Aha, great reference! Hm, $e^{O(n)}=2^{cn}$ for a suitable constant $c$, while the naive brute force approach in $O(2^n\cdot n!)$ would amount roughly to $2^{dn\log(n)}$ (for some other constant $d$), I think? – Max Dec 14 2011 at 16:45
Max: yes, exactly. – Łukasz Grabowski Dec 14 2011 at 17:30
@Colin: do you know anything has been done since about the problem described in the paragraph 7.1? – Łukasz Grabowski Dec 14 2011 at 18:40
@Łukasz: I have no idea, sorry. – Colin McQuillan Dec 14 2011 at 18:53
A special case of this problem is the graph isomorphism problem. Interestingly, for this it is unknown whether it is solvable in polynomial time (relative to the number of vertices, so $n$ in your case), and also unknown whether or not it is $NP$-complete. As far as I know, Luks' algorithm is still state of the art (though I might be wrong), and that has runtime $O(2^{\sqrt{n \log(n)}})$.
Since this is a special case of your problem, its general worst case runtime will be unknown, too. Of course in this special case, one has only subsets of size 2 / simplices of dimension 1; as you point out, as soon as we allow arbitrary rank simplices, the above runtime cannot be achieved anymore, as the input alone has size $O(2^n)$.
EDIT: Actually, looking at the Wikipedia link I give, I discovered that there is a paper by Babai and Codenotti (2008), "Isomorphism of Hypergraphs of Low Rank in Moderately Exponential Time", where they give an algorithm for hypergraphs (and thus in particular simplicial complexes) of bounded rank with roughly the same run time as Luks' algorithm for graph isomorphism. Of course that still does not answer the general question.
-
If the simplicial complexes are relatively sparse one could write them in $O(nk)$, where $k$ is the number of maximal simplices. $k$ is probably bounded by $n$ choose $n/2$, but could be much lower. – Will Sawin Dec 14 2011 at 17:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458622932434082, "perplexity_flag": "head"} |
http://mathhelpforum.com/advanced-algebra/103355-subset-u-subset-not-subspace-superset.html | # Thread:
1. ## subset U subset is NOT subspace of Superset?
Hello,
I have an assignment due tomorrow, and Im stuck at this problem:
Let V be a vector space over any field F, and W1 and W2 be two subspaces of V.
b) Give examples for V, W1, W2 such that W1 U W2 is NOT a subspace of V.
Any ideas?
In $\mathbb{R}^2$: $\operatorname{span}\{(1,0)\} \cup \operatorname{span}\{(0,1)\}$ is not a subspace; for instance, it does not contain $(1,0)+(0,1)$.
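A tiny check of this counterexample (my addition):

```python
# The union of the two coordinate axes in R^2 is not closed under addition.
on_x_axis = lambda v: v[1] == 0          # span{(1,0)}
on_y_axis = lambda v: v[0] == 0          # span{(0,1)}
s = (1, 1)                               # (1,0) + (0,1)
print(on_x_axis(s) or on_y_axis(s))      # False: the sum left the union
```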
http://blog.pwnthesat.com/2011/03/plugging-in-ftw.html | ### Plugging In FTW
OK, so you know how I'm always saying that the SAT is not a math test? This is one of the primary reasons why. On the SAT, it's often completely unnecessary to do the math that's been so carefully laid out before you. A lot of the time (and on a lot of the most otherwise onerous problems), all you need to do is make up numbers.
Sounds crazy, right? Well it's not. It would be crazy to just make up numbers on just about any other pain-in-the-ass task (for instance, it would be bad just to make up numbers on your taxes), but you'll be dumbstruck by how often it works on the SAT. Of course, you have to practice doing it to get good at it, so that when an opportunity to do it on the real test pops up, you don't panic and blow it. That's what your old buddy Mike is here for.
I'm thinking we should start with a more obvious plug-in. If you would consider trying to solve this one with pure algebra, you're probably out of your mind. Still, it's a great illustration of the technique:
1. If m and n are divided by 8, the remainders are 3 and 5, respectively. What is the remainder when mn is divided by 8?
(A) 0
(B) 1
(C) 3
(D) 5
(E) 7
What we want to do with a question like this is plug in values for m and n so that we're not dealing with abstracts. Of course, there are infinite possibilities for the values of both m and n, but we're just going to pick values and stick with them.
Since the problem stipulates that m divided by 8 gives me a remainder of 3, and n divided by 8 gives me a remainder of 5, let's pick m = 11 and n = 13 (because 8 + 3 = 11, and 8 + 5 = 13). That will keep our numbers nice and low, and make the division we'll have to do in the next step less arduous.
Since I've plugged in 11 for m and 13 for n, I need to find the remainder when 11×13=143 is divided by 8. Remember long division? That's going to be the easiest way to calculate a remainder, so let's do it:
Bam. Remainder 7. That's choice (E). I feel so alive right now.
Note that if we picked different numbers for m and n (like, say, 83 and 85), we'd still get the same answer (try it yourself to see). That's the beautiful thing about plugging in!
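If you want to convince yourself that the choice of numbers really doesn't matter, here's a quick check (mine, not part of the original post):

```python
# Any m with remainder 3 and n with remainder 5 (mod 8) gives mn remainder 7.
remainders = {(8*a + 3) * (8*b + 5) % 8 for a in range(50) for b in range(50)}
print(remainders)   # {7}
```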
Let's do another, slightly tougher one:
$\frac{x}{3}+1\geq k$
1. If the inequality above is true for the positive integer constant k, which of the following could be a value of x?
(A) k - 3
(B) k - 1
(C) k
(D) 3k - 4
(E) 3k - 2
OK. Forget for a minute that this can be solved with algebra and think about how to solve it by plugging in. Remember, if you don't practice plugging in on problems you know how to do otherwise, you won't be able to plug in well when you come to a problem you don't know how to solve otherwise!
We know k is a positive integer, so let's say it's 2. If k = 2, then we can do a little manipulation to see that x has to be greater than or equal to 3:
$\frac{x}{3}+1\geq 2$
$\frac{x}{3}\geq 1$
$x\geq 3$
Note that we don't just make up a number for x! Once we've chosen a value for k, we've constrained the universe of possible x's. When we have an equation (or an inequality), we usually can't plug in values for both sides; we have to choose one side on which to plug in, and then see what effects our choices have on the other side.
So, which answer choice, given our plugged in value of k = 2, gives us a number greater than or equal to 3 for x?
(A) k - 3 = 2 - 3 = -1 (too low!)
(B) k - 1 = 2 - 1 = 1 (nuh-uh)
(C) k = 2 (nope)
(D) 3k - 4 = 6 - 4 = 2 (still no good)
(E) 3k - 2 = 6 - 2 = 4 (yes!)
Rock. On. Note again that if we had picked a different number for k, we still would have been OK. Try running through this with k = 10 to see for yourself.
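And if you'd rather let a computer grind through the choices for a bunch of values of k (again, my addition):

```python
# For each answer choice, test whether x/3 + 1 >= k holds for every k tried.
choices = {'A': lambda k: k - 3, 'B': lambda k: k - 1, 'C': lambda k: k,
           'D': lambda k: 3*k - 4, 'E': lambda k: 3*k - 2}
for name, x in choices.items():
    ok = all(x(k)/3 + 1 >= k for k in range(1, 50))
    print(name, ok)    # only E prints True
```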
##### So...when do you plug in?
• When you see variables in the question and the answer, you might want to try plugging in.
• On percent questions, you'll probably benefit from plugging in (and using 100 as your starting value).
• On triangle questions where no angles are given, you might try plugging in 60 for all angles.
• If you're plugging in on a geometry question, just make sure that all the angles in your triangles and straight lines add up to 180°.
• Anytime you don't know something that you think it would be helpful to know, try making it up!
##### Anything else I should know?
1. As a general rule, DO NOT plug in 0 because when you multiply things by 0, you always get 0, and when you add 0 to anything, it stays the same. Usually, that will make too many answer choices work.
2. Similarly, DO NOT plug in 1, since when you multiply things by 1, they don't change. This will also often make more than one choice seem correct.
3. DO try to keep your numbers small. There's no need to plug in 2545 when 2 will do.
4. DO think for a minute before picking your numbers. Will the numbers you're choosing result in messy fractions or negative numbers? We plug in to make our lives easier, so practice avoiding these scenarios!
5. You always have to check every single choice when there are variables in the answers and you plugged in, because there's a small chance that more than one answer will work. If that happens, don't panic...just try new numbers. You can greatly mitigate this by following rules #1 and 2 above.
##### Let's try some more problems!
Note: all of these problems can be solved without plugging in, but you're not here to do that right now, you're here to practice plugging in. Don't be intractable in your methods. If you're amenable to change, it's more feasible that you'll improve your scores.
10. If r + 9 is 4 more than s, then r - 11 is how much less than s?
(A) 9
(B) 11
(C) 16
(D) 20
(E) 24
12. If Brunhilda loses 40% of her money playing Pai Gow poker before doubling her remaining money playing roulette, the amount of money she has now is what percent of the amount of money she started with? (Stay away from casinos, kids. They are awful places.)
(A) 20%
(B) 40%
(C) 80%
(D) 100%
(E) 120%
14. If x³ = y, x⁶ is how much greater than x³, in terms of y?
(A) y³
(B) y²
(C) y(y - 1)
(D) 2y - y
(E) y - 1
16. What is the sum of the marked angles in the diagram above?
(A) 1080°
(B) 900°
(C) 720°
(D) 540°
(E) 360°
20. In a certain office, there are c chairs, d desks, and e employees. All but two of the employees have two chairs at their desks, and all the other desks, whether they are occupied or not, have one chair. If e > 2 and all but five desks are occupied by employees, then which of the following expressions is equal to c?
(A) 2(d - 5) + e
(B) d + e
(C) 2(d - e)
(D) 2(d - 2)
(E) 2e + 3
Want to see more plugging in? Browse the "plug in" tag on my Tumblr for some recently posted examples!
Answers:
10. (C)
12. (E)
14. (C)
16. (B)
20. (E)
Labels: Math, Plug In, Sample Question, strategy
#### 17 comments:
1. Echoyjeff222
are these actually questions from the SAT?
number 20 ... shoot me. :P
2. These questions are based on things I've seen on the SAT in the past. For #20, just pick one variable to get started, plug in, and carefully see where it leads you. Try plugging in for e, since that has to be greater than 2. See what happens when you make it 3. :)
3. can u please put the answer and the explanation for number 20?
4. This one is tough, no? Obviously, because of the name of this post, I think the best solution is to plug in.
Say e = 5, so there are 5 employees. That means there are 10 desks, since all but 5 desks are occupied -- d = 10. And, since all but 2 employees have 2 chairs at their desks, that means 3 of our employees have 2 chairs at their desks. In other words, there is a chair at each of 10 desks, plus 3 more for the 3 employees who have 2 chairs. So c = 13. See which answer choice gives you 13 when d = 10 and e = 5 and you're good to go!
(E) is the only one that works.
5. for question 15 you said you plugged in 11 for m and 13 for n because those divided by 8 leave remainders of 3 and 5, but isn't it that if you SUBTRACT THEM by 8? I'm a little confused
6. Thanks for the comment!
Yes, 11 - 8 = 3 and 13 - 8 = 5. That's true. But what that means is that 11 is 3 more than 8 times 1. So if you divide 11 by 8, you get 1 remainder 3. Same deal for 13. Divide it by 8, and you get 1 remainder 5. You can pick larger numbers if you want, but that's a shortcut I like to use.
7. May
wouldnt A work as well? i plugged in 3 for e and 8 for d. i got 9 on both A and E.
i tried plugging in 5 for e and 10 for d and STILL got 13 on both letter A and E.
8. Hmm...If you're using 5 for e and 10 for d, doesn't (A) give you 2(10 - 5) + 5? That's 15.
9. alfa
can you explain number 9 on the diagnostic drill 1?
10. Sure! This question is WAY easier if you plug in values for the sides of the original rectangle, and even more so if you're clever about which numbers you plug in. Since a square is a special case of a rectangle, I recommend plugging in 10 for the length, and 10 for the width. The area of the original rectangle, then, is 100.
Your new rectangle, with adjusted sides, will then be 9 by 11, with an area of 99. That's 99% of the area of the original rectangle.
11. alfa
Ohh that makes so much sense!!! *why didn't i think of that?*
Thanks!
12. alfa
How would you solve this question?
1. If sally drives m miles from her house to her office in f hours, and drives back to her house in g hours, what is her average speed of the entire trip, in miles per hour?
a) (f+g)/2
b)m/(f+g)
c)2m/9(f+g)
d)fg/(f+g)
e)2fg/ (f+g)
13. I'd definitely plug in here, but before I get into it let me point out for everyone else that you made a typo in answer choice (C). That 9 shouldn't be there...it probably came from a quick double tap when you were opening the parentheses. ::Spoiler alert:: This is pretty important, because that turns out to be the right answer.
Let's say m = 60, f = 1, and g = 2. That means that her total travel time was 3 hours, and her total distance traveled is 120 miles. That'd make her average speed 120m/3hr = 40mph.
Now just plug the numbers you chose into the answer choices to see which one gives you 40!
(A) 3/2 ← nope
(B) 60/3 = 20 ← nope
(C) 120/3 = 40 ← yes!
(D) 2/3 ← nope
(E) 4/3 ← nope
14. Anxious
if x + y = r and x - y = s, then in terms of x and y, r^2 - s^2 =
this is #16 on Math Drill #1! Please explain how I would do this problem. Thank you!
15. Plug in some simple numbers. Say x = 3 and y = 2. That way r = 5 and s = 1, and r^2 – s^2 = 24.
Then all you need to do is plug in 3 for x and 2 for y in the answer choices. One of them will give you 24!
16. kelseyb
can you explain how to do number 16 here? thanks!
Sure. When the SAT gives you a completely unmarked triangle like this, assume that they're getting at something that must be true about ALL triangles, and then plug in angle values for an easy triangle. In this case, it's totally fine to just assume the triangle is equilateral even though it isn't, and say all the interior angles are 60º. That makes the supplementary angles all 120º, so each marked angle is 120º + 60º + 120º = 300º. Add 'em up, and you get 900º.
Now here's the fun part: Try it with different angles. As long as the angles in your triangle add up to 180º, you'll get 900º for the answer. Why is that??? :)
http://sagemath.org/doc/thematic_tutorials/numtheory_rsa.html | # Number Theory and the RSA Public Key Cryptosystem¶
Author: Minh Van Nguyen <[email protected]>
This tutorial uses Sage to study elementary number theory and the RSA public key cryptosystem. A number of Sage commands will be presented that help us to perform basic number theoretic operations such as greatest common divisor and Euler’s phi function. We then present the RSA cryptosystem and use Sage’s built-in commands to encrypt and decrypt data via the RSA algorithm. Note that this tutorial on RSA is for pedagogy purposes only. For further details on cryptography or the security of various cryptosystems, consult specialized texts such as [MenezesEtAl1996], [Stinson2006], and [TrappeWashington2006].
## Elementary number theory¶
We first review basic concepts from elementary number theory, including the notion of primes, greatest common divisors, congruences and Euler’s phi function. The number theoretic concepts and Sage commands introduced will be referred to in later sections when we present the RSA algorithm.
### Prime numbers¶
Public key cryptography uses many fundamental concepts from number theory, such as prime numbers and greatest common divisors. A positive integer $$n > 1$$ is said to be prime if its factors are exclusively 1 and itself. In Sage, we can obtain the first 20 prime numbers using the command primes_first_n:
```sage: primes_first_n(20)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]
```
### Greatest common divisors¶
Let $$a$$ and $$b$$ be integers, not both zero. Then the greatest common divisor (GCD) of $$a$$ and $$b$$ is the largest positive integer which is a factor of both $$a$$ and $$b$$. We use $$\gcd(a,b)$$ to denote this largest positive factor. One can extend this definition by setting $$\gcd(0,0) = 0$$. Sage uses gcd(a, b) to denote the GCD of $$a$$ and $$b$$. The GCD of any two distinct primes is 1, and the GCD of 18 and 27 is 9.
```sage: gcd(3, 59)
1
sage: gcd(18, 27)
9
```
If $$\gcd(a,b) = 1$$, we say that $$a$$ is coprime (or relatively prime) to $$b$$. In particular, $$\gcd(3, 59) = 1$$ so 3 is coprime to 59 and vice versa.
### Congruences¶
When one integer is divided by a non-zero integer, we usually get a remainder. For example, upon dividing 23 by 5, we get a remainder of 3; when 8 is divided by 5, the remainder is again 3. The notion of congruence helps us to describe the situation in which two integers have the same remainder upon division by a non-zero integer. Let $$a,b,n \in \ZZ$$ such that $$n \neq 0$$. If $$a$$ and $$b$$ have the same remainder upon division by $$n$$, then we say that $$a$$ is congruent to $$b$$ modulo $$n$$ and denote this relationship by
$a \equiv b \pmod{n}$
This definition is equivalent to saying that $$n$$ divides the difference of $$a$$ and $$b$$, i.e. $$n \;|\; (a - b)$$. Thus $$23 \equiv 8 \pmod{5}$$ because when both 23 and 8 are divided by 5, we end up with a remainder of 3. The command mod allows us to compute such a remainder:
```sage: mod(23, 5)
3
sage: mod(8, 5)
3
```
### Euler’s phi function¶
Consider all the integers from 1 to 20, inclusive. List all those integers that are coprime to 20. In other words, we want to find those integers $$n$$, where $$1 \leq n \leq 20$$, such that $$\gcd(n,20) = 1$$. The latter task can be easily accomplished with a little bit of Sage programming:
```sage: for n in range(1, 21):
... if gcd(n, 20) == 1:
... print n,
...
1 3 7 9 11 13 17 19
```
The above programming statements can be saved to a text file called, say, /home/mvngu/totient.sage, organizing it as follows to enhance readability.
```for n in xrange(1, 21):
if gcd(n, 20) == 1:
print n,
```
We refer to totient.sage as a Sage script, just as one would refer to a file containing Python code as a Python script. We use 4 space indentations, which is a coding convention in Sage as well as Python programming, instead of tabs.
The command load can be used to read the file containing our programming statements into Sage and, upon loading the content of the file, have Sage execute those statements:
```load("/home/mvngu/totient.sage")
1 3 7 9 11 13 17 19```
From the latter list, there are 8 integers in the closed interval $$[1, 20]$$ that are coprime to 20. Without explicitly generating the list
`1 3 7 9 11 13 17 19`
how can we compute the number of integers in $$[1, 20]$$ that are coprime to 20? This is where Euler’s phi function comes in handy. Let $$n \in \ZZ$$ be positive. Then Euler’s phi function counts the number of integers $$a$$, with $$1 \leq a \leq n$$, such that $$\gcd(a,n) = 1$$. This number is denoted by $$\varphi(n)$$. Euler’s phi function is sometimes referred to as Euler’s totient function, hence the name totient.sage for the above Sage script. The command euler_phi implements Euler’s phi function. To compute $$\varphi(20)$$ without explicitly generating the above list, we proceed as follows:
```sage: euler_phi(20)
8
```
## How to keep a secret?¶
Cryptography is the science (some might say art) of concealing data. Imagine that we are composing a confidential email to someone. Having written the email, we can send it in one of two ways. The first, and usually convenient, way is to simply press the send button and not care about how our email will be delivered. Sending an email in this manner is similar to writing our confidential message on a postcard and posting it without enclosing our postcard inside an envelope. Anyone who can access our postcard can see our message. On the other hand, before sending our email, we can scramble the confidential message and then press the send button. Scrambling our message is similar to enclosing our postcard inside an envelope. While not 100% secure, at least we know that anyone wanting to read our postcard has to open the envelope.
In cryptography parlance, our message is called plaintext. The process of scrambling our message is referred to as encryption. After encrypting our message, the scrambled version is called ciphertext. From the ciphertext, we can recover our original unscrambled message via decryption. The following figure illustrates the processes of encryption and decryption. A cryptosystem is comprised of a pair of related encryption and decryption processes.
```+ ---------+ encrypt +------------+ decrypt +-----------+
| plaintext| -----------> | ciphertext | -----------> | plaintext |
+----------+ +------------+ +-----------+```
The following table provides a very simple method of scrambling a message written in English and using only upper case letters, excluding punctuation characters.
```+----------------------------------------------------+
| A B C D E F G H I J K L M |
| 65 66 67 68 69 70 71 72 73 74 75 76 77 |
+----------------------------------------------------+
| N O P Q R S T U V W X Y Z |
| 78 79 80 81 82 83 84 85 86 87 88 89 90 |
+----------------------------------------------------+```
Formally, let
$\Sigma = \{ \mathtt{A}, \mathtt{B}, \mathtt{C}, \dots, \mathtt{Z} \}$
be the set of capital letters of the English alphabet. Furthermore, let
$\Phi = \{ 65, 66, 67, \dots, 90 \}$
be the American Standard Code for Information Interchange (ASCII) encodings of the upper case English letters. Then the above table explicitly describes the mapping $$f: \Sigma \longrightarrow \Phi$$. (For those familiar with ASCII, $$f$$ is actually a common process for encoding elements of $$\Sigma$$, rather than a cryptographic “scrambling” process per se.) To scramble a message written using the alphabet $$\Sigma$$, we simply replace each capital letter of the message with its corresponding ASCII encoding. However, the scrambling process described in the above table provides, cryptographically speaking, very little to no security at all and we strongly discourage its use in practice.
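For concreteness, the encoding map $$f$$ and its inverse can be computed with the built-in Python functions ord and chr (the same ord we will use again below when preparing a message for RSA):

```sage: ord("A")
65
sage: chr(90)
'Z'
sage: map(ord, "HI")
[72, 73]
```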
## Keeping a secret with two keys¶
The Rivest, Shamir, Adleman (RSA) cryptosystem is an example of a public key cryptosystem. RSA uses a public key to encrypt messages and decryption is performed using a corresponding private key. We can distribute our public keys, but for security reasons we should keep our private keys to ourselves. The encryption and decryption processes draw upon techniques from elementary number theory. The algorithm below is adapted from page 165 of [TrappeWashington2006]. It outlines the RSA procedure for encryption and decryption.
1. Choose two primes $$p$$ and $$q$$ and let $$n = pq$$.
2. Let $$e \in \ZZ$$ be positive such that $$\gcd \big( e, \varphi(n) \big) = 1$$.
3. Compute a value for $$d \in \ZZ$$ such that $$de \equiv 1 \pmod{\varphi(n)}$$.
4. Our public key is the pair $$(n, e)$$ and our private key is the triple $$(p,q,d)$$.
5. For any non-zero integer $$m < n$$, encrypt $$m$$ using $$c \equiv m^e \pmod{n}$$.
6. Decrypt $$c$$ using $$m \equiv c^d \pmod{n}$$.
The next two sections will step through the RSA algorithm, using Sage to generate public and private keys, and perform encryption and decryption based on those keys.
## Generating public and private keys¶
Positive integers of the form $$M_m = 2^m - 1$$ are called Mersenne numbers. If $$p$$ is prime and $$M_p = 2^p - 1$$ is also prime, then $$M_p$$ is called a Mersenne prime. For example, 31 is prime and $$M_{31} = 2^{31} - 1$$ is a Mersenne prime, as can be verified using the command is_prime(p). This command returns True if its argument p is precisely a prime number; otherwise it returns False. By definition, a prime must be a positive integer, hence is_prime(-2) returns False although we know that 2 is prime. Indeed, the number $$M_{61} = 2^{61} - 1$$ is also a Mersenne prime. We can use $$M_{31}$$ and $$M_{61}$$ to work through step 1 in the RSA algorithm:
```sage: p = (2^31) - 1
sage: is_prime(p)
True
sage: q = (2^61) - 1
sage: is_prime(q)
True
sage: n = p * q ; n
4951760154835678088235319297
```
A word of warning is in order here. In the above code example, the choice of $$p$$ and $$q$$ as Mersenne primes, and with so many digits far apart from each other, is a very bad choice in terms of cryptographic security. However, we shall use the above chosen numeric values for $$p$$ and $$q$$ for the remainder of this tutorial, always bearing in mind that they have been chosen for pedagogy purposes only. Refer to [MenezesEtAl1996], [Stinson2006], and [TrappeWashington2006] for in-depth discussions on the security of RSA, or consult other specialized texts.
For step 2, we need to find a positive integer that is coprime to $$\varphi(n)$$. The set of integers is implemented within the Sage module sage.rings.integer_ring. Various operations on integers can be accessed via the ZZ.* family of functions. For instance, the command ZZ.random_element(n) returns a pseudo-random integer uniformly distributed within the closed interval $$[0, n-1]$$.
We can compute the value $$\varphi(n)$$ by calling the sage function euler_phi(n), but for arbitrarily large prime numbers $$p$$ and $$q$$, this can take an enormous amount of time. Indeed, the private key can be quickly deduced from the public key once you know $$\varphi(n)$$, so it is an important part of the security of the RSA cryptosystem that $$\varphi(n)$$ cannot be computed in a short time, if only $$n$$ is known. On the other hand, if the private key is available, we can compute $$\varphi(n)=(p-1)(q-1)$$ in a very short time.
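As a quick sanity check on numbers small enough for euler_phi to handle instantly, the two computations agree:

```sage: p = 5; q = 11; n = p * q
sage: euler_phi(n) == (p - 1)*(q - 1)
True
```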
Using a simple programming loop, we can compute the required value of $$e$$ as follows:
```sage: p = (2^31) - 1
sage: q = (2^61) - 1
sage: n = p * q
sage: phi = (p - 1)*(q - 1); phi
4951760152529835076874141700
sage: e = ZZ.random_element(phi)
sage: while gcd(e, phi) != 1:
... e = ZZ.random_element(phi)
...
sage: e # random
1850567623300615966303954877
sage: e < n
True
```
As e is a pseudo-random integer, its numeric value changes after each execution of e = ZZ.random_element(phi).
To calculate a value for d in step 3 of the RSA algorithm, we use the extended Euclidean algorithm. By definition of congruence, $$de \equiv 1 \pmod{\varphi(n)}$$ is equivalent to
$de - k \cdot \varphi(n) = 1$
where $$k \in \ZZ$$. From steps 1 and 2, we already know the numeric values of $$e$$ and $$\varphi(n)$$. The extended Euclidean algorithm allows us to compute $$d$$ and $$-k$$. In Sage, this can be accomplished via the command xgcd. Given two integers $$x$$ and $$y$$, xgcd(x, y) returns a 3-tuple (g, s, t) that satisfies the Bézout identity $$g = \gcd(x,y) = sx + ty$$. Having computed a value for d, we then use the command mod(d*e, phi) to check that d*e is indeed congruent to 1 modulo phi.
```sage: n = 4951760154835678088235319297
sage: e = 1850567623300615966303954877
sage: phi = 4951760152529835076874141700
sage: bezout = xgcd(e, phi); bezout # random
(1, 4460824882019967172592779313, -1667095708515377925087033035)
sage: d = Integer(mod(bezout[1], phi)) ; d # random
4460824882019967172592779313
sage: mod(d * e, phi)
1
```
Thus, our RSA public key is
$(n, e) = (4951760154835678088235319297,\, 1850567623300615966303954877)$
and our corresponding private key is
$(p, q, d) = (2147483647,\, 2305843009213693951,\, 4460824882019967172592779313)$
## Encryption and decryption¶
Suppose we want to scramble the message HELLOWORLD using RSA encryption. From the above ASCII table, our message maps to integers of the ASCII encodings as given below.
```+----------------------------------------+
| H E L L O W O R L D |
| 72 69 76 76 79 87 79 82 76 68 |
+----------------------------------------+```
Concatenating all the integers in the last table, our message can be represented by the integer
$m = 72697676798779827668$
There are other more cryptographically secure means for representing our message as an integer. The above process is used for demonstration purposes only and we strongly discourage its use in practice. In Sage, we can obtain an integer representation of our message as follows:
```sage: m = "HELLOWORLD"
sage: m = map(ord, m); m
[72, 69, 76, 76, 79, 87, 79, 82, 76, 68]
sage: m = ZZ(list(reversed(m)), 100) ; m
72697676798779827668
```
To encrypt our message, we raise $$m$$ to the power of $$e$$ and reduce the result modulo $$n$$. The command mod(a^b, n) first computes a^b and then reduces the result modulo n. If the exponent b is a “large” integer, say with more than 20 digits, then performing modular exponentiation in this naive manner takes quite some time. Brute force (or naive) modular exponentiation is inefficient and, when performed using a computer, can quickly consume a huge quantity of the computer’s memory or result in overflow messages. For instance, if we perform naive modular exponentiation using the command mod(m^e, n), where m, n and e are as given above, we would get an error message similar to the following:
```mod(m^e, n)
Traceback (most recent call last)
/home/mvngu/<ipython console> in <module>()
/home/mvngu/usr/bin/sage-3.1.4/local/lib/python2.5/site-packages/sage/rings/integer.so
in sage.rings.integer.Integer.__pow__ (sage/rings/integer.c:9650)()
RuntimeError: exponent must be at most 2147483647```
There is a trick to efficiently perform modular exponentiation, called the method of repeated squaring, cf. page 879 of [CormenEtAl2001]. Suppose we want to compute $$a^b \mod n$$. First, let $$d \mathrel{\mathop:}= 1$$ and obtain the binary representation of $$b$$, say $$(b_1, b_2, \dots, b_k)$$ where each $$b_i \in \ZZ/2\ZZ$$. For $$i \mathrel{\mathop:}= 1, \dots, k$$, let $$d \mathrel{\mathop:}= d^2 \mod n$$ and if $$b_i = 1$$ then let $$d \mathrel{\mathop:}= da \mod n$$. This algorithm is implemented in the function power_mod. We now use the function power_mod to encrypt our message:
```sage: m = 72697676798779827668
sage: e = 1850567623300615966303954877
sage: n = 4951760154835678088235319297
sage: c = power_mod(m, e, n); c
630913632577520058415521090
```
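For the curious, here is a minimal Python sketch of the repeated squaring idea described above. The name power_mod_sketch is made up for this illustration; in practice one should simply call Sage's optimized built-in power_mod:

```def power_mod_sketch(a, b, n):
    # Scan the binary digits of b from the most significant bit down:
    # square at every step, and multiply by a whenever the current bit is 1.
    d = 1
    for bit in bin(b)[2:]:
        d = (d * d) % n
        if bit == '1':
            d = (d * a) % n
    return d
```

On the numbers above, power_mod_sketch(m, e, n) returns the same ciphertext as power_mod(m, e, n).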
Thus $$c = 630913632577520058415521090$$ is the ciphertext. To recover our plaintext, we raise c to the power of d and reduce the result modulo n. Again, we use modular exponentiation via repeated squaring in the decryption process:
```sage: m = 72697676798779827668
sage: c = 630913632577520058415521090
sage: d = 4460824882019967172592779313
sage: n = 4951760154835678088235319297
sage: power_mod(c, d, n)
72697676798779827668
sage: power_mod(c, d, n) == m
True
```
Notice in the last output that the value 72697676798779827668 is the same as the integer that represents our original message. Hence we have recovered our plaintext.
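To tie the steps together, here is a small sketch that packages key generation, encryption and decryption into reusable functions. The names rsa_keys, rsa_encrypt and rsa_decrypt are invented for this illustration only, and the earlier caveat stands: this is for pedagogy, not for real-world security.

```def rsa_keys(p, q):
    # Steps 1-4: from the primes p and q, produce the public key (n, e)
    # and the corresponding private key (p, q, d).
    n = p * q
    phi = (p - 1) * (q - 1)
    e = ZZ.random_element(phi)
    while gcd(e, phi) != 1:
        e = ZZ.random_element(phi)
    d = Integer(mod(xgcd(e, phi)[1], phi))
    return (n, e), (p, q, d)

def rsa_encrypt(m, n, e):
    # Step 5: encrypt the integer m < n using the public key (n, e).
    return power_mod(m, e, n)

def rsa_decrypt(c, n, d):
    # Step 6: recover the plaintext integer from the ciphertext c.
    return power_mod(c, d, n)
```

With these in hand, the whole round trip on our message integer looks like this:

```sage: public, private = rsa_keys((2^31) - 1, (2^61) - 1)
sage: n, e = public
sage: d = private[2]
sage: m = 72697676798779827668
sage: rsa_decrypt(rsa_encrypt(m, n, e), n, d) == m
True
```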
## Acknowledgements¶
1. 2009-07-25: Ron Evans (Department of Mathematics, UCSD) reported a typo in the definition of greatest common divisors. The revised definition incorporates his suggestions.
2. 2008-11-04: Martin Albrecht (Information Security Group, Royal Holloway, University of London), John Cremona (Mathematics Institute, University of Warwick) and William Stein (Department of Mathematics, University of Washington) reviewed this tutorial. Many of their invaluable suggestions have been incorporated into this document.
## Bibliography¶
[CormenEtAl2001] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. The MIT Press, USA, 2nd edition, 2001.
[MenezesEtAl1996] (1, 2) A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone. Handbook of Applied Cryptography. CRC Press, Boca Raton, FL, USA, 1996.
[Stinson2006] (1, 2) D. R. Stinson. Cryptography: Theory and Practice. Chapman & Hall/CRC, Boca Raton, USA, 3rd edition, 2006.
[TrappeWashington2006] (1, 2, 3) W. Trappe and L. C. Washington. Introduction to Cryptography with Coding Theory. Pearson Prentice Hall, Upper Saddle River, New Jersey, USA, 2nd edition, 2006.
http://physics.stackexchange.com/questions/tagged/torque?sort=unanswered&pagesize=50 | # Tagged Questions
The torque tag has no wiki summary.
2 answers
91 views
### Thrust center in space
I have this dilemma: Suppose you have a space ship somewhere in deep space, where there is no drag force or substantial gravity. If the ship has a single engine situated in such a way that the center ...
1 answer
106 views
### Force and Torque Question on an isolated system
If there's a rigid rod in space, and you give some external force perpendicular to the rod at one of the ends for a short time, what happens? Specifically: What dependence does the moment of inertia ...
0 answers
146 views
### Why do control moment gyroscopes exhibit “torque amplification”?
There are a number of articles that describe the benefits of using control moment gyroscopes (CMGs) over reaction wheels in inertial navigation applications. One of the primary benefits of using a CMG ...
0 answers
67 views
### Torque, lever and mass
The Force used in a catapult is exerted near its axis. If we double the length of the arm of the catapult, but still use the same Force at the same point as before near the same axis, does the ...
0 answers
104 views
### Deriving torque from Euler-Lagrange equation
How could you derive an equation for the torque on a rotating (but not translating) rigid body from the Euler-Lagrange equation? As far as I know from my first class in Classical Mechanics, there is ...
0 answers
37 views
### Truck driven from a small motor that can carry a heavy load, yet can travel fast.
I am building a truck from trash as materials. I have one small motor and a a few small gears, but no other engineered materials are allowed. The truck must carry a load for a distance of 3m. The ...
0 answers
83 views
### How much force is required to hold an umbrella?
Assuming a small umbrella (e.g. http://www.amazon.com/ShedRain-WindPro-Umbrella-Close-Size/dp/B001DL5WN0). What are the factors that need to be considered? I assume where I hold the umbrella might ...
0 answers
242 views
### Neglecting friction on a pulley?
So, this is how the problem looks: http://www.aplusphysics.com/courses/honors/dynamics/images/Atwood%20Problem.png Plus, the pulley is suspended on a cord at its center and hanging from the ceiling. ...
0 answers
296 views
### How to find Rotational and Translational Equilibrium of Hanging Masses on a Bar?
I am making a hanging mobile which needs to be done mathematically by calculating torque. The problem is, I can't seem to figure out how to solve for the distance of the two masses from the pivot ...
0 answers
125 views
### How do you determine the torque caused by the mass of a lever?
Suppose we have two objects sitting on two side of a lever, and the lever also has a mass, and those objects have masses. Then how we can balance $\sum τ$? This is what I have done: ...
0 answers
129 views
### Torque required to spin a disk along its diameter
How would I calculate (or simulate) this? I am only interested in the aerodynamic drag caused by the surface moving, not any other forces. As far as I know, the only variables needed are the drag ...
0 answers
25 views
### Determine the dilation temperature so as to double the speed
There's a metallic rod of length $l_1$ which is spinning around a vertical ax which passes through its center. The ends of the rod are spinning with $\omega_1$ angular speed. Determine the temperature ...
0 answers
213 views
### How to decide velocity profile for two stepper motor in robot driving
A robot has 2 parallel driving wheel. I don't know the friction of the ground surface. However, I can set the acceleration, starting velocity, ending velocity. The velocity profile has to be ...
http://www.physicsforums.com/showthread.php?p=4027917 | Physics Forums
## Rolling friction on ice...
Hello Forum,
Consider a rigid disk that is rotating on a surface. If the surface is elastic but not symmetric, the rotating disk will eventually slow down.
If the surface was perfectly and symmetrically elastic the object will continue to rotate (the front deformation of the surface would hinder the motion and the rear deformation of the surface would help the rotation).
For an object to roll, does the surface need to have a nonzero coefficient of static friction?
I would think so since the contact point is instantaneously at rest....
But what would happen if the rotating disk was rotating on a rigid surface with nonzero coeff of static friction and moved into a region whose surface has zero coeff. of static friction?
I was told that the object would continue to move and rotate at the same rate suffering no torque that would increase or decrease its angular momentum.
But the fact the the new surface has zero coeff. of friction leads me to believe that the rotating disk has no grip.
Like a person that goes from the concrete to ice: it will not continue its motion. Why would the rotating disk continue to rotate and move forward instead?
thanks
fisico30
Quote by fisico30 If the surface is elastic but not symmetric, the rotating disk will eventually slow down.
Are you sure you mean "elastic"? And symmetry of what? And how should this brake the disk?
For an object to roll, does the surface need to have a nonzero coefficient of static friction?
No, but with zero static friction the angular velocity and the linear velocity have to be fine-tuned ($v=\omega r$) to avoid slipping.
But what would happen if the rotating disk was rotating on a rigid surface with nonzero coeff of static friction and moved into a region whose surface has zero coeff. of static friction?
Rotating as in "rolling"? It would continue to roll.
Like a person that goes from the concrete to ice: it will not continue its motion.
It will. Jump from "not ice" on ice to test, as humans have no wheels or similar devices to roll.
hello fisico30!
Quote by fisico30 But the fact the the new surface has zero coeff. of friction leads me to believe that the rotating disk has no grip. Like a person that goes from the concrete to ice: it will not continue its motion. Why would the rotating disk continue to rotate and move forward instead?
because of good ol' newton's first law (linear and rotational versions) …
any body on which there is no external force will continue to move with constant velocity
any body on which there is no external torque will continue to rotate with constant angular velocity
Quote by tiny-tim any body on which there is no external torque will continue to rotate with constant angular velocity
... unless it changes its shape to change its moment of inertia. However, it will keep its angular momentum.
Thanks everyone. I am convinced that the wheel will continue to rotate at the same rate even when it passes on ice.... How about this: if the wheel rotates on a perfectly rigid surface (so no rolling friction, no air drag, etc....) some say that the wheel will never stop. Others say that the force of static friction, which arises from the wheel pushing backward on the surface, slowly reduces the angular velocity and translational velocity of the wheel by applying a torque... Is that true? thanks fisico30
If there is no rolling friction, no air drag, etc then why would you expect the wheel to be "pushing back on the surface".
Quote by fisico30 I am convinced that the wheel will continue to rotate at the same rate even when it passes on ice....
yes, if it's not being driven or braked, and if there's no friction, then it will rotate at the same rate
if the wheel rotates on a perfectly rigid surface (so no rolling friction, no air drag, etc....) some say that the wheel will never stop. Others say that the force of static friction, which arises from the wheel pushing backward on the surface, slowly reduces the angular velocity and translational velocity of the wheel by applying a torque... Is that true?
if the (linear) speed v, the angular speed ω, and the radius are related by the rolling constraint v = rω, and if there is no applied force or torque,* then there will be no friction force whatever the coefficient of friction is
(*in practice, there is always a small amount of air resistance, friction with the axle etc, which will prevent the acceleration being zero unless a small torque is supplied from the engine)
Thanks tiny-tim.... What do you think about this: if an object is rolling on a surface with a finite static coefficient of friction, zero rolling friction and not sliding, will the cylinder eventually slow down (decrease in translational velocity and angular velocity) or will it continue with its constant speed? Some told me that even if the rolling friction is zero, the static friction at the contact point will cause a torque that will eventually slow down the rotation and speed of the rolling cylinder... thanks fisico30
hi fisico30!
Quote by fisico30 … if an object is rolling on a surface with a finite static coefficient of friction, zero rolling friction and not sliding, will the cylinder eventually slow down (decrease in translational velocity and angular velocity) or will it continue with its constant speed?
(assuming no air resistance, and a horizontal surface) constant speed …
if it's already rolling, then the static friction will be zero, and the only external force is vertical (gravity)
Some told me that even if the rolling friction is zero, the static friction at the contact point will cause a torque that will eventually slow down the rotation and speed of the rolling cylinder...
tell them there's no static friction
static friction is less than or equal to µN …
it adjusts itself to fit the starting conditions, and the starting conditions are perfectly happy without it!
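one way to see why it comes out to exactly zero on a level surface with nothing driving or braking the wheel: suppose the only horizontal force on the rolling cylinder were a friction force F at the contact point …
then $ma = F$ for the centre of mass, and taking torques about the centre, $I\alpha = -Fr$, while rolling without slipping requires $a = r\alpha$ …
put those together and you get $F = -\frac{I}{r^2}a$ and $F = ma$ at the same time, so $\left(m + \frac{I}{r^2}\right)a = 0$, which forces $a = 0$ and therefore $F = 0$ …
(a nonzero F would have to speed up the translation while slowing down the rotation, or vice versa, and the rolling constraint doesn't allow that)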
Maybe I am starting to get it: When a person walks (I know it is not rolling), the static friction is important because the foot pushes backward and the effect is forward motion, as long as the backward directed push is less than the max static frictional force... In the case of a cylinder rolling on a surface having nonzero coefficient of static friction, I would think that a static frictional force must exist at the contact point between the cylinder and the surface, since that point is at rest (instantaneously).... So static friction must exist for rolling to take place, doesn't it? The cylinder, in a sense, is trying to push the surface backward, but the static friction makes the cylinder roll forward instead...
thanks
fisico30
Quote by fisico30 In the case of a cylinder rolling on a surface having nonzero coefficient of static friction, I would think that a static frictional force must exist at the contact point between the cylinder and the surface, since that point is at rest (instantaneously).... So static friction must exist for rolling to take place, doesn't it?
no no no …
rolling will happen so long as the (linear) speed v and the angular speed ω are related by v = ωr
if they start like that, and if there are no external horizontal forces (or torques), then v and ω will stay the same (good ol' newton's first law) …
so the rolling automatically continues!
(of course, in practice there are losses to rolling resistance and air resistance, eg the net air resistance is almost exactly horizontal, and almost exactly through the centre of the cylinder, so it decreases v very slightly, but leaves ω the same … so there must be a very slight forward friction force to reduce ω, to compensate )
thanks tiny-tim. I see how v = ωr. It must be like that in the case of rolling. So the cylinder does not need to spend energy in the form of work against the force of friction at the contact point? Does that static friction not cause any torque?
fisico30
friction at the contact point must cause torque, since it's not through the centre of mass. If the rotation rate is not constant, then there must be net torque (so if all the other forces are through the centre of mass, then there must be friction at the contact point)
Ok , so static friction does cause torque. Does that torque not slow the rotation down? If rolling exists, like a cylinder on a surface (no rolling friction), that static friction torque must be there too...
Quote by fisico30 Ok , so static friction does cause torque. Does that torque not slow the rotation down?
if there is an applied horizontal force C through the centre, and a friction force F, and if the mass is m, and the "rolling mass" (I/r²) is m_r, and if there is no slipping, then:
C + F = (m + m_r)a
energy = 1/2 (m + m_r)v²
F = -(I/r)α = -m_r a
so the work done is ∫ (C + F).dx …
yes, both the applied force C and the friction force F do work (and F is in the opposite direction to C, so yes it always reduces the good work being done by the applied force )
tiny-tim, I guess I am implying that there is no force F. The cylinder is given push and set into motion. Will it continue to roll at that translational velocity or will it slow down (rolling friction is zero)?
Quote by fisico30 The cylinder is given push and set into motion.
when it has stopped sliding, and started rolling, it will continue to roll forever at the same linear speed and angular speed, and there will be no friction force, if we assume there is no rolling resistance or applied force
http://unapologetic.wordpress.com/2011/09/30/inner-products-of-vector-fields/?like=1&source=post_flair&_wpnonce=d91e0e6d9a | # The Unapologetic Mathematician
## Inner Products of Vector Fields
Now that we can define the inner product of two vectors using a metric $g$, we want to generalize this to apply to vector fields.
This should be pretty straightforward: if $v$ and $w$ are vector fields on an open region $U$ it gives us vectors $v_p$ and $w_p$ at each $p\in U$. We can hit these pairs with $g_p$ to get $g_p(v_p,w_p)$, which is a real number. Since we get such a number at each point $p$, this gives us a function $g(v,w):U\to\mathbb{R}$.
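It may be worth spelling out what this looks like in local coordinates: if $x$ is a coordinate map on some patch and we write $v=\sum_iv^i\frac{\partial}{\partial x^i}$ and $w=\sum_jw^j\frac{\partial}{\partial x^j}$, then on that patch

$\displaystyle g(v,w)=\sum_{i,j}g_{ij}v^iw^j$

where the components $g_{ij}=g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)$ are smooth functions. In particular $g(v,w)$ is not just any real-valued function on $U$; it's a smooth one.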
That this $g$ is a bilinear function is clear. In fact we've already implied this fact when saying that $g$ is a tensor field. But in what sense is it an inner product? It's symmetric, since each $g_p$ is, and positive definite as well. To be more explicit: $g_p(v_p,v_p)\geq0$ with equality if and only if $v_p$ is the zero vector in $\mathcal{T}_pM$. Thus the function $g(v,v)$ always takes on nonnegative values, is zero exactly where $v$ is, and is the zero function if and only if $v$ is the zero vector field.
What about nondegeneracy? This is a little trickier. Given a nonzero vector field, we can find some point $p$ where $v_p$ is nonzero, and we know that there is some $w_p$ such that $g_p(v_p,w_p)\neq0$. In fact, we can find some region $U$ around $p$ where $v$ is everywhere nonzero, and for each point $q\in U$ we can find a $w_q$ such that $g_q(v_q,w_q)\neq0$. The question is: can we do this in such a way that $w_q$ is a smooth vector field?
The trick is to pick some coordinate map $x$ on $U$, shrinking the region if necessary. Then there must be some $i$ such that
$\displaystyle g_p\left(v_p,\frac{\partial}{\partial x^i}\bigg\vert_p\right)\neq0$
because otherwise $g_p$ would be degenerate. Now we get a smooth function near $p$:
$\displaystyle g\left(v,\frac{\partial}{\partial x^i}\right)$
which is nonzero at $p$, and so must be nonzero in some neighborhood of $p$. Letting $w$ be this coordinate vector field gives us a vector field that when paired with $v$ using $g$ gives a smooth function that is not identically zero. Thus $g$ is nondegenerate, and is worthy of the title "inner product" on the module of vector fields $\mathfrak{X}(U)$ over the ring of smooth functions $\mathcal{O}(U)$.
Notice that we haven't used the fact that the $g_p$ are positive-definite except in the proof that $g$ is, which means that if $g$ is merely pseudo-Riemannian then $g$ is still symmetric and nondegenerate, so it's still sort of like an inner product, just as a symmetric, nondegenerate, but indefinite form is still sort of like an inner product.
Posted by John Armstrong | Differential Geometry, Geometry
http://terrytao.wordpress.com/tag/homogenisation/ | What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
## Localisation and compactness properties of the Navier-Stokes global regularity problem
4 August, 2011 in math.AP, math.MP, paper | Tags: concentration compactness, energy estimates, global well-posedness, homogenisation, localisation, Navier-Stokes equations | by Terence Tao | 9 comments
I’ve just uploaded to the arXiv my paper “Localisation and compactness properties of the Navier-Stokes global regularity problem“, submitted to Analysis and PDE. This paper concerns the global regularity problem for the Navier-Stokes system of equations
$\displaystyle \partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p + f \ \ \ \ \ (1)$
$\displaystyle \nabla \cdot u = 0 \ \ \ \ \ (2)$
$\displaystyle u(0,\cdot) = u_0 \ \ \ \ \ (3)$
in three dimensions. Thus, we specify initial data ${(u_0,f,T)}$, where ${0 < T < \infty}$ is a time, ${u_0: {\bf R}^3 \rightarrow {\bf R}^3}$ is the initial velocity field (which, in order to be compatible with (2), (3), is required to be divergence-free), ${f: [0,T] \times {\bf R}^3 \rightarrow {\bf R}^3}$ is the forcing term, and then seek to extend this initial data to a solution ${(u,p,u_0,f,T)}$ with this data, where the velocity field ${u: [0,T] \times {\bf R}^3 \rightarrow {\bf R}^3}$ and pressure term ${p: [0,T] \times {\bf R}^3 \rightarrow {\bf R}}$ are the unknown fields.
Roughly speaking, the global regularity problem asserts that given every smooth set of initial data ${(u_0,f,T)}$, there exists a smooth solution ${(u,p,u_0,f,T)}$ to the Navier-Stokes equation with this data. However, this is not a good formulation of the problem because it does not exclude the possibility that one or more of the fields ${u_0, f, u, p}$ grows too fast at spatial infinity. This problem is evident even for the much simpler heat equation
$\displaystyle \partial_t u = \Delta u$
$\displaystyle u(0,\cdot) = u_0.$
As long as one has some mild conditions at infinity on the smooth initial data ${u_0: {\bf R}^3 \rightarrow {\bf R}}$ (e.g. polynomial growth at spatial infinity), then one can solve this equation using the fundamental solution of the heat equation:
$\displaystyle u(t,x) = \frac{1}{(4\pi t)^{3/2}} \int_{{\bf R}^3} u_0(y) e^{-|x-y|^2/4t}\ dy.$
If furthermore ${u}$ is a tempered distribution, one can use Fourier-analytic methods to show that this is the unique solution to the heat equation with this data. But once one allows sufficiently rapid growth at spatial infinity, existence and uniqueness can break down. Consider for instance the backwards heat kernel
$\displaystyle u(t,x) = \frac{1}{(4\pi(T-t))^{3/2}} e^{|x|^2/4(T-t)}$
for some ${T>0}$, which is smooth (albeit exponentially growing) at time zero, and is a smooth solution to the heat equation for ${0 \leq t < T}$, but develops a dramatic singularity at time ${t=T}$. A famous example of Tychonoff from 1935, based on a power series construction, also shows that uniqueness for the heat equation can also fail once growth conditions are removed. An explicit example of non-uniqueness for the heat equation is given by the contour integral
$\displaystyle u(t,x_1,x_2,x_3) = \int_\gamma \exp(e^{\pi i/4} x_1 z + e^{5\pi i/8} z^{3/2} - itz^2)\ dz$
where ${\gamma}$ is the ${L}$-shaped contour consisting of the positive real axis and the upper imaginary axis, with ${z^{3/2}}$ being interpreted with the standard branch (with cut on the negative axis). One can show by contour integration that this function solves the heat equation and is smooth (but rapidly growing at infinity), and vanishes for ${t<0}$, but is not identically zero for ${t>0}$.
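Incidentally, the backwards heat kernel example can be verified by direct computation: writing ${s := T-t}$, one has ${\nabla u = \frac{x}{2s} u}$, and hence

$\displaystyle \Delta u = \left(\frac{|x|^2}{4s^2} + \frac{3}{2s}\right) u = \partial_t u$

for ${0 \leq t < T}$, so it is indeed a (rapidly growing) solution of the heat equation up to the blowup time.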
Thus, in order to obtain a meaningful (and physically realistic) problem, one needs to impose some decay (or at least limited growth) hypotheses on the data ${u_0,f}$ and solution ${u,p}$ in addition to smoothness. For the data, one can impose a variety of such hypotheses, including the following:
• (Finite energy data) One has ${\|u_0\|_{L^2_x({\bf R}^3)} < \infty}$ and ${\| f \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^3)} < \infty}$.
• (${H^1}$ data) One has ${\|u_0\|_{H^1_x({\bf R}^3)} < \infty}$ and ${\| f \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} < \infty}$.
• (Schwartz data) One has ${\sup_{x \in {\bf R}^3} ||x|^m \nabla_x^k u_0(x)| < \infty}$ and ${\sup_{(t,x) \in [0,T] \times {\bf R}^3} ||x|^m \nabla_x^k \partial_t^l f(t,x)| < \infty}$ for all ${m,k,l \geq 0}$.
• (Periodic data) There is some ${0 < L < \infty}$ such that ${u_0(x+Lk) = u_0(x)}$ and ${f(t,x+Lk) = f(t,x)}$ for all ${(t,x) \in [0,T] \times {\bf R}^3}$ and ${k \in {\bf Z}^3}$.
• (Homogeneous data) ${f=0}$.
Note that smoothness alone does not necessarily imply finite energy, ${H^1}$, or the Schwartz property. For instance, the (scalar) function ${u(x) = \exp( i |x|^{10} ) (1+|x|)^{-2}}$ is smooth and finite energy, but not in ${H^1}$ or Schwartz. Periodicity is of course incompatible with finite energy, ${H^1}$, or the Schwartz property, except in the trivial case when the data is identically zero.
Similarly, one can impose conditions at spatial infinity on the solution, such as the following:
• (Finite energy solution) One has ${\| u \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^3)} < \infty}$.
• (${H^1}$ solution) One has ${\| u \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} < \infty}$ and ${\| u \|_{L^2_t H^2_x([0,T] \times {\bf R}^3)} < \infty}$.
• (Partially periodic solution) There is some ${0 < L < \infty}$ such that ${u(t,x+Lk) = u(t,x)}$ for all ${(t,x) \in [0,T] \times {\bf R}^3}$ and ${k \in {\bf Z}^3}$.
• (Fully periodic solution) There is some ${0 < L < \infty}$ such that ${u(t,x+Lk) = u(t,x)}$ and ${p(t,x+Lk) = p(t,x)}$ for all ${(t,x) \in [0,T] \times {\bf R}^3}$ and ${k \in {\bf Z}^3}$.
(The ${L^2_t H^2_x}$ component of the ${H^1}$ solution is for technical reasons, and should not be paid too much attention for this discussion.) Note that we do not consider the notion of a Schwartz solution; as we shall see shortly, this is too restrictive a concept of solution to the Navier-Stokes equation.
Finally, one can downgrade the regularity of the solution down from smoothness. There are many ways to do so; two such examples include
• (${H^1}$ mild solutions) The solution is not smooth, but is ${H^1}$ (in the preceding sense) and solves the equation (1) in the sense that the Duhamel formula
$\displaystyle u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} (-(u\cdot\nabla) u-\nabla p+f)(t')\ dt'$
holds.
• (Leray-Hopf weak solution) The solution ${u}$ is not smooth, but lies in ${L^\infty_t L^2_x \cap L^2_t H^1_x}$, solves (1) in the sense of distributions (after rewriting the system in divergence form), and obeys an energy inequality.
Finally, one can ask for two types of global regularity results on the Navier-Stokes problem: a qualitative regularity result, in which one merely provides existence of a smooth solution without any explicit bounds on that solution, and a quantitative regularity result, which provides bounds on the solution in terms of the initial data, e.g. a bound of the form
$\displaystyle \| u \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} \leq F( \|u_0\|_{H^1_x({\bf R}^3)} + \|f\|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)}, T )$
for some function ${F: {\bf R}^+ \times {\bf R}^+ \rightarrow {\bf R}^+}$. One can make a further distinction between local quantitative results, in which ${F}$ is allowed to depend on ${T}$, and global quantitative results, in which there is no dependence on ${T}$ (the latter is only reasonable though in the homogeneous case, or if ${f}$ has some decay in time).
By combining these various hypotheses and conclusions, we see that one can write down quite a large number of slightly different variants of the global regularity problem. In the official formulation of the regularity problem for the Clay Millennium prize, a positive correct solution to either of the following two problems would be accepted for the prize:
• Conjecture 1.4 (Qualitative regularity for homogeneous periodic data) If ${(u_0,0,T)}$ is periodic, smooth, and homogeneous, then there exists a smooth partially periodic solution ${(u,p,u_0,0,T)}$ with this data.
• Conjecture 1.3 (Qualitative regularity for homogeneous Schwartz data) If ${(u_0,0,T)}$ is Schwartz and homogeneous, then there exists a smooth finite energy solution ${(u,p,u_0,0,T)}$ with this data.
(The numbering here corresponds to the numbering in the paper.)
Furthermore, a negative correct solution to either of the following two problems would also be accepted for the prize:
• Conjecture 1.6 (Qualitative regularity for periodic data) If ${(u_0,f,T)}$ is periodic and smooth, then there exists a smooth partially periodic solution ${(u,p,u_0,f,T)}$ with this data.
• Conjecture 1.5 (Qualitative regularity for Schwartz data) If ${(u_0,f,T)}$ is Schwartz, then there exists a smooth finite energy solution ${(u,p,u_0,f,T)}$ with this data.
I am not announcing any major progress on these conjectures here. What my paper does study, though, is the question of whether the answer to these conjectures is somehow sensitive to the choice of formulation. For instance:
1. Note in the periodic formulations of the Clay prize problem that the solution is only required to be partially periodic, rather than fully periodic; thus the pressure has no periodicity hypothesis. One can ask the extent to which the above problems change if one also requires pressure periodicity.
2. In another direction, one can ask the extent to which quantitative formulations of the Navier-Stokes problem are stronger than their qualitative counterparts; in particular, whether it is possible that each choice of initial data in a certain class leads to a smooth solution, but with no uniform bound on that solution in terms of various natural norms of the data.
3. Finally, one can ask the extent to which the conjecture depends on the category of data. For instance, could it be that global regularity is true for smooth periodic data but false for Schwartz data? True for Schwartz data but false for smooth ${H^1}$ data? And so forth.
One motivation for the final question (which was posed to me by my colleague, Andrea Bertozzi) is that the Schwartz property on the initial data ${u_0}$ tends to be instantly destroyed by the Navier-Stokes flow. This can be seen by introducing the vorticity ${\omega := \nabla \times u}$. If ${u(t)}$ is Schwartz, then from Stokes’ theorem we necessarily have vanishing of certain moments of the vorticity, for instance:
$\displaystyle \int_{{\bf R}^3} \omega_1 (x_2^2-x_3^2)\ dx = 0.$
On the other hand, some integration by parts using (1) reveals that such moments are usually not preserved by the flow; for instance, one has the law
$\displaystyle \partial_t \int_{{\bf R}^3} \omega_1(t,x) (x_2^2-x_3^2)\ dx = 4\int_{{\bf R}^3} u_2(t,x) u_3(t,x)\ dx,$
and one can easily concoct examples for which the right-hand side is non-zero at time zero. This suggests that the Schwartz class may be unnecessarily restrictive for Conjecture 1.3 or Conjecture 1.5.
My paper arose out of an attempt to address these three questions, and ended up obtaining partial results in all three directions. Roughly speaking, the results that address these three questions are as follows:
1. (Homogenisation) If one only assumes partial periodicity instead of full periodicity, then the forcing term ${f}$ becomes irrelevant. In particular, Conjecture 1.4 and Conjecture 1.6 are equivalent.
2. (Concentration compactness) In the ${H^1}$ category (both periodic and nonperiodic, homogeneous or nonhomogeneous), the qualitative and quantitative formulations of the Navier-Stokes global regularity problem are essentially equivalent.
3. (Localisation) The (inhomogeneous) Navier-Stokes problems in the Schwartz, smooth ${H^1}$, and finite energy categories are essentially equivalent to each other, and are also implied by the (fully) periodic version of these problems.
The first two of these families of results are relatively routine, drawing on existing methods in the literature; the localisation results though are somewhat more novel, and introduce some new local energy and local enstrophy estimates which may be of independent interest.
Broadly speaking, the moral to draw from these results is that the precise formulation of the Navier-Stokes equation global regularity problem is only of secondary importance; modulo a number of caveats and technicalities, the various formulations are close to being equivalent, and a breakthrough on any one of the formulations is likely to lead (either directly or indirectly) to a comparable breakthrough on any of the others.
This is only a caricature of the actual implications, though. Below is the diagram from the paper indicating the various formulations of the Navier-Stokes equations, and the known implications between them:
The above three streams of results are discussed in more detail below the fold.
http://mathhelpforum.com/advanced-algebra/15729-leslie-matrix.html

Thread:
1. (Leslie) Matrix
I'm having difficulty trying to prove this theorem for my REU research.
I won't go into the full details of the thing I'm trying to prove as it's quite complicated. However, I am trying to solve the inverse of the following n x n matrix:
Code:
```
 1    0    0   ...    0       0
-s_1  1    0   ...    0       0
 0   -s_2  1   ...    0       0
 .    .    .          .       .
 0    0    0   ...  -S_n   (1-S_n)
```
Yeah, so I am assuming the above won't come out correctly.
Any way, I want to find the inverse, and therefore the easiest thing to do would be to augment it with the identity n x n matrix.
What the above is supposed to be is (I - T), and thus we have 1's along the diagonal where the last term is (1 - S_n) and -s_1, -s_2, ..., -s_n along the sub-diagonal.
The reason for doing so is it helps me find R_0, the largest positive eigenvalue later.
As I see it, the general pattern is:
1's along the main diagonal, with s_1, s_1*s_2, s_1*s_2*s_3, ... along the sub-diagonal. The issue comes trying to determine the last elements in the matrix.
And then, perhaps the hardest part, would be trying to give a proof of why this is true. Using induction, I would assume, would be extremely tedious and messy.
2. Originally Posted by AfterShock
I'm having difficulty trying to prove this theorem for my REU research.
I won't go into the full details of the thing I'm trying to prove as it's quite complicated. However, I am trying to solve the inverse of the following n x n matrix:
Code:
```
 1    0    0   ...    0       0
-s_1  1    0   ...    0       0
 0   -s_2  1   ...    0       0
 .    .    .          .       .
 0    0    0   ...  -S_n   (1-S_n)
```
Yeah, so I am assuming the above won't come out correctly.
Any way, I want to find the inverse, and therefore the easiest thing to do would be to augment it with the identity n x n matrix.
What the above is supposed to be is (I - T), and thus we have 1's along the diagonal where the last term is (1 - S_n) and -s_1, -s_2, ..., -s_n along the sub-diagonal.
The reason for doing so is it helps me find R_0, the largest positive eigenvalue later.
As I see it, the general pattern is:
1's along the main diagonal, with s_1, s_1*s_2, s_1*s_2*s_3, ... along the sub-diagonal. The issue comes trying to determine the last elements in the matrix.
And then, perhaps the hardest part, would be trying to give a proof of why this is true. Using induction, I would assume, would be extremely tedious and messy.
Your description of the inverse doesn't match what I get through row reducing the augmented matrix:
$A = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -s_1 & 1 & 0 & 0 \\ 0 & -s_2 & 1 & 0 \\ 0 & 0 & -s_3 & 1- s_4 \end{bmatrix}$
$A^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ s_1 & 1 & 0 & 0 \\ s_1 s_2 & s_2 & 1 & 0 \\ s_1 s_2 s_3/(1 - s_4) & s_2 s_3 /(1 - s_4) & s_3 /(1 - s_4) & 1/(1-s_4) \end{bmatrix}$
I don't think you'd have to prove this is the inverse. Just display a specific example like this and let the reader check that it works. It is pretty straightforward to check.
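If you want to double-check the 4x4 pattern by machine, here is a small SymPy sketch (an editorial illustration, not from the original thread); the symbols s1, ..., s4 stand for the survival rates above, and s4 != 1 is assumed so the matrix is invertible.

```
# Editorial sketch (not from the original thread): verify the displayed
# 4x4 inverse symbolically. s1..s4 are survival rates; s4 != 1 assumed.
import sympy as sp

s1, s2, s3, s4 = sp.symbols('s1 s2 s3 s4')
A = sp.Matrix([[1,   0,   0,   0     ],
               [-s1, 1,   0,   0     ],
               [0,  -s2,  1,   0     ],
               [0,   0,  -s3,  1 - s4]])
print(sp.simplify(A.inv()))   # reproduces the matrix A^{-1} shown above
```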
http://math.stackexchange.com/questions/243344/integrating-an-n-th-degree-polynomial/249947

# Integrating an n-th degree polynomial
Integrating a polynomial over a fixed interval is usually very straightforward. However, I can't seem to get very far with an $n$-th degree polynomial:
$$\int_a^b \bigg(\sum_{i=0}^n q_i\,x^{n-i}\bigg)dx = \bigg[\sum_{i=0}^n \frac{q_i}{1+n-i}\,x^{1+n-i}\bigg]_a^b \\ = \bigg(\sum_{i=0}^n \frac{q_i}{1+n-i}\,b^{1+n-i}\bigg) - \bigg(\sum_{i=0}^n \frac{q_i}{1+n-i}\,a^{1+n-i}\bigg) \\ = \sum_{i=0}^n \bigg(\frac{q_i}{1+n-i}\,b^{1+n-i} - \frac{q_i}{1+n-i}\,a^{1+n-i}\bigg)$$
Am I doing this correctly? If so, what comes next? If not, what am I doing wrong?
-
Yes. You are done. What comes next depends on what you want. If you just want to integrate, then you are done. – user17762 Nov 23 '12 at 19:21
## 1 Answer
An answer to avoid bumping:
Yes. You are done. What comes next depends on what you want. If you just want to integrate, then you are done.
-
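As a quick numerical sanity check of the formula derived above (an editorial illustration, not part of the original exchange; the coefficients below are arbitrary):

```
# Editorial sanity check: compare the closed-form evaluation with
# numerical quadrature for an arbitrary cubic q_0 x^3 + ... + q_3.
import numpy as np

q = [2.0, -1.0, 0.5, 3.0]              # q_0, ..., q_n with n = 3
n, a, b = len(q) - 1, -1.0, 2.0

closed_form = sum(qi / (n - i + 1) * (b**(n - i + 1) - a**(n - i + 1))
                  for i, qi in enumerate(q))
xs = np.linspace(a, b, 10001)
numeric = np.trapz(np.polyval(q, xs), xs)
print(closed_form, numeric)            # the two numbers agree closely (14.25)
```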
http://mathhelpforum.com/algebra/157002-help-solving-linear-eq-print.html

# Help Solving Linear Eq
Printable View
• September 21st 2010, 05:43 PM
quarinteen
Help Solving Linear Eq
I need help and instructions for solving this problem. If you could list the steps so I understand it would be appreciated.
-7/8c +5.6 = -5/8c - 3.3
• September 21st 2010, 07:16 PM
quarinteen
ok i got that one now im stuck again x-1/2 +x-2/2 =3
• September 21st 2010, 07:39 PM
Wilmer
Quote:
Originally Posted by quarinteen
ok i got that one now im stuck again x-1/2 +x-2/2 =3
Did you mean (x-1)/2 + (x-2)/2 = 3? Brackets are IMPORTANT.
• September 21st 2010, 08:10 PM
quarinteen
x - 1 over 2 + x - 2 over 2 = 3
• September 21st 2010, 08:23 PM
Educated
Is it:
$\dfrac{x-1}{2} + \dfrac{x-2}{2} = 3$
If it is, then you can multiply everything by 2 to get rid of the fractions.
Or is it:
$\dfrac{\frac{x-1}{2 + x-2}}{2} = 3$
If it is, you can cancel out the +2 and minus 2 in the middle. Then multiply everything by 2 to get rid of the first fraction. Then multiply everything by x to get rid of the other fraction.
EDIT: Nevermind about the x's and c's.
• September 21st 2010, 08:31 PM
quarinteen
its the top one. I dont know how to do the signs on the computer. Can you write out the steps for me so I understand it. Thank you very much.
• September 21st 2010, 08:35 PM
Educated
It is called LaTeX
Here's a link to the tutorial: LaTeX tutorial
I have told you how to solve it. Multiply everything by 2 to get rid of the fractions. After that, group similar terms and solve for x. Can you solve x yourself?
• September 21st 2010, 08:44 PM
quarinteen
I multiplied every thing by 2 i get a answer different then the book gives as an answer. I get
2x -2 + 2x - 4 = 6
4x - 6 = 3
4x = 9
x = 9 / 4
but the answer is 9/2
• September 21st 2010, 08:48 PM
Educated
$\dfrac{x-1}{2} + \dfrac{x-2}{2} = 3$
OK here's a hint: When you multiply the fraction by its denominator, in this case it is 2, you effectively cancel it out.
Example:
$\dfrac{a+b}{c} \times c = a+b$
Do you get it?
• September 21st 2010, 08:55 PM
quarinteen
No i don't get it. Can you please put out the steps. I am not in school I haven't been for a while. I was attempting to help a friend got to this problem and its bugging me. I just want to know the steps so I can understand. Unfortunately to say I graduated college already im in information security and never use math. My math skills have completely disappeared. Like I said this is just bugging the living daylights out of me...
• September 21st 2010, 09:02 PM
Educated
Here's a way that I hope you understand:
$\dfrac{x-1}{2}$ is the same as $(x-1) \div 2$ Correct?
We multiply it by 2. This will make it become: $(x-1) \div 2 \times 2$
Now can you see how they cancel each other out? The multiplication by 2 undoes the division by 2, thus leaving only x-1.
Do you know how to solve it now?
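(Editorial note, not part of the original thread: if you want to confirm the answers by machine, a short SymPy script gives c = 35.6 for the first equation and x = 9/2 for the second.)

```
# Editorial check of both equations discussed in this thread.
import sympy as sp

c, x = sp.symbols('c x')
print(sp.solve(sp.Eq(sp.Rational(-7, 8)*c + 5.6, sp.Rational(-5, 8)*c - 3.3), c))  # [35.6]
print(sp.solve(sp.Eq((x - 1)/2 + (x - 2)/2, 3), x))                                 # [9/2]
```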
http://math.stackexchange.com/questions/153436/open-sets-of-symplectic-manifolds/153449

# Open sets of symplectic manifolds
Suppose I have a symplectic manifold $(\mathcal{M}, \omega)$. Does it hold that any open subset of $(\mathcal{M}, \omega)$ is a symplectic submanifold?
The statement trivially holds for smooth manifold but in the symplectic setting I am not sure.
-
## 1 Answer
It trivially holds for symplectic manifolds as well. Let $(M^{2n}, \omega)$ be a symplectic $2n$-manifold and $X \subset M$ be an open subset. Then we know that $X$ is a $2n$-dimensional smooth submanifold of $M$. This implies that for all $x \in X$, $T_x M = T_x X$ (a $2n$-dimensional subspace of a $2n$-dimensional vector space is the whole vector space). Therefore since $(T_x M, \omega_x)$ is a symplectic vector space by the fact that $(M,\omega)$ is a symplectic manifold, $(T_x X, \omega_x)$ is a symplectic vector space as well. Therefore $X$ is a symplectic submanifold of $(M,\omega)$.
-
http://mathhelpforum.com/math-topics/55289-find-number-elements-intersection-3-sets.html

# Thread:
1. ## Find the number of elements in the intersection of 3 sets
Let A={ $n \in N$ | 5 divides n}
B={ $n \in N$| both n and $\frac {n}{5}$ contain only odd digits}
and C={ $n \in N$ | both n and $\frac {n}{5}$ contain exactly 2006 digits} where N
denotes the set of all natural numbers.
Find the number of elements in $A \cap B \cap C$
2. Originally Posted by alexmahone
Let A={ $n \in N$ | 5 divides n}
B={ $n \in N$| both n and $\frac {n}{5}$ contain only odd digits}
and C={ $n \in N$ | both n and $\frac {n}{5}$ contain exactly 2006 digits} where N
denotes the set of all natural numbers.
Find the number of elements in $A \cap B \cap C$
Sorry this is a partial answer, but maybe it will help you.
Since the number needs to stay 2006 digits after dividing by five, the first (leading) digit is necessarily 5, 7, or 9. Also, since n is divisible by five and may contain only odd digits, its last digit must be 5.
Okay I think I have the rest of it now.
So you can have your first digit be 5, 7, or 9. So that means you either get a remainder of 0, 2, or 4 after division. Now no digit in your sequence can be less than 5 because:
1st case) Your first digit of the 2006 is 5. Remainder is 0. Performing long division means you can only have 5, 7, or 9 in the next slot, otherwise you would divide that slot and get 0. So whenever you have a 5 in the previous slot you have 3 choices for the next, up until the second-to-last digit (the last digit is forced to be 5). So when 5 is your first digit you have 3 ^ 2004 possibilities.
2nd case) Your first digit is 7. Remainder is 2 after long division. The next digit must be greater than or equal to 5 otherwise you will get an even number after dividing. E.g. the 2 carries over and let's say you have either a 1 or a 3 in your next slot. 21 or 23 divided by 5 is 4. So again you have a sequence that always has 3 choices per slot. 3 ^ 2004 possibilities.
3rd case) Your first digit is 9. Remainder is 4 after long division. The next digit must be greater than or equal to 5 otherwise you will get an even number after dividing. E.g. the 4 carries over and let's say you have either a 1 or a 3 in your next slot. 41 or 43 divided by 5 is 8. So again you have a sequence that always has 3 choices per slot. 3 ^ 2004 possibilities.
3 * 3 ^ 2004 = 3 ^ 2005 possibilities.
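(Editorial sanity check, not part of the original thread: brute-forcing the analogous count for a small number of digits d confirms the pattern 3^(d-1), which gives 3^2005 for d = 2006.)

```
# Editorial brute-force check of the count for small digit lengths d.
def count(d):
    total = 0
    for n in range(10**(d - 1), 10**d, 5):           # d-digit multiples of 5
        m = n // 5
        if m < 10**(d - 1):                           # n/5 must also have d digits
            continue
        if all(ch in '13579' for ch in str(n)) and all(ch in '13579' for ch in str(m)):
            total += 1
    return total

for d in (2, 3, 4, 5):
    print(d, count(d), 3**(d - 1))   # the two counts agree: 3, 9, 27, 81
```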
http://physics.stackexchange.com/questions/6435/cosmic-ray-hazards/6437

# Cosmic ray hazards
The Pierre Auger Observatory site mentions the detection of a 3E20 eV (48 J) cosmic ray whose energy, well above the GZK cutoff, was based on an analysis of its atmospheric shower. This was equivalent to the kinetic energy of a baseball with a speed of about 26 m/s or 58 mph. Of course, cosmic rays with such ultra-high energies are extremely rare. What kind of damage would occur if an astronaut or a space vehicle encountered such a cosmic ray? How would the damage differ from that from the hypothetical 26 m/s baseball?
-
## 3 Answers
One must keep in mind also that it is the particle, not the shower that goes through the astronaut in dmckee's estimate above, where he treats the relativistic particle going through matter.
The shower in your question which gave the energy estimate of the parent particle is generated by cascade/sequential collisions of deep angle scattering over a long path. The energy is not released in one go unless the astronaut is very unlucky.
The deep inelastic scattering cross section at those energies is still not up to barn values (a barn is about the size of a uranium nucleus), so the astronaut would have to be very unlucky even to get one energetic scatter, let alone to start a shower.
-
Ultra-high energy cosmic rays all come from a very, very long way away (anything with the power to create them nearby would constitute a danger to life as we know it). I think the preferred mechanism these days is dynamic acceleration in the jets formed by active galactic nuclei, but don't quote me.
Anyway, ultra-relativistic though they are, that means they are stable particles. Mostly protons, in fact.
When an ultra-high energy proton passes through a modest amount of matter, like an astronaut, we can model its energy loss very simply. The graph in the Particle Physics Data Book doesn't actually go high enough, but we can extrapolate and say the energy loss is still less than $10\text{ MeV/g/cm}^2$. Our astronaut has a density pretty near $1\text{ g/cm}^3$ and an average thickness (allowing for all aspect ratios) of around 50 cm. So the total energy to be expected is only 500 MeV.
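(Editorial restatement of the arithmetic in the estimate above; the three numbers are the ones quoted in this answer, not new data.)

```
# Editorial back-of-the-envelope version of the estimate above.
dEdx_max = 10.0    # MeV per (g/cm^2), the upper bound quoted above
density = 1.0      # g/cm^3, roughly the density of a person
thickness = 50.0   # cm, the average thickness assumed above
print(dEdx_max * density * thickness, "MeV deposited at most")   # 500 MeV
```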
It is ionizing radiation, of course, but not qualitatively different from the rest of the cosmic background.
If the spacecraft is a smallish, thin-walled can with a little air in it, the situation is only a little worse. There is more chance of showering. But it's just a higher dose, rather than being a spectacular death.
-
so .. basically, the secondary particles produced from the initial collision are themselves high energy particles, and the vast majority of the particle shower simply keeps on going, through the astronaut, and out the other side? – JustJeff Mar 6 '11 at 18:05
@JustJeff: Yep. Compute the velocity in the Astronaut's frame of the CoM after $A + p \to \text{D.I.S.}$. – dmckee♦ Mar 6 '11 at 18:08
The bottom line appears to be that an ultra-energetic proton causes virtually no physical damage, while a baseball of hypothetically similar energy could be disastrous. – Michael Luciuk Mar 9 '11 at 3:07
@Michael: It is ionizing radiation dose, and more than your average 1 GeV cosmic ray. If you look at Carl's BoTE calculation below, you can see that the worst case can be a factor of 100 more than the average cosmic ray pretty easily. That's not life threatening, but it adds up. – dmckee♦ Mar 9 '11 at 3:15
These are rare, on the order of one per square kilometer per century. See Ultra-high-energy cosmic ray (Wikipedia). So a human, with a cross sectional area less than 1 square meter, might get hit about once every 100 million years. I think that the risk to life due to spacecraft malfunction is significantly greater than this.
And when they do hit the Earth's atmosphere, they expend their energy over the whole depth of the atmosphere. No one single region gets plastered. (Otherwise they'd have been detected long before, by the sound of the air getting overheated.) The thickness of the atmosphere, in terms of equivalent mass of water, is around 10 meters of water. This is much thicker than the equivalent weight distribution of a spacecraft. So my back of the envelope conclusion is that if a spacecraft did get hit by one, most of the energy will exit the far side of the spacecraft.
On the other hand, cosmic rays of all sorts can damage electronics. It can switch ones to zeroes or vice versa, and I would think that such a high energy ray as this could actually hurt a component.
To realistically examine spacecraft in terms of how cosmic rays act in them, it's useful to use numbers for a real spacecraft. In addition to a thin metal shell, a spacecraft also has gas storage, fuel storage, and a place to go to the bathroom. And a spacecraft must be built sturdily even if it doesn't have to land on the earth. They are big, heavy, objects.
For example, the International Space Station (ISS) weighs 375,000 kg and has a pressurized volume of 907m^3:
Thus its density is 375,000/907 = 413 kg/m^3 almost half that of water, and it has a distance between inelastic collisions of about 3.7 cm.
Most of the spacecraft consists of the solar panels. However, like any other nuclear matter, these will generate cascades when they are hit by a cosmic ray. The length of the vessel (along its crew compartment) is 51 meters. With the pressurized volume, if this were a cylinder its diameter would be about 4.8 meters.
With a distance of 51 meters and a density of 0.41 that of water, the long dimension on the ISS is equivalent to 20 m of water. This is twice the depth of the Earth's atmosphere. The short distance is around 2 m of water or 1/5th the Earth's atmosphere. This is enough to get a shower going.
These calculations are worst case in that they've imputed all the weight of the spacecraft to the pressurized volume. A more accurate calculation would involve a spacecraft which does not have the solar panels. An example of this would be the US space shuttle orbiter. With payload, it's 93,000 kg (max landing weight 104,000 kg). The length is 37 m with a wingspan of 24 m. The diameter of the main crew compartment appears to be about 7 m.
Under the assumption that the spacecraft is a cylinder of length 37 m and diameter 7 m, its volume is around 1400 cubic meters, and its density is around 74 kg/m^3. Thus a particle traveling the length of the craft will encounter about 37 m x 74 kg/m^3 = 2.7 meters of water equivalent. At 1/4 the thickness of the earth's atmosphere, this is enough to get a good cascade going.
Anna v notes that the (inelastic) cross-sectional area for an extremely high energy proton-proton collision is only around 1 barn or 1.0E-28 m^2. Actually, her (1984) article gives 530 mb, but at very high energies, the cross section is unknown and might exceed a barn, see figure (4):
Nucl.Phys.Proc.Suppl.196:335-340 (2009), Ralf Ulrich, Ralph Engel, Steffen Müller, Fabian Schüssler, Michael Unger Proton-Air Cross Section and Extensive Air Showers
http://arxiv.org/abs/0906.3075
The mass of the proton is about 1.6E-27 kg, and a cubic meter of water weighs 1000 kg (almost entirely nucleons, that is, protons and neutrons). Thus there are 1000/1.6E-27 = 6.25E+29 nucleons per cubic meter of water. Multiplying by the cross-sectional area gives a total cross sectional area/m^3 of 62.5 m^2. Thus the distance at which such a proton begins a cascade, in water, is about 1/62.5 m = 1.6 cm, and by the time the particle has exited the human body 50 cm later, it has had about 31 such collisions. Thus the size of the human body is sufficient to begin a cosmic ray cascade.
In a practical long-distance spaceship, it's important to shield the crew from cosmic rays and solar flares. One of the suggestions for doing this is with shields composed of 7 kg of aluminum per square meter, which is about 0.7 cm water equivalent thickness: http://arxiv.org/abs/astro-ph/0701314 Thus there's about a 50% probability for a high energy cosmic ray starting off a cascade on going through this thickness.
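(Editorial reproduction of the figures above, assuming the 1-barn cross section and the shuttle dimensions quoted in this answer; nothing here is new data.)

```
# Editorial reproduction of the figures quoted above.
import math

sigma = 1.0e-28                      # m^2, the assumed 1-barn cross section
m_nucleon = 1.6e-27                  # kg, mass of a nucleon
n_water = 1000.0 / m_nucleon         # nucleons per m^3 of water ~ 6.25e29
mfp = 1.0 / (n_water * sigma)        # mean free path in water ~ 0.016 m
print(100 * mfp, "cm between collisions in water")
print(0.50 / mfp, "collisions across a 50 cm body")

# Space shuttle crew compartment modelled as a 37 m x 7 m cylinder
volume = math.pi * 3.5**2 * 37       # ~ 1400 m^3
density = 104000 / volume            # ~ 73 kg/m^3
print(37 * density / 1000, "m of water equivalent along the long axis")  # ~ 2.7
```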
-
Does not have to be very high energy to damage electronics. Rather the flux plays a role in this , and high energy ones are very rare, whereas low energies have high fluxes from which we are shielded by the atmosphere, in contrast to satellites. – anna v Mar 7 '11 at 10:35
dmckee 's calculation is correct. The most probable scattering is small angle scattering that just ionizes atoms. These rare high energy tracks will damage one path with area at most 1xe**-24cm**2, which is the barn unit, the crossectional area of a uranium atom. The probability of hitting a nucleus is small (link to crossections in my answer), and the probability of giving a deep inelastic scatter even smaller. It can happen but most probably not. The paper you quote is the integral of a path through the atmosphere, which increases probabilities of inelastic scatter. – anna v Mar 8 '11 at 12:42
@anna v; Interesting, my intuition says that a spacecraft should be enough to generate a shower. I'll edit my answer with a "back of the envelope" calculation. Not sure where I'm going wrong with this. – Carl Brannen Mar 9 '11 at 0:25
http://math.stackexchange.com/questions/199018/transversal-intersection/199024

# Transversal intersection.
In my textbook, it says: "Consider two curves in the plane, one of which is the x-axis, the other being the graph of a function $f(x)$. The two curves intersect transversally at a point x if $f(x)=0$ (the intersection condition), and $f'(x)\neq0$ (transversality)." I know that: "Two submanifolds of a given finite dimensional smooth manifold are said to intersect transversally if at every point of intersection, their separate tangent spaces at that point together generate the tangent space of the ambient manifold at that point." From: Transversality (wikipedia). In the example of my textbook, what are the spaces tangent to the x-axis and to the graph of the function $f(x)$? What is the tangent space of the ambient manifold?
Thank you very much
-
## 2 Answers
The tangent space of the x-axis (at a given point) is again the x-axis (because it's horizontal, i.e. derivative is zero). Then the tangent space of the graph of $f(x)$ is whatever it is, but the key is that when it intersects the x-axis, the tangent space at that intersection point is not horizontal (i.e. not the x-axis). So you have a tangent vector on the x-axis (which is necessarily horizontal) and a tangent vector on the graph of f(x) (which is necessarily not-horizontal), and hence these two vectors are linearly independent, hence span the whole plane (agreeing with Wikipedia)!
-
I was writing my answer when noticed that your one had appeared. It is very similar, but I decided to publish my version too. I upvoted you answer, of course :-) – Yuri Vyatkin Sep 19 '12 at 8:07
Tangent spaces at points of $\mathbb{R}^n$ are naturally identified with $\mathbb{R}^n$ (see Tangent space in Wikipedia). So at a point $\mathbf{x}=(x,0)$ on x-axis the tangent space of the ambient space (i.e. $\mathbb{R}^2$) is $\mathbb{R}^2$, the tangent space of x-axis is $\mathbb{R}^1$, and the tangent space to the graph is the line passing through point $\mathbf{x}=(x,0)$ at the slope $f'(x)$. If you take a basis in both tangent lines the resulting pair will be a basis in the tangent plane, hence transversality.
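(Editorial numerical illustration, not part of the original answers: for a concrete choice such as f(x) = x^2 - 1 with intersection point x = 1, transversality is just the statement that the two tangent directions are linearly independent.)

```
# Editorial illustration of transversality for f(x) = x**2 - 1 at x = 1.
import numpy as np

fprime = lambda x: 2 * x                    # derivative of f(x) = x**2 - 1
x0 = 1.0                                     # intersection point: f(1) = 0
t_axis = np.array([1.0, 0.0])                # tangent direction of the x-axis
t_graph = np.array([1.0, fprime(x0)])        # tangent direction of the graph
print(np.linalg.det(np.column_stack([t_axis, t_graph])))   # 2.0 != 0, spans R^2
```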
-
http://math.stackexchange.com/questions/87676/probability-from-other-probability?answertab=oldest

# Probability from other probability [closed]
Let $m$, $n$ be very big natural numbers, s.t. $m\leq n$. Let $L\geq 1$ and $Ln\ge 1$. Let also $t>0$, $C>0$ and for some random variable $x$ the following is true: $P(x^2\geq Ln )\geq \frac{C}{(Ln)^t}$.
Show: if $t\geq 4$, then $\frac{n}{2}P(x^2\geq Ln )\leq \frac{1}{m}$. If $t\leq 4$, then $\frac{n}{2}P(x^2\geq Ln )\geq \frac{1}{m}$.
-
Is there a typo? $t$ seems to be unrelated. – Matt N. Dec 2 '11 at 8:41
@Matt it appears in the denominator of $P(x^2\geq Ln) \geq C/(Ln)^t$ – Chris Taylor Dec 2 '11 at 9:06
@ChrisTaylor: I need glasses! Thanks Chris. – Matt N. Dec 2 '11 at 9:12
The level of precision and clarity of this question are well below what is needed for it to be addressed succesfully. At present the hypothesis seems to be that $P(x^2\ge k)\ge C/k^t$ and the desired conclusion when $t\ge4$ seems to be $P(x^2\ge n)\le2/n^2$. There is no way the former (bounding the tail from below) may imply the latter (bounding the tail from above). – Did Dec 2 '11 at 12:40
David, at least three of us have commented that your question is seriously in need of clarification. Your inattention strikes me as sufficient reason to close the question. – Gerry Myerson Dec 3 '11 at 8:57
## closed as not a real question by Did, t.b., Gerry Myerson, J. M., Sasha Dec 3 '11 at 14:42
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, see the FAQ.
http://mathoverflow.net/questions/25430?sort=votes

## Can Goodstein’s theorem be proven with first order PA + Constructive Omega Rule?
I am trying to understand transfinite induction and Gentzen's theories.
But I was wondering if there is any connection with the Constructive Omega Rule (COR).
With COR I mean that if you can prove:
φ(x)
for every x in a fully axiomatized system defined within your PA + COR system, then you may conclude:
∀ x.φ(x)
My question: Is it possible to prove Goodstein's theorem with PA + COR?
Or in general, does COR have the same strength as transfinite induction, or is it something entirely different (in which case I want to understand the difference)?
Regards,
Lucas
Given the responses, some clarification of the rule is necessary. The article referred to in the answers gives a rather good description of the rule I mean.
However, I do mean a rule that can actually be implemented. So, if there is a computable function that generates a PA proof A(n) for each n, then it is necessary to show in the meta-system (PA + COR) that this function terminates for each n.
Only then can the constructive omega rule (at least my variant), as an additional inference rule, be used to conclude ∀ n.A(n) in the PA + COR system.
Some second order proofs with a first order final theorem can also be proven with first order PA + COR. Since the Goodstein theorem is a second order proof with a first order final theorem, I was curious if it is one of them.
-
COR looks like some kind of compactness condition to me- a very different beastie from transfinite induction... – Tom Boardman May 20 2010 at 22:24
Can you clarify the meaning of your constructive omega rule? Your description of it sounds like the plain omega rule. – François G. Dorais♦ May 20 2010 at 22:58
## 2 Answers
The answer is Yes.
Goodstein's theorem asserts that for every natural number $n$ the Goodstein sequence starting with $n$ eventually terminates. Thus, it can be stated in the form $\forall n\ \exists m\ \varphi(n,m)$, where $\varphi$ has only bounded quantifiers.
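(Editorial aside, not part of the original answer: for readers who have not seen the sequence, here is a small Python sketch of the hereditary base-change step. It is only practical for very small starting values, since already the sequence starting at 4 is astronomically long.)

```
# Editorial sketch: compute the first terms of a Goodstein sequence.
def bump_base(m, b):
    """Write m in hereditary base-b notation and replace every b by b + 1."""
    if m == 0:
        return 0
    result, exponent = 0, 0
    while m > 0:
        m, digit = divmod(m, b)
        if digit:
            result += digit * (b + 1) ** bump_base(exponent, b)
        exponent += 1
    return result

def goodstein(n, max_steps=20):
    terms, b = [n], 2
    while terms[-1] > 0 and len(terms) <= max_steps:
        terms.append(bump_base(terms[-1], b) - 1)
        b += 1
    return terms

print(goodstein(3))   # [3, 3, 3, 2, 1, 0] -- terminates, as the theorem asserts
```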
I claim that any true statement of this form is provable in PA+COR. To see this, note that for any particular fixed $n$ there is an $m$ such that $\varphi(n,m)$, since the statement was assumed to be true, and so for each $n$ the statement $\exists m\ \varphi(n,m)$ is provable in PA. Furthermore, the map $n\mapsto p_n$, where $p_n$ is the shortest PA proof of $\exists m\ \varphi(n,m)$, is a computable function. According to the article to which Kristal links, this is what is needed in order to deduce by COR that $\forall n\ \exists m\ \varphi(n,m)$, as desired.
As François mentioned in the comments, the way you have described your rule, it sounds more like the ordinary $\omega$-rule. The difference is whether the $\omega$-sequence of proofs of the instances is effective or not. According to the article in Kristal's answer, you find (on page 1) that the sequence of proofs must be given by a computable procedure. For the purpose of Goodstein's theorem, we were able to attain this. But it turns out not to matter, since the article mentions that Shoenfield proved that PA+$\omega$-rule is the same as PA+ recursively restricted $\omega$-rule. The article also mentions that a weakened form with primitive recursive proof enumerations is also complete (Nelson), since one may use a padding trick to reduce to the computable case.
Edit. You have changed or updated the formal system to insist that the effective enumeration of proofs also be provably total (provable in PA+COR). Although this seems like a more severe requirement, I claim it does not change anything, provided that you have the copy proof element mentioned in the context of Nelson's theorem in the article linked to by Kristal. In that article, it is claimed that insisting on primitive recursive enumerations of proofs is fully complete, equivalent to the full $\omega$-rule. The proof evidently uses a padding trick of some kind (and I have not looked at the details). I take this to mean that any statement provable in PA with the full $\omega$-rule is provable in PA+ primitive recursive COR. Since primitive recursive functions are provable total in PA, this would seem to satisfy your additional requirement. Thus, it seems still to be the case that Goodstein's theorem is provable in your version of PA+COR.
-
Thanks for the answer. I added a clarification to my question. I intended to refer to a system that one can actually build. In such a system it is necessary to prove in the meta-logic that the computable function that generates the PA proof indeed does the job. You can have a function that generates a proof for each n, but for which it is not provable that it does. If PA + COR could prove each sentence you suggested, then it could prove its own consistency. This is not possible if the system can be formalized. – Lucas K. May 21 2010 at 20:37
Lucas, because of Nelson's theorem, mentioned in the article Kristal links to, I don't think this makes a difference. – Joel David Hamkins May 22 2010 at 2:28
Either I misunderstand something, or there is some kind of error. It looks to me that you say that PA+COR is complete for sentences in the arithmetic hierarchy class Π^0_2. But PA+COR is a formal system that can be implemented, and completeness for Π^0_2 means that there is a procedure for undecidable problems. Since this is based on Nelson's theorem, I doubt if Nelson's theorem is correct. I tried to find the article. Googling on Colloquium Mathematicum gives me a Polish site. But searching on the article does not give results (probably not yet digitized). So, I have to go to the university library. – Lucas K. May 25 2010 at 21:47
I also have become confused about it, and I don't know Nelson's theorem except what was said about it in that article. Perhaps the resolution will be that the proof in Nelson's system is still an infinite object, the proof tree, but that this tree has primitive recursive behavior on the nodes. Such a system would be different than what you seem to have in mind. I'll give it some more thought. I think your system, if I understand it, is probably very weak. – Joel David Hamkins May 25 2010 at 21:57
With COR you can unroll some second order induction proofs that have a first order final theorem. If SO(n) is a second order property of a natural number n, and you have SO(0), SO(n)->SO(n+1) and SO(n)->P(n), where P(n) is a first order property of n, then you can conclude P(n) for all n. The proof is second order, and cannot be reduced to first order in general. However, with COR, you can do SO(0)->SO(1), SO(1)->SO(2) ... SO(n-1)->SO(n), SO(n)->P(n). You can do that for every n and reduce each individual proof for n to a first order proof. Problems arise when nested. But Goodstein? – Lucas K. May 25 2010 at 22:15
According to this article, in COR proof trees are limited to those that are effective, and effectiveness corresponds to primitive recursion. Goodstein's problem cannot be solved with primitive recursion, so I don't think it can be solved with COR.
-
Kristal, doesn't the article insist merely on recursive as opposed to primitive recursive proof enumerations for the constructive $\omega$-rule? – Joel David Hamkins May 21 2010 at 14:10
Yes, I went back and looked at the article. In section 2 the restrictions are as you say, so Goodstein's theorem could be proved. – Kristal Cantwell May 21 2010 at 16:35
http://mathoverflow.net/questions/53016/does-f-1x-yf-2x-y-imply-y-1y-2-for-solutions-to-the-integral-equation/116354

## Does $f_1(x,y)<f_2(x,y)$ imply $y_1<y_2$ for solutions to the integral equation $y_k'=f_k(x,y_k)$?
Suppose that for two given functions $f_1,f_2 \colon \mathbb{R}^2 \to \mathbb{R}$ there exist unique solutions $y_1$ and $y_2$, with $[0,\epsilon)$ the intersection of their intervals of existence, to the integral equations $$y_k(x)=\int_{0}^{x} f_k(t,y_k(t))\, dt.$$ Moreover, suppose that $f_1(x,y)\leq f_2(x,y)$ (or strictly, if that makes things easier); does it follow that $y_1 \leq y_2$ on $[0,\epsilon)$?
It's true if $f_1$ and $f_2$ are Lipschitz in $y$ and continuous in $x$ (i.e., satisfy the hypotheses of the standard uniqueness and existence theorem) or if $f_1$ and $f_2$ are decreasing in $y$ (i.e., satisfy the hypotheses of Peano's uniqueness theorem). The assumption of unique solutions is certainly necessary.
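(Editorial numerical illustration of the Lipschitz case just mentioned, using the arbitrary pair f1(x,y) = cos(x) - y and f2(x,y) = cos(x) - y + 1/2; it is only a sanity check, not part of the original question.)

```
# Editorial numerical illustration of the comparison in the Lipschitz case.
import numpy as np
from scipy.integrate import solve_ivp

xs = np.linspace(0.0, 3.0, 301)
y1 = solve_ivp(lambda x, y: np.cos(x) - y,       (0, 3), [0.0], t_eval=xs).y[0]
y2 = solve_ivp(lambda x, y: np.cos(x) - y + 0.5, (0, 3), [0.0], t_eval=xs).y[0]
print(bool(np.all(y1 <= y2 + 1e-6)))   # True: y1(x) <= y2(x) on [0, 3]
```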
-
The functions $f_1$ and $f_2$ are assumed continuous, right? – Pietro Majer Jan 24 2011 at 16:31
That's one way to ensure existence of a solution, by Peano's existence theorem, but the proof of Peano's uniqueness does not require it. If both $f_k$ are continuous, then the solutions are classical solutions in that $y_k'(x)=f_k(x,y_k(x))$, and the problem becomes a matter of noticing that, $y_1(x)\leq y_2(x)$, on an interval $[0,\varepsilon)$, since $y_1'(0)\leq y_2'(0)$, and then you argue that the solutions can never cross such that $y_1(x)>y_2(x)$ after the crossing. – AppliedSide Jan 24 2011 at 17:52
(continued) For me the problem is interesting only if at least one of the $f_k$ is not continuous, and one only has that $y_k'=f_k(x,y_k)$ a.e. – AppliedSide Jan 24 2011 at 17:53
ah, I read your answer only after posting an answer. – Pietro Majer Jan 24 2011 at 18:16
## 1 Answer
In general the answer seems to be no: Let $$f_1(t,y)=\begin{cases} -1, &t\leq 0,\quad y\in \mathbb R\\ 1,& t>0,\quad y>t/2,\\ -1,& t>0,\quad -t/2 \leq y \leq t/2,\\ -e^{-n^2y},& t\in [1/n,1/(n-1)),\quad y< -t/2,\quad n\geq 1, \end{cases}$$ and let $f_2(t,y)=-f_1(t,-y)$ so that $f_1(t,y)< f_2(t,y)$ when $(t,y)\in \mathbb R^2$.
For $x\geq 0$ we have the solutions $y_1(x)=x$ and $y_2(x)= -x$ and in order to show that e.g. $y_1$ is unique one must show that $y_1$ cannot be such that $-x/2 \leq y_1(x)\leq x/2$ some $x>0$ and furthermore show that if $-\infty < y_1(x)< -x/2$ when $1/n \leq x \leq 1/(n-1)$ for some $n>1$ then $$1 > e^{n^2y(1/(n-1))}-e^{n^2y(1/n)}= \frac n{n-1} > 1.$$
-
http://mathhelpforum.com/differential-geometry/134739-lie-group-frame-bundle.html

# Thread:
1. ## Lie group and the frame bundle
I am working in a (pseudo) Riemannian manifold with the fibres of the orthonormal frame bundle based on some Lie group. From the Levi-Civita connection, I can separate the tangent space of the frame bundle into horizontal and vertical bits, i.e.
$T(\mathbb{O}M)=H(\mathbb{O}M)\oplus V(\mathbb{O}M)$.
Here is my problem: I have equations involving $X_i f(\sigma)$ on the Lie group, where the $X_i$ are a basis for the Lie algebra and $f \in C^2(G)$. I want to be able to map these across to the frame bundle such that I get $V_i f(s)$, where the $V_i$ are the canonical vertical vector fields and $f \in C^2(\mathbb{O}M)$.
I have found that there exists a one-form between the tangent of the frames and the algebra which maps the horizontal bits to 0, but does this induce a map between the frame bundle and G? How?
Any suggestions would be welcome as my geometry knowledge is slim to none.
http://mathhelpforum.com/advanced-statistics/183035-covariance-functions-random-variable.html

# Thread:
1. ## Covariance of functions of a random variable
Let X be a random variable in the range (0,1), i.e. P(X)=0 for X<=0 or X>=1. Is there a way to prove that Cov(X,X/(1-X)) > 0 ? Is it true for any pdf f(X) whose support is in (0,1)?
2. ## Re: Covariance of functions of a random variable
Wow, this took me a surprisingly long time to show. Yes, it is true, provided that you allow the inequality to be attained (if X is a point mass you get a covariance of 0).
Let $f(x) = \frac x {1 - x}$. Let $\mu = \mbox{E}[X]$ and $\tau = \mbox{E}[f(X)]$. First, note that $f(x)$ is convex, and hence by Jensen's Inequality we have $\tau \ge f(\mu)$.
Next, note that it suffices to show $\mbox{Cov}[1 - X, f(X)] \le 0$. By definition this is the statement that
$\displaystyle \mbox{E}[(1 - X)f(X)] - \mbox{E}[1 - X]\mbox{E}[f(X)] = \mu - (1 - \mu)\tau \le 0$
and solving for $\tau$ this is the statement $f(\mu) \le \tau$, which is always true by Jensen's Inequality.
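(Editorial Monte Carlo check, not part of the original thread: with X drawn from a Beta(2, 5) distribution, an arbitrary choice with support in (0, 1), the sample covariance of X and X/(1-X) comes out positive, as the argument above predicts.)

```
# Editorial Monte Carlo check of Cov(X, X/(1-X)) > 0 for a Beta(2, 5) example.
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=200_000)
print(np.cov(x, x / (1 - x))[0, 1])   # positive, as the Jensen argument predicts
```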
http://mathhelpforum.com/geometry/199772-regular-polygon-formula-print.html

Regular polygon formula
Printable View
• June 7th 2012, 01:24 PM
Alex3914425
Regular polygon formula
Hello geniuses,
Recently, when making some calculations for people, I came across what I hope is a new formula. I have done some googling and I can't find it anywhere.
Could you take a look at this and tell me if it rings any bells? Does it even make any sense? I have applied it to a number of random shapes and they all came out exactly correct.
Please excuse me for any stupid mistakes I made. I'm not much of a math wonder ;)
The area of any regular polygon is determined by the formula
½ax*tan((180a - 360)/2a)
with a = number of sides
with x = length of a side
I figured that if anyone could help me it would be you guys. In any case, thank you for your time.
• June 7th 2012, 03:17 PM
HallsofIvy
Re: Regular polygon formula
That looks like a formula that has been found by any number of school kids. You can divide an a-gon into a isosceles triangles having base length x and vertex angle 360/a. Each of those triangles can be divided into two right triangles with "opposite side" x/2 and angle 180/a. The altitude of each isosceles triangle is the "near side" of the right triangle, which has length (x/2)cot(180/a), so each isosceles triangle has area (1/2)(x)(x/2)cot(180/a) = (1/4)(x^2)cot(180/a). Although I personally would leave it that way, since cot(x) = tan(90 - x), you can write that as cot(180/a) = tan(90 - 180/a) = tan((90a - 180)/a) = tan((180a - 360)/2a).
Congratulations on finding that on your own!
• June 7th 2012, 03:22 PM
Plato
Re: Regular polygon formula
Quote:
Originally Posted by Alex3914425
I have done some googling and I can't find it anywhere.
Could you take a look at this and tell me if it rings any bells? Does it even make any sense? I have applied it to a number of random shapes and they all came out exactly correct.
Please excuse me for any stupid mistakes I made. I'm not much of a math wonder ;)
The area of any regular polygon is determined by the formula
½ax*tan((180a - 360)/2a)
with a = number of sides
with x = length of a si
I am not sure how hard you looked.
• June 7th 2012, 04:07 PM
Alex3914425
Re: Regular polygon formula
Quote:
That looks like a formula that has been found by any number of school kids.
Actually I was, at the time, a (high)school kid.
Quote:
You can divide an a-gon into a isosceles triangles having base length x and vertex angle 360/a. Each of those triangles can be divided into two right triangles with "opposite side" x/2 and angle 180/a.
This was indeed the basic idea which led to the formula. The only thing left needed to complete the formula is the adjacent side which is determined by tan which in turn is determined by a.
The link provided by Plato in the post above me leads to a webpage/formula which is known to me. While it does the same, it is different from mine as I didn't use any circles or radius, though I have a strong feeling this formula could easily be derived from the one in the link, since r and R are essentially the same as the adjacent side and hypotenuse from the right triangles. But even if it is, could this be seen as an original formula?
Quote:
Congratulations on finding that on your own!
I'm not sure how to take this, but for convenience I'll take it as a compliment :)
• June 7th 2012, 04:38 PM
HallsofIvy
Re: Regular polygon formula
Quote:
Originally Posted by Alex3914425
Actually I was, at the time, a (high)school kid.
This was indeed the basic idea which led to the formula. The only thing left needed to complete the formula is the adjacent side which is determined by tan which in turn is determined by a.
The link provided by Plato in the post above me leads to a webpage/formula which is known to me. While it does the same, it is different from mine as I didn't use any circles or radius, though I have a strong feeling this formula could easily be deviated from the one in the link, since r and R are essentially the same as the adjacent and hypotenuse from the right triangles. But even if it is, could this be seen as an original formula?
I'm not sure how to take this, but for convenience I'll take it as a compliment :)
Believe it or not, that was how I intended it!
• June 8th 2012, 12:57 AM
BobP
Re: Regular polygon formula
Presumably it's a typo since you say that you've used it a number of times and the results have turned out to be correct, but the formula you give can't possibly be correct. It has dimension of length rather than area. The correct formula has to contain an x squared.
• June 8th 2012, 02:08 AM
Alex3914425
Re: Regular polygon formula
Quote:
Presumably it's a typo since you say that you've used it a number of times and the results have turned out to be correct, but the formula you give can't possibly be correct. It has dimension of length rather than area. The correct formula has to contain an x squared.
I assure you there is no typo in the formula above. The way you see it here is the way it works. I don't get it. Why would the fact that x is in length make it an invalid formula? Isn't the area the product of length and width, after all? Why would the formula need to contain an x squared?
• June 8th 2012, 04:15 AM
BobP
Re: Regular polygon formula
Check again what you originally posted.
Try putting a=4 so that the polygon is a square. Doesn't your formula give an area of 2x ? It should be x squared.
Put a=3 so that you have an equilateral triangle and the result should be $x^{2}\sqrt{3}/4$. Whatever regular polygon you're calculating the area of, it has to contain an x squared or else the dimension is wrong.
You mention length times width, that's a length multiplied by a length, so the result will contain a length squared component.
HallsofIvy gives you the correct formula.
• June 8th 2012, 06:19 AM
Alex3914425
Re: Regular polygon formula
I was quite befuddled when I tested your example and it came out wrong, even with x filled in. I had tested my formula before on pentagons, heptagons and 9-gons and somehow they came out correct. It cannot have been mere coincidence because it went up to many decimals.
The wrong answer I get can be corrected by multiplying it by ½x, which, not surprisingly, leads to HallsofIvy's formula. There is one minor difference though. HallsofIvy writes "(1/4)(x^2)cot(180/a)" but I think it should be ¼a(x^2)cot(180/a). If you multiply ½ax*tan((180a - 360)/2a) by ½x you get ¼a(x^2)tan((180a - 360)/2a)=¼a(x^2)cot(180/a)
Here's an example: a pentagon with side 7.47 (a random number).
¼*5*(7.47^2)*tan(54) = 69.751125 x 1.37638192 = 96.0041873 which is correct according to online calculators. With HallsofIvy's formula the outcome would be 5 times smaller.
In any case, this formula already exists, so with that my question is answered. Thanks for your help!
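A script version of that check (an editorial addition; a is the number of sides and x the side length):

```
# Editorial check of A = (1/4) a x^2 cot(180/a) for the pentagon example above.
import math

def polygon_area(a, x):
    return 0.25 * a * x**2 / math.tan(math.pi / a)   # math.pi/a radians = 180/a degrees

print(polygon_area(5, 7.47))   # ~96.004, matching the value computed above
print(polygon_area(4, 1.0))    # 1.0: a unit square, as a dimension check
```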
http://www.cs.purdue.edu/homes/dgleich/nmcomp/lectures/lecture-0.html | # Basic Notation
$\newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} \newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{l}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}}$
Let us begin by introducing basic notation for matrices and vectors.
We’ll use $\RR$ to denote the set of real-numbers and $\CC$ to denote the set of complex numbers.
We write the space of all $m \times n$ real-valued matrices as $\RR^{m \times n}$. Each $\mA \in \RR^{m \times n}$ is an $m \times n$ array of real numbers.
With only a few exceptions, matrices are written as bold, capital letters. Matrix elements are written as subscripted, unbold letters, e.g. $A_{i,j}$. When clear from context, we drop the comma between the subscripts instead, e.g. $A_{11}$ instead of $A_{1,1}$.
A short-hand notation for $\mA \in \RR^{m \times n}$ is $\mA : m \times n$.
In class I’ll usually write matrices with just upper-case letters.
We write the set of length-$n$ real-valued vectors as $\RR^{n}$. Each $\vx \in \RR^{n}$ is a column of $n$ real numbers.
Vectors are denoted by lowercase, bold letters. As with matrices, elements are subscripted, unbold letters, e.g. $x_i$. Sometimes we'll write vector elements in other ways; usually, this choice is motivated by a particular application. Throughout the class, vectors are column vectors.
In class I’ll usually write vectors with just lower-case letters. I may try and follow the convention of underlining vectors. We’ll see.
# Operations
Transpose Let $\mA : m \times n$; then $\mA^T : n \times m$ is defined by $(\mA^T)_{ij} = A_{ji}$.
Example $\mA = \sbmat{ 2 & 3 \\ 1 & 4 \\ 3 & -1 } \quad \mA^T = \sbmat{ 2 & 1 & 3 \\ 3 & 4 & -1 }$
Hermitian Let $\mA \in \CC^{m \times n}$; then $\mA^* : n \times m$ is defined by $(\mA^*)_{ij} = \conj{A_{ji}}$.
Example $\mA = \sbmat{ 2 & 3 \\ i & 4 \\ 3 & -i } \quad \mA^* = \sbmat{ 2 & -i & 3 \\ 3 & 4 & i }$
Addition Let $\mA : m \times n$ and $\mB : m \times n$; then $\mA + \mB : m \times n$ is defined by $(\mA + \mB)_{ij} = A_{ij} + B_{ij}$.
Example $\mA = \sbmat{ 2 & 3 \\ 1 & 4 \\ 3 & -1 }, \mB = \sbmat{ 1 & -1 \\ 2 & 3 \\ -1 & 1 }$ $\mA + \mB = \sbmat{3 & 2 \\ 3 & 7 \\ 2 & 0 }$.
Scalar Multiplication Let $\mA : m \times n$ and $\alpha \in \RR$; then $\alpha \mA : m \times n$ is defined by $(\alpha \mA)_{ij} = \alpha A_{ij}$.
Example $\mA = \sbmat{ 2 & 3 \\ 1 & 4 \\ 3 & -1 }, 5 \mA = \sbmat{ 10 & 15 \\ 5 & 20 \\ 15 & -5 }$
Matrix Multiplication Let $\mA : m \times n$ and $\mB : n \times k$; then $\mC = \mA \mB : m \times k$ is defined by $C_{ij} = \sum_{l=1}^{n} A_{il} B_{lj}$.
Matrix-vector Multiplication Let $\mA : m \times n$ and $\vx \in \RR^{n}$; then $\vc = \mA \vx \in \RR^{m}$ is defined by $c_{i} = \sum_{j=1}^{n} A_{ij} x_{j}$.
This operation is just a special case of matrix multiplication that follows from treating $\vx$ and $\vc$ as $n \times 1$ and $m \times 1$ matrices, respectively.
Vector addition, Scalar vector multiplication These are just special cases of matrix addition and scalar matrix multiplication where vectors are viewed as $n \times 1$ matrices.
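The operations above can be checked numerically; here is a minimal NumPy sketch (the NumPy usage is mine, the matrix values are taken from the worked examples):

```python
import numpy as np

A = np.array([[2, 3], [1, 4], [3, -1]])
B = np.array([[1, -1], [2, 3], [-1, 1]])

print(A.T)                    # transpose, a 2 x 3 matrix
print(A + B)                  # element-wise addition
print(5 * A)                  # scalar multiplication
print(A.T @ B)                # matrix multiplication: (2 x 3)(3 x 2) -> 2 x 2
print(A @ np.array([1, 2]))   # matrix-vector product, a vector in R^3
```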
# Partitioning
It is often useful to represent a matrix as a collection of vectors. In this case, we write $\mA = \bmat{ \va_1 & \va_2 & \cdots & \va_n }$,
where each $\va_j \in \RR^{m}$. This form corresponds to a partition into columns.
Alternatively, we may wish to partition a matrix into rows: $\mA = \bmat{ \vr_1^T \\ \vr_2^T \\ \vdots \\ \vr_m^T }$.
Here, each $\vr_i \in \RR^{n}$.
Using the column partitioning: $\mA \vx = x_1 \va_1 + x_2 \va_2 + \cdots + x_n \va_n$.
And with the row partitioning: $(\mA \vx)_i = \vr_i^T \vx$ for each $i = 1, \dots, m$.
Another useful partitioned representation of a matrix is into blocks: $\mA = \bmat{ \mA_{1,1} & \mA_{1,2} \\ \mA_{2,1} & \mA_{2,2} }$
or, more generally, $\mA = \bmat{ \mA_{1,1} & \cdots & \mA_{1,q} \\ \vdots & & \vdots \\ \mA_{p,1} & \cdots & \mA_{p,q} }$.
Here, the sizes “just have to work out” in the vernacular. Formally, all $\mA_{i,\cdot}$ must have the same number of rows and all $\mA_{\cdot,j}$ must have the same number of columns. This means the diagonal blocks are always square, but the off-diagonal blocks may not be.
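A short NumPy sketch of the partitionings above (the particular matrices are just examples of mine):

```python
import numpy as np

A = np.array([[2, 3], [1, 4], [3, -1]])
x = np.array([1, 2])

# Column partitioning: A @ x is a linear combination of the columns of A.
cols = [A[:, j] for j in range(A.shape[1])]
print(A @ x, sum(x[j] * cols[j] for j in range(A.shape[1])))

# Row partitioning: entry i of A @ x is the inner product of row i with x.
print([A[i, :] @ x for i in range(A.shape[0])])

# Block partitioning: assemble a matrix from blocks whose sizes "work out".
M = np.block([[np.eye(2), np.zeros((2, 3))],
              [np.ones((3, 2)), 2 * np.eye(3)]])
print(M.shape)   # (5, 5)
```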
http://stats.stackexchange.com/questions/43530/item-information-in-irt-with-item-covariates-linear-logistic-test-model | # Item information in IRT with item covariates (linear logistic test model)
Short question: How does the calculation and interpretation of IRT item information and test information change in the presence of item properties?
Long question: There's a variation on IRT called the linear logistic test model (LLTM):
$logit(Y_{p,j}) = \theta_p + \sum_{k=1}^K q_{j,k} \alpha_k$
For persons, $\theta_p$ is a random effect across persons $p$, just as in 1PL IRT. But unlike 1PL IRT, items have properties and each property is treated as a covariate. There are $K$ possible properties, and each item $j$ is coded with property values in the vector $q_{j,k}$. The effect of each property is the weight $\alpha_k$.
For example, if your test has math problems and reading problems, one item property may be an indicator of whether the item is a math problem or a reading problem.
Suppose the properties include indicator variables for each item $j$, i.e., that the LLTM item properties are a superset of the 1PL IRT item properties. That means we have a per-item effect, a.k.a. "difficulty" in IRT parlance. Knowing the difficulty of each item allows us to compute item information, and summing up information across items tells us the information of the full test.
In the presence of item properties, can we still talk about some questions being more informative than others? How does the calculation and interpretation of IRT item information and test information change in the presence of item properties?
## 1 Answer
In terms of calculation, I don't really see a problem since for each item there is one random 'ability' component and really only one fixed item intercept (created from summing across the $K$ item predictor elements), so the calculation for each item is the standard Rasch information function $I_j(\theta, \beta_j) = P (1-P)$, where $\beta_j = \sum^K_{k=1} q_{j,k}\alpha_k$. The test information naturally is then just $T(\theta) = \sum^n_{j=1} I_j(\theta)$. This makes sense since the LLTM model is really just a design constrained dichotomous Rasch model.
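A small numerical sketch of this calculation (the design matrix, property weights, and function name below are made up for illustration, not taken from the question):

```python
import numpy as np

def rasch_item_information(theta, beta):
    """Item information P*(1-P) at ability theta for an item with intercept beta."""
    p = 1.0 / (1.0 + np.exp(-(theta + beta)))   # matches the question's theta_p + sum_k q_jk alpha_k
    return p * (1.0 - p)

q     = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])   # hypothetical item-by-property matrix
alpha = np.array([0.5, -0.8])                        # hypothetical property weights
beta  = q @ alpha                                    # per-item intercepts beta_j

for theta in np.linspace(-3, 3, 7):
    test_info = sum(rasch_item_information(theta, b) for b in beta)   # T(theta)
    print(f"theta = {theta:+.1f}   test information = {test_info:.3f}")
```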
The interpretation wouldn't be any different than a standard Rasch model either since by using the LLTM model you've selected a prior that the identified 'math' item block is systematically easier/more difficult than the 'reading' block, and constrained the model to reflect this by creating an appropriate item design matrix $q$. In turn this makes the model more parsimonious (saving precious degrees of freedom; though I'd be tempted to think of this as a bifactor model if it's the effect of the specific factors on $\theta$ you are worried about rather than a shift in item difficulty...).
Just a note, in this case no item is really more informative than others since they all have the same slope parameter by definition. However, they will be more informative at specific levels of $\theta$ (most informative at the point of inflection in the item response curve), which is no different than typical Rasch analysis interpretation.
http://stats.stackexchange.com/questions/18638/model-selection-logistic-regression | # Model Selection: Logistic Regression
Suppose we have $n$ covariates $x_1, \dots, x_n$ and a binary outcome variable $y$. Some of these covariates are categorical with multiple levels. Others are continuous. How would you choose the "best" model? In other words, how do you choose which covariates to include in the model?
Would you model $y$ with each of the covariates individually using simple logistic regression and choose the ones with a significant association?
I'll quote @jthetzel from a recent comment on this site: "A good question, but one that most here studied in semester-long university courses, and some have spent careers studying." It's kind of like sitting down with a person and saying, "Can you teach me Swahili this afternoon?" Not that Gung doesn't make good points in his answer. It's just a vast territory. – rolando2 Nov 19 '11 at 20:44
Okay so I think I will just use AIC as a criterion. The full model has the lowest AIC. Also the AIC's are pretty different from each other. – Thomas Nov 21 '11 at 18:55
## 3 Answers
This is probably not a good thing to do. Looking at all the individual covariates first, and then building a model with those that are significant is logically equivalent to an automatic search procedure. While this approach is intuitive, inferences made from this procedure are not valid (e.g., the true p-values are different from those reported by software). The problem is magnified the larger the size of the initial set of covariates is. If you do this anyway (and, unfortunately, many people do), you cannot take the resulting model seriously. Instead, you must run an entirely new study, gathering an independent sample and fitting the previous model, to test it. However, this requires a lot of resources, and moreover, since the process is flawed and the previous model is likely a poor one, there is a strong chance it will not hold up--meaning that it is likely to waste a lot of resources.
A better way is to evaluate models of substantive interest to you. Then use an information criterion that penalizes model flexibility (such as the AIC) to adjudicate amongst those models. For logistic regression, the AIC is:
$AIC = -2\ln(\mathrm{likelihood}) + 2k$
where $k$ is the number of covariates included in that model. You want the model with the smallest value for the AIC, all things being equal. However, it is not always so simple; be wary when several models have similar values for the AIC, even though one may be lowest.
I include the complete formula for the AIC here, because different software outputs different information. You may have to calculate it from just the likelihood, or you may get the final AIC, or anything in between.
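As a sketch of how this looks in practice, the snippet below fits a few candidate logistic models and compares their AICs; the data, formulas, and column names are all made up for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-0.8 * df["x1"])))   # outcome driven by x1 only

for formula in ["y ~ x1", "y ~ x2", "y ~ x1 + x2"]:
    fit = smf.logit(formula, data=df).fit(disp=0)
    k = len(fit.params)                      # number of estimated parameters
    aic_by_hand = -2 * fit.llf + 2 * k       # AIC = -2*ln(likelihood) + 2*k
    print(f"{formula:12s}  AIC = {fit.aic:8.2f}  (by hand: {aic_by_hand:8.2f})")
```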
I like AIC but beware that computing AIC on more than 2 pre-specified models results in a multiplicity problem. – Frank Harrell Nov 20 '11 at 14:31
@FrankHarrell nice tip! – gung Nov 20 '11 at 17:29
@gung: it's good to have you on this site, you are adding a lot. – rolando2 Nov 22 '11 at 2:02
@rolando2 thanks! I seem to have caught the bug, although I may not be able to contribute this much forever. I'm learning a lot from the real experts like Frank Harrell and whuber as well. – gung Nov 22 '11 at 6:44
There are many ways to choose what variables go in a regression model, some decent, some bad, and some terrible. One may simply browse the publications of Sander Greenland, many of which concern variable selection.
Generally speaking however, I have a few common "rules":
• Automated algorithms, like those that come in software packages, are probably a bad idea.
• Using model diagnostic techniques, like gung suggests, is a good means of evaluating your variable selection choices
• You should also be using a combination of subject-matter expertise, literature searches, directed acyclic graphs, etc. to inform your variable selection choices.
Well put, especially points 1 and 3. Model diagnostic techniques can result in a failure to preserve type I error. – Frank Harrell Nov 20 '11 at 14:32
Well put @Epigrad. I would add one point though. Automated algorithms become very attractive when your problem becomes large. They may be the only feasible way of doing model selection in some cases. People are now analysing huge data sets with 1000s of potential variables and millions of observations. How is the subject matter's expertise at 1000-dimensional intuition? And what you will find is that even if you do it manually (i.e. with an analyst), they will likely end up creating some short-cut rules for choosing variables. The hard part is really coding up those choices. – probabilityislogic Nov 26 '11 at 11:22
@probabilityislogic I would agree with that. Honestly, I think traditional techniques are poorly suited for very large data sets, but the tendency to fall back to more amenable techniques alarms me. If an automated algorithm can bias a data set with 10 variables, there's no reason it can't bias one with 10,000. The current emphasis on the acquisition of big data over its analysis in some parts makes me somewhat skittish. – EpiGrad Nov 27 '11 at 1:22
How would you choose the "best" model?
There isn't enough information provided to answer this question; if you want to get at causal effects on y you'll need to implement regressions that reflect what's known about the confounding. If you want to do prediction, AIC would be a reasonable approach.
These approaches are not the same; the context will determine which of the (many) ways of choosing variables will be more/less appropriate.
http://en.wikipedia.org/wiki/F-test | # F-test
An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact F-tests mainly arise when the models have been fitted to the data using least squares. The name was coined by George W. Snedecor, in honour of Sir Ronald A. Fisher. Fisher initially developed the statistic as the variance ratio in the 1920s.[1]
## Common examples of F-tests
Examples of F-tests include:
• The hypothesis that the means of several normally distributed populations, all having the same standard deviation, are equal. This is perhaps the best-known F-test, and plays an important role in the analysis of variance (ANOVA).
• The hypothesis that a proposed regression model fits the data well. See Lack-of-fit sum of squares.
• The hypothesis that a data set in a regression analysis follows the simpler of two proposed linear models that are nested within each other.
• Scheffé's method for multiple comparisons adjustment in linear models.
### F-test of the equality of two variances
Main article: F-test of equality of variances
This F-test is sensitive to non-normality.[2][3] In the analysis of variance (ANOVA), alternative tests include Levene's test, Bartlett's test, and the Brown–Forsythe test. However, when any of these tests are conducted to test the underlying assumption of homoscedasticity (i.e. homogeneity of variance), as a preliminary step to testing for mean effects, there is an increase in the experiment-wise Type I error rate.[4]
## Formula and calculation
Most F-tests arise by considering a decomposition of the variability in a collection of data in terms of sums of squares. The test statistic in an F-test is the ratio of two scaled sums of squares reflecting different sources of variability. These sums of squares are constructed so that the statistic tends to be greater when the null hypothesis is not true. In order for the statistic to follow the F-distribution under the null hypothesis, the sums of squares should be statistically independent, and each should follow a scaled chi-squared distribution. The latter condition is guaranteed if the data values are independent and normally distributed with a common variance.
### Multiple-comparison ANOVA problems
The F-test in one-way analysis of variance is used to assess whether the expected values of a quantitative variable within several pre-defined groups differ from each other. For example, suppose that a medical trial compares four treatments. The ANOVA F-test can be used to assess whether any of the treatments is on average superior, or inferior, to the others versus the null hypothesis that all four treatments yield the same mean response. This is an example of an "omnibus" test, meaning that a single test is performed to detect any of several possible differences. Alternatively, we could carry out pairwise tests among the treatments (for instance, in the medical trial example with four treatments we could carry out six tests among pairs of treatments). The advantage of the ANOVA F-test is that we do not need to pre-specify which treatments are to be compared, and we do not need to adjust for making multiple comparisons. The disadvantage of the ANOVA F-test is that if we reject the null hypothesis, we do not know which treatments can be said to be significantly different from the others — if the F-test is performed at level α we cannot state that the treatment pair with the greatest mean difference is significantly different at level α.
The formula for the one-way ANOVA F-test statistic is
$F = \frac{\text{explained variance}}{\text{unexplained variance}} ,$
or
$F = \frac{\text{between-group variability}}{\text{within-group variability}}.$
The "explained variance", or "between-group variability" is
$\sum_i n_i(\bar{Y}_{i\cdot} - \bar{Y})^2/(K-1)$
where $\bar{Y}_{i\cdot}$ denotes the sample mean in the ith group, $n_i$ is the number of observations in the ith group, $\bar{Y}$ denotes the overall mean of the data, and K denotes the number of groups.
The "unexplained variance", or "within-group variability" is
$\sum_{ij} (Y_{ij}-\bar{Y}_{i\cdot})^2/(N-K),$
where $Y_{ij}$ is the jth observation in the ith out of K groups and N is the overall sample size. This F-statistic follows the F-distribution with K − 1, N − K degrees of freedom under the null hypothesis. The statistic will be large if the between-group variability is large relative to the within-group variability, which is unlikely to happen if the population means of the groups all have the same value.
Note that when there are only two groups for the one-way ANOVA F-test, $F = t^2$ where $t$ is the Student's t statistic.
### Regression problems
Consider two models, 1 and 2, where model 1 is 'nested' within model 2. Model 1 is the Restricted model, and Model 2 is the Unrestricted one. That is, model 1 has p1 parameters, and model 2 has p2 parameters, where p2 > p1, and for any choice of parameters in model 1, the same regression curve can be achieved by some choice of the parameters of model 2. (We use the convention that any constant parameter in a model is included when counting the parameters. For instance, the simple linear model y = mx + b has p = 2 under this convention.) The model with more parameters will always be able to fit the data at least as well as the model with fewer parameters. Thus typically model 2 will give a better (i.e. lower error) fit to the data than model 1. But one often wants to determine whether model 2 gives a significantly better fit to the data. One approach to this problem is to use an F test.
If there are n data points to estimate parameters of both models from, then one can calculate the F statistic, given by
$F=\frac{\left(\frac{\text{RSS}_1 - \text{RSS}_2 }{p_2 - p_1}\right)}{\left(\frac{\text{RSS}_2}{n - p_2}\right)} ,$
where RSSi is the residual sum of squares of model i. If your regression model has been calculated with weights, then replace RSSi with χ2, the weighted sum of squared residuals. Under the null hypothesis that model 2 does not provide a significantly better fit than model 1, F will have an F distribution, with (p2 − p1, n − p2) degrees of freedom. The null hypothesis is rejected if the F calculated from the data is greater than the critical value of the F-distribution for some desired false-rejection probability (e.g. 0.05). The F-test is a Wald test.
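A minimal sketch of this nested-model F-test in Python (the data here are simulated, and the quadratic term is the extra parameter being tested):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)   # truly linear data

def rss(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)    # least-squares fit
    return np.sum((y - design @ beta) ** 2)              # residual sum of squares

X1 = np.column_stack([np.ones_like(x), x])          # model 1: p1 = 2 parameters
X2 = np.column_stack([np.ones_like(x), x, x**2])    # model 2: p2 = 3 parameters
n, p1, p2 = x.size, X1.shape[1], X2.shape[1]
rss1, rss2 = rss(X1, y), rss(X2, y)

F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
p_value = stats.f.sf(F, p2 - p1, n - p2)            # upper tail of F(p2 - p1, n - p2)
print(F, p_value)   # a large p-value: no evidence the quadratic term is needed
```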
## One-way ANOVA example
Consider an experiment to study the effect of three different levels of a factor on a response (e.g. three levels of a fertilizer on plant growth). If we had 6 observations for each level, we could write the outcome of the experiment in a table like this, where a1, a2, and a3 are the three levels of the factor being studied.
| a1 | a2 | a3 |
|----|----|----|
| 6 | 8 | 13 |
| 8 | 12 | 9 |
| 4 | 9 | 11 |
| 5 | 11 | 8 |
| 3 | 6 | 7 |
| 4 | 8 | 12 |
The null hypothesis, denoted H0, for the overall F-test for this experiment would be that all three levels of the factor produce the same response, on average. To calculate the F-ratio:
Step 1: Calculate the mean within each group:
$\begin{align} \overline{Y}_1 & = \frac{1}{6}\sum Y_{1i} = \frac{6 + 8 + 4 + 5 + 3 + 4}{6} = 5 \\ \overline{Y}_2 & = \frac{1}{6}\sum Y_{2i} = \frac{8 + 12 + 9 + 11 + 6 + 8}{6} = 9 \\ \overline{Y}_3 & = \frac{1}{6}\sum Y_{3i} = \frac{13 + 9 + 11 + 8 + 7 + 12}{6} = 10 \end{align}$
Step 2: Calculate the overall mean:
$\overline{Y} = \frac{\sum_i \overline{Y}_i}{a} = \frac{\overline{Y}_1 + \overline{Y}_2 + \overline{Y}_3}{a} = \frac{5 + 9 + 10}{3} = 8$
where a is the number of groups.
Step 3: Calculate the "between-group" sum of squares:
$\begin{align} S_B & = n(\overline{Y}_1-\overline{Y})^2 + n(\overline{Y}_2-\overline{Y})^2 + n(\overline{Y}_3-\overline{Y})^2 \\[8pt] & = 6(5-8)^2 + 6(9-8)^2 + 6(10-8)^2 = 84 \end{align}$
where n is the number of data values per group.
The between-group degrees of freedom is one less than the number of groups
$f_b = 3-1 = 2$
so the between-group mean square value is
$MS_B = 84/2 = 42$
Step 4: Calculate the "within-group" sum of squares. Begin by centering the data in each group
| a1 | a2 | a3 |
|----|----|----|
| 6 − 5 = 1 | 8 − 9 = -1 | 13 − 10 = 3 |
| 8 − 5 = 3 | 12 − 9 = 3 | 9 − 10 = -1 |
| 4 − 5 = -1 | 9 − 9 = 0 | 11 − 10 = 1 |
| 5 − 5 = 0 | 11 − 9 = 2 | 8 − 10 = -2 |
| 3 − 5 = -2 | 6 − 9 = -3 | 7 − 10 = -3 |
| 4 − 5 = -1 | 8 − 9 = -1 | 12 − 10 = 2 |
The within-group sum of squares is the sum of squares of all 18 values in this table
$S_W = 1 + 9 + 1 + 0 + 4 + 1 + 1 + 9 + 0 + 4 + 9 + 1 + 9 + 1 + 1 + 4 + 9 + 4 = 68$
The within-group degrees of freedom is
$f_W = a(n-1) = 3(6-1) = 15$
Thus the within-group mean square value is
$MS_W = S_W/f_W = 68/15 \approx 4.5$
Step 5: The F-ratio is
$F = \frac{MS_B}{MS_W} \approx 42/4.5 \approx 9.3$
The critical value is the number that the test statistic must exceed to reject the test. In this case, Fcrit(2,15) = 3.68 at α = 0.05. Since F = 9.3 > 3.68, the results are significant at the 5% significance level. One would reject the null hypothesis, concluding that there is strong evidence that the expected values in the three groups differ. The p-value for this test is 0.002.
After performing the F-test, it is common to carry out some "post-hoc" analysis of the group means. In this case, the first two group means differ by 4 units, the first and third group means differ by 5 units, and the second and third group means differ by only 1 unit. The standard error of each of these differences is $\sqrt{4.5/6 + 4.5/6} = 1.2$. Thus the first group is strongly different from the other groups, as the mean differences are several times the standard error, so we can be highly confident that the population mean of the first group differs from the population means of the other groups. However there is no evidence that the second and third groups have different population means from each other, as their mean difference of one unit is comparable to the standard error.
Note F(x, y) denotes an F-distribution with x degrees of freedom in the numerator and y degrees of freedom in the denominator.
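The worked example can be reproduced directly with SciPy, whose `f_oneway` function performs this one-way ANOVA F-test:

```python
from scipy import stats

a1 = [6, 8, 4, 5, 3, 4]
a2 = [8, 12, 9, 11, 6, 8]
a3 = [13, 9, 11, 8, 7, 12]

F, p = stats.f_oneway(a1, a2, a3)
print(F, p)                               # F is about 9.3, p is about 0.002, as in the text
print(stats.f.ppf(0.95, dfn=2, dfd=15))   # the 5% critical value, about 3.68
```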
## ANOVA's robustness with respect to Type I errors for departures from population normality
The oneway ANOVA can be generalized to the factorial and multivariate layouts, as well as to the analysis of covariance. None of these F-tests, however, are robust when there are severe violations of the assumption that each population follows the normal distribution, particularly for small alpha levels and unbalanced layouts.[5] Furthermore, if the underlying assumption of homoscedasticity is violated, the Type I error properties degenerate much more severely.[6] For nonparametric alternatives in the factorial layout, see Sawilowsky.[7] For more discussion see ANOVA on ranks.
## References
1. Lomax, Richard G. (2007) Statistical Concepts: A Second Course, p. 10, ISBN 0-8058-5850-4
2. Box, G.E.P. (1953). "Non-Normality and Tests on Variances". Biometrika 40 (3/4): 318–335. JSTOR 2333350.
3. Markowski, Carol A; Markowski, Edward P. (1990). "Conditions for the Effectiveness of a Preliminary Test of Variance". 44 (4): 322–326. doi:10.2307/2684360. JSTOR 2684360.
4. Sawilowsky, S. (2002). "Fermat, Schubert, Einstein, and Behrens-Fisher:The Probable Difference Between Two Means When σ12 ≠ σ22". Journal of Modern Applied Statistical Methods, 1(2), 461–472.
5. Blair, R. C. (1981). "A reaction to 'Consequences of failure to meet assumptions underlying the fixed effects analysis of variance and covariance.'" Review of Educational Research, 51, 499-507.
6. Randolf, E. A., & Barcikowski, R. S. (1989, November). "Type I error rate when real study values are used as population parameters in a Monte Carlo study". Paper presented at the 11th annual meeting of the Mid-Western Educational Research Association, Chicago.
7. Sawilowsky, S. (1990). Nonparametric tests of interaction in experimental design. Review of Educational Research, 25(20-59).
http://mathhelpforum.com/advanced-algebra/49116-system-equation-condition.html | # Thread:
1. ## System of equation, condition of a
For which values of $a$ the following system of equations has a unique solution? Infinite solutions?
$\left[ \begin{array}{ccc}x-y+z=2 \\ ax-y+z=2 \\ 2x-2y +(2-a)z=4a \end{array} \right]$.
I've solved the system via matrices and got that $\left[ \begin{array}{ccc}x=0 \\ y=-2-4a \\ z=-4a(a-1) \end{array} \right]$. I'm stuck from here... I see that for a given $a$ there is always a unique solution, but also that you could pick a value of $a$ so that you have an infinity of solutions. So how do I answer the question?
2. Hello, arbolis!
For which values of $a$ does the following system of equations have a unique solution?
Infinite solutions?
$\begin{array}{ccc}x-y+z\;=\;2 & [1]\\ ax-y+z\;=\;2 & [2]\\ 2x-2y +(2-a)z\;=\;4a & [3]\end{array}$
Just "eyeballing" the system, I can see the following:
If $a = 1$, equations [1] and [2] are identical.
. . The system will have infinite solutions.
If $a = 0$, equation [3] becomes: . $2x - 2y + 2z \:=\:0 \quad\Rightarrow\quad x - y + z \:=\:0$
. . which is incompatible with equation [1].
The system will have no solution.
3. Thank you Soroban!
You are right, it seems more easy than I thought. So what I did with matrices was not necessary and only complicates things.
But I'd like to know if there is a formal way to determine which values of $a$ would make the system have only one solution. Is that possible? Or do I have to "eyeball" this?
4. Hello,
Here is my trial... lol
The system can be rewritten this way :
$\begin{pmatrix} 1 & -1 & 1 \\ a & -1 & 1 \\ 2 & -2 & 2-a \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}=\begin{pmatrix} 2 \\ 2 \\ 4a \end{pmatrix}$
I don't have my notes with me, so I can't check. The wikipedia article may help you, I'm not sure (System of linear equations - Wikipedia, the free encyclopedia).
In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed.
Independent vectors mean that there is no linear combination of two of them giving the third. Or any relation of proportionality.
If you find dependent vectors, it is likely that you'll get an infinity of solutions. And if there are dependent vectors, the discriminant will be = 0 (to be checked).
General behavior
The solution set for two equations in three variables is usually a line.
In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns:
1. Usually, a system with fewer equations than unknowns has infinitely many solutions.
2. Usually, a system with the same number of equations and unknowns has a single unique solution.
3. Usually, a system with more equations than unknowns has no solution.
I'm sorry not to be able to help more, algebra is not what I prefer so I easily forget these things
Hope you'll extract something from this.
5. And if there are dependent vectors, the discriminant will be = 0 (to be checked).
What do you mean by discriminant? Anyway, I've not yet seen basis (even if I had to deal with them when I studied Numerical Analysis).
1. Usually, a system with fewer equations than unknowns has infinitely many solutions.
2. Usually, a system with the same number of equations and unknowns has a single unique solution.
The system can be rewritten this way :
I know, that's what I did to reach the solution I posted above, but I certainly made an error since it doesn't fit with Soroban's eyeballing.
So my question remains: "I'd like to know if there is a formal way to determine what values of $a$ would make the system have only one solution."
Thanks for your time Moo and your help, I appreciate the research you've done for me.
Lastly,
I'm sorry not to be able to help more, algebra is not what I prefer so I easily forget these things
I hate this matter (even if less than a year ago) but it is an extremely important one. Not only for me (as a physics student) but for many other careers. I had an exam of physics I last Wednesday and I totally depreciated Algebra II. The exam of Algebra II is coming this Wednesday so I'm hurrying up!
6. Originally Posted by arbolis
What do you mean by discriminant?
Determinant
Sorry, I'm being very tired..
I hate this matter (even if less than a year ago) but it is an extremely important one. Not only for me (as a physics student) but for many other careers. I had an exam of physics I last Wednesday and I totally depreciated Algebra II. The exam of Algebra II is coming this Wednesday so I'm hurrying up!
So good luck
There is (at least) a little mistake in your answer.
You should have had (1-a)x=0. But this doesn't mean that x=0 ! This is the first eyeball
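For a formal answer to the thread's question, one can look at the determinant of the coefficient matrix as a function of $a$; a short SymPy check (variable names are mine):

```python
import sympy as sp

a = sp.symbols('a')
A = sp.Matrix([[1, -1, 1],
               [a, -1, 1],
               [2, -2, 2 - a]])
b = sp.Matrix([2, 2, 4*a])

det = sp.expand(A.det())
print(det)                         # equals a - a**2 = a*(1 - a)
print(sp.solve(sp.Eq(det, 0), a))  # [0, 1]: only a = 0 or a = 1 can break uniqueness

# For any other a the determinant is nonzero and the solution is unique:
print(A.LUsolve(b).applyfunc(sp.simplify).T)
```

This matches the eyeballed cases: $a = 1$ gives infinitely many solutions, $a = 0$ gives none, and every other $a$ gives exactly one.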
http://math.stackexchange.com/questions/149833/is-critical-haudorff-measure-a-frostman-measure?answertab=active | # Is critical Haudorff measure a Frostman measure?
Let $K$ be a compact set in $\mathbb{R}^d$ of Hausdorff dimension $\alpha<d$, $H_\alpha(\cdot)$ the $\alpha$-dimensional Hausdorff measure. If $0<H_\alpha(K)<\infty$, is it necessarily true that $H_\alpha(K\cap B)\lesssim r(B)^\alpha$ for any open ball $B$? Here $r(B)$ denotes the radius of the ball $B$.
This seems to be true when $K$ enjoys some self-similarity, e.g. when $K$ is the standard Cantor set. But I am not sure if it is also true for the general sets.
In case you want to study this subject further, I'll point out the key words: (a) "Ahlfors regular" means $\mathcal H^{\alpha}(K\cap B_r)\sim r^{\alpha}$; (b) "upper Ahlfors regular" means $\mathcal H^{\alpha}(K\cap B_r)\lesssim r^{\alpha}$. Some people drop "Ahlfors". – user31373 May 25 '12 at 22:43
## 2 Answers
Consider e.g. $\alpha=1$, $d=2$. Given $p > 1$, let $K$ be the union of a sequence of line segments of lengths $1/n^2$, $n = 1,2,3,\ldots$, all with one endpoint at $0$. Then for $0 < r < 1$, if $B$ is the ball of radius $r$ centred at $0$, $H_1(K \cap B) = \sum_{n \le r^{-1/2}} r + \sum_{n > r^{-1/2}} n^{-2} \approx r^{1/2}$
Nice! Thank you. – Syang Chen May 26 '12 at 14:11
I would read about the local dimension of a measure here.
http://www.physicsforums.com/showthread.php?t=472029 | Physics Forums
## Simple dipstick problem
1. The problem statement, all variables and given/known data
I have a problem involving a dipstick, a cone and a cylinder. Basically I want to make a dipstick for a cone and cylinder which shows the volume of remaining fluid in the containers.
2. Relevant equations
3. The attempt at a solution
for the cone
V=pi r^2 *h/3
Now I can't find how to relate depth to volume.
I just remembered to say that what I'm really really having a problem with is finding the correct relationship between height and radius. Thanks
Well, you could start by focusing on the cylinder. If the volume of the cylinder is volume = pi * radius^2 * height Can you rewrite the equation so that you solve for height? If you can do that you know that a certain height on the dipstick will correspond to a certain volume of the cylinder. Then you can figure out how the cone will add on to that.
## Simple dipstick problem
The cylinder is on its side so it's not that simple, I don't think. Sorry, I forgot to say that in the OP.
In that case you need to rewrite the volume equation for the cylinder in terms of the diameter of the cylinder. Remember that radius = 1/2 the diameter. That way you'll know the volume that corresponds to the "diameter height" that you have written on the dipstick. Is the cone attached to the cylinder or is it a separate question?
the cone is a separate question. so with the diametre V=pi*1/4 D^2 *h then D=sqrt(4V/pi) but that will only work until 1/2 way won't it?
You know what, I think I actually made a mistake. I think you're actually going to have to use some trigonometry to solve this. I'll have to think about this a bit.
Well, I can say this much. If you look at the cylindrical tank from one of the ends, you would have a circle with the liquid coming up a certain height of the circle. Therefore the problem becomes a question of finding the area of the segment of the circle below the chord marked by the top of the liquid. This is then multiplied by the length of the cylinder to find the volume. Since it's a circle, you can draw a line from the center of the circle to the ends of the chord that marks the height of the liquid. Bisecting this, you get two right angle triangles with a hypotenuse equal to the radius, and a height equal to the radius minus the height of the liquid. The angle between these two is then used to figure things out... Unfortunately... I'm kind of rusty on this so I'm afraid I can't be of much more help. Maybe someone else can take it from here?
What you are saying at the start makes perfect sense to me and hopefully someone can help me further with the math. Thanks
I thought I would mention another thing. If you take the area of the "pie slice" that is marked by the lines drawn to the top of the chord (including the area that is swept out to the bottom of the tank), and then you subtract the area of the triangles above the liquid, that should give you the area of the liquid in the circle.
I found this on wikipedia... which might help http://en.wikipedia.org/wiki/Circular_segment You need to rewrite the area formula in terms of the height of the liquid, and I think that R = h + (r-h) gives you a way to do that.
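For the cylinder, the circular-segment area from that article gives the dipstick relation directly; here is a small Python sketch (the tank dimensions in the example are made up):

```python
import numpy as np

def horizontal_cylinder_volume(h, r, length):
    """Liquid volume at depth h in a horizontal cylinder of radius r."""
    if not 0 <= h <= 2 * r:
        raise ValueError("depth must be between 0 and the diameter")
    # Circular segment area: A = r^2*arccos((r-h)/r) - (r-h)*sqrt(2*r*h - h^2)
    segment_area = r**2 * np.arccos((r - h) / r) - (r - h) * np.sqrt(2 * r * h - h**2)
    return segment_area * length

r, length = 0.5, 2.0
for h in [0.0, 0.25, 0.5, 0.75, 1.0]:      # dipstick readings
    print(h, horizontal_cylinder_volume(h, r, length))
# At h = 2r this equals the full volume pi*r^2*length; at h = r it is half of it.
```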
Recognitions: Homework Help I'm willing to give this question a go, but I can't seem to concentrate at the moment, probably because of my headache. If anyone could give me a jump start with what we're trying to solve, I'll give a lending hand to finish the problem off.
Hey Mentallic, any/all help is appreciated. Were trying to make a dipstick for a sideways cylinder and a cone. Currently I'm having trouble relating the height of the liquid on the circle to the area of that part. Thanks
Recognitions: Homework Help Ok since you said in post #6 that the cylinder and cone are separate questions, I'm going to assume that you want a relation between the height and volume of liquid present in a cone on its side (like a wheel) and the cone will be laying down which way exactly? First of all, do you know about and are you supposed to use integration?
Cone is upright, cylinder is on its side. I know integration but this is for an 11th grader I'm helping, I seem to have forgotten everything from then. No integration for him yet though. It is the cone question that I need help with as there is ample information on the cylinder on the net. Thanks and hopefully the fact that its upright (ice cream cone ways) makes this simpler and sorry for not stating this directly.
Recognitions: Homework Help Alright sure, and yes it does make the problem easier Ok so I'm guessing that the dimensions of the cone are known constants. Let's give it a base radius R and perpendicular height H, so the volume of the cone is $$V=\frac{\pi R^2 H}{3}$$ Now let's fill this cone up a bit with water, the water level will be at a height h. Now, just look at the part of the cone that isn't filled with water - it's a smaller cone with height H-h. Let the radius of this cone be r. So the volume of the water is now simply $$V=\frac{\pi R^2 H}{3}-\frac{\pi r^2 (H-h)}{3}$$ Ok so what constants do we know in this formula? We know R, H, h (because this will be observed on the dipstick) and we don't know r. Well r is pretty easy to find, just take a side-view of the cone and look at one side of the cone. It will be a right triangle with height H and length R. There will be another similar but smaller right triangle within it with height H-h and length r. Since these triangles are similar, their dimensions are proportional to each other. Can you take it from here?
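Following that similar-triangles setup (the empty space above the liquid treated as a smaller cone of height H - h), the cone dipstick relation can be sketched as follows; R and H below are made-up numbers:

```python
import numpy as np

def cone_volume_from_depth(h, R, H):
    """Liquid volume when the liquid stands at height h in a cone of base radius R and height H."""
    r = R * (H - h) / H                                   # similar triangles: r / (H - h) = R / H
    return np.pi * R**2 * H / 3 - np.pi * r**2 * (H - h) / 3

R, H = 1.0, 3.0
for h in np.linspace(0, H, 7):
    print(round(float(h), 2), round(cone_volume_from_depth(h, R, H), 4))
# h = 0 gives 0 and h = H gives the full volume pi*R^2*H/3, as expected.
```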
http://www.mathscoop.com/calculus/infinite-sequences-and-series/nth-term-test.php | The Nth Term Test is often called the "Divergence Test"
It can only prove that a series diverges.
This test does not verify that a series converges.
# The Formula
$$\sum\limits_{n=1}^{\infty}a_{n} \text{ diverges if } \lim\limits_{n\rightarrow\infty}a_{n} \text{ fails to exists or does not equal zero.}$$
Note: If the limit equals zero that does not mean that the series converges . It means that it could converge. If the limit equals zero, more tests have to be performed to prove whether or not the series converges. However, if the limit is not zero, we know the series diverges. This is why the test is also known as the divergence test.
The Nth term test is often the first test that you should use to rule out the series diverges before you go any further.
| Series | Limit | Conclusion |
|--------|-------|------------|
| $$\sum\limits_{n=0}^{\infty}\frac{n}{2n+1}$$ | $$\lim\limits_{n\rightarrow\infty}\frac{n}{2n+1}=\frac{1}{2}$$ | diverges since $$\lim\limits_{n\rightarrow\infty}\frac{n}{2n+1}\neq 0$$ |
| $$\sum\limits_{n=0}^{\infty}\frac{n!+1}{3n!-5}$$ | $$\lim\limits_{n\rightarrow\infty}\frac{n!+1}{3n!-5}=\frac{1}{3}$$ | diverges since $$\lim\limits_{n\rightarrow\infty}\frac{n!+1}{3n!-5}\neq0$$ |
| $$\sum\limits_{n=0}^{\infty}\frac{n}{ln(n)}$$ | $$\lim\limits_{n\rightarrow\infty}\frac{n}{ln(n)}=\infty$$ (L'Hopital's Rule) | diverges since $$\lim\limits_{n\rightarrow\infty}\frac{n}{ln(n)}\neq 0$$ |
| $$\sum\limits_{n=0}^{\infty}n \cdot sin(\frac{1}{n})$$ | $$\lim\limits_{n\rightarrow\infty}n \cdot sin(\frac{1}{n})=1$$ (L'Hopital's Rule) | diverges since $$\lim\limits_{n\rightarrow\infty}n \cdot sin(\frac{1}{n})\neq0$$ |
| $$\sum\limits_{n=0}^{\infty}\frac{1}{n}$$ | $$\lim\limits_{n\rightarrow\infty}\frac{1}{n}=0$$ | could possibly converge |
Remember: A limit of zero does not mean that the series converges.
In fact, in the last example we found that $$\lim\limits_{n\rightarrow\infty}\frac{1}{n}=0$$, however, we know that the series $$\sum\limits_{n=0}^{\infty}\frac{1}{n}$$ is a P-series that diverges.
On the other hand, consider the series $$\sum\limits_{n=0}^{\infty}\frac{1}{n^2}$$ the $$\lim\limits_{n\rightarrow\infty}\frac{1}{n^2}=0$$ . In this case, we have a P-series with p=2 so this series converges.
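A few of the limits above, checked with SymPy (remember the nth term test can only ever prove divergence):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
terms = [n / (2*n + 1), n / sp.log(n), n * sp.sin(1/n), 1/n, 1/n**2]

for a_n in terms:
    L = sp.limit(a_n, n, sp.oo)
    verdict = "diverges" if L != 0 else "inconclusive (limit is 0)"
    print(a_n, "->", L, ":", verdict)
```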
http://mathoverflow.net/questions/tagged/manifold | ## Tagged Questions
0answers
1 views
### Lipschitz map of the ellipse
Is there a L-Lipschitz homeomorphism of the Elipse $x^2/4+y^2=1$ onto the unit circle $x^2+y^2=1$ such that $L<1$?
5answers
648 views
+100
### Another colored balls puzzle (part II)
The same colleague as in http://mathoverflow.net/questions/130489/another-colored-balls-puzzle asked me the following variant which she called "part II". Imagine you have $n$ ball …
1answer
24 views
### Basics of minimal Elliptic Surfaces [following Beauville]
I asked this on mathStackexchange but might be a bit too localized for it. I am reading Beauville's chapter IX on Elliptic surfaces. Let $S$ be a minimal elliptic surface with \$\ …
7answers
1k views
### Why is Set, and not Rel, so ubiquitous in mathematics?
The concept of relation in the history of mathematics, either consciously or not, has always been important: think of order relations or equivalence relations. Why was there the n …
5answers
789 views
### Groupoid actions on spaces
The action of a group $G$ on a topological space $X$ can be viewed as a functor $F: G \to \mathcal{Top}$ with $F(*)=X$. (Here I'm viewing a group as a category with one object, \$ * …
1answer
83 views
### The relations between the Perelman’s entropy functional and notions of entropy from statistical mechanics
I am looking for the relations and analogies between the Perelman's entropy functional,$\mathcal{W}(g,f,\tau)=\int_M [\tau(|\nabla f|^2+R)+f-n] (4\pi\tau)^{-\frac{n}{2}}e^{-f}dV$, …
5answers
293 views
### Sequences equidistributed modulo 1
Let $\alpha$ be any positive irrational and $\beta$ be any positive real. We have the following results. H. Weyl (1909): The fractional part of the sequence $\alpha n$ is equidist …
1answer
71 views
### What is an interpretation of the relation in the cohomology of the pure braid groups?
In 1968, Arnol'd proved that the integral cohomology of the pure braid group $P_n$ is isomorphic to the exterior algebra generated by the collection of degree-one classes \$\omega_{ …
0answers
7 views
### Are the d quantities log(\lambda_j.s+\mu_j) linearly independent over Q for all s>1?
This question deals with the gamma factor of a function of the Selberg class. Writing the functional equation of such a function $F$ as $\Phi(s)=\overline{\Phi(\overline{1-s})}$ wi …
2answers
63 views
### A question about large real closed fields
A real closed field can be ordered in one and only one way, and is therefore provided with a unique order topology. Given any infinite cardinal number k, does there always exist a …
5answers
557 views
### Are the two meanings of “undecidable” related?
I am usually confused by questions of the type "could such and such a problem be undecidable", because as far as I know there are two distinct possible meanings of "undecidable". …
1answer
33 views
### Is this cube packing possible?
I know how to pack $5$ unit squares in a square of side length $2+\frac{\sqrt{2}}{2}$. Is there an $\varepsilon>0$ such that there exists a packing of $9$ unit cubes in a cube of …
0answers
14 views
### Zeroes of a homogeneous function
I am interested in the zero-set of a homogeneous function $f(x_1, \cdots, x_n)$, where $f$ is not necessarily a polynomial. In particular, I would like to know if there are any gen …
1answer
149 views
### Differentiable manifolds by Serge Lang question
I have started reading "Introduction to differentiable manifolds" by Serge Lang. In this book, Lang takes a different approach, by immediately introducing manifolds on arbitrary Ba …
0answers
25 views
### Proving a variety is not unirational
It is known that if a variety is unirational then it is rationally connected. However, there are no known examples of rationally connected varieties which are not unirational. In …
http://math.stackexchange.com/questions/181720/proof-of-non-ordering-of-complex-field/181732 | # Proof of Non-Ordering of Complex Field
Let $\mathcal F$ be a field. Suppose that there is a set $P \subset \mathcal F$ which satisfies the following properties:
• For each $x \in \mathcal F$, exactly one of the following statements holds: $x \in P$, $-x \in P$, $x =0$.
• For $x,y \in P$, $xy \in P$ and $x+y \in P$.
If such a $P$ exists, then $\mathcal F$ is an ordered field.
Define $x \le y \Leftrightarrow y -x \in P \vee x = y$.
Exercise: Prove that the field of complex numbers $\mathbb C$ cannot be given the structure of an ordered field.
My Work So Far: (Edit 1 note: This section and the Question is at the beginning, simply leaving this up for reference as to where I started)
Let $i$ be such that $i \in P, i \ne 0 \Rightarrow i > 0$. But $i^2 = -1 \notin P$.
My Question: I am not sure how much I need to redefine, and how I go about rigorously making this patchwork argument airtight. I am aware that I have not addressed how I assumed that $-1 \notin P$, but I'm not sure how to distinguish between $1$ and $i$ in this proof.
Edit #1
1st Step: Showing that $-1 \notin P$, observe that $(-1)(-1) = 1$ therefore if $-1 \in P$, both $x, -x \in P$, a contradiction.
2nd Step: To show $i \notin P$, we have that if $i \in P \Rightarrow i^2 \in P$, but $i^2 = -1 \notin P$, so $i \notin P$.
3rd Step: To show $-i \notin P$, we have $(-i)(-i) = i^2 \notin P$, so $-i$ cannot be in $P$.
Conclusion: Since $i \ne 0$, and $i, -i \notin P$, there is no set $P \subset \mathbb C$ that satisfies the above properties, thus $\mathbb C$ is not ordered.
Thank you André Nicolas and Eric Stucky for your help!
## 2 Answers
To show that $-1$ is not in $P$, note that if $-1\in P$ then $(-1)(-1)\in P$, which contradicts the fact that if $x \ne 0$ exactly one of $x$ and $-x$ is in $P$.
Next we show that $i\notin P$. Suppose to the contrary that $i\in P$. Then $i^2\in P$, which contradicts the fact that $-1\notin P$.
The same argument shows that $-i\notin P$. This contradicts the fact that if $x\ne 0$, then exactly one of $x$ and $-x$ is in $P$.
I think the definition of $P$ you have is slightly off: if $x=0$ then all three conditions are satisfied. A possible fix is "Either $x\in P$ or $-x\in P$, with both holding iff $x=0$." On the other hand, I'm not convinced that it's important that $0\in P$; you should check that before making things complicated.
For an arbitrary field, $-1\notin P$ because then $(-1)(-1) = 1\in P$, which is impossible.
From there, you assume that $P$ exists and begin a proof by contradiction. Using your work in the OP you can therefore show that $i\notin P$. However, since $i\neq 0$, we also need to show that $-i\notin P$ before we continue. The proof is essentially identical to the one you gave in the OP.
This will contradict the fact that either $i$ or $-i$ is in $P$. Therefore, there cannot be such a set $P\subset\mathbb{C}$.
I think the definition is OK. 0 is not meant to be an element of P. For example, if we were talking about the reals, then P would be the positive reals, not including 0. – Ben Crowell Aug 12 '12 at 17:30
http://mathhelpforum.com/algebra/67370-geometric-arithmetic-sequence.html | # Thread:
1. ## Geometric and Arithmetic Sequence
Hi, I'm a bit confused about these questions here. I do not know where to start; any help is much appreciated.
Question 1
In an arithmetic sequence t4 + t5 + t6 = 300 and t15 + t16 + t17 = 201
Find t18
Question 2
In a geometric sequence t1 + t2 + t3 = 21 and t4 + t5 + t6 = 168
Find the sequence
Thanks in advance =]
2. In an arithmetic sequence $t_n=t_1+(n-1)r$
In a geometric sequence $t_n=t_1r^{n-1}$
Write the equations in terms of $t_1$ and $r$, then solve the systems.
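For concreteness, here is one way to carry that out with a computer algebra system. This is just a sketch, not part of the original thread; it assumes SymPy is available, and the symbol names are my own.

```python
from sympy import symbols, solve

# Question 1: arithmetic sequence, t_n = t1 + (n-1)*d
t1, d = symbols('t1 d')
eqs1 = [
    (t1 + 3*d) + (t1 + 4*d) + (t1 + 5*d) - 300,     # t4 + t5 + t6 = 300
    (t1 + 14*d) + (t1 + 15*d) + (t1 + 16*d) - 201,  # t15 + t16 + t17 = 201
]
sol1 = solve(eqs1, [t1, d], dict=True)[0]
print(sol1, sol1[t1] + 17*sol1[d])       # t1 = 112, d = -3, so t18 = 61

# Question 2: geometric sequence, t_n = a*r**(n-1), a = first term, r = ratio
a, r = symbols('a r')
eqs2 = [
    a*(1 + r + r**2) - 21,               # t1 + t2 + t3 = 21
    a*r**3*(1 + r + r**2) - 168,         # t4 + t5 + t6 = 168
]
print(solve(eqs2, [a, r], dict=True))    # the real solution is a = 3, r = 2
```

So Question 1 gives $t_{18} = 61$, and the sequence in Question 2 is $3, 6, 12, 24, 48, 96, \dots$ (first term $3$, common ratio $2$).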
3. Another way for the first one.
The n-th term of an arithmetic sequence is given by $t_n = t_m + (n-m)d$, where $n > m$ and $d$ is the common difference.
You are given the sum of an odd number of consecutive terms. The arithmetic mean of these terms is the middle term. For instance:
$\frac{t_4+t_5+t_6}{3} = t_5$
Note that this applies only for the arithmetic sequence.
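With the numbers from Question 1 this gives $t_5 = 300/3 = 100$ and $t_{16} = 201/3 = 67$, so $t_{16} - t_5 = 11d = -33$, i.e. $d = -3$, and therefore $t_{18} = t_{16} + 2d = 61$.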
http://unapologetic.wordpress.com/2009/11/11/the-jacobian/?like=1&_wpnonce=67418a1937 | # The Unapologetic Mathematician
## The Jacobian
Now that we’ve used exterior algebras to come to terms with parallelepipeds and their transformations, let’s come back to apply these ideas to the calculus.
We’ll focus on a differentiable function $f:X\rightarrow\mathbb{R}^n$, where $X$ is itself some open region in $\mathbb{R}^n$. That is, if we pick a basis $\{e_i\}_{i=1}^n$ and coordinates of $\mathbb{R}^n$, then the function $f$ is a vector-valued function of $n$ real variables $x^1,\dots,x^n$ with components $f^1,\dots,f^n$. The differential, then, is itself a vector-valued function whose components are the differentials of the component functions: $df=df^ie_i$. We can write these differentials out in terms of partial derivatives:
$\displaystyle df^i(x^1,\dots,x^n;t^1,\dots,t^n)=\frac{\partial f^i}{\partial x^1}\bigg\vert_{(x^1,\dots,x^n)}t^1+\dots+\frac{\partial f^i}{\partial x^n}\bigg\vert_{(x^1,\dots,x^n)}t^n$
Just like we said when discussing the chain rule, the differential at the point $(x^1,\dots,x^n)$ defines a linear transformation from the $n$-dimensional space of displacement vectors at $(x^1,\dots,x^n)$ to the $n$-dimensional space of displacement vectors at $f(x^1,\dots,x^n)$, and the matrix entries with respect to the given basis are given by the partial derivatives.
It is this transformation that we will refer to as the Jacobian, or the Jacobian transformation. Alternately, sometimes the representing matrix is referred to as the Jacobian, or the Jacobian matrix. Since this matrix is square, we can calculate its determinant, which is also referred to as the Jacobian, or the Jacobian determinant. I’ll try to be clear which I mean, but often the specific referent of “Jacobian” must be sussed out from context.
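To spell this out (just unpacking the definition above), the Jacobian matrix at $(x^1,\dots,x^n)$ has the partial derivative $\frac{\partial f^i}{\partial x^j}$, evaluated at that point, in row $i$ and column $j$:

$\displaystyle\begin{pmatrix}\frac{\partial f^1}{\partial x^1}&\cdots&\frac{\partial f^1}{\partial x^n}\\\vdots&\ddots&\vdots\\\frac{\partial f^n}{\partial x^1}&\cdots&\frac{\partial f^n}{\partial x^n}\end{pmatrix}$

In particular, for $n=2$ the Jacobian determinant is the familiar expression $\frac{\partial f^1}{\partial x^1}\frac{\partial f^2}{\partial x^2}-\frac{\partial f^1}{\partial x^2}\frac{\partial f^2}{\partial x^1}$.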
So, in light of our recent discussion, what does the Jacobian determinant mean? Well, imagine starting with an $n$-dimensional parallelepiped at the point $(x^1,\dots,x^n)$, with one side in each of the basis directions, and positively oriented. That is, it consists of the points $(x^1+t^1,\dots,x^n+t^n)$ with $t^i$ in the interval $[0,\Delta x^i]$ for some fixed $\Delta x^i$. We’ll assume for the moment that this whole region lands within the region $X$. It should be clear that this parallelepiped is represented by the wedge
$\displaystyle(\Delta x^1e_1)\wedge\dots\wedge(\Delta x^ne_n)=(\Delta x^1\dots\Delta x^n)e_1\wedge\dots\wedge e_n$
which clearly has volume given by the product of all the $\Delta x^i$.
Now the function $f$ sends this cube to a sort of curvy parallelepiped, consisting of the points $f(x^1+t^1,\dots,x^n+t^n)$, with each $t^i$ in the interval $[0,\Delta x^i]$, and this image will have some volume. Unfortunately, we have no idea as yet how to measure such a volume. But we might be able to approximate it. Instead of using the actual curvy parallelepiped, we’ll build a new one. And if the $\Delta x^i$ are small enough, it will be more or less the same set of points, with the same volume. Or at least close enough for our purposes. We’ll replace the curved path defined by
$\displaystyle f(x^1,\dots,x^i+t,\dots,x^n)\qquad0\leq t\leq\Delta x^i$
by the displacement vector between the two endpoints:
$\displaystyle f(x^1,\dots,x^i+\Delta x^i,\dots,x^n)-f(x^1,\dots,x^i,\dots,x^n)$
and use these new vectors to build a new parallelepiped
$\displaystyle\left(f(x^1+\Delta x^1,\dots,x^n)-f(x^1,\dots,x^n)\right)\wedge\dots\wedge\left(f(x^1,\dots,x^n+\Delta x^n)-f(x^1,\dots,x^n)\right)$
But this is still an awkward volume to work with. However, we can use the differential to approximate each of these differences
$\displaystyle\begin{aligned}f(x^1,\dots,x^k+\Delta x^k,\dots,x^n)&-f(x^1,\dots,x^k,\dots,x^n)\\&\approx df(x^1,\dots,x^n;0,\dots,\Delta x^k,\dots,0)\\&=\Delta x^kdf(x^1,\dots,x^n;0,\dots,1,\dots,0)\\&=\Delta x^kdf^i(x^1,\dots,x^n;0,\dots,1,\dots,0)e_i\\&=\Delta x^k\frac{\partial f^i}{\partial x^k}\bigg\vert_{(x^1,\dots,x^n)}e_i\end{aligned}$
with no summation here on the index $k$.
Now we can easily calculate the volume of this parallelepiped, represented by the wedge
$\displaystyle\left(\Delta x^1\frac{\partial f^i}{\partial x^1}\bigg\vert_{(x^1,\dots,x^n)}e_i\right)\wedge\dots\wedge\left(\Delta x^n\frac{\partial f^i}{\partial x^n}\bigg\vert_{(x^1,\dots,x^n)}e_i\right)$
which can be rewritten as
$\displaystyle\left(\Delta x^1\dots\Delta x^n\right)\left(\frac{\partial f^i}{\partial x^1}\bigg\vert_{(x^1,\dots,x^n)}e_i\right)\wedge\dots\wedge\left(\frac{\partial f^i}{\partial x^n}\bigg\vert_{(x^1,\dots,x^n)}e_i\right)$
which clearly has a volume of $\left(\Delta x^1\dots\Delta x^n\right)$ — the volume of the original parallelepiped — times the Jacobian determinant. That is, the Jacobian determinant at $(x^1,\dots,x^n)$ estimates the factor by which the function $f$ expands small volumes near that point. Or it tells us that locally $f$ reverses the orientation of small regions near the point if the Jacobian determinant is negative.
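For a quick numerical sanity check of this picture (a small sketch, assuming NumPy; nothing later depends on it), consider the polar-coordinate map $f(r,\theta)=(r\cos\theta,r\sin\theta)$, whose Jacobian determinant works out to $r$:

```python
import numpy as np

# The polar-coordinate map f(r, theta) = (r cos(theta), r sin(theta)).
def f(p):
    r, t = p
    return np.array([r * np.cos(t), r * np.sin(t)])

def jacobian_det(p, h=1e-6):
    # Numerical Jacobian matrix via central differences, then its determinant.
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (f(p + e) - f(p - e)) / (2 * h)
    return np.linalg.det(J)

p = np.array([2.0, 0.7])                 # base point (r, theta)
print(jacobian_det(p))                   # close to r = 2.0

# Area of the image of a small coordinate square, approximated by the
# parallelogram spanned by the images of its sides.
dr, dt = 1e-3, 1e-3
v1 = f(p + np.array([dr, 0.0])) - f(p)
v2 = f(p + np.array([0.0, dt])) - f(p)
area = abs(np.linalg.det(np.column_stack([v1, v2])))
print(area / (dr * dt))                  # again close to 2.0
```

Both numbers come out near $2$, the value of $r$ at the base point, which is exactly the factor by which this map expands small areas there.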
Posted by John Armstrong | Analysis, Calculus
## 22 Comments »
1. [...] « Previous | [...]
Pingback by | November 12, 2009 | Reply
2. [...] Posts The Jacobian of a CompositionThe JacobianThe Cross Product and PseudovectorsThe Hodge StarSunday Samples 146An Example of a [...]
Pingback by | November 13, 2009 | Reply
3. [...] this case, the Jacobian transformation is just the function itself, and so the Jacobian determinant is nonzero if and only if the matrix [...]
Pingback by | November 17, 2009 | Reply
4. [...] the theorem that I promised. Let be continuously differentiable on an open region , and . If the Jacobian determinant at some point , then there is a uniquely determined function and two open sets and so [...]
Pingback by | November 18, 2009 | Reply
5. [...] rank if and only if the leading matrix is invertible. And this is generalized to asking that some Jacobian determinant of our system of functions is [...]
Pingback by | November 19, 2009 | Reply
6. [...] That is, the new component functions are just the coordinate functions . We can easily calculate the Jacobian matrix [...]
Pingback by | November 20, 2009 | Reply
7. [...] cases, we know that the inverse function exists because of the inverse function theorem. Here the Jacobian determinant is simply the derivative , which we’re assuming is everywhere [...]
Pingback by | January 5, 2010 | Reply
8. [...] differentiable function defined on an open region . Further, assume that is injective and that the Jacobian determinant is everywhere nonzero on . The inverse function theorem tells us that we can define a continuously [...]
Pingback by | January 6, 2010 | Reply
9. [...] Geometric Interpretation of the Jacobian Determinant We first defined the Jacobian determinant as measuring the factor by which a transformation scales infinitesimal pieces of -dimensional [...]
Pingback by | January 8, 2010 | Reply
10. [...] again, in the sense that if one of these integrals exists then so does the other, and their values are equal. The function plays the role of the absolute value of the Jacobian determinant. [...]
Pingback by | August 2, 2010 | Reply
11. [...] and take the th partial derivative of that function. And this is precisely the definition of the Jacobian of this transition [...]
Pingback by | April 1, 2011 | Reply
12. [...] from multivariable calculus of a linear map that takes tangent vectors to tangent vectors: the Jacobian, which we saw as a certain extension of the notion of the derivative. We will find that our map is [...]
Pingback by | April 6, 2011 | Reply
13. [...] dimensions of and , respectively. Like we saw for coordinate transforms in place, this is just the Jacobian [...]
Pingback by | April 6, 2011 | Reply
14. [...] are the components of the Jacobian matrix of the transition function . What does this mean? Well, consider the linear [...]
Pingback by | April 13, 2011 | Reply
15. [...] function theorem from multivariable calculus: if is a map defined on an open region , and if the Jacobian of has maximal rank at a point then there is some neighborhood of so that the restriction is [...]
Pingback by | April 14, 2011 | Reply
16. [...] , and the Jacobian of [...]
Pingback by | April 15, 2011 | Reply
17. [...] is, the two forms differ at each point by a factor of the Jacobian determinant at that point. This is the differential topology version of the change of basis formula for top [...]
Pingback by | July 8, 2011 | Reply
18. [...] look at what happens when and is a singular -cube. Since it has a Jacobian at each point in the unit cube, and we’ll keep things simple by assuming that it’s [...]
Pingback by | August 3, 2011 | Reply
19. [...] little thought gives us our answer: is the Jacobian determinant of the coordinate transformation from one patch to the other. Indeed, we use the Jacobian to change [...]
Pingback by | August 29, 2011 | Reply
20. [...] the (local) orientation form on differs from the (local) orientation form on by a factor of the Jacobian determinant of the function with respect to these coordinate maps. This repeats what we saw in the case of [...]
Pingback by | September 1, 2011 | Reply
21. [...] manifold to another. Since is both smooth and has a smooth inverse, we must find that the Jacobian is always invertible; the inverse of at is at . And so — assuming is connected — [...]
Pingback by | September 12, 2011 | Reply
22. [...] So let’s see how this form changes; if is another coordinate patch, we can assume that by restricting each patch to their common intersection. We’ve already determined that the forms differ by a factor of the Jacobian determinant: [...]
Pingback by | October 6, 2011 | Reply
http://conservapedia.com/Cosecant | # Cosecant
### From Conservapedia
The cosecant is a trigonometric function describing the ratio of the hypotenuse to the opposite side in a right triangle. It is often abbreviated as csc or cosec.
$\csc \theta = \sec \left(\frac{\pi}{2} - \theta \right) = \frac{1}{\sin \theta}\,$
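For example, since $\sin \left(\frac{\pi}{6}\right) = \frac{1}{2}$, we have $\csc \left(\frac{\pi}{6}\right) = 2$. The cosecant is undefined at any angle where $\sin \theta = 0$.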
http://unapologetic.wordpress.com/2008/07/21/the-sum-of-subspaces/?like=1&source=post_flair&_wpnonce=d0199d491e | # The Unapologetic Mathematician
## The Sum of Subspaces
We know what the direct sum of two vector spaces is. That we define abstractly and without reference to the internal structure of each space. It’s sort of like the disjoint union of sets, and in fact the basis for a direct sum is the disjoint union of bases for the summands.
Let’s use universal properties to prove this! We consider the direct sum $V\oplus W$, and we have a basis $A$ for $V$ and a basis $B$ for $W$. But remember that the whole point of a basis is that vector spaces are free modules.
That is, there is a forgetful functor from $\mathbf{Vec}(\mathbb{F})$ to $\mathbf{Set}$, sending a vector space to its underlying set. This functor has a left adjoint which assigns to any set $S$ the vector space $\mathbb{F}\left[S\right]$ of formal linear combinations of elements of $S$. This is the free vector space on the basis $S$, and when we choose the basis $A$ for a vector space $V$ we are actually choosing an isomorphism $V\cong\mathbb{F}\left[A\right]$.
Okay. So we’re really considering the direct sum $\mathbb{F}\left[A\right]\oplus\mathbb{F}\left[B\right]$, and we’re asserting that it is isomorphic to $\mathbb{F}\left[A\uplus B\right]$. But we just said that constructing a free vector space is a functor, and this functor has a right adjoint. And we know that any functor that has a right adjoint preserves colimits! The disjoint union of sets is a coproduct, and the direct sum of vector spaces is a biproduct, which means it’s also a coproduct. Thus we have our isomorphism. Neat!
But not all unions of sets are disjoint. Sometimes the sets share elements, and the easiest way for this to happen is for them to both be subsets of some larger set. Then the union of the two subsets has to take this overlap into account. And since subspaces of a larger vector space may intersect nontrivially, their sum as subspaces has to take this into account.
First, here’s a definition in terms of the vectors themselves: given two subspaces $V$ and $W$ of a larger vector space $U$, the sum $V+W$ will be the subspace consisting of all vectors that can be written in the form $v+w$ for $v\in V$ and $w\in W$. Notice that there’s no uniqueness requirement here, and that’s because if $V$ and $W$ overlap in anything other than the trivial subspace $\left\{0\right\}$ we can add a vector in that overlap to $v$ and subtract it from $w$, getting a different decomposition. This is precisely the situation a direct sum avoids.
Alternatively, let’s consider the collection of all subspaces of $U$. This is a partially-ordered set, where the order is given by containment of the underlying sets. It’s sort of like the power set of a set, except that only those subsets of $U$ which are subspaces get included.
Now it turns out that, like the power set, this poset is actually a lattice. The meet is the intersection of subspaces, but the join isn’t their union. Indeed, the union of subspaces usually isn’t a subspace at all! What do we use instead? The sum, of course! It’s easiest to verify this with the algebraic definition of a lattice.
The lattice does have a top element (the whole space $U$) and a bottom element (the trivial subspace $\left\{0\right\}$). It’s even modular! Indeed, let $X$, $Y$, and $Z$ be subspaces with $X\subseteq Z$. Then on the one hand we consider $X+(Y\cap Z)$, which is the collection of all vectors $u=x+y$, where $x\in X$, $y\in Y$, and $y\in Z$. On the other hand we consider $(X+Y)\cap Z$, which is the collection of all vectors $u=x+y$, where $x\in X$, $y\in Y$, and $u\in Z$. I’ll leave it to you to show how these two conditions are equivalent.
Unfortunately, the lattice isn’t distributive. I could work this out directly, but it’s easier to just notice that complements aren’t unique. Just consider three subspaces of $\mathbb{F}^2$: $X$ has all vectors of the form $\begin{pmatrix}x\\{0}\end{pmatrix}$, $Y$ has all of the form $\begin{pmatrix}y\\y\end{pmatrix}$, and $Z$ has all of the form $\begin{pmatrix}0\\z\end{pmatrix}$. Then $X+Y=\mathbb{F}^2=X+Z$, and $X\cap Y=\left\{0\right\}=X\cap Z$, but $Y\neq Z$.
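Just to double-check this example concretely, here is a small verification sketch (assuming SymPy, and working over $\mathbb{Q}$ for definiteness):

```python
from sympy import Matrix

# Spanning vectors of the three subspaces in the example above.
x = Matrix([1, 0])   # X = {(x, 0)}
y = Matrix([1, 1])   # Y = {(y, y)}
z = Matrix([0, 1])   # Z = {(0, z)}

# X + Y and X + Z are all of F^2: each pair of spanning vectors has rank 2.
print(Matrix.hstack(x, y).rank(), Matrix.hstack(x, z).rank())   # 2 2

# X ∩ Y and X ∩ Z are trivial: for lines through the origin this is the same
# statement, namely that the two spanning vectors are linearly independent.
# Yet Y != Z, so X has two distinct complements and the lattice cannot be distributive.
```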
This is all well and good, but it’s starting to encroach on Todd’s turf, so I’ll back off a bit. The important bit here is that the sum behaves like a least-upper-bound.
In categorical terms, this means that it’s a coproduct in the lattice of subspaces (considered as a category). Don’t get confused here! Direct sums are coproducts in the category $\mathbf{Vec}(\mathbb{F})$, while sums are coproducts in the category (lattice) of subspaces of a given vector space. These are completely different categories, so don’t go confusing coproducts in one with those in the other.
In this case, all we mean by saying this is a categorical coproduct is that we have a description of the sum of two subspaces which doesn’t refer to the elements of the subspaces at all. The sum $V+W$ is the smallest subspace of $U$ which contains both $V$ and $W$. It is the “smallest” in the sense that any other subspace containing both $V$ and $W$ must contain $V+W$. This description from the outside of the subspaces will be useful when we don’t want to get our hands dirty with actual vectors.
## 6 Comments »
1. I hope I’m not confusing products in one category with coproducts in the other, but… aren’t joins in lattices coproducts, while meets are products?
Comment by Sridhar Ramesh | July 21, 2008 | Reply
2. Oops.. I’d turned that upside-down in my head. Fixed now.
Comment by | July 21, 2008 | Reply
3. (One tiny instance left to fix, at the beginning of the last paragraph.)
Comment by Sridhar Ramesh | July 21, 2008 | Reply
4. [...] since direct sums add dimensions this [...]
Pingback by | July 23, 2008 | Reply
5. [...] Let’s start with just two subspaces and of some larger vector space. We’ll never really need that space, so we don’t need to give it a name. The thing to remember is that and might have a nontrivial intersection — their sum may not be direct. [...]
Pingback by | July 24, 2008 | Reply
6. [...] Complements and the Lattice of Subspaces We know that the poset of subspaces of a vector space is a lattice. Now we can define complementary subspaces in a way that [...]
Pingback by | May 7, 2009 | Reply
http://mathoverflow.net/questions/1924/what-are-some-reasonable-sounding-statements-that-are-independent-of-zfc/6594 | What are some reasonable-sounding statements that are independent of ZFC?
Every now and then, somebody will tell me about a question. When I start thinking about it, they say, "actually, it's undecidable in ZFC."
For example, suppose A is an abelian group such that every short exact sequence of abelian groups 0→ℤ→B→A→0 splits. Does it follow that A is free? This is known as Whitehead's Problem, and it's undecidable in ZFC.
What are some other statements that aren't directly set-theoretic, and you'd think that playing with them for a week would produce a proof or counterexample, but they turn out to be undecidable? One answer per post, please, and include a reference if possible.
-
I think this is a very good question, but the example given doesn't seem particularly outside of normal logic intuition. When you say the group is "free" it means "it can be proven that there exists a basis B such that..." so surely you need to find that specific B --- that really can't follow from a description like you gave without some mechanism! – Ilya Nikokoshev Oct 22 2009 at 21:41
@ilya: I don't understand your objection to the example. The statement is that Whitehead's problem is independent of ZFC (which allows you to do things like choose elements of sets, and lots of other constructions). I agree it would be silly to say that it's independent of the empty list of axioms. – Anton Geraschenko♦ Oct 22 2009 at 21:49
Mm, I think I misunderstood what ZFC is. I wanted to point out that axiom of choice (or something) should be necessary which was obvious to the asker from the start. Apologies. – Ilya Nikokoshev Oct 22 2009 at 22:10
26 Answers
"If a set X is smaller in cardinality than another set Y, then X has fewer subsets than Y."
Although the statement sounds obvious, it is actually independent of ZFC. The statement follows from the Generalized Continuum Hypothesis, but there are models of ZFC having counterexamples, even in relatively concrete cases, where X is the natural numbers and Y is a certain uncountable set of real numbers (but nevertheless the powersets P(X) and P(Y) can be put in bijective correspondence). This situation occurs under Martin's Axiom, when CH fails.
-
35
This blows my mind. – Charles May 31 2010 at 17:29
2
Wow! Is this a good argument for believing in GCH? – Mozibur Ullah Dec 3 at 13:14
@Joel Can I get a reference for this? In particular what is the meaning of 'fewer' in the statement? Does this hold because of some 'weird' failures of separation, i.e. it is unable to 'create' subsets where we normally would expect it to? I agree with Charles, this is mind-bending to say the least, and makes me doubt the conceptual robustness of ZFC, especially since it is supposed to be so good at giving us a theory of 'size'... – Chuck Dec 31 at 17:30
1
I had meant "fewer" in the sense of cardinality. In other words, the question is whether $\kappa\lt\lambda$ implies $2^\kappa\lt 2^\lambda$. The answer is that this is independent of ZFC. It is consistent with ZFC, for example, that the power set $P(\mathbb{N})$ is bijective with $P(Y)$ for some uncountable set $Y$, such as $Y=\omega_1$. Indeed, the assertion that $2^\omega=2^{\omega_1}$ is known as Luzin's hypothesis. As for a reference, this is covered in any good graduate set theory text book, such as Jech's book Set Theory. – Joel David Hamkins Dec 31 at 18:05
UPDATE: I edited the answer by adding details and adding a reference. In particular, I specialized from an arbitrary field to the complex numbers in response to John's comment.
Here's an example from commutative algebra. The projective dimension of a module M is defined as the minimal length of a projective resolution of M. Let S be the ring ℂ[x,y,z] and M be the module ℂ(x,y,z). Then the projective dimension of M is undecidable in ZFC. More specifically, the projective dimension of M is 2 if the continuum hypothesis holds, and it is 3 if the continuum hypothesis fails.
This follows from Barbara Osofsky's work (MR0548131); see Theorem 2.51 of Homological Dimensions of Modules. She seems to have a huge number of results which would be relevant to this question.
-
That's bizarre. If somebody tracks down a reference, I'd be curious to know: is this true for any field k? What kind of bounds on the projective dimension can you prove in ZFC? – John Goodrick Oct 23 2009 at 13:11
10
It has to do with the cardinality of the field. In his book Basic Homological Algebra, Scott Osbourne proves that a module generated by \aleph_m elements has projective dimension at most m+n+1, where n is the maximum projective dimension of a finitely generated submodule. What's going on here is that these modules are disgustingly infinitely generated. Something generated by \aleph_m elements can be written as a direct limit of a chain of things generated by \aleph_{m-1} elements, and you use that and induction on m to prove it. – Eric Wofsey Oct 23 2009 at 21:02
1
Interesting, thanks for the reference. – John Goodrick Oct 24 2009 at 15:20
re:
Here's an example from commutative algebra. [C = complex numbers] let S be the ring C[x,y,z] and M be the module C(x,y,z). Then the projective dimension of M is 2 if the continuum hypothesis holds, and it is 3 if the continuum hypothesis fails.
Drinfeld has pointed out (see http://arxiv.org/abs/math/0309155v4 ) that the set-theoretic problems are coming from using the wrong definition of "projective". Raynaud and Gruson proved that projectivity of a module M is equivalent to the combination of three conditions:
(1) flatness
(2) decomposition as a direct sum of countably generated modules
(3) Mittag-Leffler condition
That (2) is possible for projective modules is a theorem of Kaplansky but the decomposition is non-canonical, and this is what introduces the axiom of choice into the proof of "free implies projective".
As I understood it from Drinfeld's lecture, only (1) and (3) are necessary for applications and for developing homological algebra, and (2) is undesirable because it is too strong a condition when working with infinite dimensional bundles. He proposed either "flat and Mittag-Leffler" directly, or a minor variant of that, as a definition of what he called "projectivity with a human face", that would work smoothly in the existing applications and allow a reasonable generalization to the infinite dimensional case.
Also, (1) and (3) are definable in first-order logic, so there is less chance of set theoretic problems from quantification over large or complicated structures.
-
My favourite is the statement that if X is a set of reals, and for every sequence (a_n) of positive reals you can find a sequence of intervals (I_n) that cover X such that I_n has length at most a_n, then X is countable. I think it's of a similar strength to Martin's axiom.
-
11
A set of reals with this property is usuall called strong measure zero. The question of whether such have to be countable is called the Borel Conjecture. It was first proven independent by Laver. – Richard Dore Oct 23 2009 at 14:08
6
@gowers : I am not sure what you mean by "of similar strength to Martin's axiom". Would you mind briefly commenting on this? – Andres Caicedo Dec 6 2010 at 17:30
If X is a compact Hausdorff space, and f is an algebra homomorphism from C(X) to some Banach Algebra, must f be continuous?
This question turns out to be independent. The affirmative answer is referred to as Kaplansky's Conjecture.
-
Wow! (Just unbelievable :-). – Wlodzimierz Holsztynski 2 days ago
"There is no definable well-ordering of the real numbers."
Although many mathematicians simply believe this statement to be true, actually, it is independent of ZFC. In Goedel's constructible universe L, for example, there is a definable well-ordering of the reals, having complexity Delta^1_2 in the descriptive set-theoretic hierarchy. That is, the well-ordering is a subset of the plane RxR, and it is the projection of the complement of the projection of a Borel set (and simultaneously, the complement of another such set).
The idea that well-orders of the reals cannot in principle be described or constructed is simply not correct.
-
5
I think this misconception comes down partly to a difference between formal and informal usage. To “construct a widget W with property P” often covers (informally) not just the construction of W, but also the proof of P for it. In this sense, it is indeed impossible to construct a w-o of the reals in ZFC, by the very independence result you give. – Peter LeFanu Lumsdaine Sep 22 2010 at 14:15
To Peter LeFanu Lumsdaine: can you at least construct a widget W that we can prove is a well-ordering of reals if there exists at least one well-ordering? That's the case we have with P!=NP: we can give a concrete algorithm that we can prove solves an NP-complete problem if P=NP. – Zsbán Ambrus Aug 16 2011 at 9:02
2
Zsbán, we can prove in ZFC that there is a well-ordering. So the issue isn't existence of the well-ordering, but rather the question of whether there is a definable such well-ordering. It is consistent with ZFC that there is a very low complexity definition, and it also follows from large cardinals thought to be consistent that there is no definable such ordering anywhere in the projective hierarchy. Meanwhile, there is a particular universal definition $\varphi$ in set theory, such that any universe of set theory can be extended to one where $\varphi$ defines a well-ordering of $\mathbb{R}$. – Joel David Hamkins Aug 16 2011 at 11:11
This isn't an answer but an argument that there isn't really a good answer. Having done a good amount of set theory and seen how you prove some of these statements to be independent, I tend to be rather skeptical about how reasonable these statements actually sound. Typically, while these statements sound like they're talking about some ordinary mathematical object, they aren't really, because their independence comes from very large and pathological objects that are far removed from your usual mathematical experience.
For example, in the Whitehead problem, you have to realize that while abelian groups sound very down-to-earth, uncountable abelian groups can have incredibly complicated structure. As a (fairly difficult!) exercise, you can prove that a countable product Z^N of copies of Z is not free, and in fact admits no homomorphism to Z besides the obvious finite sums of projections. On the other hand, the Whitehead problem has an affirmative answer if you restrict to countable groups, and this is something you could come up with if you thought about it for a week.
-
3
There's a big difference between "We don't know if there are pathological groups," and "We provably can't know if there are pathological groups." There are lots of places where things get complicated, and people either prove theorems or find counterexamples. And in most cases we just kind of assume one or the other of those is possible. It's substantially different to know that this is a hopeless process. – Richard Dore Oct 23 2009 at 0:16
1
I guess my point is that most of the time you can't prove things when you're dealing with things that are complicated because they are very infinite. That is, when I see this kind of problem, I don't assume you can solve it. This might just be because I've done too much set theory. – Eric Wofsey Oct 23 2009 at 0:53
This makes a lot of sense to me,say, in the context of Whitehead's problem. But I cannot, for instance, understand how Harvey Friedman's example could be a fact about large cardinals in disguise. It would be great if you could explain more. – Keivan Karai Jul 4 2010 at 11:02
Harvey Friedman has devoted a large portion of his career to finding "natural" statements that are unprovable in ZFC. One example is given at the end of Martin Davis's article "The incompleteness theorem," Notices AMS 53 (2006), 414-418:
http://www.ams.org/notices/200604/fea-davis.pdf
It takes a paragraph or so to state the definitions so I won't do so here, but the point is that, unlike many other examples, which clearly make reference to uncountable sets or are just Goedelian diagonalization statements in disguise, Friedman's proposition is a purely finitary statement in graph theory whose statement gives no hint of large cardinals. Indeed it is a small perturbation of a graph-theoretical theorem with an elementary proof.
For another example of Friedman's work, see his book Boolean Relation Theory and Incompleteness, a draft of which is downloadable from his website:
http://www.math.ohio-state.edu/~friedman/manuscripts.html
Here Friedman presents a family of innocuous-looking elementary statements about functions and sets and unions/intersections/complements. Almost all statements in the family have easy proofs in ZFC (or actually much weaker systems), but one of them requires a large cardinal axiom.
Friedman is aware that these examples don't quite reach the holy grail of a completely natural finitary mathematical statement that is independent of ZFC, but he continues to make progress in this direction. You can subscribe to the Foundations of Mathematics mailing list if you want to keep track of his latest results.
-
Following Eric Wofsey's point, you probably are interested in things that don't involve the pathologies of arbitrary uncountable objects. Some people have argued that the only things "ordinary mathematicians" care about are things that are representable in second-order arithmetic. (This gives you arbitrary real numbers, arbitrary complete separable metric spaces, Borel and analytic sets on all of those, and similarly interesting amounts of algebraic stuff.) The project of Steven Simpson's book Subsystems of Second-Order Arithmetic is to analyze just what axioms are needed to prove which results. All of the things considered there are far weaker than ZFC. But it's interesting to discover that beyond the axioms of Peano Arithmetic and the existence of arbitrary recursive sets of natural numbers, there are exactly five natural strengths. That is, there are five levels such that very large numbers of theorems from ordinary mathematics fall at exactly one of the levels, where any theorem at one level can be proved by assuming any theorem at that level or higher. Interestingly, things like the Heine-Borel theorem and the Bolzano-Weierstrass theorem, which are often thought of as equivalent, actually fall at different levels.
Not everything falls exactly at these levels though. Some things do still depend on a version of the axiom of choice, which is above any of these five levels, and there are other results like Goodstein's theorem and Borel determinacy that are higher still (I believe).
-
2
Borel Determinacy is interesting because it really uses Replacement. In fact, it was known that ZFC-Replacement was insufficient to prove Borel Determinacy before Borel Determinacy was proven. – Richard Dore Oct 23 2009 at 4:24
Thanks for explanation and the reference! – Jose Brox Nov 8 2009 at 2:38
One of my favourites is
"Three clouds cover the plane"
where a subset $A \subseteq \mathbb{R}^2$ is a cloud around $a$ if every line through $a$ has a finite intersection with $A$.
This is due to Péter Komjáth; see http://www.cs.elte.hu/~kope/p28.ps.
In fact, three clouds cover the plane if and only if CH is true.
If the continuum is at most $\aleph_n$, then you can cover the plane with $n+2$ clouds (whether the reverse holds is open) (see comments).
-
2
Actually it looks like the reverse is not open any longer: helios.impan.gov.pl/cgi-bin/fm/pdf?fm177-3-02 – Justin Palumbo Jul 5 2010 at 4:13
2
My friend Ramiro de la Vega recently obtained a result for a related concept. A spray centered at $c$ is a subset of the plane that meets every circle centered at $c$ in only finitely many points. Unlike the situation for clouds, one can prove that 3 sprays cover the plane, independently of the status of CH. See "Decompositions of the plane and the size of the continuum", Ramiro de la Vega, Fund. Math. 203 (2009), 65-74, and "Covering the plane with sprays", James H. Schmerl, Fund. Math. 208 (2010), 263-272. – Andres Caicedo Jul 5 2010 at 5:10
1
Speaking of stuff on the plane, a certain problem of coloring the plane with no corners is equivalent to CH too - artofproblemsolving.com/Forum/… – Yoo Jul 5 2010 at 9:35
Thanks for the comments and references! – Peter Krautzberger Jul 5 2010 at 15:29
Since Justin's link leads to a paywall here's a survey by Arnold Miller math.wisc.edu/~miller/old/m873-05/setplane.ps – Peter Krautzberger Jul 5 2010 at 15:42
A recent example is the question of the existence of outer automorphisms of the Calkin algebra of a separable infinite-dimensional Hilbert space (this is the algebra of all bounded operators divided by the compacts). See this paper of Farah for details.
A feature this example shares with Kaplansky's conjecture mentioned above is the use of CH for the construction of the desired object. For the other direction (proving the non-existence of an outer automorphism), which is more interesting, Farah uses a natural combinatorial axiom (OCA) which can be forced in any model of ZFC.
Wikipedia also has a list of independence results.
-
You can leave multiple links once you have 15 reputation. I added a link; I hope it's to the right place. – Anton Geraschenko♦ Oct 22 2009 at 21:45
Another example is certain strong forms of Fubini's Theorem.
If you have a real-valued function on the product of two closed intervals which is bounded, and which is measurable in either coordinate when you fix the other, are the two iterated integrals equal?
(In the actual Fubini's theorem you care about joint measurability, not just measurability on either coordinate.)
If you assume CH, it is easy to construct counterexamples. It turns out that it is also consistent to have models where it is true.
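(Sketch of the classical CH counterexample, which goes back to Sierpiński: use CH to well-order $[0,1]$ in order type $\omega_1$, and let $f(x,y)=1$ exactly when $y \preceq x$ in that well-order, and $0$ otherwise. For each fixed $x$ the set $\{y : y \preceq x\}$ is countable, so $\int_0^1 f(x,y)\,dy = 0$; for each fixed $y$ the set $\{x : y \preceq x\}$ is co-countable, so $\int_0^1 f(x,y)\,dx = 1$. Hence one iterated integral is $0$ and the other is $1$.)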
I don't have a good reference on this, if you know of one please add it in to my answer.
-
1
The CH result is due to Sierpinski; the independence result was shown almost simultaneously by Chris Freiling, Harvey Friedman, and Miklos Laczkovich. See Ciesielski's survey math.wvu.edu/~kcies/prepF/56STA/STAsurvey.html – François G. Dorais♦ Feb 5 2010 at 22:50
1
One nice way to get the statement to be true is using Freiling's axioms of symmetry. There's a proof of it in Chris Freiling. Axioms of symmetry: Throwing darts at the real line. The Journal of Symbolic Logic, 51, 1986. – David R. MacIver Apr 11 2010 at 20:45
1
One reference for strong Fubini theorems is Joe Shipman's paper, "Cardinal conditions for strong Fubini theorems," Trans. Amer. Math. Soc. 321 (1990), 465-481. – Timothy Chow May 31 2010 at 16:57
"The real line is the only endless dense complete linear order in which every family of disjoint intervals is countable."
This statement generalizes the familiar characterization of the real line (due to Cantor) as the unique endless dense complete linear order having a countable dense set. Souslin inquired whether this separability condition can be weakened to the condition on families of disjoint intervals. (Here, complete means that the order has the LUB property.)
This statement is known as Souslin's Hypothesis, and it is independent of ZFC. It is false under the combinatorial assertion known as Diamond, but follows from Martin's Axiom at Aleph_1. The proof that the statement is consistent, due to Solovay and Tennenbaum, is highly important in the history of set theory, since it required the development of iterated forcing, now a fundamental tool.
-
The assertion that the union of any Aleph_1 many measure zero sets is still measure zero. This is independent of ZFC. Of course, it implies the failure of the Continuum Hypothesis, but is not equivalent to this.
There are a huge variety of such statements in the field known as Cardinal Characteristics of the Continuum. For example, what is the additivity of the meager ideal (the ideal of all meager sets)? It is at least Aleph_1, but can be larger. What is the smallest size of a family of functions f:omega to omega such that every function is bounded by an element of the family? It can be Aleph_1, or larger independently. There are dozens of such examples.
-
Paul Erdős proved a funny statement about analytic functions to be equivalent to the continuum hypothesis. The same proof can also be found in Proofs from THE BOOK.
-
Sometimes ZFC is just not sufficient to prove statements which morally should be true. The axiom of projective determinacy is perhaps the best known instance of this. If you don't know what this is, consider the following example: take a closed set in R^3, project it in R^2, take the complement and project it into R. One cannot prove in ZFC that the resulting set is Lebesgue measurable (or has the property of Baire, or the perfect set property). However, this is an easy consequence of PD which in turn can be proved using large cardinals (Martin and Steel, "A Proof of Projective Determinacy", JAMS).
-
2
Not sure if this is the right forum for this, but: do you think PD is "morally true" because it follows from large cardinal axioms, which themselves are "morally true"? Or does PD follow from some other set of ethical principles? Guess I'm somewhat of a libertine, myself... – John Goodrick Oct 23 2009 at 17:59
1
Maybe I didn't choose the right word. Anyway, for me, PD is a strong justification for large cardinals, not the other way round. But this is, of course, a philosophical rather than mathematical discussion. – Todor Tsankov Oct 24 2009 at 8:12
My favorite is the first problem I worked on, back in 1966 when I was an undergraduate. The question is: does every non separable Banach space have an uncountable biorthogonal system? Shelah constructed a counterexample under diamond; Kunen later gave a C(K) counterexample using CH. In 2005 Stevo Todorcevic gave a positive answer when mm > aleph_1. His paper "Biorthogonal Systems and Quotient spaces via Baire Category" appeared in Math. Annalen in the last couple of years.
-
In the ring of bounded operators on (complex, separable) Hilbert space, the ideal of compact operators is the sum of two properly smaller ideals. (I mean 2-sided ideals, in the algebraic sense, not topologically closed ideals.)
In the Stone-Cech compactification of a half-open interval minus the interval itself (call this space X), there exist two points such that the only compact, connected subset of X containing both points is X itself.
Both of these statements are true under CH (or various weaker assumptions) but not provable in ZFC. Lest anyone remind me that there should be only one statement per answer, I point out that, despite appearances, these two statements are provably equivalent. See an old paper of mine, "Near coherence of filters II," Trans. A.M.S. 300 (1987) 557-581, for the equivalence, and references there to older papers, including one with Gary Weiss and one with Saharon Shelah, for the CH result and the independence from ZFC.
-
Of course, it follows from the negative solution to Hilbert's 10th problem (Putnam-Davis-Robinson-Matijasevic) that one can construct a specific diophantine equation P(x1,x2,...,xm)=0 for some m such that the solvability of this equation (over Z) is undecidable in ZFC. I actually think m, and the degree of P, can be made to be quite smallish.
It follows, of course, that the equation has no integer solutions (for if it had, this would have been easily demonstrable in ZFC). But ZFC is not capable of providing a proof of this fact (assuming that ZFC is consistent. If it's not then it can provide a proof of anything...)
-
This is not right. The theorem of Matijasevic et al. states that there is a specific polynomial equation p(y, x_1, ..., x_n) such that the set of all integers a for which p(a, x_1, ..., x_n) has a solution where all the x_i's are integers is "undecidable," but here "undecidable" means uncomputable, i.e. you could never write a Java script to determine membership in this set. "Undecidable from ZFC" means that both the statement and its negation are logically consistent with ZFC. – John Goodrick Oct 22 2009 at 23:39
1
Greg Igusa says: That doesn't sound right to me. If the set A of a for which p(a, x_i) has a solution is undecidable, then there would have to be a specific integer a_0 for which the question "does p(a_0,x_i) have a solution" is independent of ZFC, else we could compute the set A by searching for proofs as to whether or not the polynomial has a solution. But this gives an example of a polynomial as in the original answer. – Anton Geraschenko♦ Oct 23 2009 at 0:30
2
"Given a number a, search for some number n and some proof (from ZFC) that p(a, n) holds" is not a good algorithm for deciding whether or not (exists x) p(a, x) is true. If you do find such n and such a proof, then OK, you're done; but otherwise, you'll get stuck in an infinite loop. The set of a for which p(a,x) has a solution is always r.e., but not always recursive: en.wikipedia.org/wiki/Diophantine_set – John Goodrick Oct 23 2009 at 1:16
2
Since some of the references here are long, let me give a comment-sized statement: The solution of Hilbert's 10th problem also shows that there is a Diophantine equation E such that E has a solution if and only if ZFC is inconsistent (and this equivalence is provable in Peano Arithmetic). So those of us who think ZFC is consistent think E has no solutions but ZFC can't prove that. Those who think ZFC is inconsistent could make their case by finding a solution of E (but it would probably be easier just to exhibit a proof of a contradiction in ZFC). – Andreas Blass Jul 4 2010 at 4:27
1
Greg Igusa's argument above shows by contradiction the existence of a polynomial $p$ whose solvability is independent of ZFC, but does not by itself tell us how to get our hands on such a $p$. The missing ingredient is the result that one can computably translate arbitrary existential ($\Sigma^0_1$) statements into statements about solvability of Diophantine equations. This yields both the uncomputability (Matiyasevich's theorem), and the fact that a specific $p$ can be found (as in Andreas Blass' comment and Alon Amit's answer) since $\neg$Con(ZFC) is a $\Sigma^0_1$ statement. – Bjørn Kjos-Hanssen Oct 19 2010 at 10:36
The statement, "Any two aleph-1-dense subsets of the reals are order isomorphic."
A subset X of R is called aleph-1-dense if between any two real numbers, there are exactly aleph-1 elements of X. On the one hand, Sierpinski used a diagonalization argument (working within ZFC) to construct kappa pairwise non-isomorphic suborderings of R each of density kappa, where kappa is the cardinality of R, so the Continuum Hypothesis implies that this statement fails. On the other hand, James Baumgartner used a clever forcing argument to build models of ZFC where the size of R is aleph-2 and any two aleph-1-dense suborderings of R are isomorphic.
See "All aleph_1 dense sets of reals can be isomorphic," James E. Baumgartner, Fundamenta Mathematicae v. LXXIX (1973), pp. 101-106.
-
Albin Jones has a draft paper on his web page, "Even more partitioning triples of countable ordinals", which has a survey of infinite Ramsey theory results stated in terms of ordinals.
Let $\omega$ be the first infinite ordinal and let $\omega_1$ be the first uncountable ordinal. Citing results of Todorcevic and Hajnal, Jones says that if you color pairs of elements of $\omega_1$ in blue and red, then it is independent of ZFC to decide whether there must be either a blue subset of type $\omega_1$ or a red subset of type $\omega+2$.
-
Here's one (a corollary of some work I did with Keith Kearnes):
It is undecidable in ZFC whether there exists a commutative Noetherian domain of size aleph_{2} with a finite residue field.
-
One of my favorites has to do with products of spaces of countable cellularity:
"If X and Y are topological spaces with countable cellularity then their product X x Y has countable cellularity"
Is independent of ZFC. (The failing example being a Souslin line)
-
What does "countable cellularity" mean? – Qiaochu Yuan Mar 25 2011 at 20:44
That any collection of disjoint open subsets of a space is at most countable. – Michael Blackmon Mar 29 2011 at 19:33
My favourite one (in fact it is equivalent to the continuum hypothesis; proving the equivalence is a very nice exercise, btw):
The real line can be represented as a countable union of subsets, each of which is linearly independent over $\mathbb{Q}$.
-
Is every regular ($T_3$) topological space $X$ that is hereditarily separable (all subspaces are separable) Lindelöf (every open cover of $X$ has a countable subcover) ?
A counterexample is known as an S-space and Baumgartner showed there are models of ZFC without them. But under CH (and many other axioms) they do exist. Interestingly, in ZFC there does exist an L-space (a hereditarily Lindelöf space that is not separable), which was surprising to many topologists, who expected a certain duality to hold between S- and L-spaces. For a short introduction see these slides.
-
For every function $f$ mapping the reals into the set of all countable subsets of the reals, there are real numbers $x$ and $y$ such that $x \notin f(y)$ and $y \notin f(x)$.
This innocent and reasonable statement is actually equivalent to the negation of the Continuum Hypothesis. The equivalence was first proved by Sierpiński, and is actually very easy to see.
If CH holds, let `$\{x_\alpha: \alpha < \omega_1\}$` be an enumeration of the reals in type $\omega_1$. The function defined by `$f(x_\alpha)=\{x_\beta: \beta \leq \alpha \}$` is a counterexample to the statement above: otherwise we'd have a pair of ordinals $\alpha, \beta < \omega_1$ such that both $\alpha < \beta$ and $\beta < \alpha$.
Suppose now that CH fails and let $f: \mathbb{R} \to [\mathbb{R}]^{\leq \aleph_0}$. Let $S$ be a set of reals of cardinality $\aleph_1$. Let `$T=\bigcup \{f(x): x \in S \}$`. The set $T$ has cardinality at most $\aleph_1$, which is less than the continuum since CH fails, and hence we can pick a real number $y \notin T$. Since $f(y)$ is countable we can pick a real number $z \in S \setminus f(y)$. We have both $z \notin f(y)$ and $y \notin f(z)$, the latter because $f(z) \subseteq T$.
-
1
Might as well call this statement by its name, Freiling's axiom of symmetry. – Asaf Karagila May 9 at 23:33
http://unapologetic.wordpress.com/2009/12/16/iterated-integrals-i/?like=1&source=post_flair&_wpnonce=18b5244a97 | # The Unapologetic Mathematician
## Iterated Integrals I
We may remember from a multivariable calculus class that we can evaluate multiple integrals by using iterated integrals. For example, if $f:R\rightarrow\mathbb{R}^{\geq0}$ is a continuous, nonnegative function on a two-dimensional rectangle $R=[a,b]\times[c,d]$ then the integral
$\displaystyle\iint\limits_Rf(x,y)\,d(x,y)$
measures the volume contained between the graph $z=f(x,y)$ of the function and the $x$-$y$ plane within the rectangle. If we fix some constant $\hat{y}$ between $c$ and $d$ we can calculate the single integral
$\displaystyle\int\limits_a^bf(x,\hat{y})\,dx$
which describes the area that the plane $y=\hat{y}$ cuts out of this volume. It exists because the integrand is continuous as a function of $x$. In such classes, we make the reasonable assumption that as we vary $\hat{y}$ this area varies continuously. This gives us a continuous function on $[c,d]$, which will then be integrable:
$\displaystyle\int\limits_c^d\left(\int\limits_a^bf(x,y)\,dx\right)\,dy$
This is an “iterated integral”, since we perform more than one integral in sequence. We usually leave out the big parens and trust in the notation to tell us when the inner integral is closed. Our handwaving argument then justifies the belief that this iterated integral is the same as the double integral above. And this is true:
$\displaystyle\iint\limits_Rf(x,y)\,d(x,y)=\int\limits_c^d\int\limits_a^bf(x,y)\,dx\,dy$
but we haven’t really proven it.
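As a quick numerical sanity check of this equality on a concrete integrand (just a sketch, assuming SciPy is available; the particular function and rectangle are my own choices), take $f(x,y)=xy^2$ on $[0,1]\times[0,2]$, where both sides should come out to $\frac{4}{3}$:

```python
from scipy import integrate

# f(x, y) = x * y**2 on the rectangle [a, b] x [c, d] = [0, 1] x [0, 2]
def f(x, y):
    return x * y**2

a, b, c, d = 0.0, 1.0, 0.0, 2.0

# Double integral: scipy's dblquad takes the integrand as a function of (y, x)
# and integrates y over [c, d] for each x in [a, b].
double, _ = integrate.dblquad(lambda y, x: f(x, y), a, b, c, d)

# Iterated integral: integrate over x for each fixed y, then over y.
def inner(y):
    return integrate.quad(lambda x: f(x, y), a, b)[0]

iterated, _ = integrate.quad(inner, c, d)
print(double, iterated)   # both close to 4/3
```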
Besides, we’re interested in more general situations. What if, say, $f$ is discontinuous along the whole line $(x,\hat{y})$ for some fixed $c\leq\hat{y}\leq d$? This line can be contained in an arbitrarily thin rectangle, so it has outer Lebesgue measure zero in the rectangle $R$. If these are the only discontinuities, then $f$ is integrable on $R$, but we can’t follow the above prescription anymore, even if it were actually rigorous. We need some method of handling this sort of thing.
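For instance, take $R=[0,1]\times[0,1]$ and let $f$ be zero everywhere except on the line $y=\frac{1}{2}$, where we set $f\left(x,\frac{1}{2}\right)=1$ for rational $x$ and $0$ for irrational $x$. The discontinuities of $f$ are exactly the points of that line, which has measure zero, so $f$ is integrable on $R$ (with integral zero); but the inner integral $\int_0^1f\left(x,\frac{1}{2}\right)\,dx$ does not exist, since as a function of $x$ alone this is the Dirichlet function. So the naive prescription breaks down at that one value of $\hat{y}$.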
To this end, we have five assertions relating the upper and lower single and double integrals involving a function $f$ which is defined and bounded on the rectangle $R$ above. Unfortunately, our notation for upper and lower integrals gets a little cumbersome here, and the $\LaTeX$ support on WordPress isn’t the most elegant. Still, we soldier on and write
$\displaystyle{\int\limits_-}_a^bf(x)\,dx=\underline{I}_{[a,b]}(f)$
and similarly for upper integrals, and for lower and upper double integrals. Now, our assertions:
• $\displaystyle{\int\limits_-}Rf(x,y)\,d(x,y)\leq{\int\limits_-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}Rf(x,y)\,d(x,y)$
• $\displaystyle{\int\limits_-}Rf(x,y)\,d(x,y)\leq{\int\limits_-}_a^b{\int\limits_-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}_a^b{\int\limits_-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}Rf(x,y)\,d(x,y)$
• $\displaystyle{\int\limits_-}Rf(x,y)\,d(x,y)\leq{\int\limits_-}_c^d{\int\limits^-}_a^bf(x,y)\,dx\,dy\leq{\int\limits^-}_c^d{\int\limits^-}_a^bf(x,y)\,dx\,dy\leq{\int\limits^-}Rf(x,y)\,d(x,y)$
• $\displaystyle{\int\limits_-}Rf(x,y)\,d(x,y)\leq{\int\limits_-}_c^d{\int\limits_-}_a^bf(x,y)\,dx\,dy\leq{\int\limits^-}_c^d{\int\limits_-}_a^bf(x,y)\,dx\,dy\leq{\int\limits^-}Rf(x,y)\,d(x,y)$
• If $\int_Rf(x,y)\,d(x,y)$ exists, then we have
$\displaystyle\begin{aligned}\int\limits_Rf(x,y)\,d(x,y)&=\int\limits_a^b{\int\limits_-}_c^df(x,y)\,dy\,dx=\int\limits_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx\\&=\int\limits_c^d{\int\limits_-}_a^bf(x,y)\,dx\,dy=\int\limits_c^d{\int\limits^-}_a^bf(x,y)\,dx\,dy\end{aligned}$
Okay, as ugly as all those are, they’re what we’ll prove next time.
Posted by John Armstrong | Analysis, Calculus
## 10 Comments »
1. Not ugly to me. In fact, more beautiful following your careful platform for understanding Calculus. I like your rigor, combined with your clarity of exposition. This reminds me of my joy in taking calculus at Caltech from Tom Apostol. And keeping in touch with him since then (i.e. since 1968). His textbooks sell well, so yours should also, once you get co-authors to write them.
Comment by | December 16, 2009 | Reply
2. Jonathan, I mean the typesetting, not the content. The typesetting is terrible, which is why I tried using the $\underline{I}$ notation instead of underlining the integral sign like Apostol does.
Speaking of whom, if you’re still in contact with him could you put me in contact? I have a huge question about the exercise that inspired this coming Friday’s post.
Comment by | December 16, 2009 | Reply
3. [...] Integrals II Let’s get to proving the assertions we made last time, starting [...]
Pingback by | December 17, 2009 | Reply
4. This can be found online, so I’m not betraying privacy. Just tell him that I sent you when you email him, on the chance that it might help slightly. I also attended a book-signing by his wife within the past year.
# Tom Apostol (apostol at caltech.edu)
376 Sloan
Caltech Undergraduate
1200 E. California Blvd
Pasadena CA 91125
(626) 395-4363
Project MATHEMATICS!
(626) 395-3759
Comment by | December 18, 2009 | Reply
5. [...] might guess that we can always evaluate double integrals by iterated integrals as we’ve been discussing. After all, that’s exactly what we do in multivariable calculus courses as soon as we [...]
Pingback by | December 18, 2009 | Reply
6. [...] So we’ve established that as long as a double integral exists, we can use an iterated integral to evaluate it. What happens when the dimension of our space is even [...]
Pingback by | December 21, 2009 | Reply
7. I ran into Tom Apostol at Caltech this afternoon, and mentioned to him that I’d given him your email, and why. He said that he’d answer an email, so long as it wasn’t crackpot. I reassured him about you, and wished him and his wife happy holidays. So — have you emailed him yet about this rigorous instruction on Iterated Integrals?
Comment by | December 23, 2009 | Reply
8. Just doing it now, JVP.
Comment by | December 23, 2009 | Reply
9. [...] Integrals V Our iterated integrals worked out nicely over -dimensional intervals because these regions are simple products of [...]
Pingback by | December 31, 2009 | Reply
10. [...] the Riemann-Stieltjes integral. First up is a question that seems natural from the perspective of iterated integrals: what can be said about the continuity of the inner [...]
Pingback by | January 12, 2010 | Reply
http://mathoverflow.net/revisions/88090/list | ## Return to Question
# The connected components of the free loop space
I am trying to understand the topology (in terms of homology groups) of the free loop space $\Lambda M$ of nice spaces (complete, connected, finite-dimensional Riemannian manifolds $M$). I see the free loop space (of $H^1$ loops) as a Hilbert manifold, cf. Klingenberg's book. If the manifold $M$ has a non-trivial fundamental group, the free loop space has as many connected components as there are conjugacy classes in $\pi_1(M)$. How much do these components of $\Lambda M$ differ? Are these components all homotopy equivalent? For the circle the answer is yes, because all components of the free loop space are homotopy equivalent to the circle itself.
The following question is related to my question
http://mathoverflow.net/questions/34927/are-the-path-components-of-a-loop-space-homotopy-equivalent
However, I cannot seem to use the answer to this question directly, because I cannot concatenate two free loops, but maybe I am missing something obvious.
http://mathhelpforum.com/advanced-math-topics/123008-infinity.html | # Thread:
1. ## Infinity
I understand how the set of transcendental numbers is greater than the set of algebraic numbers (Aleph II vs Aleph I). Can someone explain why the set of functions (Aleph III) is greater than the set of transcendental numbers?
2. Originally Posted by wonderboy1953
I understand how the set of transcendental numbers is greater than the set of algebraic numbers (Aleph II vs Aleph I). Can someone explain why the set of functions (Aleph III) is greater than the set of transcendental numbers?
You are using $\aleph$ in an unconventional manner: $\aleph_0$ usually denotes the cardinality of the naturals and $\mathfrak{c}$ that of the continuum (the reals). The continuum hypothesis is that $\mathfrak{c}=\aleph_1$ (that is, that there is no set with cardinality between that of the naturals and that of the reals).
The algebraic numbers are in fact countable and so have cardinality $\aleph_0$.
The trancendentals have cardinality $\mathfrak{c}$ as they are those reals which are not algebraic.
The set of all functions on the reals is equivalent to the power set of the reals and we know that the cardinality of the power set of a set is strictly larger than that of the set itself.
CB
3. Originally Posted by CaptainBlack
You are using $\aleph$ in an unconventional manner: $\aleph_0$ usually denotes the cardinality of the naturals and $\mathfrak{c}$ that of the continuum (the reals). The continuum hypothesis is that $\mathfrak{c}=\aleph_1$ (that is, that there is no set with cardinality between that of the naturals and that of the reals).
The algebraic numbers are in fact countable and so have cardinality $\aleph_0$.
The trancendentals have cardinality $\mathfrak{c}$ as they are those reals which are not algebraic.
The set of all functions on the reals is equivalent to the power set of the reals and we know that the cardinality of the power set of a set is strictly larger than that of the set itself.
CB
Agreed. http://www.mathhelpforum.com/math-he...rdinality.html
4. Originally Posted by wonderboy1953
I understand how the set of transcendental numbers is greater than the set of algebraic numbers (Aleph II vs Aleph I). Can someone explain why the set of functions (Aleph III) is greater than the set of transcendental numbers?
Also, how do you know that the cardinality of the algebraic numbers is greater than that of the transcendental? I mean, it's definitely true. But, I have never seen you post or say anything about cardinal numbers. Do you have proof?
5. Originally Posted by Drexel28
Also, how do you know that the cardinality of the algebraic numbers is greater than that of the transcendental? I mean, it's definitely true. But, I have never seen you post or say anything about cardinal numbers. Do you have proof?
err.. em transcendentals > algebraic ?
6. Originally Posted by CaptainBlack
err.. em transcendentals > algebraic ?
Oops, typo.
I meant to say that the cardinality of the transcendentals is greater than that of the algebraic! And that the proof of this isn't all together trivial!
7. ## Memory
I now recall that G. Gamow wrote a book titled "One, Two, Three...Infinity" where he stated that the set of functions is greater than the set of transcendentals.
Does anyone have that book and know what he actually stated?
8. Originally Posted by wonderboy1953
I now recall that G. Gamow wrote a book titled "One, Two, Three...Infinity" where he stated that the set of functions is greater than the set of transcendentals.
Does anyone have that book and know what he actually stated?
No, but as we stated, that is correct. We know that the transcendentals have a cardinal number greater than $\aleph_0$ and less than or equal to $\mathfrak{c}$. But, the set of all functions defined on the reals has cardinal number $2^{\mathfrak{c}}$ so that if $\text{card }\mathbb{T}=\mathfrak{m}$ then $\aleph_0<\mathfrak{m}\leqslant\mathfrak{c}<2^{\mathfrak{c}}$ and what you said follows.
9. ## Responding to Drexel28
"Also, how do you know that the cardinality of the algebraic numbers is greater than that of the transcendental? I mean, it's definitely true. But, I have never seen you post or say anything about cardinal numbers. Do you have proof?"
The proof that the cardinality of the reals is greater than that of the algebraic numbers is a well-known proof given by Cantor. The proof itself is referred to as the diagonalization proof whereby (expressed in decimal form), you go down diagonally through the list of algebraic numbers to come up with a new number which isn't part of the list (which you already know is referred to as C for the reals).
I'm sure that someone proved that the cardinality of C and the reals match up (maybe Cantor). What I had originally wanted to know is how someone knew that the cardinality of the functions is greater than the cardinality of the real numbers. My answer comes from a book written by Tobias Dantzig titled "number: the language of science", copyrighted 2/2007. On Page 233 it lists as an example of an aggregate with a power greater than C the functional manifold,"i.e. the totality of all correspondences which can be established between two continua....The corresponding cardinal number is denoted by f."
It appears that all of the functions themselves are part of the foregoing definition. The book goes on to say that the functional "space" is derived from the continuum (real numbers) through the diagonalization process that led from the algebraic numbers to the real numbers. And this diagonalizational process can indefinitely produce greater aggregates.
10. Originally Posted by wonderboy1953
"Also, how do you know that the cardinality of the algebraic numbers is greater than that of the transcendental? I mean, it's definitely true. But, I have never seen you post or say anything about cardinal numbers. Do you have proof?"
The proof that the cardinality of the reals is greater than that of the algebraic numbers is a well-known proof given by Cantor. The proof itself is referred to as the diagonalization proof whereby (expressed in decimal form), you go down diagonally through the list of algebraic numbers to come up with a new number which isn't part of the list (which you already know is referred to as C for the reals).
I'm sure that someone proved that the cardinality of C and the reals match up (maybe Cantor). What I had originally wanted to know is how someone knew that the cardinality of the functions is greater than the cardinality of the real numbers. My answer comes from a book written by Tobias Dantzig titled "number: the language of science", copyrighted 2/2007. On Page 233 it lists as an example of an aggregate with a power greater than C the functional manifold,"i.e. the totality of all correspondences which can be established between two continua....The corresponding cardinal number is denoted by f."
It appears that all of the functions themselves are part of the foregoing definition. The book goes on to say that the functional "space" is derived from the continuum (real numbers) through the diagonalization process that led from the algebraic numbers to the real numbers. And this diagonalizational process can indefinitely produce greater aggregates.
I gave you a link to where I proved that the cardinality of $\mathbb{R}^{\mathbb{R}}$ is $2^{\mathfrak{c}}$. The uncountability of the transcendentals is more instructively proven, in my humble opinion, by showing that the algebraics are countable (many easy ways to do this) and showing by contradiction that $\mathbb{R}-\mathbb{A}$ is uncountable.
$\mathbb{R}^n$ can also be shown to have cardinality $\mathfrak{c}$. This is shown by first noting that $[0,1]^{n}\simeq [0,1]$ and then that $\mathbb{R}^n\simeq[0,1]^n$.
Also, it can be shown that the set of all continuous mappings from the reals into the reals has cardinality $\mathfrak{c}$.
Also,...I digress.
If you would like proofs or just justifications for any of these claims. Let me know.
11. Originally Posted by wonderboy1953
"Also, how do you know that the cardinality of the algebraic numbers is greater than that of the transcendental? I mean, it's definitely true. But, I have never seen you post or say anything about cardinal numbers. Do you have proof?"
The proof that the cardinality of the reals is greater than that of the algebraic numbers is a well-known proof given by Cantor. The proof itself is referred to as the diagonalization proof whereby (expressed in decimal form), you go down diagonally through the list of algebraic numbers to come up with a new number which isn't part of the list (which you already know is referred to as C for the reals).
That is not what Cantor's diagonal slash proves. What it does prove is that the reals are not countable. You still need to show that the algebraic numbers are countable.
CB
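To see concretely why the algebraic numbers are countable, one can actually list them: order the integer polynomials by a "height" (here taken to be the degree plus the sum of the absolute values of the coefficients), so that each height is attained by only finitely many polynomials and hence contributes only finitely many roots. A rough sketch of such an enumeration (Python with sympy; the particular notion of height and the cutoff are arbitrary choices):

```python
# List real algebraic numbers by the "height" of a defining integer polynomial:
# height = degree + sum of |coefficients|.  Each height admits only finitely many
# polynomials, each with finitely many roots, so the union over all heights is a
# countable list that eventually contains every real algebraic number.
from itertools import product

from sympy import Poly, Symbol, real_roots

x = Symbol("x")

def real_algebraics_up_to_height(h_max):
    found = set()
    for h in range(2, h_max + 1):
        for deg in range(1, h):
            budget = h - deg                     # allowed sum of |coefficients|
            for coeffs in product(range(-budget, budget + 1), repeat=deg + 1):
                if coeffs[0] == 0 or sum(abs(c) for c in coeffs) != budget:
                    continue
                found.update(real_roots(Poly(list(coeffs), x)))
    return sorted(found, key=float)

# heights up to 5 already give 0, ±1, ±2, ±3, ±1/2, ±1/3, ±sqrt(2), the golden ratio, ...
print(real_algebraics_up_to_height(5))
```

Every algebraic number shows up at some finite height, so letting the cutoff grow produces a single list exhausting all of them, which is exactly what countability requires; the transcendentals are then uncountable because removing a countable set from the uncountable reals leaves an uncountable set.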
http://www.haskell.org/haskellwiki/index.php?title=Power_function&oldid=19570 | Power function
From HaskellWiki
1 Question
Why are there several notions of power in Haskell, namely `(^)`, `(^^)`, and `(**)`?
2 Answer
The reason is that there is no definition of the power function which covers all exotic choices of base and exponent. It is even sensible to refine the set of power functions, as is done in the NumericPrelude project. In mathematical notation we don't respect types and we do not distinguish between powers of different types. However, if we assume the most general types for both base and exponent, the result of the power is no longer unique. Actually, the set of all possible values of, say, $1^x$, where $x$ is irrational, is dense in the complex unit circle. In the past I needed the power of two complex numbers only once, namely for the Cauchy wavelet (see also: [1]):
$f(t) = (1- i\cdot k\cdot t) ^ {-\frac{1}{2} + \frac{\mu_2}{k} + i\cdot \mu_1 }$
However, I could not use the built-in complex power function because the resulting function became discontinuous. Of course, powers of complex numbers have the problem of branch cuts and the choice of the branch built into the implementation of the complex power is quite arbitrary and might be inappropriate.
But also for real numbers there are problems:
For computing `(-1)**(1/3::Double)` the power implementation has to decide whether `(1/3::Double)` is close enough to $\frac{1}{3}$. If it does so, it returns `(-1)`; otherwise it fails. However, why should `0.333333333333333` represent $\frac{1}{3}$? It may really be meant as $333333333333333/10^{15}$, and a real $10^{15}$-th root of $-1$ does not exist.
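The ambiguity is not hypothetical: the floating-point value written as 1/3 is not the rational number $\frac{1}{3}$ at all, but a nearby dyadic rational whose reduced denominator is even. A quick way to see exactly which number it denotes (sketched in Python, standard library only):

```python
# The double closest to 1/3 is an odd integer over a power of two, not 1/3,
# so "raise -1 to this exponent" has no real answer: the reduced denominator is even.
from fractions import Fraction

x = 1 / 3                              # what a literal like (1/3 :: Double) denotes
print(Fraction(x))                     # an odd numerator over a power of two
print(Fraction(x) == Fraction(1, 3))   # False
```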
So I propose some balancing: the more general the base, the less general the exponent, and vice versa. I also think the following symbols are more systematic and intuitive. They are used in NumericPrelude.

| base type | provides | symbol | exponent type | definition |
|---|---|---|---|---|
| any ring | `*` | `^` | cardinal | repeated multiplication: $a^b = \prod_{i=1}^b a$ |
| any field | `/` | `^-` | integer | multiplication and division: $a^b = \begin{cases} a^b & b\ge 0 \\ \frac{1}{a^{-b}} & b<0 \end{cases}$ |
| an algebraic field | root | `^/` | rational | list of polynomial zeros (length = denominator of the exponent): $a^{\frac{p}{q}} = \{ x : a^p = x^q \}$ |
| positive real | log | `^?` | any ring with a notion of limit | exponential series: $a^b = \sum_{i=0}^{\infty} \frac{(b\cdot\ln a)^i}{i!}$ |
• examples for rings are: Polynomials, Matrices, Residue classes
• examples for fields: Fractions of polynomials (rational functions), Residue classes with respect to irreducible divisors, in fact we do not need fields, we only need the division and associativity, thus invertible Matrices are fine
That is, `(^-)` replaces `(^^)`, `(^?)` replaces `(**)`, `(^)` remains, and `(^/)` is new.
3 See also
• Haskell-Cafe: Proposal for restructuring Number classes
http://mathoverflow.net/revisions/116894/list | ## Return to Question
# Cohomology ring of BG
Let $G$ be a compact Lie group, let $T$ be a maximal torus, and let $W$ be the Weyl group. My main question is as follows:
• How does one prove that `$H^\ast(BG,\mathbb{Q})$` is isomorphic to the $W$-invariant part of $H^\ast(BT,\mathbb{Q}) \cong \mathbb{Q}[[x_1, \ldots, x_n]]$? This is apparently basic knowledge in algebraic topology, because I keep reading "recall that..." followed by some version of this statement and no references. But I can't find a proof in any of my textbooks.
I would ideally like a reference which also addresses the following secondary question:
• When is the natural map $H^\ast(BG,\mathbb{Z}) \to H^\ast(BT,\mathbb{Z})^W$ an isomorphism, and what can one say about the integral cohomology ring of $BG$ when it is not? Note the fact that the map above is an isomorphism for $G = U(n)$ is equivalent to the statement that the Chern classes are integral.
Thanks!
http://crypto.stackexchange.com/questions/5619/public-key-cryptography-public-key-encrypts-and-cannot-decrypt?answertab=oldest | # Public key cryptography - public key encrypts and cannot decrypt?
I understand the basics behind public key cryptography, in that each party has two keys - the public one encrypts, and the private one decrypts. What I cannot figure out is, How does the public key encrypt and not decrypt, but yet the private key can decrypt?
I do understand the possibilities of this, but, does anyone know what cipher can do this, and how does this practically operate?
## 2 Answers
How it works depends on the cipher, but the basic idea is that of a trapdoor function.
A trapdoor function is a function that is easy to compute in one direction, yet believed to be difficult to compute in the opposite direction (finding its inverse) without special information, called the "trapdoor".
Many number theoretic problems have been used in the past successfully to build such functions, and the wikipedia article is fairly good.
Let's take one as an example, let's say the problem of factoring a number into its prime factors. Every number can be uniquely broken up into a product of primes. It turns out, though, that breaking a number up into its prime factors is difficult if the number is large enough and the factors are large enough, but give me a few large prime numbers and I can easily compute the composite that those numbers make up.
So, how can this be leveraged with cryptography? I choose, say, two large prime numbers called $p,q$ and set $n=p\cdot q$. I then set $e=65537$ and give you $e,n$. I then use my extra information ($p,q$) to compute $d$ such that $ed\equiv 1\pmod{\phi(n)}$. This can only be efficiently computed if you know $p$ and $q$.
You have $e,n$, so to send me a message $m$ you compute $c=m^e\bmod{n}$ and send $c$ to me. For me to get $m$ back, I must know $d$, I can only know $d$ if I know $p,q$ and you knowing only $n$ and $e$ cannot compute $d$.
To get $m$ back I compute $c^d\equiv (m^e)^d \equiv m^{ed} \equiv m^1 \equiv m\pmod{n}$, so I have successfully used my extra information to recover $m$, something you cannot do without the extra information.
The public key can encrypt (it is the forward direction of the "one-way function"), but cannot decrypt because it does not know the trapdoor information to go in the reverse direction.
The cipher described here is RSA. There are others, but RSA is pretty simplistic.
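A toy run of the scheme just described, with deliberately tiny textbook numbers (a sketch in Python; real keys use primes hundreds of digits long, plus padding on top of the raw arithmetic):

```python
# Toy RSA key generation, encryption and decryption with tiny illustrative primes.
p, q = 61, 53                      # the secret "trapdoor" primes
n = p * q                          # public modulus (3233)
phi = (p - 1) * (q - 1)            # phi(n), computable only if you know p and q
e = 17                             # public exponent, coprime to phi(n)
d = pow(e, -1, phi)                # private exponent: e*d = 1 (mod phi(n)); needs Python 3.8+

m = 65                             # message, must be smaller than n
c = pow(m, e, n)                   # encrypt with the public key (e, n)
m_again = pow(c, d, n)             # decrypt with the private exponent d

print(n, e, d, c, m_again)         # m_again == 65
```

The asymmetry is visible in the code: `c = pow(m, e, n)` needs only the public pair `(e, n)`, while recovering `m` needs `d`, and computing `d` needs `phi`, which in turn needs the factors `p` and `q`.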
quote: "such that ed ≡ 1 mod n. This can only be efficiently computed if you know p and q." Is that: ed ≡ 1 (mod n)? – AUTO Dec 8 '12 at 21:21
– David Cary Dec 9 '12 at 1:27
If you are talking about RSA public key encryption, the algebraic structure of the construction and some number theory will clear up your confusion. There is no point repeating here how RSA works. Have a look here.
$d$ is the multiplicative inverse of $e$: $ed\equiv1 \pmod{\phi(n)}$. That's why $(m^e)^d \pmod{n}=m$
I think you meant $(m^e)^d \pmod{n} = m$. – Thomas Dec 9 '12 at 5:11
http://mathhelpforum.com/calculus/90884-can-someone-help-me-function.html | Thread:
1. Can someone help me with this function
Hi can anyone help me out with how to find the stationary points of this functions:
y = x/x^2+1
Thanks I am completely at a loss
2. Originally Posted by Emililypad
Hi can anyone help me out with how to find the stationary points of this functions:
y = x/x^2+1
Thanks I am completely at a loss
Is it $\frac{x}{x^2 + 1}$?
Find the first derivative using the quotient rule, then set it equal to zero.
$f'(x) = (x^2+1)^{-1} - 2x^2 (x^2 + 1)^{-2} = \frac{1-x^2}{(x^2+1)^2}$
(check to verify the work above)
... so the zeros occur where $1 - x^2 = 0$, which means x = 1 or x = -1.
Good luck!
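For what it's worth, a computer-algebra check of the derivative and the stationary points (a quick sketch using Python's sympy; the printed form of the derivative may differ from the hand simplification but is equal to it):

```python
# Verify the quotient-rule computation and the stationary points of x/(x^2 + 1).
import sympy as sp

x = sp.symbols("x")
f = x / (x**2 + 1)

fprime = sp.simplify(sp.diff(f, x))
print(fprime)                          # equal to (1 - x**2)/(x**2 + 1)**2
print(sp.solve(sp.Eq(fprime, 0), x))   # [-1, 1]
print(f.subs(x, 1), f.subs(x, -1))     # stationary values 1/2 and -1/2
```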
3. Yes thats correct and thank you so much.
http://physics.aps.org/articles/print/v2/103 | # Viewpoint: The super cool atom thermometer
JILA, NIST, and University of Colorado, Department of Physics, University of Colorado, Boulder, CO 80309, USA
Published December 7, 2009 | Physics 2, 103 (2009) | DOI: 10.1103/Physics.2.103
A new method of thermometry for ultracold atoms in optical lattices has the potential to accurately measure temperatures down to 50 pK.
The conquest of the cold has become one the most exciting goals in modern atomic physics. The development of laser and evaporative cooling techniques have been among the greatest achievements in the second half of last century, as recognized by two Nobel prizes on these topics (1997 and 2001). These methods made it possible to reach temperatures of the order of a few $nK$ and led to the first experimental realization of a Bose-Einstein condensate [1, 2] and a quantum degenerate Fermi gas [3] in dilute atomic vapors.
Even though nK temperatures may seem extremely low, they are still too high for various potential applications, and the battle for reaching lower and lower temperatures in ultracold atoms continues. One of the driving forces for achieving even lower temperatures is Richard Feynman’s pioneering idea of a quantum simulator [4] wherein the behavior of a complex quantum system can be simulated by another quantum system. For instance, we want to load atoms in optical lattices (light shift potentials created by the interference of multiple laser beams) and use them to mimic the physics of electrons in solid-state crystals [5]. The optical lattice supplies the periodic potential in which atoms move. Bosonic (Fermionic) atoms in optical lattices are actually a perfect implementation of Bose (Fermi) Hubbard Hamiltonians [6], which describe particles hopping on a lattice with onsite interactions [7]. It is believed that these are the simplest models that contain the fundamental ingredients required to describe the behavior of strongly correlated materials, including, for example, quantum magnets or high-temperature superconductors. However, since atoms are much heavier than electrons, and since typical optical lattice interwell spacings are of the order of $10^4$ times the ionic lattice spacings, temperatures below $10^{-2}$ nK are required in cold-atom laboratories in order to probe the same physics that occur at Kelvin temperatures in solid-state systems. The capability of reaching such low temperatures has to be accompanied by the development of new thermometers capable of measuring them. In a recent paper published in Physical Review Letters, David Weld and colleagues at the MIT-Harvard center for ultracold atoms [8] now report a new thermometry method for ultracold atoms in lattices, with the potential to measure temperatures as low as tens of pK.
To date, typical bosonic cold-atom experiments in a single trap without an optical lattice have determined temperature from fits to absorption images of the expanded gas [9]. The data are fit to a bimodal density distribution. The area under the central peak, linked to the Bose-condensed fraction, and a Gaussian fit to the wings, which assumes noninteracting thermal atoms are combined to infer temperature. This technique has been very successful in a broad range of experimentally relevant temperatures, however, when atoms are loaded into optical lattices its regime of applicability reduces considerably.
When atoms are loaded into a lattice they acquire an effective mass, which exponentially increases with the depth of lattice potential. As a consequence, the diluteness condition, which requires a small ratio between the mean interaction energy per particle to its kinetic energy, becomes invalid as the lattice potential depth is ramped up and the system enters the strongly correlated regime. Beyond a critical lattice potential depth, the average kinetic energy required for an atom to hop from one site to the next, $J$, becomes insufficient to overcome the interaction energy cost, $U$, and atoms tend to get localized at individual lattice sites, forming the so-called Mott insulator [10].
In a homogeneous system, Mott insulating phases occur only at integer densities; noninteger density contours lie entirely in the superfluid phase because there is always an extra particle that can hop without energy cost. In the presence of an additional parabolic trapping potential, both phases can coexist and the system exhibits a shell structure in which Mott insulating domains with $n$ atoms per lattice site are separated by a superfluid region from other Mott domains with $n-1$ atoms per site.
Simple, direct thermometry of systems in the Mott insulating state has remained a challenge since methods relying on the noninteracting approximation fail. One of the most common techniques to estimate the temperature of a lattice gas is to measure the initial temperature of the gas before the lattice is switched on, and then determine the final temperature assuming the lattice is turned on adiabatically: One equates the entropy of the final state to that of the initial state and then deduces the final temperature from the initial temperature. However, the errors of this method are uncontrolled, not only because of the adiabatic approximation assumption, but also because of the difficulty of an accurate determination of the temperature dependence of entropy for the many-body system of interest.
In their work, Weld et al. [8] have experimentally demonstrated a solution to this problem in a two-component Bose gas in the Mott insulator regime. In the experiment, the two spin states used were the $|F=1,m_F=-1\rangle$ and $|F=2,m_F=-2\rangle$ hyperfine states of $^{87}$Rb atoms. Their strategy was to impose a magnetic field gradient and to use the mean magnetization of the gas as a thermometer (see Fig. 1). The basic idea can be understood by considering a spin-$1/2$ atom with magnetic moment $\mu_\sigma$, $\sigma=\pm 1/2$. In the presence of a magnetic field $B(x)$, its mean magnetization is just $\langle\sigma\rangle=\tfrac{1}{2}\tanh[-\beta\,\Delta\mu_\sigma\cdot B(x)/2]$ with $\Delta\mu_\sigma$ the difference in magnetic moment between the two spin states, $\beta=1/(k_B T)$ the inverse temperature and $k_B$ the Boltzmann constant. For an incoherent mixture of atoms with spin-independent interactions (a condition well satisfied in $^{87}$Rb atoms) the spatial and spin degrees of freedom can be factored out and the spin distribution becomes just $\rho(x)\langle\sigma\rangle$ with $\rho(x)$ the density distribution. By imaging the spin and density distributions, and knowing $\Delta\mu_\sigma\cdot B(x)$, they were able to determine the temperature of the atoms by a standard fitting procedure.
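To get a feel for the numbers, one can evaluate that magnetization profile for a linear field $B(x)=B'x$: at fixed gradient the width of the spin domain wall scales directly with temperature, which is what makes the profile usable as a thermometer. A small sketch (Python; the gradient and the moment difference are made-up round values, not the experimental ones):

```python
# Domain-wall width implied by <sigma>(x) = 1/2 * tanh(-beta * dmu * B' * x / 2):
# the wall width grows linearly with T, roughly dx ~ 4 kB T / (dmu * B').
# All numbers below are illustrative, not the values used in the experiment.
import math

kB = 1.380649e-23            # J/K
muB = 9.274e-24              # J/T (Bohr magneton)
dmu = muB                    # assumed difference of magnetic moments
Bgrad = 5e-3                 # T/m, assumed field gradient

def magnetization(x, T):
    """Mean spin <sigma> at position x (meters) for temperature T (kelvin)."""
    beta = 1.0 / (kB * T)
    return 0.5 * math.tanh(-beta * dmu * Bgrad * x / 2.0)

for T in (50e-12, 1e-9, 100e-9):             # 50 pK, 1 nK, 100 nK
    width = 4.0 * kB * T / (dmu * Bgrad)      # characteristic domain-wall width
    edge = magnetization(width, T)            # already close to full polarization
    print(f"T = {T:9.1e} K   wall width ~ {width * 1e6:8.3f} um   <sigma>(width) = {edge:+.2f}")
```

Measuring the temperature then amounts to measuring this width in the spin-resolved images.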
The spin gradient thermometer was demonstrated to work in a range of temperatures not accessible to prior thermometry schemes. In the high-temperature regime, where interactions can be safely neglected, it agreed well with other methods relying on the noninteracting approximation, and was even shown to operate at temperatures high enough that no condensate existed before the lattice was ramped up. In the low-temperature regime it was used to measure a temperature as low as 1 nK, providing a direct demonstration that current experiments are able to operate deep in the quantum regime where the Mott insulator shell structure is well resolved. The lowest temperature measurable with the method is limited only by the experimental ability to resolve the domain wall ($\sim 50$ pK for a current imaging resolution of a few $\mu$m) and more fundamentally by $k_B T_s \sim J^2/U$, the onset of super-exchange interactions (spin-spin interactions mediated by virtual tunneling processes [11]), whichever is higher. Below $T_s$ the incoherent mixture approximation becomes invalid.
The development of experimental methods, such as the one reported in Ref. [8], capable of measuring subnanokelvin temperatures brings us one step forward towards the dream of using cold atoms to simulate and manipulate different strongly correlated many-body systems, which appear in various fields in physics, ranging from condensed matter to subatomic physics.
### References
1. M. H. Anderson et al., Science 269, 198 (1995).
2. K. B. Davis et al., Phys. Rev. Lett. 75, 3969 (1995).
3. B. De Marco and D. S. Jin, Science 285, 1703 (1999).
4. R. P. Feynman, Int. J. Theor. Phys. 21, 467 (1982).
5. I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
6. D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner, and P. Zoller, Phys. Rev. Lett. 81, 3108 (1998).
7. J Hubbard, Proc. R. Soc. London A 276, 238 (1963).
8. D. M. Weld, P. Medley, H. Miyake, D. Hucul, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. 103, 245301 (2009).
9. E. A. Cornell and C. E. Wieman, Rev. Mod. Phys. 74, 875 (2002).
10. M. P. A. Fisher, P. B. Weichmann, G. Grinstein, and D. S. Fisher, Phys. Rev. B 40, 546 (1989).
11. P. Anderson, Phys. Rev. 79, 350 (1950).
### Highlighted article
#### Spin Gradient Thermometry for Ultracold Atoms in Optical Lattices
David M. Weld, Patrick Medley, Hirokazu Miyake, David Hucul, David E. Pritchard, and Wolfgang Ketterle
Published December 7, 2009 | PDF (free)
http://physics.stackexchange.com/questions/38433/energy-is-quantized | # Energy is quantized
How can energy be quantized if we can have energy be measured like in 1.56364, 5.7535, 6423.654 kilo joules, with decimals? Thanks
Also, doesn't quantization mean that energy is represented in discrete bits, meaning you cannot divide, let's say, 1 bit of energy?
-
## 3 Answers
As far as the first part of your question goes, just having decimals in the number does not mean the energy levels are no longer quantized. Quantization of energy simply means that there are only specific energies that particles can take under certain circumstances. For example, you could say particle A can only have one of the following energies: {1.56364, 5.7535, 6423.654} kJ. Limiting the particle to these three energies is what it is meant by quantization of energy.
Also, there is no smallest bit of energy, for example the kinetic energy of a free particle can take a continuous range.
Mathematically, I am not certain how this is formulated. Off the top of my head, I would wager that any countable set could be considered quantized, but that would include all rationals which are dense in reals, so it really wouldn't be much of a quantization.
tl;dr of how it's formulated: energies are eigenvalues of the Hamiltonian operator. The eigenvalues can be discrete (which is the technical term for being limited to selected values) or continuous. Jerry's answer has some more details. – David Zaslavsky♦ Sep 27 '12 at 4:37
Typically, in quantum mechanics, bound states are quantized and free/scattering states are not. This is because bound states, by the mere fact that they're constrained to a certain area, will have to satisfy certain boundary conditions, and these conditions won't be able to be satisfied in a continuous range.
The classic example of this is the infinite square well potential, where $V(x) = 0$ if $0<x<a$, and $V(x) =\infty$ elsewhere. Then, the particle will have zero probability of appearing outside of the well, and will have to satisfy the zero-potential Schrödinger equation $E\psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi$ inside of the well. For simplicity, we'll only consider one-dimensional motion.
In this case, we see right away that the basis states to our solutions have to satisfy $\psi = A\sin\left(\frac{\sqrt{2mE}}{\hbar}x+\phi\right)$, and we also know that the wave function must be continuous, and that it is restricted to be zero for $x<0$ and $x>a$. We can satisfy the first boundary condition by choosing $\phi=0$, but the second one is not satisfied for all values of the energy. Instead, it is necessary that $\frac{\sqrt{2mE}}{\hbar}a=n\pi$, where $n$ is some integer. Thus, the allowed energies of 'pure' states of this system are quantized, and take the values $E_{n} = \frac{n^{2}\pi^{2}\hbar^{2}}{2ma^{2}}$.
For any other bound state, you will find yourself using similar logic about boundary conditions, albeit with much, much more complexity. Note that, however, it is also the case that we can construct a general state out of the energy eigenstates $\Psi = \sum a_{n}\psi_{n}$, and that the expectation value for the energy of $\Psi$ will be $\sum|a_{n}|^{2}E_{n}$, so the values for the "average" value of a state are still allowed to be continuous (and in the case of the infinite square well, can actually take any value greater than the ground state energy).
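For a sense of scale, the sketch below evaluates the first few of these levels for an electron confined to a well of width $a = 1\,\mathrm{nm}$ (plain Python; the particle and the width are arbitrary illustrative choices):

```python
# First few levels E_n = n^2 pi^2 hbar^2 / (2 m a^2) of the infinite square well,
# evaluated for an electron in a 1 nm wide well (illustrative choice of numbers).
import math

hbar = 1.054571817e-34      # J*s
m_e = 9.1093837015e-31      # kg, electron mass
a = 1e-9                    # m, well width
eV = 1.602176634e-19        # J per electronvolt

for n in range(1, 5):
    E_n = (n**2 * math.pi**2 * hbar**2) / (2.0 * m_e * a**2)
    print(f"n = {n}:  E = {E_n / eV:.3f} eV")
```

The levels grow like $n^2$ and only these discrete values are allowed, while shrinking the well pushes all of them up; a free particle corresponds to letting $a\to\infty$, where the spacing collapses and the spectrum becomes continuous.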
I see that the wave function must be continuous as a postulate, but is there an explanation for why that must be true? – jcohen79 Sep 27 '12 at 4:22
The momentum operator is $-i\hbar\partial/\partial x$. If the wavefunction is spatially discontinuous that would imply infinite momentum. – John Rennie Sep 27 '12 at 9:27
An excellent way to understand how the wave function works in quantum mechanics is to study the model of the hydrogen atom. We can see in this model that the quantum variables $n,l,m$ are effectively variables that determine the shape of the spatial density associated with the detection of an electron. The quantum aspect is that the variables $n,l,m$ are integers, where the continuous aspect is density associated with the wave function. It is important to understand that the density function is a space filling function. This means that there is a value of the function associated with each point in space.
http://unapologetic.wordpress.com/2008/02/25/indefinite-integration/?like=1&source=post_flair&_wpnonce=78d053c4d8 | # The Unapologetic Mathematician
## Indefinite Integration
Since we’ve established the connection between integration and antidifferentiation, we’ll be concerned mostly with antiderivatives more directly than derivatives. So, it’s useful to have some simple notation for antiderivatives.
That’s pretty much what the “indefinite integral” amounts to. It looks like an integral, and it does (what the FToC tells us is) all the hard work of integration, but it stops short of actually calculating an integral. Given a function $f(x)$, we write an antiderivative as $\int f(x)dx$. Note that we aren’t saying which antiderivative we mean, and for the purposes of the FToC (part 2), we don’t need to be. It’s customary, though, to write the result generically by adding a $+C$ to the end of it.
We know, for example, that
$\displaystyle\frac{d}{dx}\frac{x^{n+1}}{n+1}=\frac{(n+1)x^n}{n+1}=x^n$
Then we turn this around to write
$\displaystyle\int x^ndx=\frac{x^{n+1}}{n+1}+C$
and so on.
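If you want a machine to double-check an antiderivative like this, a computer algebra system does exactly the turn-around described above: differentiate the candidate and compare. A small sketch with Python's sympy (the exponent is kept symbolic):

```python
# Check the power-rule antiderivative by differentiating it back.
import sympy as sp

x, n = sp.symbols("x n")

candidate = x**(n + 1) / (n + 1)
print(sp.simplify(sp.diff(candidate, x) - x**n))   # 0, so it is an antiderivative

# sympy also reports an antiderivative directly; note it has no +C and it
# treats the exceptional case n = -1 separately
print(sp.integrate(x**n, x))
```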
We can also go back and rewrite the two rules of integration we found before:
$\displaystyle\int f(x)+g(x)dx=\int f(x)dx+\int g(x)dx$
$\displaystyle\int cf(x)dx=c\int f(x)dx$
Notice here that we don’t need to add the $+C$, since each side consists of indefinite integrals. We can hide these “constants of integration” on both sides. They only need to show up once we fully evaluate an indefinite integral.
Posted by John Armstrong | Analysis, Calculus
## 1 Comment »
1. [...] is defined for all measurable functions . This isn’t quite the same indefinite integral that we’ve worked with before. In that case we only considered functions , picked a base-point , and defined a new function on [...]
Pingback by | May 27, 2010 | Reply
http://motls.blogspot.com/2013/01/quantum-physics-doesnt-depend-on.html?showComment=1358364357245 | # The Reference Frame
## Wednesday, January 16, 2013
### Quantum physics doesn't depend on definitions of observers
Lots of people who are trying to understand quantum mechanics but who don't really want to listen constantly ask the same question:
What is an observer really?
This question is usually encapsulated in the linguistic mud that is equivalent to the following monologue:
I'm sick and tired of explanations of quantum mechanics because they never tell me who is an observer and who isn't. Now, I am the savior of physics who will ask you and you will finally tell me and everyone else what are the ultimate, exact, well-defined criteria that determine who is an observer and who isn't, when the sound of a falling tree was heard and when it wasn't. This will permanently eliminate all the confusion about quantum mechanics. Amen.
These people must believe what they're saying but if they were also able to think about it, they would realize how stupid the question is. What kind of an answer are they envisioning if they really want to divide the objects or physical systems in the world to observers and non-observers?
Maybe, they expect you to say "An observer is someone with a social security card that must carry the signature of Barack Obama." Perhaps, an observer divides people to castes and only some of them are composed of observers. Maybe, the definition denies Darwin's evolution theory and it declares humans as observers while all other animals and organisms are qualitatively different. Maybe the humans have souls and consciousness or blessing from God and they make a qualitative difference. Maybe there's a sharp boundary between conscious processes and unconscious processes.
Needless to say, all such boundaries would be totally preposterous and their existence would violate the basic character of the laws of Nature. There can't be any sharp boundaries between observers and non-observers. No stamps or ID cards play a role in the laws of physics. There's no fundamentally qualitative difference between different species of organisms. In fact, there's no sharp boundary between microscopic and macroscopic objects. All these characteristics are continuous, gradual, and for an object to become a human is a long journey (it's easier to be born as one). And the human isn't necessarily the most perfect intelligent being that is allowed by the laws of Nature.
Most importantly, the vague and colloquial understanding of the word "observer" is perfectly enough because the actual rules and laws of physics don't depend on any details of the definition of the word "observer".
So as plain English indicates, an observer is someone (a physical system, but usually one resembling an intelligent animal or an AI-like computer) who observes something (I say "who" and not "that" exactly because that's the word we start to consider more apt than "that" once the physical systems become able to observe things and do related things!). That's everything you need to know!
There isn't any "deeper or more accurate definition" that would be needed to specify how Nature works. In physics, we use the word "observer" to describe a physical system or "agent" that is able to perceive the information about some observables (time-dependent dynamical variables that describe the state of the physical systems) and, if possible, process them. Most often, we want the observers to be able to remember the information, send it somewhere, and/or verify the laws of physics that claim to say something about the patterns relating different observations.
An observer of a particular observable, for example the number $N$ of photons in a box $\dd V$, is simply someone for whom the proposition (equation) $N=n_i$ has (or will have) a well-defined truth value. That's everything I need. The observable has a well-defined truth value because – and I hope you won't be surprised – the observer has observed the observable. ;-)
Now, to determine whether a macroscopic collection of organic molecules (a candidate animal) or a bar with semiconductor molecules (a candidate computer) is actually able to see the light (or something else) and/or remember its properties and/or calculate with it and/or use the information to optimize its behavior to achieve a certain goal, you need some specific complex disciplines of greater physics such as neuroscience or electronics or information technology.
But this is clearly not the issue that the people are asking about. I think that all of us – and most of them – understand that the neuroscience and electronics and information technologies in the real world simply aren't fundamental. Organisms and computers are complex bound states composed of many molecules that are described by the laws of quantum mechanics. I think that everyone who has at least started to think about the equations of quantum mechanics (even if incorrectly) does believe that these universal laws ultimately determine the behavior of semiconductors and proteins, too.
Instead, the goal of their question is more metaphysical or philosophical in character. They're really asking about the existence of "consciousness" because they believe that consciousness or something similar that requires a "soul" is needed to define the laws of physics. But it ain't so.
Consciousness: a great mystery whose pure part is outside science
You know, consciousness is fascinating. As a kid, I would be intensely attracted by its detachment from all empirical observations. I have consciousness, self-awareness, but you don't have to believe me because the "pure consciousness" doesn't have any visible consequences for the external observers. Of course, the "applied consciousness" does seem to have consequences. I am talking about things including consciousness itself, sometimes emotionally, I am able to say "I am aware of myself", and because you probably also have consciousness that manifests itself by talking and vibrating with your head and eyes etc., you decide that I am qualitatively analogous to you so I must have consciousness as well.
Maybe I have the same conscious feelings when I see the red color as your feelings when you see blue and vice versa (which is possible especially if you believe in some wrong political ideology). But you may dismiss these differences by Occam's razor. If the physical careers of our senses are analogous, and if we have analogous molecules in analogous cells of the retina, we arguably have the "same feelings" if a red photon hits our retina. More speculatively, we may think about "conscious feelings" that humans are incapable of. What does an uranium nucleus "feel" when it decays into decay products? This question is less urgent because the uranium atom can't process these feelings or information much so "who cares" (discrimination). But for humans, the question seems pressing: Do I have any consciousness at all?
A mysterious question, indeed. Over the years, I have lost much of my interest in this metaphysical question because of three reasons. First, almost by definition, it seems impossible to answer it within science. If I define the "pure consciousness" as something that is totally isolated and disconnected from all observations, there can't exist any manipulation with the empirical facts that would answer questions about consciousness. Second, because I don't have direct evidence of other people's (or objects') consciousness, it doesn't make much sense to study it. Third, there seems to be no sensible reason to expect that claims about "pure consciousness" may be sharp and rigorous. They are intrinsically vague for a simple reason: they have nothing to do with the quantitative things one may observe – and properties of "soul" vaguely attached to matter that can't be measured don't have any reason to carry well-defined values or rigorous laws relating these values.
So when it came to consciousness, I started to realize that only "applied consciousness" (the actual manifestations of the fact that a physical system is able to measure, remember, and process information in a sufficiently complex manner) belong to science. I would still agree that some "consciousness stripped of the dull material trivialities such as eyes, brains, and microprocessors" exists in some sense but I still find it important to appreciate that the hypothetical existence of this "conscious soul" does depend on the material carrier that may be studied by the scientific method i.e. by a careful analysis of observations. Even though I feel that "some mystery of pure consciousness remains unresolved" by science, I have also become extremely certain that "thinking about these things will never bring and can never bring anything more constructive than metaphysical flapdoodle." In this sense, I have abandoned much of my interest in consciousness for pragmatic reasons.
Maybe you are upset by the large number of paragraphs about pure philosophy – unlimited babbling. So let me get back to quantum mechanics a little bit. My main point is that none of the hypothetical claims about the existence of conscious souls influences any laws of quantum mechanics. This is a widely misunderstood point which is why it may be a good idea to mention how people like to misunderstand it.
They usually think – because they are often told – that a conscious observer causes a "collapse" which allows the superposition states \[
\ket\psi = a\ket{\psi_1}+b\ket{\psi_2}
\] to shrink to one of the options, either $\ket{\psi_1}$ or $\ket{\psi_2}$, with probabilities given by $|a|^2$ and $|b|^2$, respectively. When this collapse happens, someone is perceiving that something has happened, something has been measured. Because this collapse is such an important intervention into otherwise smooth and regular laws of evolution of the state vector according to Schrödinger's equation, evolution that is supported by all the evidence, one should need some special "stamp" – such as the social security card with Obama's signature that I started with – to interrupt the peaceful Schrödinger's evolution and to replace it by the "collapse".
I am writing down this preposterous story because this is exactly the type of thinking that many popular – and, using Sidney Coleman's words, sometimes even not-so-popular – books and articles want to manipulate you into. GRW and Penrose collapse theories as well as the many-worlds ideology are example models giving special objects the right to "intervene" into Schrödinger's equation, either by discontinuous jumps or collapses or by splitting the world (which is comparably, infinitely ambitious). However, all this reasoning is completely nonsensical. There don't exist any systems for which the evolution according to the laws of quantum mechanics such as Schrödinger's equation is replaced by some discontinuous jumps. Quantum mechanics applies to all systems and processes in Nature, regardless of their size, duration, sex, race, and nationality.
Instead, what an observer does when it "measures" the value of an observable such as $N$, the number of photons in a region, is that it simply attaches a value to $N$. Equivalently, it ascribes the truth value to all propositions of the form $N=x$ or $N\gt x$. Who has the right to do it?
The key point is that to define the laws of physics, we don't need any definition or criterion here. Why? Because when an observer ascribes a value to the observable $N$, it has absolutely no impact on other observers' description of the reality! Indeed, as the Wigner's friend thought experiment was designed to explain, other observers – if they want to make really accurate predictions about their future observations – should keep on treating the "conscious observer" as a dull physical system that evolves into the state vector that is a general complex linear superposition of eigenstates with different values of $N$. In practice, one may use a description in which the truth values and/or probability distribution become "classical" but all these descriptions are at most approximately valid.
So whether the "conscious observer" has "perceived" the value of $N$ makes absolutely no impact. I still need to describe the degree of freedom $N$ in terms of probabilistic distributions – and indeed, in quantum mechanics, I need interference-capable complex probability amplitudes. I only reduce my probabilistic reasoning to a fact-based one once I learn about the value of $N$ or something else myself. At that moment, I ascribe a value to $N$ or another observable (which, equivalently, changes the state vector or density matrix I am using to describe Nature – these objects represent the state of my knowledge). I ascribe the truth values to various propositions. Whether a different observer ascribed a value to a proposition about an "intermediate question" makes absolutely no impact on my predictions – and my predictions are by definition the only physically "existing" knowledge about the physical system that I have.
It is wrong for me to ascribe particular values to observables if I don't know their values. It is wrong for me to ascribe particular truth values to propositions whose answers are unknown to me. That's why another observer's act of ascribing a value to $N$ just makes no impact on my knowledge – only if I learn about $N$ myself, it influences my predictions!
I have already mentioned that whether a physical object may actually see some information or calculate with it is a question for neuroscience or electronics or information technologies. I have already mentioned this point. But you expect some restrictions. Certain things can't be "perceived" at all. Indeed. When you ascribe truth values to some propositions, the operators expressing these propositions must behave as if they were classical numbers $0,1$ able to be added and multiplied according to the classical rules. For example, if you study which of the alternative histories will occur, the alternative histories in your set must be consistent histories. In practice, it means that unless these histories are artificially engineered and very contrived, they must be histories talking about the values of some "quasiclassical" observables according to some classical limit of your quantum theory.
Nevertheless, I could even extend the definition of an observer a little bit and allow him or her or it or them to "observe" the truth values of propositions that aren't logically compatible in the sense of "consistent histories". Such an observer would be an illogical observer or a confused observer. ;-) It often looks like most people fall into this generalized category of observers. :-) In the same way, there may be sloppy and inaccurate observers – those whose observations are sloppy or inaccurate. More seriously, the inconsistencies between the observations by the confused observers would be analogous to the "paradoxes" that appear when you try to interpret GHZM and similar quantum games classically.
Fine. If you want an observer who sees and perceives real facts – sufficiently accurately – and who processes them, you need an observer that has well-functioning eyes (or an equivalent measuring apparatus), a brain including the memory (or its electronic or another replacement), and so on, according to the rules of neuroscience or electronics or information technologies or something else, and this observer must work with questions and alternatives that are logically consistent in the same sense as "consistent histories".
However, I still need to emphasize the main point of this article. All these features of a good enough observer may or may not be imposed – and it makes exactly zero difference for everyone else! You only impose the "consistency of histories" because you want to be an "unconfused observer". At the end, the only special (or additional) feature of an observer is that he or she or it or they observe something. And observing something isn't a crime. For the observer to have a consistent logical framework with truth values of various propositions about observables (for his histories to be consistent), this observer will have to send actual photons somewhere that perturb the system they're observing (or intervene in an analogous way).
But this necessary perturbation is a mechanical rather than metaphysical process. If some photons hit electrons and if you're another observer in the room, you need to calculate with quantum mechanics for several particles and you get different predictions for the pattern drawn by the electron – effectively, the interference pattern is destroyed because of the electron's entanglement with the escaping photon. But you don't have to "know" whether someone in the vicinity of the photon was an observer or conscious or human or animate or anything like that. It's the photon hitting the electron that destroys the interference pattern, according to the standard rules of quantum mechanics, not the soul or other mysterious anthropomorphic features of the surrounding physical systems claiming to be human!
Once again, the only special or additional properties of observers by which they "exceed" the generic physical objects around them is that they observe. By observing things, they ascribe values to some observables. And the laws of quantum physics imply probabilistic relationships between these values at different moments. An observer who is really worth the name may verify these relationships. That's it. But other observers such as yourself ascribe values to other sets of observables and all the other observers may always be treated as dull, unconscious objects! Only if two observers – two physical objects "personifying" two sets of consistent histories – include the same questions/propositions into their logic (scheme that assigns the truth values to propositions about observables), quantum physics guarantees that the values will agree.
Needless to say, people are looking for an "objective classical model of reality" that is valid for everyone. But quantum mechanics shows that Nature can't be described in this way. Instead, quantum mechanics tells you that you must understand yourself as an observer who may perceive the values of certain observables and quantum mechanics tells you that observing some values of observables at one moment implies that the probability of observing some combination of other observables at a different moment is something or something else. That's the only thing you may really empirically verify so it's just unphysical to "demand" that science also explains something else (such as an "underlying objective reality"). And indeed, it's very important in quantum mechanics that physics can't objectively describe too many things, that it is always impossible to ascribe values to too many observables or truth values to too many propositions. The uncertainty principle is just one of the basic formulations of this fact: you can never ascribe exact values to two complementary variables such as the position and the momentum of the same object. But this principle is really omnipresent and essential in all of quantum mechanics and may be formulated in many related ways, seen in many reincarnations. You just can't talk about the objective properties of (most of the) things you can't observe.
To summarize, an observer is someone who may observe things and verify the predictions from the laws of physics, among related things (e.g. using them to his advantage). But if there's no one who can do it, no one is hurt! Nature may seem like it has no purpose – but it's how Nature is according to science, anyway. What's important is that the "consciousness" or "act of pure observation" or "realization" doesn't make absolutely any impact on physical predictions that other observers may prepare. For other observers, the original observer is just a piece of matter, a dull physical object. That's why, if they're sensible and physically oriented, they don't spend hours by trying to find an exact definition of an observer. They know damn well that they're able to ascribe truth values to propositions about observables and that's enough for them to verify the laws of physics. One can't expect any sharp rule dividing physical systems into castes, into observers and non-observers. No such sharp rule exists and no such sharp rule is needed in physics.
And that's the memo.
Posted by Luboš Motl
Other texts on similar topics: philosophy of science
#### snail feedback (39)
reader Rezso said...
Dear Lubos,
I would say that the observer is simply a macroscopic quantum system, which interacts with my original microscopic system. The interaction leads to decoherence, so one can effectively replace the wavefunction of the system with classical probabilities.
Consciousness clearly doesn't matter, the interaction with an arbitrary environment always leads to decoherence.
The air molecules in the box of the cat cause the same effect that a conscious observer would cause.
reader Shannon said...
Only what we observe produces our conscience.
reader Luke Lea said...
Dear Lubos, A question from the peanut gallery: in the two-slit experiment I seem to recall -- in Susskind's video lectures I think -- that if an electron interacts with the material in which the two holes exist, on the side of one of the holes let us say, in such a way as to leave a visible mark or piece of evidence behind that such an interaction occurred, that this would count as an observation and the interference pattern would disappear. Is this correct?
What would happen if you begin with two holes which are precisely of the same diameter but gradually made one of them smaller and smaller until it finally disappears. Would the interference pattern gradually be replaced by a non-interference pattern?
Can an observation be described as any macroscopic effect which a human observer can in principle verify either at the time it occurs or later, it making no difference when?
I hope these are not idiotic questions.
reader Luboš Motl said...
Dear Luke, nope, they're totally sensible questions and your definition of an observation is much better than what one usually sees.
If two slits in the double-slit interference have different size, it indeed means that the interference partly disappears, there won't be any "perfect minima" where the particle can't land.
It's because one is simply summing
A * exp(i*phase1) + B * exp(i*phase2)
where the absolute values of A,B are different because the slits have different sizes. If A is greater, then the slit proportional to A dominates and one may see something close to the picture one gets from the slit A only. In such a picture, the interference pattern is subdued - because terms of order A*B are smaller than A*A.
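In numbers (a quick illustrative script with arbitrary amplitudes, just to visualize the point about unequal slits):

```python
# Intensity from two slit amplitudes of unequal magnitude (toy numbers):
# the minima are (A - B)^2 rather than zero, so the fringes are subdued but not gone.
import numpy as np

A, B = 1.0, 0.5                              # unequal slit amplitudes
phase = np.linspace(0.0, 4.0 * np.pi, 9)     # relative phase across the screen

intensity = np.abs(A + B * np.exp(1j * phase)) ** 2
print("maximum intensity:", intensity.max())   # (A + B)^2 = 2.25
print("minimum intensity:", intensity.min())   # (A - B)^2 = 0.25, not zero
```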
reader Anonymous said...
Lubos,
What do you think of this course by Spekkens on the foundations of quantum mechanics? At Quantum Frontiers everyone called it excellent.
And what do you think of the great (or not?) accomplishments of the Quantum Foundations Group at Perimeter Institute? I wish you would write a separate post discussing the merits or lack thereof of their much touted work.
I have my own opinions about the matter, but would like to hear yours.
http://perimeterscholars.org/301.html
http://www.perimeterinstitute.ca/people/research-area/quantum-foundations
reader Tobias Sander said...
Do you mean consciousness? ;-)
reader Paul Parnell said...
Man thats a lot of words. I almost tl;dr-ed it. I'm only an amateur but isn't decoherence the best way to understand the nature of an observer? Any interaction counts as an observation in proportion to its ability to cause decoherence.
As for the consciousness issue... you seem to be suggesting that since there is no way to tell the difference between a philosophical zombie and a conscious person then the question isn't really a science question. If consciousness has no observable effect then all it does is make us helpless observers to events that we have no control over.
Yet if consciousness has no physical effect then how can we talk about it? If consciousness did not exist would the philosophical zombies that evolved in our place talk about the consciousness that they do not have as if they did?
This ability to discuss consciousness is arguably a physical and observable consequence of consciousness.
reader Mikael said...
Dear Lubos,
as a long term reader of this blog I know your view on quantum mechanics and I would say it has deepened my understanding of it. But I think we never discussed consciousness. I would say there is actually a way to check its presence empirically and it is called the Turing test. It should test for intelligent behaviour but I would claim this goes together with consciousness. The more advanced the computer programs become, the more I would support Penrose that computer programs as we define them today can never show true intelligence or become conscious. Even for the hard-to-beat chess programs it is easy to create chess problems which can be solved by every bright child but not by them. So I find it a reasonable speculation that the construction of an intelligent computer may require making use of the laws of quantum mechanics. By the way I found that even made a remark that he finds this view of Penrose a reasonable speculation. I would also tend to think that consciousness is a kind of independent actor which may be influenced by feelings of hurt etc. For a computer you may naively try to model the degree of hurt it "feels" by a counter, but why would this counter need to be accompanied by an actual feeling? The physical laws also happen without feelings.
reader Gene Day said...
Classically, you cannot get complete destructive interference (cancellation) of the two waves unless they have exactly the same amplitude, i.e. the two slits transmit the same amount of light. Since the quantum mechanical description has to eventually reduce to the classical description no spot on the observed screen can have zero probability of being hit by a photon unless the slits have the same width.
reader Luke Lea said...
Since I wasn't so stupid on my last question, let me try again. Imagine me as a male version of Penny across the hall:
When I look at the wingback chair in my library am I staring at the average pattern of quantum events (of electrons interacting with photons) of countless billions of atoms arranged into molecules in the shape of a chair? And if I am, then isn't a macroscopic object in a sense a pattern of events as described by the equations of quantum mechanics? Only the pattern is real.
You can give me a Sheldon look now.
reader Peter F. said...
I predict (before Lumo wakes up!) that you will remain on good terms with Sheldon even after this last sticking-your-neck-out question. ;)
IMO, the word "real" can be etymologically+philosophically tenuously extrapolated to a fuzzy notion of 'ultimate Reality' as an Eternal (but not classically 'timely') Patterning (from a Platonic blue-print of infinite possibilities & impossibilities) Tendency. %-}
reader Peter F. said...
Oh what a satisfying and super-witty (except for the slightly worn-out reference to a certain birth-certificate) sorting out of this issue this article was!!!
You Lumo deserve - do according to my naturally limited (but by definition always percEPTive) observations and estimations of value - a major reward for being/writing the ways that you are!!!
--
P.S. It should be obvious that I am emoting.
In practical reality, the size of this proposed (and wished for) "reward for TRFicness" will of course be determined by available financial means and by the degree of persistence of relevant positive motivating influences from several different, including unrelated, sources.
reader Mickei said...
Can I kindly ask commenters what they think about brain processes? After all, brain is supposed to be the physical system giving rise to consciousness. Do you think that consciousness, thought and cognitive processes are fundamentally based on classical physics (like computers) and therefore deterministic in nature or not?
reader anna v said...
Good stuff. Sometimes I think that, even though like you I started with wondering about consciousness and physics at some point, I have been saved from fluff because of all those bubble chamber pictures I scanned in my youth. Film is a good observer :).
reader Robert Rehbock said...
Consciousness is an ill defined term. Observation is on the other hand a poor choice of word for a well defined term. Observation of possible outcomes is only by a consistent history constrained. I think I am observing this blog entry and all its content. But if I were a particle affected in a causally important way I would be just as much an observer and observed by the interaction that localized me in a causally important way.
If that is what you are expressing then I exist and are right. Otherwise be kind in disputing me as I may be part of your consciousness and you not I am wrong. :-) I always preferred Ambrose Bierce "I think therefore I think I am" to Descartes. Perhaps the former President Clinton should be given an honorable mention with it "depends on the meaning of is". Anyway philosophy is fascinating too so at least this reader enjoyed greatly what you called blather.
reader Stephen KIng said...
It seems to me that a separable QM system with some chosen basis should qualify as a generic observer.
reader Luboš Motl said...
Dear Mikael, I may share your beliefs - consciousness goes with the manifestations and may be measured by the Turing test. Fine. But what if not? What if the machines are really always "dull and dead", not aware of themselves, however accurately they emulate the human behavior? While this possibility may look contrived, it seems impossible to become certain that it's invalid.
reader Shannon said...
Maybe, I don't get the difference... "la conscience" in French means both "conscience" and "consciousness" in English...
reader Tobias Sander said...
I see, that doesn't make it easier. ;-) The difference is easy to understand, though.
"Consciousness is the quality or state of being aware of an external object or something within oneself."
http://en.wikipedia.org/wiki/Consciousness
"Conscience is an aptitude, faculty, intuition or judgment of the intellect that distinguishes right from wrong."
http://en.wikipedia.org/wiki/Conscience
reader Luboš Motl said...
What do you mean by separable? If you mean that it's described by a small Hilbert space such that the total Hilbert space is the tensor product of this Hilbert space and another one for the remainder, this condition doesn't hold in quantum gravity and isn't really necessary for anything that we demand observers to do.
If you mean a related thing that the observer is a subsystem that is unentangled with others, it's way too constraining because observers surely do get entangled with the rest of the world - this is really necessary for decoherence which is needed for them to observe the facts as classical facts.
A chosen basis... One may choose a basis at all time but any choice of basis doesn't really correspond to an observation - an observation is usually much less refined and it is so for fundamental reasons. Moreover, one should only be observing a subsystem, i.e. choose a basis for a tensor factor, but this factorization of the whole Hilbert space doesn't hold exactly, as discussed above.
So your definition really can't hold for tons of reasons. But my real complaint is that it has no meaningful purpose. Why would you introduce the term "observer" for the arbitrary set of conditions that you have outlined? In physics, we use the term "observer" for a totally different reason that actually agrees with the linguistic origin of the word. It's a physical system that may observe, i.e. ascribe values to observables. We use this concept because the observer may verify the laws of physics and do related things. If he can't do it, he isn't really the observer we're looking for. By trying to associate some objective mathematical properties to an observer, you're really missing the point of the concept "observer" - why it's used by physicists.
reader Luboš Motl said...
Thanks, Peter, for your excitement! ;-)
reader Luboš Motl said...
Dear Paul, the ability to cause decoherence is proportional to observation... I don't know what to do with such things. Decoherence is a genuine emergent effect in QM, a description why classical truth values of propositions emerge from the originally interfering, different quantum probability amplitudes. But why should one identify decoherence with observations? What do you want to achieve by that? We're using the word "observation" because someone may actually perceive and process information.
Burning coal experiences lots of decoherence - which may surely be applied to this system - but I wouldn't say that burning coal is a particularly good or strong example of an observer because the coal is just too dull and stupid.
Yet if consciousness has no physical effect then how can we talk about it?
Why should we be unable to talk about it? A tape recorder doesn't have consciousness, at least not the human-sized one, but it may still talk about consciousness in the same way as we do if you record such a monologue. So talking about consciousness is surely a proof of nothing mysterious or soul-like.
reader Robert Rehbock said...
QM remains consistent with all observations regardless of subjective belief and that is also fascinating. But my question is whether you are imposing a further constraint on an observation of it being subjective in any greater sense than that it is an outcome of some interaction, i.e. an event causally consistent with all other possible measurements were there someone measuring? If no one had been here to measure and notice that QM is a description, the laws of physics would still be the same, and so I am not sure why the word subjective needs to be included with measurement.
reader Luboš Motl said...
Dear Robert, I understand that people often say that the "observation is subjective" is a constraint. They say it to express that they are uncomfortable.
But the reality is exactly the other way around. When you say that observations reflect something that exists even without observations, you are imposing a highly nontrivial constraint. Roughly speaking, this proposition demands physics to be classical physics.
But classical physics is a special case (hbar to 0 limit) of quantum mechanics, not the other way around! In the classical limit, things may be considered objective. The objective reality emerges with an arbitrary accuracy if hbar is arbitrarily small relatively to the quantities describing the studied physical system.
The realization that the observations do *not* have to reflect any objective reality isn't a constraint at all: it is a removal of a constraint. And indeed, the observations force us to get rid of this constraint, to jettison this excess baggage in order to make progress. The laws of physics directly link the observations with each other - probabilistically - and they just don't allow one to "insert" some objective reality in between them. Instead, using Feynman's path integral approach, all possible realities must be summed over to calculate the probability amplitudes of a transition from an initial state to a final state. But none of the intermediate histories is more real than any other. They're just terms in a complicated calculation and only the final result of the calculation makes physical sense.
All these things - totally universal and rudimentary facts about quantum mechanics - are equivalent to the (more philosophically sounding) proposition that the observations in QM are fundamentally subjective. The word has to be included because if the observations were objective - objective processes of any kind - according to a theory, the theory would simply be conceptually a classical theory. That is really equivalent to hbar=0. But we know that hbar is not zero so the observations and other acts can't be objective changes to a physical system. They're subjective.
reader Peter F. said...
I believe (not fanatically or strenuously) that while "to merely and purely think" that building conscious (in the way near enough normal people are aware reflective and feeling) copies of our evolved brains (or for that matter other brainy animals' brains) is obviously one of the infinite possibilities of 'the eternal patterning tendency', to 'practically build' such things (such extraordinary psychological things) is fundamentally prevented because it is one of those things that are infinitely impossible to build WITHOUT an immensely unlikely but possible 'bio-evolutionary patterning trajectory'. %-}
Because I see it this way I have little respect for people who (some of which are referred to as AI experts) seems to take seriously the idea of building conscious computers (such as likewise IMO ultimately impossible to build multi-purpose computers that calculate with qubits).
reader Peter F. said...
I can talk to my telephone and sometimes be correctly understood and sensibly talked-back to by it. So much for consciousness defined as a capability of being on "talking terms".
reader Robert Rehbock said...
Yes. I see that. I was merely misinterpreting your use of subjective.
reader Paul Parnell said...
I think decoherence lets us see the collapse of the wave function as a physical process and so we no longer need to talk about observers at all. It is true that you need an observer to talk about what actually happened from the point of view of that observer. But as you say QM does not depend on how we define that observer. A rock is complex enough that an interaction with it will cause decoherence. We can treat it as an observer. If the qubits of a quantum computer interact with the rock they will collapse and destroy the quantum computation in progress just as if you looked at the qubits. The fact that the rock cannot talk and probably is not conscious is irrelevant. It functions as an observer.
The tape recorder... yes in a sense it can at least seem to talk about itself. But the point is could such a tape recorder evolve?
Sure you could program a mindless zombie to talk about the consciousness that it does not have. But what evolutionary force would cause the zombies to evolve the ability to talk about consciousness as if they possessed it? If consciousness has no effect then by definition there can be no selective pressure for it or even an illusion of it. Yet at least the illusion of it exists.
I'm not trying to argue for some supernatural soul here but I do think there is still a mystery. I cannot imagine what the solution to the mystery would look like and that can be a sign that it is an illusion. But the self referential argument above makes me skeptical of that. After all what sense does it make to claim that the ability to have an illusion is itself an illusion?
reader Luboš Motl said...
Right, Paul, physics works without any concept of an observer (except for "us", each of us may call himself an observer but it plays no role) and decoherence is a part of the explanation why it does reproduce the classical intuition when it's applicable.
Your comments about evolution pressures for tape recorders are surely interesting if they work but I don't understand either of the logical links that you need to establish what you need to establish.
I don't understand why - and you seem to assume that - consciousness should have a necessary condition that it has spontaneously evolved in competition. Moreover, I don't understand in what sense tape recorders talking about consciousness haven't evolved. They were produced in a competitive environment that was evolving much like organisms and other things. At some moment, people recorded and sold CDs talking about consciousness (and especially souls and other esoteric stuff) because these occult consciousness-discussing CDs or tapes (sorry, I will use it interchangeably) had an evolutionary advantage in the natural selection and competition against other types of CDs such as the sound of squeaking doors or lectures on physics.
reader Shannon said...
Thanks Tobias. Indeed I meant consciousness. ;-)
reader Paul Parnell said...
Lubos,
Yes tape recordings of talk about consciousness "evolved" in a sense. But they evolved in an environment where there were people claiming to be conscious. Could they have evolved in an environment that had no people claiming to be conscious? Clearly not.
People evolved and claim to be conscious. What environmental condition caused this? If we removed that condition would we have evolved a technological civilization of people who make no claim to being conscious?
The consciousness talk on the tape recorder is contingent on the consciousness talk of people. What is the consciousness talk of people contingent on?
reader Mikael said...
Dear Lubos, I agree that it is not a scientific question to ask whether a machine or other human being experiences consciousness. Maybe if somebody had a theory of consciousness and could link it to something we could measure in the lab the gap could become very small.
In any case I find it a very interesting scientific question to ask whether we can ever build an intelligent machine and what we are missing right now so that we can't do it. You dismissed the possibility that quantum mechanics plays any role. But do you think that an intelligent enough programmer could write a program which passes the Turing test, and why are today's programmers failing?
reader KPM said...
Sure it's possible for a species of zombies to evolve a delusional belief in consciousness, and talk about it as a side effect of the evolution of language and linguistic metaphors.
So, if you have a social species with language, and members of that species evolve the ability to answer questions to others about their internal states, and they have short term memory, and they subsequently internalize such an ability as "private thoughts", and their language has first person pronouns and a symbolic metaphorical concept of a "self", these members can easily delude themselves into thinking they have nonexistent consciousness.
Maybe the human species is such a species, and all humans are zombies, including you and me and Lubos ;-)
reader KPM said...
Empirically, it's not possible to tell if Lubos is a zombie or has a soul ;-) So, it makes no scientific sense to talk about Lubos' consciousness ;-) ;-) ;-)
Maybe Lubos **really** is in some quantum superposition from "our", but not "his" subjective viewpoint ;-)
> I would still agree that some "consciousness stripped of the dull material trivialities such as eyes, brains, and microprocessors" exists in some sense
> I feel that "some mystery of pure consciousness remains unresolved" by science
You surprise me. Are you not a materialist then? ;-) You're also deviating from your own party line on the positivistic definition of existence. **gasp**!!! Do you have, **gasp**, faith?
reader KPM said...
You're right. There's no "natural" or "simple" division of physical objects within the universe into observers and non-observers. There's also nothing special about human brains.
That doesn't rule out the possibility the division line lies outside the universe, though.
reader KPM said...
You say quantum physics doesn't need an observer, but without an observer, everything will remain in an uncollapsed superposition, and you get MWI. But you reject MWI.
If everything is subjective, surely a subject is needed?
reader KPM said...
According to consistent histories applied to the universal wavefunction, the probability that we exist is exponentially small.
What?! You heard me right. There were quantum fluctuations of order 10^-5 during the inflationary epoch, but they were quantum. During reheating and after, this superposition remained, albeit in decohered form. The details of the structure formation which followed depends on these CMBR anisotropy quantum fluctuations.
The evolution of life here on earth depended on mutations, and most mutations are caused by quantum events like cosmic rays, radioactivity, or quantum chemical interactions between carcinogens and the DNA. Schrodinger's cat also applies to DNA mutations, you know. Most chain of mutations do not lead to the evolution of humans.
reader KPM said...
Schrodinger's cat applied to bubble chambers.
Here goes. A charged particle in a superposition of positions passed through the cloud chamber. It entangled with water molecules in the chamber leading to bubbles condensing. Photons streamed through the chamber and some of them got entangled with these bubbles. Some of these photons got entangled with the film. The film got entangled with photons in the room, and some of these photons got entangled with your eyes and brain. Oh my god!!! You were in a quantum superposition!!!
reader Luboš Motl said...
Dear KPM, I asked you to reduce the number of your crackpot comments a week ago, you didn't, so I just banned you and all your ways to identify yourself.
http://physics.stackexchange.com/questions/1061/why-does-gps-depend-on-relativity/1063 | # Why does GPS depend on relativity?
I am reading A Brief History of Time by Stephen Hawking, and in it he mentions that without compensating for relativity, GPS devices would be out by miles. Why is this? (I am not sure which relativity he means as I am several chapters ahead now and the question just came to me.)
-
– KennyTM Nov 18 '10 at 13:52
I'm trying to locate my sources on this, but I have read that even if you don't account for general relativity (by slowing down the clocks prior to launch) your GPS would work just fine because the error is the same for all satellites. The only issue would be that the clocks would not be synchronized with the ground, but that is not necessary for calculating your current position. Can anyone confirm this? – João Portela Nov 13 '12 at 11:27
## 4 Answers
The error margin for the position predicted by GPS is $15m$. So the GPS system must keep time with an accuracy of at least $15m/c$, which is roughly $50ns$.
So a $50ns$ error in timekeeping corresponds to a $15m$ error in distance prediction.
Hence, a $38\mu s$ error in timekeeping corresponds to an $11km$ error in distance prediction.
If we do not apply corrections using GR to GPS then a $38\mu s$ error in timekeeping is introduced PER DAY.
You can check it yourself by using following formulas
$T_1 = \frac{T_0}{\sqrt{1-\frac{v^2}{c^2}}}$ ...clock runs relatively slower if it is moving at high velocity.
$T_2 = \frac{T_0}{\sqrt{1-\frac{2GM}{c^2 R}}}$ ...clock runs relatively faster because of weak gravity.
$T_1$ = 7 microseconds/day
$T_2$ = 45 microseconds/day
$T_2 - T_1$ = 38 microseconds/day
use values given in this very good article.
And for equations refer to hyperphysics.
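As a rough numerical cross-check, here is a short sketch using generic textbook orbital parameters (not values taken from the article above):

```python
# Back-of-the-envelope estimate of the daily SR and GR clock offsets for a GPS satellite.
# The orbital parameters below are generic textbook values, not from the linked article.
c = 299_792_458.0         # speed of light [m/s]
GM = 3.986004e14          # Earth's gravitational parameter [m^3/s^2]
R_earth = 6.371e6         # Earth's radius [m]
r_orbit = 2.656e7         # GPS orbital radius [m]
day = 86_400.0            # seconds per day

v = (GM / r_orbit) ** 0.5                                  # orbital speed, about 3.9 km/s
sr = (v ** 2 / (2 * c ** 2)) * day                         # special relativity: satellite clock runs slow
gr = GM * (1 / R_earth - 1 / r_orbit) / c ** 2 * day       # general relativity: satellite clock runs fast

print(f"SR slowdown: {sr * 1e6:5.1f} microseconds/day")          # ~7
print(f"GR speedup : {gr * 1e6:5.1f} microseconds/day")          # ~46
print(f"net offset : {(gr - sr) * 1e6:5.1f} microseconds/day")   # ~38
print(f"range error per day ~ {(gr - sr) * c / 1000:.0f} km")    # ~11-12
```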
So Stephen Hawking is right! :-)
-
You simply discuss special relativity, whereas general relativity is in fact a very important effect for GPS systems. – Noldorin Dec 10 '10 at 20:58
@Noldorin: the main GR correction is included, see $T_2$ – Retarded Potential Mar 28 at 18:04
You can find out about this in great detail in the excellent summary over here: What the Global Positioning System Tells Us about Relativity?
In a nutshell:
1. General Relativity predicts that clocks go slower in a stronger gravitational field. That is, the clock aboard the GPS satellites "ticks" faster than the clock down on Earth.
2. Also, Special Relativity predicts that a moving clock is slower than the stationary one. So this effect will slow the clock compared to the one down on Earth.
As you see, in this case the two effects are acting in opposite direction but their magnitude is not equal, thus don't cancel each other out.
Now, you find out your position by comparing the time signal from a number of satellites. They are at different distances from you and it then takes a different time for the signal to reach you. Thus the signal "Satellite A says right now it is 22:31:12" will be different from what you'll hear from Satellite B at the same moment. From the time difference of the signals and knowing the satellites' positions (your GPS knows that) you can triangulate your position on the ground.
If one does not compensate for the different clock speeds, the distance measurement would be wrong and the position estimation could be hundreds or thousands of meters or more off, making the GPS system essentially useless.
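To make the triangulation idea concrete, here is a minimal one-dimensional sketch with invented numbers (real GPS solves the 3D problem with four or more satellites):

```python
# 1D time-difference-of-arrival toy example: two satellites at known positions emit
# synchronized, time-stamped signals; the receiver position follows from the
# difference of the arrival times. All numbers are invented for illustration.
c = 299_792_458.0          # m/s
d = 20_000_000.0           # distance between satellite A (at 0) and satellite B (at d) [m]
x_true = 7_300_000.0       # receiver position [m]

receiver_bias = 1e-3       # unknown receiver clock error [s]; it cancels in the difference
t_A = x_true / c + receiver_bias
t_B = (d - x_true) / c + receiver_bias

x_est = (c * (t_A - t_B) + d) / 2.0
print(f"estimated position: {x_est:,.1f} m (true: {x_true:,.1f} m)")

# An uncorrected drift in the *satellite* clocks does not cancel, hence the relativity corrections:
print(f"38 microseconds of satellite clock drift ~ {38e-6 * c / 1000:.1f} km of range error")
```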
-
The effect of gravitational time dilation can even be measured if you go from the surface of the earth to an orbit around the earth. Therefore, as GPS satellites measure the time their messages take to reach you and come back, it is important to account for the real time that the signal takes to reach the target.
-
GPS signals do not return to the satellite, they only go to the receiver AFAIK... – Thomas O Nov 18 '10 at 13:53
But the main point still holds, and it is that more time passes on Satellite's clock than your clock back on earth, with respect to either one of you. – Cem Nov 18 '10 at 13:59
Interestingly, general relativity is not used per se in calculations for GPS systems. Rather, a nice little trick involving special relativity (applying a series of Lorentz transformations in infinitesimal steps) is used. This turns out to be sufficiently accurate and a lot easier computationally. – Noldorin Nov 18 '10 at 14:22
– endolith Nov 18 '10 at 15:16
@endolith : ... if you bring an atomic clock with you ! – Frédéric Grosshans Nov 18 '10 at 18:14
After dealing with GPS algorithms for a significant part of my lifetime, I'd say it in one word: GPS is a precise device. It is really state of the art to determine location from timing (aka DTOA, differential time of arrival), catching electromagnetic waves which are so damn fast.
That has to account for many subtle effects which might be insignificant in everyday life - such as atmospheric disturbances, relativistic effects, ridiculously hard electronics design issues, etc.
-
http://www.reference.com/browse/Position+Operator
# Position operator
In quantum mechanics, the position operator corresponds to the position observable of a particle. Consider, for example, the case of a spinless particle moving on a line. The state space for such a particle is $L^2(\mathbf{R})$, the Hilbert space of complex-valued and square-integrable (with respect to the Lebesgue measure) functions on the real line. The position operator, $Q$, is then defined by
$Q(\psi(x)) = x \cdot \psi(x)$
with domain
$D(Q) = \{ \psi \in L^2(\mathbf{R}) \,|\, Q\psi \in L^2(\mathbf{R}) \}.$
Since all continuous functions with compact support lie in $D(Q)$, $Q$ is densely defined. $Q$, being simply multiplication by $x$, is a self-adjoint operator, thus satisfying the requirement of a quantum mechanical observable. Immediately from the definition we can deduce that the spectrum consists of the entire real line and that $Q$ has purely continuous spectrum, therefore no eigenvalues. The three dimensional case is defined analogously. We shall keep the one-dimensional assumption in the following discussion.
## Measurement
As with any quantum mechanical observable, in order to discuss measurement, we need to calculate the spectral resolution of Q:
$Q = \int \lambda \, d\Omega_Q(\lambda).$
Since $Q$ is just multiplication by $x$, its spectral resolution is simple. For a Borel subset $B$ of the real line, let $\chi_B$ denote the indicator function of $B$. We see that the projection-valued measure $\Omega_Q$ is given by
$\Omega_Q(B)\psi = \chi_B \cdot \psi ,$
i.e. $\Omega_Q$ is multiplication by the indicator function of $B$. Therefore, if the system is prepared in state $\psi$, then the probability of the measured position of the particle being in a Borel set $B$ is
$\|\Omega_Q(B)\psi\|^2 = \|\chi_B \cdot \psi\|^2 = \int_B |\psi|^2 \, d\mu ,$
where $\mu$ is the Lebesgue measure. After the measurement, the wave function collapses to $\frac{\Omega_Q(B)\psi}{\|\Omega_Q(B)\psi\|}$, where $\|\cdot\|$ is the Hilbert space norm on $L^2(\mathbf{R})$.
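As a concrete numerical illustration of the last formula (the Gaussian below is just an arbitrary example state, and the integration is a plain Riemann sum):

```python
# Probability of finding the particle in B = [0, 1] for a normalized Gaussian wave
# packet, i.e. the integral of |psi|^2 over B. The state is an arbitrary example.
import numpy as np

dx = 1e-4
x = np.arange(-10.0, 10.0 + dx, dx)
psi = (1.0 / np.pi) ** 0.25 * np.exp(-x ** 2 / 2.0)   # normalized in L^2(R)

density = np.abs(psi) ** 2
norm = density.sum() * dx                              # should be ~1
in_B = (x >= 0.0) & (x <= 1.0)
prob_B = density[in_B].sum() * dx                      # ~0.4214, i.e. erf(1)/2

print("total norm          :", round(norm, 6))
print("P(position in [0,1]):", round(prob_B, 4))
```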
## Unitary equivalence with momentum operator
For a particle on a line, the momentum operator P is defined by
$P\psi = -i\hbar \frac{\partial}{\partial x} \psi$
usually written in bra-ket notation as:
$\langle x | \hat{p} | \psi \rangle = -i\hbar \frac{\partial}{\partial x} \psi(x)$
with appropriate domain. P and Q are unitarily equivalent, with the unitary operator being given explicitly by the Fourier transform. Thus they have the same spectrum. In physical language, P acting on momentum space wave functions is the same as Q acting on position space wave functions (under the image of Fourier transform).
http://unapologetic.wordpress.com/2011/07/16/the-exterior-derivative-is-a-derivation/?like=1&source=post_flair&_wpnonce=c1c9c39881 | # The Unapologetic Mathematician
## The Exterior Derivative is a Derivation
To further make our case that the exterior derivative deserves its name, I say it’s a derivation of the algebra $\Omega(M)$. But since it takes $k$-forms and sends them to $k+1$-forms, it has degree one instead of zero like the Lie derivative. As a consequence, the Leibniz rule looks a little different. If $\alpha$ is a $k$-form and $\beta$ is an $l$-form, I say that:
$\displaystyle d(\alpha\wedge\beta)=(d\alpha)\wedge\beta+(-1)^k\alpha\wedge(d\beta)$
This is because of a general rule of thumb that when we move objects of degree $p$ and $q$ past each other we pick up a sign of $(-1)^{pq}$.
Anyway, the linearity property of a derivation is again straightforward, and it’s the Leibniz rule that we need to verify. And again it suffices to show that
$\displaystyle d(\alpha_1\wedge\dots\wedge\alpha_k)=\sum\limits_{i=1}^k(-1)^{i-1}\alpha_1\wedge\dots\wedge(d\alpha_i)\wedge\dots\wedge\alpha_k$
If we plug this into both sides of the Leibniz identity, it’s obviously true. And then it suffices to show that we can peel off a single $1$-form from the front of the list. That is, we can just show that the Leibniz identity holds in the case where $\alpha$ is a $1$-form and bootstrap it from there.
So here’s the thing: this is a huge, tedious calculation. I had this thing worked out most of the way; it was already five times as long as this post you see here, and the last steps would make it even more complicated. So I’m just going to assert that if you let $\alpha$ be a $1$-form and $\beta$ be an $l$-form, and if you expand out both sides of the Leibniz rule all the way, you’ll see that they’re the same. To make it up to you, I promise that we can come back to this later once we have a simpler expression for the exterior derivative and show that it works then.
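For what it's worth, here is the check in the simplest nontrivial special case (on $\mathbb{R}^3$, with $\alpha=a\,dx$ and $\beta=b\,dy$ for smooth functions $a$ and $b$, so $k=1$); this is only an illustration, not the deferred general argument. On one side,

$\displaystyle d(\alpha\wedge\beta)=d(ab\,dx\wedge dy)=\partial_z(ab)\,dz\wedge dx\wedge dy=(a_zb+ab_z)\,dx\wedge dy\wedge dz$

while on the other, dropping the terms with repeated differentials,

$\displaystyle (d\alpha)\wedge\beta+(-1)^1\alpha\wedge(d\beta)=a_zb\,dz\wedge dx\wedge dy-ab_z\,dx\wedge dz\wedge dy=(a_zb+ab_z)\,dx\wedge dy\wedge dz$

so the two sides agree, with the sign $(-1)^k$ doing exactly the work described above.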
Posted by John Armstrong | Differential Topology, Topology
## 4 Comments »
1. [...] is that for all exterior forms . This is only slightly less messy to prove than the fact that is a derivation. But since it’s so extremely important, we soldier onward! If is a -form we [...]
Pingback by | July 19, 2011 | Reply
2. After reading these insightful “blaths”, I am wondering if you are going to write about complex manifolds too.
Thanks for great articles.
Comment by RK | August 6, 2011 | Reply
3. [...] this is clearer if we write it in terms of differential forms; since the exterior derivative is a derivation we can [...]
Pingback by | January 12, 2012 | Reply
4. [...] Armstrong: The algebra of differential forms, Pulling back forms, The Lie derivative on forms, The exterior derivative is a derivative, The exterior derivative is nilpotent, De Rham Cohomology, Pullbacks on Cohomology, De Rham [...]
Pingback by | August 21, 2012 | Reply
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 19, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9337644577026367, "perplexity_flag": "head"} |
http://alanrendall.wordpress.com/2012/02/25/entrainment-by-oscillations/ | Hydrobates
A mathematician thinks aloud
Entrainment by oscillations
In the book of Goldbeter which I have mentioned in several recent posts a concept which occurs repeatedly is that of entrainment. While looking for some more information about this topic I found a paper of Russo, di Bernardo and Sontag (PloS Computational Biology 4, e1000739) which gives an insightful treatment of the subject. The basic idea is to consider two systems which are coupled in some way and to consider the influence of oscillations in one system on the behaviour of the other. It is easy to see how this might be translated into a problem expressed in terms of dynamical systems. A classical example related to this is contained in a story about Christiaan Huygens who was, among other things, the inventor of the pendulum clock in the mid 17th century. Apparently he did not construct clocks himself but had them made by others according to his plans. The well-known story is that he noticed that when two pendulum clocks were placed next to each other the phase of their oscillations became synchronized with, say, one always at the leftmost point of its swing when the other was at the rightmost. Another example is that of the circadian rhythm. There is a 24 hour rhythm in our body and it is interesting to know whether it comes from an intrinsic oscillator or not. Experiments with subjects isolated from the usual rhythm of day and night show that there is an intrinsic oscillator but that its period is closer to 25 hours. Under normal circumstances its period is brought to 24 hours due to the cycle of day and night by entrainment.
The particular mathematical set-up considered in the paper of Russo et. al. is the following. Consider an autonomous dynamical system containing some parameters. Now replace one or more of those parameters by functions of time with period $T$. If solutions of the original system have a suitable tendency to converge to a stationary solution for a given choice of the parameters then solutions of the resulting non-autonomous system converge to periodic solutions of period $T$. In the papers there are nice plots of numerical simulations which give a striking picture of this behaviour. The central result of the paper is a theorem which guarantees this type of behaviour under certain hypotheses. As pointed out in the paper verifying these hypotheses has some similarity to finding a Lyapunov function for an autonomous system. The positive side is that if it can be done it is possible to get strong conclusions. The negative side is that verifying the hypotheses is generally a matter of trial and error. There is no algorithm available for doing that.
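To see this behaviour in the simplest possible setting, here is a toy simulation of a contracting scalar system with one periodically forced parameter (forward Euler integration, arbitrary numbers):

```python
# Entrainment of a contracting scalar system x' = -k x + sin(2*pi*t/T):
# trajectories from different initial conditions converge to one T-periodic solution.
import numpy as np

k, T, dt = 2.0, 1.0, 1e-4
steps_per_period = int(round(T / dt))
t = 0.0
x = np.array([5.0, -3.0])                    # two different initial conditions

for _ in range(60 * steps_per_period):       # integrate over 60 forcing periods
    x = x + dt * (-k * x + np.sin(2 * np.pi * t / T))
    t += dt
x_now = x.copy()

for _ in range(steps_per_period):            # one further period
    x = x + dt * (-k * x + np.sin(2 * np.pi * t / T))
    t += dt

print("trajectories after 60 periods :", x_now)   # nearly identical to each other
print("same trajectories a period on :", x)       # nearly unchanged -> period-T response
```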
The criterion is dependent on the choice of a matrix norm. This is used to define a quantity called the matrix measure $\mu(A)$ of a matrix $A$. The criterion is that the Jacobian of the function defining the dynamical system should have a matrix measure which is bounded above by a negative constant. In that case the system is said to be infinitesimally contracting. The matrix measure is defined by a limiting procedure, $\mu(A)=\lim_{h\to 0}\frac{1}{h}(\|I+hA\|-1)$, but for particular choices of the matrix norm it is possible to calculate in a purely algebraic way. I have no intuitive feeling for what this definition means.
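For concreteness, here is a small script with the standard closed-form expressions for the matrix measure induced by the 1-, 2- and infinity-norms, together with the limiting definition for comparison (the example matrix is arbitrary):

```python
# Matrix measures mu(A) for the 1-, 2- and infinity-norms (closed forms), plus a
# finite-difference check of the limit definition. The example matrix is arbitrary.
import numpy as np

A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
n = A.shape[0]

mu_1   = max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j) for j in range(n))
mu_inf = max(A[i, i] + sum(abs(A[i, j]) for j in range(n) if j != i) for i in range(n))
mu_2   = max(np.linalg.eigvalsh((A + A.T) / 2.0))   # largest eigenvalue of the symmetric part

h = 1e-8
mu_2_limit = (np.linalg.norm(np.eye(n) + h * A, 2) - 1.0) / h   # limit definition, 2-norm

print("mu_1   =", mu_1)                       # -1.0
print("mu_inf =", mu_inf)                     # -1.5
print("mu_2   =", round(mu_2, 3))             # about -1.6
print("mu_2 (limit definition) =", round(mu_2_limit, 3))
# All negative, so this Jacobian would certify an infinitesimally contracting system.
```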
This entry was posted on February 25, 2012 at 1:54 pm and is filed under dynamical systems, mathematical biology.
2 Responses to “Entrainment by oscillations”
1. Juliette Hell Says:
February 25, 2012 at 8:20 pm | Reply
Hi!
I did not look up in the paper you quote, but with the few details you wrote here, I have the following intuition…
This matrix measure is the derivative of the function which maps a matrix on its matrix norm, taken at the identity in direction of A. The condition says the slope in this direction is negative, meaning going this way, you’ll find contractions. Furthermore, the flow near an equilibrium for small t has leading order id+tA, i.e. it runs in the direction mentioned above.
My intuition wisely ignored the non-autonomous thing, and why you require the uniform negativity …
Another thing is: if you think of A diagonal, µ(A) negative just means that it has only stable eigenvalues. (at least with the usual matrix norm)
And last but not least, I had to think about the experiment Bernold showed in class. The one with the metronomes sitting on a board, the board on two rolling cans, and (anti)synchronization taking place or not, depending on the original tuning of the metronomes. I love it!
2. hydrobates Says:
February 26, 2012 at 9:54 am | Reply
Hi Juliette,
Thanks for your comment. The class demonstration you mention sounds very interesting. I should talk to Bernold about this subject when I see him again.
http://physics.stackexchange.com/questions/51204/photoionization-equation | # Photoionization equation
I really need help understanding this equation. I am new to quantum mechanics and I can't understand the math, so I need every single symbol to be explained or given a value if it is a constant (let's say X is hydrogen).
-
What is the formula supposed to describe? What are the quantities involved? I can't make any sense about it without any additional information. – Ondřej Černotík Jan 14 at 17:13
## 1 Answer
I don't really think you need to understand a lot of quantum mechanics to make sense of this. This is what I make of it:
$\phi_\infty(\lambda)$: The light intensity as a function of wavelength $\lambda$. Probably refers to the solar light before reaching earth's atmosphere.
$\exp\left[-\sum_m\sigma_m^{a}(\lambda)\int_z^\infty n_m(s)\,ds\right]$: Factor representing the transmission of light through the atmosphere depending on wavelength. This is basically the Beer-Lambert Law with a summation over all absorbing species in the atmosphere and an integral along the light path.
$\sigma_m^{a}(\lambda)$: The absorption cross section of species $m$ at wavelength $\lambda$.
$n_m(s)$ : The number density (i.e. molecules per unit of volume) of species $m$ at point $s$ along the light path.
$\sigma_X^{(i)}(\lambda)$: Probably the photoionization cross section for $X$ at wavelength $\lambda$.
$n_n(X)$: Probably the number density of $X$.
The only thing that I find strange is that there is a sum over wavelengths. I would have expected an integral instead. But then I'm not used to the notation and conventions for describing photoionization.
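To make the structure of the formula concrete, here is a toy numerical sketch of the rate it describes; every number below (fluxes, cross sections, column density, species density, the three wavelength bins) is made up purely for illustration:

```python
import numpy as np

# J = sum_lambda phi(lambda) * exp(-sum_m sigma_m(lambda) * column_m) * sigma_X(lambda) * n_X
phi       = np.array([1e14, 5e13, 2e13])     # photons m^-2 s^-1 in three bins (made up)
sigma_abs = np.array([2e-22, 5e-22, 8e-22])  # total absorption cross sections, m^2 (made up)
column    = 1e20                             # overhead column density, m^-2 (made up)
sigma_ion = np.array([1e-22, 3e-22, 6e-22])  # photoionization cross sections of X, m^2 (made up)
n_X       = 1e16                             # number density of X, m^-3 (made up)

transmission = np.exp(-sigma_abs * column)   # Beer-Lambert attenuation at each wavelength
J = np.sum(phi * transmission * sigma_ion) * n_X
print(transmission, J)                       # ionizations per m^3 per second
```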
http://mathoverflow.net/questions/103810?sort=votes | ## Reference for subsemigroups of $\mathbb{N}^n$
A well known result about the natural numbers $\mathbb{N}$ says that for any finite subset $A \subset \mathbb{N}$ there exists $R \ge 0$ such that if $n$ is in the subgroup of $\mathbb{Z}$ generated by $A$ and if $n \ge R$ then $n$ is in the semigroup generated by $A$.
Are there any references to a higher dimensional version of this result?
The version I want goes like this.
• Take a finite subset $U$ of $\mathbb{N}^n$. Let $C_U$ be the smallest closed cone in $\mathbb{R}^n$ containing $U$, i.e. all non-negative real linear combinations of $U$. Let $G_U$ be the subgroup of $\mathbb{N}^n$ generated by $U$, i.e. all integer linear combinations. Let $S_U$ be the subsemigroup generated by $U$, i.e. all non-negative integer linear combinations. Then there exists $R>0$ such that for every $v \in G_U$, if the ball around $v$ of radius $R$ is contained in $C_U$ then $v \in S_U$.
I don't have any literature in front of me. The right buzzword is affine semigroups because that is what people who study them call them. – Benjamin Steinberg Aug 2 at 19:58
emis.de/journals/SC/2002/6/pdf/… seems to have some info on affine semigroups, the group they span and the polyhedral cone they span. – Benjamin Steinberg Aug 2 at 20:10
A higher dimensional analogue of the result you mention in dimension 1 can be found as Exercise 7.15 of Miller and Sturmfels combinatorial commutative algebra book but it is not quite what you want, I think. Let $N=C_U\cap G_U$. Then $N$ is a finitely generated semigroup and there exists according to exercise 7.15 an element $a$ of $S_U$ with $a+N\subseteq S_U$. – Benjamin Steinberg Aug 2 at 20:42
That would do it. Take $R > |a|$. If $B_R(v) \subset C_U$ then $v-a \in C_U \cap G_U = N$ so $v \in a+N \subset S_U$. Great! – Lee Mosher Aug 2 at 21:07
Glad to be of help. Perhaps we should copy this into the answer box so that the software knows it is answered? – Benjamin Steinberg Aug 3 at 0:38
## 1 Answer
This combines comments of myself and Lee Mosher.
Exercise 7.15 of Miller and Sturmfels combinatorial commutative algebra book proves the following. Let $N=C_U\cap G_U$. Then $N$ is a finitely generated semigroup and there exists according to exercise 7.15 an element $a$ of $S_U$ with $a+N\subseteq S_U$. Now take $R>|a|$. If $B_R(v)\subseteq C_U$ then $v−a\in C_U\cap G_U=N$ so $v\in a+N\subseteq S_U$.
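As a quick illustration of the one-dimensional fact quoted in the question, here is a small sketch (a simple dynamic-programming check, with $A=\{6,10,15\}$ chosen arbitrarily) that lists the finitely many non-representable elements of the generated group, so any $R$ beyond the largest of them works:

```python
from math import gcd
from functools import reduce

def representable(n, A):
    """Is n a non-negative integer combination of the elements of A? (simple DP)"""
    reach = [False] * (n + 1)
    reach[0] = True
    for m in range(1, n + 1):
        reach[m] = any(m >= a and reach[m - a] for a in A)
    return reach[n]

A = [6, 10, 15]                  # an arbitrary example with gcd 1
g = reduce(gcd, A)
gaps = [m for m in range(1, 200) if m % g == 0 and not representable(m, A)]
print(gaps)                      # the largest gap is 29, so R = 30 works for this A
```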
http://physics.stackexchange.com/questions/252/the-many-body-problem?answertab=oldest | # The Many Body problem
(This is a simple question, with likely a rather involved answer.)
What are the primary obstacles to solve the many-body problem in quantum mechanics?
Specifically, if we have a Hamiltonian for a number of interdependent particles, why is solving for the time-independent wavefunction so hard? Is the problem essentially just mathematical, or are there physical issues too? The many-body problem of Newtonian mechanics (for example gravitational bodies) seems to be very difficult, with no general closed-form solution for $n \ge 3$. Is the quantum mechanical case easier or more difficult, or both in some respects?
In relation to this, what sort of approximations/approaches are typically used to solve a system composed of many bodies in arbitrary states? (We do of course have perturbation theory which is sometimes useful, though not in the case of high coupling/interaction. Density functional theory, for example, applies well to solids, but what about arbitrary systems?)
Finally, is it theoretically and/or practically impossible to simulate high-order phenomena such as chemical reactions and biological functions precisely using Schrödinger's quantum mechanics, or even QFT (quantum field theory)?
(Note: this question is largely intended for seeding, though I'm curious about answers beyond what I already know too!)
Why do you restrict it to quantum problems ? – Cedric H. Nov 4 '10 at 23:19
You could say restrict, but in many ways it's generalising! In any case, the problem is rather different for quantum mechanics, and certainly more interesting I find. – Noldorin Nov 4 '10 at 23:30
## 5 Answers
First let me start by saying that the $N$-body problem in classical mechanics is not computationally difficult to approximate a solution to. It is simply that in general there is not a closed form analytic solution, which is why we must rely on numerics.
For quantum mechanics, however, the problem is much harder. This is because in quantum mechanics, the state space required to represent the system must be able to represent all possible superpositions of particles. While the number of orthogonal states is exponential in the size of the system, each has an associated phase and amplitude, which even with the most coarse-grained discretization will lead to a double exponential in the number of possible states required to represent it. Thus in quantum systems you need $O(2^{2^n})$ variables to reasonably approximate any possible state of the system, versus only $O(2^n)$ required to represent an analogous classical system. Since we can represent $2^m$ states with $m$ bits, to represent the classical state space we need only $O(n)$ bits, versus $O(2^n)$ bits required to directly represent the quantum system. This is why it is believed to be impossible to simulate a quantum computer in polynomial time, but Newtonian physics can be simulated in polynomial time.
Calculating ground states is even harder than simulating the systems. Indeed, in general finding the ground state of a classical Hamiltonian is NP-complete, while finding the ground state of a quantum Hamiltonian is QMA-complete.
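To see the blow-up concretely, here is a minimal brute-force exact-diagonalization sketch for a small spin chain (a transverse-field Ising model, chosen only as an illustration); the Hamiltonian is a dense $2^n \times 2^n$ matrix, so every added spin doubles the dimension:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_on_site(op, site, n):
    """Embed a single-site operator into the full 2^n-dimensional Hilbert space."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

def ising_hamiltonian(n, J=1.0, h=0.5):
    """Dense transverse-field Ising Hamiltonian on an open chain of n spins."""
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        H -= J * op_on_site(sz, i, n) @ op_on_site(sz, i + 1, n)
    for i in range(n):
        H -= h * op_on_site(sx, i, n)
    return H

for n in (4, 6, 8, 10):
    H = ising_hamiltonian(n)
    print(n, H.shape, np.linalg.eigvalsh(H)[0])  # matrix dimension doubles with every added spin
```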
An informative answer. Interestingly, I've heard that (universal) qunatum computers should be able to simulate other quantum computers in polynomial times, just that classical computers can't. – Noldorin Nov 5 '10 at 16:14
Yes, that is true. Calculating ground states seems to be beyond their reach though. – Joe Fitzsimons Nov 5 '10 at 16:39
Ah I see. I suppose calculating ground states and performing complete quantum simulations of systems typically cover different ranges of application, however, so it's not all bad news. Anyway, cheers for the detail, you seem to be very knowledgeable on the subject; the answer is yours. – Noldorin Nov 9 '10 at 3:22
@Noldorin: Thanks. I only know this stuff because I have spent quite a while working in this exact field. By the way, ground states are to some extent less relevant because the systems for which it is computationally hard to calculate the ground state (at least on a QC) don't cool efficiently either. – Joe Fitzsimons Nov 9 '10 at 3:50
The answer is fairly simple -- classical N-body problem has its solution in $6N$ 1D functions of time, quantum N-body problem has its solution in one complex function, but $3N$-dimensional (not counting spin and similar stuff). Then, there is no wonder why one can find analytical solutions only for trivial problems or at least make $N$ huge and escape into statistical mechanics. And yes, this is only the problem of mathematical complexity here.
From a modelling point of view exact solving also seems hopeless, with the memory complexity alone being $\mathcal{O}(K^{3N})$.
For the rest of the answer I will restrict myself to quantum chemistry/material science, since this is the most exploited region -- this means we are now talking about atoms. First of all, atoms have small and very heavy nuclei, which thus can be treated as almost stationary sources of electrostatic potential; this reduces the problem to electrons only (Born-Oppenheimer approx.). Now, there are two main routes to follow: Hartree-Fock or Density Functional Theory.
In HF, one roughly represents the many-body wavefunction as a combination of some standard base functions -- then one can optimize their contributions to get minimal energy, using an extended Hamiltonian to adjust for the effects of such an approximation. In DFT, one, encouraged by the Hohenberg-Kohn theorems, reduces the many-body wavefunction to the electron probability density field (3-dimensional), and accordingly the Schrödinger equation terms into density functionals (and there approximations are applied). Next, it can be either solved as this 3D field or in the Kohn-Sham way, which is pretty much Hartree-Fock for DFT (one represents the density with base functions). People sometimes make something analytical here, but those are mostly theories made to support computational approaches.
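For a feel of what these two routes look like in practice, here is a minimal sketch assuming the PySCF package is available; the molecule, basis set and exchange-correlation functional are arbitrary choices made purely for illustration:

```python
from pyscf import gto, scf, dft

# A small molecule in a minimal basis (illustrative geometry and basis only)
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="sto-3g")

hf = scf.RHF(mol).run()    # Hartree-Fock: optimize orbital coefficients in the chosen basis
ks = dft.RKS(mol)          # Kohn-Sham DFT on the same molecule
ks.xc = "pbe"              # an (approximate) density functional
ks.run()

print(hf.e_tot, ks.e_tot)  # total electronic energies from the two approximations
```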
And finally your last question: those approximate methods (but still ab initio -- there are no experimental parameters there) do predict things like chemical reactions, various spectra and other measurable quantities; accuracy is problematic though. Biology is mostly out of reach because of a time scale; at least there are hybrid methods able to mix for instance the classical simulation of protein motion with quantum simulation of the binding site when it is squeezed enough so something quantum like enzymatic reaction can take place.
Looks like a pretty good answer, I'll read it properly tomorrow. In any case, it's important to make clearer that although the *properties of the solution* are "fairly simple", the solutions themselves are certainly not! – Noldorin Nov 5 '10 at 1:37
Note that H-F, DFT are the main approximation techniques to the quantum many-body problem, though neither are "well-controlled approximations" in the sense that they are used as the first term in a convergent expansion to the actual solution. And I'm not sure what level of computational complexity they reduce the problem to, though that's an important question. – j.c. Nov 5 '10 at 14:42
@j.c. Those are approximated theories rather than approximated ways of solving equations. Reduction of complexity is obvious -- 3N-dim function to a vector of parameters in case of HF or to 3-dim field in case of DFT. – mbq♦ Nov 5 '10 at 17:23
In addition to what mbq said, it might be interesting to know that things get really funny in relativistic quantum mechanics, that is using the Klein-Gordon and the Dirac equation (but without the "second" quantization of Quantum Field Theory). There, there's one wave function per particle sort, so no matter how many particles of one kind you consider, the only thing that changes is the field itself. You only get more degrees of freedom by actually adding another kind of particles. Of course, since Fermions require Spinors, you may end up with other computational issues then...
The problem here, of course, is that the field modes are continuous variables. – Joe Fitzsimons Nov 5 '10 at 6:47
By which I mean the problem with simulating the system, not a problem with your answer. – Joe Fitzsimons Nov 5 '10 at 7:04
Yeah, I was curious as to whether QFT actually makes things easier in some respect. It's a tricky scenario. – Noldorin Nov 5 '10 at 16:15
It definitely can only make things harder as you can encode a discrete system in the CV but not necessarily the other way around. – Joe Fitzsimons Nov 5 '10 at 16:38
Noldorin: QFT would probably make things even more complicated, I was only wondering whether the unquantized relativistic QM equations would yield an advantage over the non-QFT Schrödinger equation, but as @Joe mentions, this may not be the case... – Tobias Kienzler Nov 8 '10 at 8:00
The many-body equation is immensely difficult to study, both classically and quantum-mechanically. The late John Pople, of Northwestern University, won a Nobel Prize in 1998 for his numerical models of wave functions of atoms, developing a theoretical basis for their chemical properties. Here is a link:
http://nobelprize.org/nobel_prizes/chemistry/laureates/1998/
Thanks for the info. I may just have to read some of Pople's papers some day, out of curiosity. :) – Noldorin Nov 7 '10 at 1:14
On a more abstract level, the problem is linearity versus non-linearity. It's straightforward to solve a number of linear equations, and they always yield an analytic answer. However, non-linear equations produce chaotic behaviour, which cannot be generalised in most cases.
As an example, the 3-body Newtonian problem involves $\binom{3}{2} = 3$ non-linear equations; the nonlinearity comes from the $1/r^2$ relationship. And 3 non-linear relations are the minimum requirement for a chaotic system.
Similarly, quantum mechanics involves a large number of non-linear equations - given a set of 3 electrons, each will repel the others via a non-linear relation, and with even more complexity than the Newtonian problem where all things are known and determinable.
So, the simple answer is that the problem is mathematics that can't be solved for the general case, which result from the physics, and that the quantum case is indeed worse than the classical one.
http://mathoverflow.net/questions/100573/extending-psc-metrics | ## Extending psc metrics
Let $S^1$ denote the circle with the non-trivial spin structure, i.e. $0\neq[S^1]\in\Omega^{Spin}_1$. Considering characteristic numbers it is easy to see that $S^1\times\mathbb{H}P^3$ is spin null bordant, say via $W$. (If you wish you can take $W$ simply connected.) Now let $g$ be the standard metric of positive scalar curvature on $\mathbb{H}P^3$, then $dt^2\times g$ is a metric of positive scalar curvature on $\partial W=S^1\times\mathbb{H}P^3$.
Is it possible to extend $dt^2\times g$ to a metric of positive scalar curvature on $W$?
A possible answer may use the 'obstruction groups' $R_n$ introduced by Stolz (in: Concordance classes of positive scalar curvature metrics). However, they are hardly computable. In addition, possible index obstructions vanish for dimensional reasons, as $\dim W\equiv6\mod{8}$.
Although I believe that this is a quite hard question I am grateful for any comments in advance.
It seems like you expect the answer to be negative, since you're looking for obstructions? – Dylan Thurston Jun 25 at 15:55
http://mathoverflow.net/questions/tagged/gaga | ## Tagged Questions
### algebraic de Rham cohomology of singular varieties
Hi, Is there a simple example of an (affine) algebraic variety $X$ over $\mathbb C$ where the $H^*_{dR}(X/\mathbb C) = H^*(\Omega^\bullet_{A/\mathbb C})$ differs from the singular …
### How do fibers of the functor Algebraic Varieties $\to$ Complex Analytic Spaces look like?
There's already a question (which got several interesting answers) asking about examples of the phenomenon of non (essential) injectivity of the functor $U:Alg\to AnEsp$, mapping e …
### Are complex varieties Kahler? - Algebraic, non-projective complex manifolds
Let $X/\mathbb{C}$ a nonsingular proper variety and $X_{an}$ it's associated analytic space. Is $X_{an}$ necessarily Kahler? Certainly we know this if $X$ is projective. A complex …
### What is the intuition behind the proof of the algebraic version of Cartan’s theorem A?
I am trying to understand the idea behind the proof of GAGA. A crucial step is the following: Theorem: Let $X=\mathbb{P}^r_{\mathbb{C}}$ (either as a variety or as an analytic spa …
### Algebraic de Rham cohomology vs. analytic de Rham cohomology
Let $X$ be a nice variety over $\mathbb{C}$, where nice probably means smooth and proper. I want to know: How can we show that the hypercohomology of the algebraic de Rham complex …
### GAGA and Chern classes
My question is as follows. Do the Chern classes as defined by Grothendieck for smooth projective varieties coincide with the Chern classes as defined with the aid of invariant pol …
### Topologically contractible algebraic varieties
From a post to The Jouanolou trick: Are all topologically trivial (contractible) complex algebraic varieties necessarily affine? Are there examples of those not birationally eq …
### Stein Manifolds and Affine Varieties
When is a Stein manifold a complex affine variety? I had thought that there was a theorem saying that a variety which is Stein and has finitely generated ring of regular functions …
http://math.stackexchange.com/questions/278864/a-family-of-functions | # A family of functions
Does there exist an infinite family of functions which satisfy
$|f^\prime(x)|=1$ and $f(1)=f(-1)=0$?
where a) $f\colon \mathbb{R}\to \mathbb{R}$ b) $f$ is a complex function defined on some open neighbourhood of the closed unit disc in the plane.
$|f'(x)| = 1$ everywhere? At one point? Where exactly? – Ayman Hourieh Jan 14 at 21:31
Yes, constantly as a complex function. – Drake Jan 14 at 21:34
Can you answer a)? – Jonas Meyer Jan 14 at 21:35
So you're interested if its true in case a) or in case b)? – JSchlather Jan 14 at 21:39
You should have an idea at least of what goes wrong with (a) by trying to sketch a picture. For (b), because of existence of "$f'(x)$" I guess you are talking about complex differentiability. For such $f$, $f'$ is also complex differentiable. Do you know which complex differentiable functions (on a connected open set) have constant modulus? – Jonas Meyer Jan 14 at 21:44
http://mathhelpforum.com/calculus/175880-disproving-existance-limit-infinity.html | # Thread:
1. ## Disproving the existence of a limit at infinity
Hello
I'm given the function x*sin(x), and asked whether it has a limit of some sort as x goes to negative infinity. In other words does it converge to a number or go off to positive infinity or negative infinity.
Based on the graph of the function my intuition is that it doesn't have a limit of any sort since it is constantly fluctuating between very large and very small values.
I think I've managed to prove that it doesn't converge to any number at negative infinity, because on any interval I can always supply an x1 = -pi/2 - n*pi for which the value is -x1 and an x2 = -(3/2)*pi - n*pi for which the value is x2, i.e. the function doesn't get closer to any single number.
But I'm having a lot of trouble with the infinite limits, proving that the function doesn't go off to positive or negative infinity...help would be much appreciated
2. Originally Posted by moses
Take the sequences $\{-n\pi\}\,,\,\left\{-(4n-1)\frac{\pi}{2}\right\}$ . We know that if $\lim\limits_{x\to -\infty}x\sin x$ exists, then it is
the same no matter how we choose to make $x\to -\infty$ , so do this through the above two seq's.
Tonio
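A quick numerical look at these two sequences (a minimal sketch) makes the point visible: along one of them the function stays at $0$, along the other it runs off to $-\infty$, so no single limit, finite or infinite, can exist:

```python
import numpy as np

n = np.arange(1, 8)
x1 = -n * np.pi                   # sin(x1) = 0, so x1*sin(x1) = 0 along this sequence
x2 = -(4 * n - 1) * np.pi / 2     # sin(x2) = 1, so x2*sin(x2) = x2 -> -infinity
print(x1 * np.sin(x1))            # ~0 up to rounding
print(x2 * np.sin(x2))            # -3*pi/2, -7*pi/2, ... decreasing without bound
```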
3. I'm afraid I haven't learned anything about sequences yet...is there another way?
4. Originally Posted by moses
I am truly puzzled by this question.
If you can do it for $x\to -\infty$ it is exactly the same for $x\to \infty$.
After all $(-\infty,0]~\&~[0,\infty)$ are 'copies' of one another as far as the sine function is concerned.
5. Originally Posted by Plato
It probably is a witty question: prove that $\lim\limits_{x\to -\infty}x\sin x=\lim\limits_{x\to\infty}x\sin x$ , since the
function $x\sin x$ is even...
Anyway, I can't see right now a straightforward way to prove what the OP wants without resorting to sequences.
Tonio
6. Okay, I see I phrased the question pretty unclearly
Basically what I meant was, how can I prove that the limit of xsinx as x goes to negative infinity is not positive infinity, and also that the limit of xsinx as x goes to negative infinity is not negative infinity?
http://mathhelpforum.com/calculus/174700-find-area-petal-sketch.html | # Thread:
1. ## Find the area of a petal sketch.
Find the area of a petal sketch.
1) r= 4 sin(2theta)
2) r= 4sin(theta)
3) Compare and contrast conjecture.
Please please please help me with this calculus problem. I am so lost in my class. step by step work would be greatly appreciated. The work does not have to be completely accurate, I just need the steps. Thank you in advance.
2. Originally Posted by mymony1027
help with #1 ... know how to sketch a polar graph? if not, you need to relearn how ... it's critical for determining area of a polar graph.
note that one petal gets formed for $r = 4\sin(2\theta)$ by values of $\theta$ , $0 \le \theta \le \dfrac{\pi}{2}$
using the form for area of polar curves ... $\displaystyle A = \int_{\theta_1}^{\theta_2} \frac{r^2}{2} \, d\theta$ ...
$\displaystyle A = \int_{0}^{\frac{\pi}{2}} 8\sin^2(2\theta) \, d\theta$
use the power reduction identity ...
$\sin^2(2\theta) = \dfrac{1 - \cos(4\theta)}{2}$ to help in the integration.
$\displaystyle A = 4\int_{0}^{\frac{\pi}{2}} 1 - \cos(4\theta) \, d\theta$
$4\left[\theta - \dfrac{\sin(4\theta)}{4}\right]_0^{\frac{\pi}{2}}$
$4\left[\dfrac{\pi}{2} - 0\right] = 2\pi$
o.k. you try #2.
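As a quick numerical cross-check of the $2\pi$ result for #1 (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# Area of one petal of r = 4 sin(2*theta): integrate r^2 / 2 over 0 <= theta <= pi/2
area, _ = quad(lambda t: 0.5 * (4 * np.sin(2 * t)) ** 2, 0, np.pi / 2)
print(area, 2 * np.pi)   # both ~ 6.2832
```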
http://math.stackexchange.com/questions/92781/total-variation-distance-is-often-too-strong-to-be-useful?answertab=votes | Total Variation distance is often too strong to be useful.
While reading lecture notes on Stein's method, there's one example that I cannot prove myself.
For iid random variables $X_1, ..., X_n$ where $\Pr(X_i=1) = \Pr(X_i=-1) = 1/2$, define $S_n=\frac{1}{\sqrt{n}}\sum_{i=1}^n X_i$. Then $S_n$ converges to normal distribution due to Central Limit Theorem. Still, $d_{TV}(S_n,Z)=1$ for all $n$ where $d_{TV}$ is the total variation distance and $Z\sim N(0,1)$.
It's an example of 'total variation distance is often too strong to be useful', but I don't know how to prove it. Any hint or suggestion?
HINT Try to show the more general fact that the total variation distance between a continuous distribution and a discrete distribution is $1$. – Srivatsan Dec 19 '11 at 18:39
1 Answer
What's the probability that $S_n$ is an integer multiple of $1/\sqrt{n}$?
Let $\mathcal{E}$ be the event that $S_n$ is an integer multiple of $1/\sqrt{n}$. This event corresponds to finitely many points, and finitely many points in a continuous distribution amount to zero. I guess this is what you meant, right? – Federico Magallanez Dec 19 '11 at 22:39
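A small numerical sketch of the same point: $S_n$ lives on a finite lattice to which the standard normal assigns probability $0$, so $d_{TV}(S_n,Z)=1$ for every $n$, even though the Kolmogorov (CDF) distance does go to $0$ as the CLT promises:

```python
import numpy as np
from math import comb, erf, sqrt

n = 100
k = np.arange(n + 1)
support = (2 * k - n) / np.sqrt(n)                      # the only values S_n can take
pmf = np.array([comb(n, int(j)) for j in k]) / 2.0 ** n

# Total variation: P(S_n in support) = 1 while P(Z in support) = 0, so d_TV = 1.
print("P(S_n on its lattice) =", pmf.sum())

# The Kolmogorov distance between the CDFs, by contrast, is small (~0.04 here) and shrinks with n.
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
cdf = np.cumsum(pmf)
phi = np.array([Phi(x) for x in support])
kolmogorov = max(np.max(np.abs(cdf - phi)),
                 np.max(np.abs(np.concatenate(([0.0], cdf[:-1])) - phi)))
print("sup |F_n - Phi| =", kolmogorov)
```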
http://mathoverflow.net/questions/119348/convexity-of-a-certain-set-of-covariance-matrices | ## Convexity of a Certain Set of Covariance Matrices
Hello,
My question is about a certain set of matrices being convex or not. I'll start with some preliminaries in order to define myself properly. Let $X_1,U,X_2$ be three zero-mean Gaussian random vectors (RVs) of dimension $N\times 1$, that admit the Markov relation $X_1-U-X_2$. Let us use the notations: $\Sigma_i=\mathbb{E}[X_iX_i^T]$, for $i=1,2$, $\Sigma_U=\mathbb{E}[UU^T]$, $\Sigma_{iU}=\mathbb{E}[X_iU^T]$, for $i=1,2$ and finally $\Sigma_{12}=\mathbb{E}[X_1X_2^T]$. The Markov relation is equivalent to the fact that the auto- and cross- covariance matrices of the RV satisfy: $\Sigma_{12}=\Sigma_{1U}\Sigma_U^{-1}\Sigma_{U2}$.
Let us define the matrix:
$\Sigma=\left( \begin{array}{ccc} \Sigma_1 & \Sigma_{1U} & A\\ \Sigma_{1U}^T & \Sigma_U & \Sigma_{2U}^T\\ A^T & \Sigma_{2U} & \Sigma_{2}\end{array}\right)$.
where $A=\Sigma_{1U}\Sigma_{U}^{-1}\Sigma_{U2}$. My question concerns the set of all legitimate matrices $\Sigma$: is this set convex? How can one check this?
Thank you all in advance,
Best regards!
Obviously not. The data $\Sigma_{U,1,2,1U, U2}$ are independent apart from inequalities that leave room for an open set. If the set were convex, $A$ would be a linear function of these data, but it is not. A natural question is: what is the convex hull of the set of such matrices? – Denis Serre Jan 19 at 22:24
I'm sorry, but I didn't understand your answer fully. When you say that the data are independent apart from some inequalities do you mean the following: $\Sigma_i-\Sigma_{iU}\Sigma_U^{-1}\Sigma_{Ui}\geq 0$, and the non-negativity constraints, i.e., $\Sigma_i\geq 0$ and $\Sigma_U\geq 0$, where $i=1,2$. Are there any additional relations between the data matrices? What do you mean by "leaves room for an open set"? And finally, why must $A$ be linear if the set is to be convex? There are convex sets that are not linear. I'd appreciate it if you could clarify this for me. Thank you in advance! – Ziv Goldfeld Jan 20 at 10:03
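A minimal numerical check of the non-convexity, in the scalar case $N=1$ with unit variances (all numbers are purely illustrative): averaging two legitimate matrices breaks the Markov constraint on the corner block $A$.

```python
import numpy as np

def markov_cov(rho1, rho2):
    """Covariance of (X1, U, X2) with unit variances in the scalar case N = 1,
    enforcing the Markov relation Sigma_12 = rho1 * rho2."""
    return np.array([[1.0,         rho1, rho1 * rho2],
                     [rho1,        1.0,  rho2       ],
                     [rho1 * rho2, rho2, 1.0        ]])

S1 = markov_cov(0.8, 0.8)   # A = 0.64
S2 = markov_cov(0.0, 0.0)   # A = 0.0
M  = 0.5 * (S1 + S2)        # midpoint of two legitimate matrices

# For M to be legitimate we would need M[0,2] == M[0,1] * M[1,2] / M[1,1]:
print(M[0, 2], M[0, 1] * M[1, 2] / M[1, 1])   # 0.32 vs 0.16, so the set is not convex
```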
http://math.stackexchange.com/questions/127723/convex-combination-in-compact-convex-sets/127762 | # Convex combination in compact convex sets.
Let $K\subset\mathbb{R}^n$ be compact. If $K$ is convex, then how does one prove that any point of $K$ is a convex combination of one or two extremal points of $K$?
Intuitively, for any closed ball that is true.
It won't be "one or two". Think of a simplex. In ${\mathbb R}^n$ you need $n+1$ points. – Robert Israel Apr 3 '12 at 21:08
Got it. I was thinking implicitly of the case when the boundary of K coincides with its set of extremal points. For the ball in $\mathbb{R}^n$ this makes sense. But not for the simplex. Thank you. – Elias Apr 3 '12 at 21:13
## 2 Answers
The fact that in ${\mathbb R}^n$ each point of a compact convex set is a convex combination of at most $n+1$ extreme points is a theorem of Carathéodory. You can prove this by induction on $n$.
The case $n=0$ is easy. For the induction step, if $K$ is a compact convex set in ${\mathbb R}^{n+1}$ and $x \in K$, choosing some extreme point $y$ of $K$ we have $x = t y + (1-t) z$ where $0 \le t \le 1$ and $z$ is a boundary point of $K$. $K$ has a supporting hyperplane $H$ at $z$, and $H \cap K$ is a compact convex set in the $n$-dimensional space $H$ whose extreme points are extreme points of $K$. So represent $z$ as a convex combination $z = \sum_{i=1}^{n+1} c_i z_i$ of at most $n+1$ of these extreme points, and $x = t y + \sum_{i=1}^{n+1} (1-t) c_i z_i$ is a convex combination of at most $n+2$ extreme points of $K$.
This is the Krein-Milman Theorem. Just do a search for the proof of this, and you should find the answer you are looking for. The proof can be kind of complicated depending on how much generality you prove it for (it holds in general for locally compact Hausdorff spaces, which can be much nastier than $\mathbb{R}^n$).
Edit: Your claim breaks down even in $\mathbb{R}^2$. Consider the square determined by points (0,0), (0,1), (1,1), (1,0). Note that the extreme points of the square are exactly these corner points. Then consider the point $(1/2,1/3)$. Can you write this point as the convex combination of only one or two points?
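(One can check that two corners never suffice here: $(1/2,1/3)$ lies on none of the four edges nor on either diagonal, and it is not itself a corner. Three corners do suffice, e.g. $(1/2,1/3)=\tfrac12(0,0)+\tfrac16(1,0)+\tfrac13(1,1)$, matching Carathéodory's bound of $n+1=3$ points in the plane.)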
I hoped that in the case of $\mathbb{R}^n$ the proof would be easy. Thank you. – Elias Apr 3 '12 at 20:36
I'm not interested in the existence of the convex combination, but in the number of extremal points in the convex combination. – Elias Apr 3 '12 at 21:15
locally compact should be locally convex. – t.b. Apr 4 '12 at 5:48
http://mathoverflow.net/questions/24028?sort=votes | ## What is the difference between the biconditional iff. and equality = ?
Hello,
I've been used to writing logical transformations using equality, but the other day it struck me that perhaps I should be using the biconditional $\iff$?
So my question is: What is the difference between the biconditional iff. $\iff$ and equality = ? Can they be used interchangeably? And when should one be used instead of the other? Is this matter of style? meaning? or correctness?
Simple examples to illustrate the point: here p, q, r are propositions, ^ is "and", v is "or", and ~ is "not":
Using equality:
(1) p ^ (q v r) = (p ^ q) v (p ^ r)
and
(2) p $\Rightarrow$ q = ~ p v q
But is this more correctly written using the biconditional iff.?
(1') p ^ (q v r) $\iff$ (p ^ q) v (p ^ r)
and
(2') p $\Rightarrow$ q $\iff$ ~ p v q
In rendering into English, are these logically equivalent statements? Equal statements (equality being considered as an equivalence relation that then establishes an equivalence class)? Or is it that the very meaning of equality in the context of logical propositions is iff.?
(Can't find the Community Wiki tag! someone please add it, thanks.)
Probably, the "$\Leftrightarrow$" is part of the language (or syntax) of propositional formulas about which you are talking, while "$=$" is part of the language with which you are talking about propositional formulas. There is a difference, then. – Mariano Suárez-Alvarez May 9 2010 at 16:15
## 5 Answers
Usually the biconditional is denoted by $\leftrightarrow$ and logical equivalence is represented by $\Leftrightarrow$.
Given two compound propositions $P$ and $Q$, the proposition $P \Leftrightarrow Q$ means that $P$ and $Q$ have the same truth value for each possible combination of truth values of the variables of which they are composed. That is, $P \Leftrightarrow Q$ means that $P \leftrightarrow Q$ is a tautology (i.e., a proposition that is always true). It's important to note that $\leftrightarrow$ is a connective, whereas $\Leftrightarrow$ is like an "equals sign" for propositions. More formally, the expression $P \Leftrightarrow Q$ is really a proposition about a proposition, namely that "$P \leftrightarrow Q$ is a tautology", whereas the expression $P \leftrightarrow Q$ is just a compound proposition that may or may not be a tautology.
Likewise, the conditional is usually denoted by $\rightarrow$ and logical implication is represented by $\Rightarrow$. Given two compound propositions $P$ and $Q$, the proposition $P \Rightarrow Q$ means $Q$ is true whenever $P$ is true, i.e., $P \Rightarrow Q$ means that $P \rightarrow Q$ is a tautology.
It seems that you may have confused logical equivalence ($\Leftrightarrow$) with the biconditional connective ($\leftrightarrow$) and logical implication ($\Rightarrow$) with the conditional connective ($\rightarrow$).
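A quick mechanical check of the two example equivalences from the question, in the spirit of this distinction: each `==` below plays the role of the connective $\leftrightarrow$ on one row of the truth table, and the final `all(...)` is the metalevel claim $\Leftrightarrow$ that the biconditional is a tautology (a small illustrative sketch):

```python
from itertools import product

# Example (1): p ^ (q v r) versus (p ^ q) v (p ^ r)
lhs = lambda p, q, r: p and (q or r)
rhs = lambda p, q, r: (p and q) or (p and r)
print(all(lhs(p, q, r) == rhs(p, q, r) for p, q, r in product([False, True], repeat=3)))  # True

# Example (2): p -> q versus ~p v q  (the conditional is false only when p is true and q is false)
cond = lambda p, q: not (p and not q)
print(all(cond(p, q) == ((not p) or q) for p, q in product([False, True], repeat=2)))     # True
```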
Hi Amy. Thanks for the explanation. So (2') would be more correctly written as: P --> Q <==> ~P v Q since the intent is to say that the left and right hand expressions are logically equivalent? Regarding equality =, am I properly interpreting your response to mean that = has no place in discussions about propositions? It is --> or <--> when expressing logical connectives between propositions within a single expression, and ==> or <==> when expressing logical relations between distinct expressions? If this is what you mean, then this has been very helpful! (and I'll accept the answ) – AKE May 9 2010 at 17:49
Another subtle difference is that $a\leftrightarrow b\leftrightarrow c$ is a formula that holds when one or three of $a,b,c$ are true, while $a\Leftrightarrow b\Leftrightarrow c$ is a shorthand for "$a\Leftrightarrow b$ and $b\Leftrightarrow c$". (In general, the $n$ary connective $\leftrightarrow$ applied to $n$ arguments out of which $m$ evaluate to true holds when $m$ and $n$ have the same parity. – rgrig May 9 2010 at 21:52
@AKE - Yes, that's what I meant :-) – Amy Glen May 9 2010 at 23:52
The worst problem with using equality in this way is that equality on propositions doesn't have a single meaning. Originally, in Boole's logic, it meant that one formula was obtained from another by a series of algebraic manipulations. Now, it might mean that the formulae belong to the same equivalence class in the Lindenbaum algebra of a logic.
The problem is that these two differ. One might have a theory of algebraic manipulations on formulae that is decidable for a logic that is undecidable: it is not rare in proof theory to say that two formulae are equal iff they have the same negation normal form, for instance, which is an algebraic notion of equality using just De Morgan duality and the fact that negation is involutive. Then equality and logical equivalence do not coincide, as they must with the second interpretation.
Feel free to use equality on propositions if you wish, but do make clear what you are doing. If you follow Amy's advice, there is no need to spell this out.
@Charles: appreciate the elaboration; +1 for relating this to Amy's answer. – AKE Aug 19 at 19:18
Another reason to avoid conflating equality with equivalence: whether two formulas are equal should only depend on the formulas themselves. But formulas can become equivalent after some other set of axioms has been assumed, and the equivalence can depend on exactly which other axioms are assumed.
For an elementary example in the language of rings, let $\phi$ say "there are no zero divisors" and let $\psi$ say "every element has a multiplicative inverse". These are not the same formula, trivially. If we assume all the ring axioms and we also assume that there are no more than (say) 16 elements, the formulas become equivalent by Wedderburn's theorem.
But if we do not assume the ring axioms, the formulas are not equivalent. For example, I could make a model with three elements {0,1,2} such that for all $x,y$ we have $xy = 2$, and this would satisfy $\phi$ but not $\psi$. Of course this is not a ring – that's the point. Or, we could assume the ring axioms but not assume there are only a finite number of elements, and again the formulas will be inequivalent.
(The reason for picking 16 is to avoid a technicality with first-order logic. For each $n$ there is a sentence which says that there are no more than $n$ elements, but there is no single sentence which says there are only finitely many elements. One can try to work around this by switching to the language of set theory, but that complicates things in other ways.)
@Carl: ok, I understand the example. But are you suggesting that using = is preferable in example (1) in the question -- so an opposing view to Amy's above? – AKE Aug 19 at 19:22
If two predicates describe the same set, does that mean that they are equal or that they are logically equivalent? "$\phi = \psi$" could mean that $\phi$ and $\psi$ are literally the same formula. We write $\Leftrightarrow$ for logical equivalence to avoid ambiguity.
In other words, $P(x)\iff Q(x)$ implies that $\forall x \in D, P(x) \leftrightarrow Q(x)$ where $D$ is the relevant domain. Or, that "$\iff$" applies when talking about logic while "$\leftrightarrow$" is used in a formula.
But to address the question as to whether $\iff$ can serve as a replacement for $=$, I think that translating it into English is useful. Replace "$\iff$" with "is necessary and sufficient for" and say "$=$" as "equals" (the same goes for "$\equiv$" and "logically equivalent to") and I think that you'll see that they are different, conceptually and syntactically, only in certain circumstances, or "when doing logic, we speak logic" (I often use them when taking notes on non-math topics, and my brain knows what I mean regardless of symbol choice :) ).
http://math.stackexchange.com/questions/144818/interesting-determinant | # Interesting Determinant
Let $x_1,x_2,\ldots,x_n$ be $n$ real numbers that satisfy $x_1<x_2<\cdots<x_n$. Define \begin{equation*} A= \begin{bmatrix} 0 & x_{2}-x_{1} & \cdots & x_{n-1}-x_{1} & x_{n}-x_{1} \\ x_{2}-x_{1} & 0 & \cdots & x_{n-1}-x_{2} & x_{n}-x_{2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{n-1}-x_{1} & x_{n-1}-x_{2} & \cdots & 0 & x_{n}-x_{n-1} \\ x_{n}-x_{1} & x_{n}-x_{2} & \cdots & x_{n}-x_{n-1} & 0 \end{bmatrix} \end{equation*}
Could you determine the determinant of $A$ in terms of $x_1,x_2,\ldots,x_n$?
I made several calculations. For $n=2$, we get
\begin{equation*} A= \begin{bmatrix} 0 & x_{2}-x_{1} \\ x_{2}-x_{1} & 0 \end{bmatrix} \quad\text{and}\quad \det (A)=-\left( x_{2}-x_{1}\right) ^{2} \end{equation*}
For $n=3$, we get
\begin{equation*} A= \begin{bmatrix} 0 & x_{2}-x_{1} & x_{3}-x_{1} \\ x_{2}-x_{1} & 0 & x_{3}-x_{2} \\ x_{3}-x_{1} & x_{3}-x_{2} & 0 \end{bmatrix} \quad\text{and}\quad \det (A)=2\left( x_{2}-x_{1}\right) \left( x_{3}-x_{2}\right) \left( x_{3}-x_{1}\right) \end{equation*}
For $n=4,$ we get
\begin{equation*} A= \begin{bmatrix} 0 & x_{2}-x_{1} & x_{3}-x_{1} & x_{4}-x_{1} \\ x_{2}-x_{1} & 0 & x_{3}-x_{2} & x_{4}-x_{2} \\ x_{3}-x_{1} & x_{3}-x_{2} & 0 & x_{4}-x_{3} \\ x_{4}-x_{1} & x_{4}-x_{2} & x_{4}-x_{3} & 0 \end{bmatrix} \quad\text{and}\quad \det (A)=-4\left( x_{4}-x_{1}\right) \left( x_{2}-x_{1}\right) \left( x_{3}-x_{2}\right) \left( x_{4}-x_{3}\right) \end{equation*} Finally, I guess that the answer is $\det(A)=2^{n-2}\cdot (x_n-x_1)\cdot (x_2-x_1)\cdots (x_n-x_{n-1})$. But I don't know how to prove it.
Please remember that in order to get the best possible answers, it is helpful if you say in what context you encountered the problem, and what your thoughts on it are; this will prevent people from telling you things you already know, and help them give their answers at the right level. If this is homework, please add the [homework] tag; people will still help, so don't worry. Also, many users find the use of the imperative ("Prove", "Show", etc) to be rude when asking for help. Please consider rewriting your post. – Arturo Magidin May 14 '12 at 1:40
Your guess would not account for the case $n=2$... – Arturo Magidin May 14 '12 at 1:56
For $n=2$ is trivial. Maybe for $n>2$ – tes May 14 '12 at 1:59
I try math induction, but don't know how to connect n by n matrix with (n+1) by (n+1) matrix – tes May 14 '12 at 2:00
## 2 Answers
Clearly the determinant is $0$ if $x_i = x_{i+1}$ (because two adjacent rows are identical) or $x_1 = x_n$ (last row is $-$ first row). So the determinant must be a polynomial divisible by $(x_1 - x_2)(x_2 - x_3) \ldots (x_{n-1} - x_n)(x_n - x_1)$. But the determinant has degree $n$, so it is a constant times this product. To determine what the constant is, you might try a special case: $x_i = i$.
EDIT: Thanks to J.M.'s remark, you can show that in that special case the inverse of your matrix $A_n$ looks like this:
$$\pmatrix{ -\frac{1}{2}+\frac{1}{2n-2} & \frac{1}{2} & 0 & 0 & \ldots & 0 & \frac{1}{2n-2}\cr \frac{1}{2} & -1 & \frac{1}{2} & 0 & \ldots & 0 & 0\cr 0 & \frac{1}{2} & -1 & \frac{1}{2} & \ldots & 0 & 0\cr \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \cr 0 & 0 & 0 & 0 & \ldots & -1 & \frac{1}{2}\cr \frac{1}{2n-2} & 0 & 0 & 0 & \ldots & \frac{1}{2} & -\frac{1}{2} + \frac{1}{2n-2}\cr}$$ where the elements on the main diagonal are all $-1$ except for the first and last, those just above and below the diagonal are all $1/2$, the top right and bottom left are $1/(2n-2)$, and everything else is $0$.
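A quick numerical sanity check of the closed form (note the sign factor $(-1)^{n-1}$, which the guess in the question omits, as Arturo's comment about $n=2$ already hints):

```python
import numpy as np

def det_conjecture(x):
    """(-1)^(n-1) * 2^(n-2) * (x_n - x_1) * prod(x_{i+1} - x_i); the extra sign factor
    is needed to match the n = 2 and n = 4 computations in the question."""
    n = len(x)
    return (-1.0) ** (n - 1) * 2.0 ** (n - 2) * (x[-1] - x[0]) * np.prod(np.diff(x))

rng = np.random.default_rng(0)
for n in range(2, 8):
    x = np.sort(rng.uniform(-5.0, 5.0, n))
    A = np.abs(x[:, None] - x[None, :])   # entries |x_i - x_j|, i.e. the matrix in the question
    print(n, np.linalg.det(A), det_conjecture(x))
```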
Note that $x_i<x_{i+1}$ – tes May 14 '12 at 2:03
@tes, you didn't use that hypothesis anywhere in your calculations, did you? So the formulas are correct no matter how the variables are related, aren't they? In particular, you're allowed to look at what happens if two are equal, as Robert has done, right? – Gerry Myerson May 14 '12 at 2:24
I just want to prove that A is invertible – tes May 14 '12 at 3:12
@robert: How can you conclude that the determinant must be a polynomial divisible by $(x_1-x_2)(x_2-x_3)\cdots(x_{n-1}-x_n)(x_n-x_1)$? – tes May 14 '12 at 3:16
Somewhat apropos: Yueh, in these papers discusses tridiagonal matrices similar to the inverse Robert obtained in this answer. I suspect another proof of the determinant's evaluation can be done based on these (e.g. by expressing the determinant as a product of trigonometric functions of angles in progression). – J. M. May 14 '12 at 18:04
Expanding Robert's solution.
Let $\det(A) = P(x)$, regarded as a multi-variable polynomial in $x_1,\ldots,x_n$.
If $x_1 = x_2$ then $det(A) = 0$ i.e $P(x) = 0$ i.e. $(x_1 - x_2)$ is a factor of $P(x)$.
If $x_2 = x_3$ then $det(A) = 0$ i.e $P(x) = 0$ i.e. $(x_2 - x_3)$ is a factor of $P(x)$.
etc. We calculate possible factors of $P(x)$. Have we calculated all possible factors of $P(x)$?
Let $Q(x) = (x_1 - x_2) (x_2 - x_3) \ldots (x_{n} - x_{1})$
What do we know about the degree of $P(x)$? It is $n$, equal to that of $Q(x)$. Thus $Q(x)$ multiplied by some constant factor should give us $P(x)$, i.e. we already have all possible factors of $P(x)$.
As Robert has already mentioned, we should calculate this constant factor.
It follows that if for any $i$, $x_i = x_{i+1}$, then $P(x) =0$ i.e. $\det(A) = 0$. Since you already have the constraints $x_1 < x_2 < \cdots < x_n$, $\det(A) \ne 0$.
Thank you very much – tes May 14 '12 at 3:38
$Q(x)=(x_1-x_2)(x_2-x_3)\cdots(x_n-x_{n-1})$, where is the $x$? – tes May 14 '12 at 3:43
@tes Messed it up. Let me clear it. – Dilawar May 14 '12 at 3:45
It is not so obvious that the constant is not $0$. – Robert Israel May 14 '12 at 16:45
http://physics.stackexchange.com/questions/1438/reciprocal-lattices/1449 | # Reciprocal Lattices
Is there an easy way to understand and/or visualize the reciprocal lattice of a two or three dimensional solid-state lattice? What is the significance of the reciprocal lattice, and why do solid state physicists express things in terms of the reciprocal lattice rather than the real-space lattice?
Good question, actually. The answer is not immediately obvious. – Noldorin Nov 29 '10 at 17:39
## 4 Answers
It's questions like this one that keep me coming back to this site!
Is there an easy way to understand and/or visualize the reciprocal lattice of a two or three dimensional solid-state lattice?
YES ! The reciprocal lattice is simply the dual of the original lattice. And the dual lattice has a simple visual algorithm.
1. Given a lattice $L$, for each unit cell of $L$ find the point corresponding to that cell's "center of mass" (see below).
2. Connect each such "center of mass" to its nearest neighbors.
3. The resulting lattice is the dual of $L$.
To find the center of mass of a unit cell (we consider the 2D case; it generalizes to arbitrary dimension):
1. Draw the perpendicular bisectors of the edges which bound the unit cell.
2. For regular lattices these lines should intersect at a single point in the interior of the cell. This point is the "center of mass" of the cell.
Performing these simple steps you find that the dual of a square lattice is also a square lattice, and that the triangular and hexagonal lattices are each other's duals! You can see a nice illustration of this fact here.
What is the significance of the reciprocal lattice, and why do solid state physicists express things in terms of the reciprocal lattice rather than the real-space lattice?
As mentioned by others this has to do with fourier transforms. In solid-state physics we want to understand the excitations (waveforms) that a certain material, whose structure is given by some lattice $L$, can support. For a lattice only certain momenta are allowed due to its discrete structure. These allowed momenta correspond to the vertices of the dual lattice! For more see the wikipedia page or check out the first couple of chapters of little Kittel or Ashcroft and Mermin.
Cheers,
Edit: This is to clarify some doubts about my answer that @wsc has expressed in the comments.
First of all, it is incorrect that reciprocal lattice vectors in 3D have dimensions $1/L^2$. Consider a 3D lattice with basis vectors $\{a_i\}$. The reciprocal lattice has basis vectors given by
$$b_i = \frac{1}{2V} \epsilon_i{}^{jk} \, a_j a_k$$
in index notation, with summation convention. A more familiar way to write this is in vector notation:
$$\mathbf{b}_i = 2\pi \frac{\mathbf{a}_j \times \mathbf{a}_k}{\mathbf{a}_i \cdot (\mathbf{a}_j \times \mathbf{a}_k)}$$
where $(i,j,k)$ are cyclic permutations of $(1,2,3)$. We can see that
$$\dim[\mathbf{b}_i] = \frac{\dim[\mathbf{a}]^2}{\dim[\mathbf{a}]^3} = \frac{1}{L}$$
and in terms of the lattice spacing $a$, $\vert\mathbf{b}\vert \sim \frac{1}{a}$. In fact, this is a basic fact true in any dimension.
We can also understand the normalization of the reciprocal lattice vectors by the factor $\mathbf{a}_i \cdot (\mathbf{a}_j \times \mathbf{a}_k)$ as being nothing more than $V$ - the volume of the unit cell. Why? So that the transformation between the lattice and reciprocal lattice vector spaces is invertible and the methods of Fourier analysis can be put to use.
For all regular lattices AFAIK the "dual" and "reciprocal" lattices are identical. For irregular lattices - with defects and disorder - this correspondence would possibly break down.
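As a concrete illustration of the vector formula above, here is a minimal numerical sketch that builds the reciprocal vectors of an FCC lattice (lattice constant chosen arbitrarily) and shows they are, up to the $2\pi/a$ scale, the primitive vectors of a BCC lattice:

```python
import numpy as np

def reciprocal_vectors(a1, a2, a3):
    """b_i = 2*pi (a_j x a_k) / (a_i . (a_j x a_k)), with (i, j, k) cyclic."""
    V = np.dot(a1, np.cross(a2, a3))                  # volume of the unit cell
    b1 = 2 * np.pi * np.cross(a2, a3) / V
    b2 = 2 * np.pi * np.cross(a3, a1) / V
    b3 = 2 * np.pi * np.cross(a1, a2) / V
    return b1, b2, b3

a = 1.0  # cubic lattice constant (arbitrary units)
# Primitive vectors of an FCC lattice
a1, a2, a3 = (0.5 * a * np.array(v) for v in ([0, 1, 1], [1, 0, 1], [1, 1, 0]))
b1, b2, b3 = reciprocal_vectors(a1, a2, a3)

print(b1, b2, b3)   # (2*pi/a) times (-1,1,1), (1,-1,1), (1,1,-1): a BCC lattice
print(np.round([[np.dot(b, av) for av in (a1, a2, a3)] for b in (b1, b2, b3)], 6))
# 2*pi on the diagonal, 0 elsewhere: b_i . a_j = 2*pi * delta_ij
```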
-
@space_cadet: Thanks for a great answer. This is what I thought the reciprocal lattice was, but I wasn't sure. The biggest difficulty I actually have is trying to apply this concept to graphene. I remember a lot of this type of stuff from my undergrad x-ray crystallography class, but most of those unit cells have atoms on the corners (unlike graphene's unit cell; you can see where the confusion comes from). I still don't understand why the unit cells in reciprocal space have different edge lengths. I'll see if I can summarize my confusion in a question and post a link in another comment – David Hollman Nov 29 '10 at 22:31
This is a great answer. However, surely that would mean for a cubic lattice of sides length $a$, the dual has sides of length $\sqrt{3a^2}$ whereas Kittel gives it as $2\pi/a$ without fully explaining why we would need the extra $2\pi$ factor ("... [the $2\pi$ factors] are not used by crystallographers but are convenient in solid state physics.") ... – Brendan Jan 3 '11 at 2:07
This is totally incorrect: the dual lattice and the reciprocal lattice are not the same thing. This leads to Brendan's confusion about the reciprocal lattice constants, which must have units of inverse length. – wsc Jan 3 '11 at 3:50
@wsc then why don't you tell us what the difference is between the two notions? To clear Brendan's confusion, each reciprocal lattice vector is multiplied by a factor of $2\pi/V$ where $V$ is the volume of each unit cell. For a 3D lattice, this implies that the reciprocal lattice vectors have dimension $1/a$, $a$ being the lattice constant of the main lattice. I simply didn't mention this fact in the main answer. Doesn't change anything I've said. – user346 Jan 3 '11 at 4:23
We also have to be careful about nomenclature. Mathematicians like to call conjugate variables "dual," and in that sense the fourier conjugate lattice is dual to the real lattice... However us mechanics of a statistical bent very often use the dual lattice that you describe the construction of in your post. This dual lattice is extremely useful, but it is not the same thing as the fourier conjugate lattice, which is what absolutely all physicists mean by "reciprocal lattice" – wsc Jan 3 '11 at 5:05
The significance of the reciprocal lattice is tied with diffraction of waves on a crystal.
How do we determine the crystal structure of some material? We usually do so by bombarding a tiny piece of crystal with X-rays or neutrons or another type of wave of appropriate wavelength and properties. We then look at the diffraction pattern. Now, to make a long story short, the pattern that appears will essentially be a pattern in reciprocal space.
http://en.wikipedia.org/wiki/X-ray_crystallography
-
This is pretty brief, but basically right. It's all to do with the idea of Bragg diffraction. k-space (reciprocal space) becomes obviously helpful when considering it from that perspective. – Noldorin Nov 29 '10 at 17:40
To understand why reciprocal space is important, it is perhaps useful to illustrate the related concept of frequency space.
Anytime one is analyzing waves (be it sound waves, EM waves or any other kind) one can exploit the translational symmetry of the laws of motion. The dual quantity to position is momentum (for massless fields like EM waves this also corresponds to frequency) and because of the said symmetry everything becomes a whole lot easier. Instead of working with a general wave (which can be a pretty difficult beast) one instead works with monochromatic (i.e. single frequency) waves. For this special kind of wave the differential equations simplify to just algebraic equations, so the problem becomes easily tractable.
Now, the method explained in the previous paragraph is known as Fourier analysis and Fourier transform and is very general. Put simply, anytime you have some nice symmetry you can use Fourier analysis to move to the dual space where the problem will simplify greatly. When applied to lattices (which have a lot of translational symmetry) one obtains the concept of a reciprocal lattice.
Mathematical note:
From a mathematical point of view one exploits that the system is essentially described by integrable functions on some locally compact abelian group $G$. By the Peter-Weyl theorem we know that the space of such functions is parametrized by irreducible representations of $G$, which form the Pontryagin dual group $\hat{G}$.
For example when working with periodic functions one is actually working with the functions defined on the circle $S^1$. Now the irreps of $S^1$ are parametrized precisely by integers. So in this case we get the decompositions of a periodic function $$f(x) = \sum_{k \in \mathbb{Z}} f_k \rho_k(x)$$ where $f_k$ are the Fourier modes of $f(x)$ and $\rho_k(x) = \exp(-i k x)$ are irreps of $S^1$. In general the irreps of an LCA group are given by some exponential so this explains the ubiquity of exponentials in physics.
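A minimal numerical illustration of this decomposition, using the same convention $\rho_k(x) = \exp(-ikx)$ as above (the test signal and grid size below are arbitrary choices):

```python
import numpy as np

# f(x) = sum_k f_k exp(-i k x)  =>  f_k = (1/2pi) * integral of f(x) exp(+i k x) dx
N = 256
x = 2 * np.pi * np.arange(N) / N
f = 1.0 + 2.0 * np.cos(3 * x) + 0.5 * np.sin(7 * x)

def mode(k):
    # the uniform-grid average equals the integral above for band-limited signals
    return np.mean(f * np.exp(1j * k * x))

for k in (0, 3, 7, 5):
    print(k, np.round(mode(k), 6))
# k = 0 -> 1, k = 3 -> 1, k = 7 -> 0.25j, k = 5 -> 0: only the modes present in f survive
```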
-
Why the down-vote? – Marek Nov 29 '10 at 20:14
Your answer, while mathematically detailed, is off the mark. It doesn't tackle @david's original question which was: " an easy way to understand and/or visualize the reciprocal lattice". Why do we need such technicalities as the Peter-Weyl theorem to explain what a dual lattice is? – user346 Nov 29 '10 at 20:46
@cadet: He also asked what is the significance of the reciprocal lattice and why we don't just work in real-space instead. I think I answered that. As for the mathematics, it was just added as a note (and marked as such) for anyone that might be interested in why all of this stuff works. Do I understand you correctly that adding mathematical notes makes the answer so much worse it deserves to be down-voted? – Marek Nov 29 '10 at 20:55
It seemed to me that you were using a "hammer to break an egg". And again to explain the "significance" or the "why" of reciprocal lattices we hardly need to invoke language about locally compact groups and the peter-weyl theorem, which, in any case, is decipherable by a small minority. And while there are many talented individuals, such as yourself, on this site who have that capacity, there are many more who would feel intimidated rather than helped by how you framed your answer. Or, perhaps I made an error of judgment. If so, I apologize. Feel free to return the favor anytime :) – user346 Nov 29 '10 at 21:20
@cadet: Fair enough, I see your point. As I said already under some other question, I never know at what level to formulate my answers (and actually it's probably harder to explain at low level; that takes a good amount of experience in teaching). But seeing that there are other physical answers already I thought it would be useful for someone to see also the more mathematical point of view. Not sure if my thinking was correct. – Marek Nov 30 '10 at 0:14
One problem I had in understanding reciprocal space is that the origin in a real space unit cell is at infinity in reciprocal space and vice versa. How could any real construct be mapped out in reciprocal space in a finite cell that included the origin?
However it is important to remember that we tend to deal with waves in reciprocal space. So for example, when crystallographers use x-rays, neutron diffraction etc. to obtain a pattern in reciprocal space, simply applying the reciprocal conversion (i.e. $2\pi/a$) does not yield the positions of the atoms, but rather the wavelengths of a set of waves that will denote the positions of the atoms. A wave with wavenumber zero will have infinite wavelength (i.e. is not a wave [is this correct?])
This is basically saying what the others have said about the reciprocal space being the Fourier transform of the real space - waves do not have position that is so easily defined, but they do have well defined wavelength!
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426877498626709, "perplexity_flag": "head"} |
http://mathhelpforum.com/differential-geometry/154240-critique-my-proof.html | # Thread:
1. ## [Solved] ....
Claim: The series $\sum_{n=1}^{ \infty} \frac{2}{n^2 + n}$ converges.
By definition:
The series $\sum_{n=1}^{ \infty} \frac{2}{n^2 + n} = \sum_{n=1}^{ \infty} a_n$ converges if and only if for every $\epsilon > 0$, there exists some $N \in \mathbb{N}$ such that for all $n, m$ with $n > m \geq N$, we have $|a_{m+1} + a_{m+2}+ ...+ a_n| < \epsilon$.
(Attempted) proof:
Let $m \in \mathbb{N}$ be arbitrary but fixed, and let $S_m = \sum_{n=1}^{m} \frac{2}{n^2 + n}$. Then the sequence $(S_m)$ is convergent, and it necessarily follows that $(S_m)$ is Cauchy. That is, for all $\epsilon > 0$, there exists some $N \in \mathbb{N}$ such that for all $n, m \geq N$, we have $|S_{n} - S_{m}| < \epsilon$.
But if $n > m \geq N$, we have $|S_n - S_m| = |a_{m+1} + a_{m+2}+ ...+ a_n| < \epsilon$
which shows, directly from the definition, that the series $\sum_{n=1}^{ \infty} \frac{2}{n^2 + n}$ is convergent. $\square$
Is this satisfactory? Or is there a more simplistic approach that I should be taking?
2. 1) Do you have to do this by definition, or have you already seen the comparison test?
2) If you want to do this by definition, then you didn't prove anything. You stated that $\displaystyle S_m = \sum_{n=1}^{m} \frac{2}{n^2 +n}$ is convergent, which means $\displaystyle \lim_{m \to \infty} S_m = \sum_{n=1}^{m} \frac{2}{n^2+n}$ converges, but that is what you have to prove! What you said is "this series converges because it converges" - this is not a proof.
3. Originally Posted by Defunkt
1) Do you have to do this by definition, or have you already seen the comparison test?
2) If you want to do this by definition, then you didn't prove anything. You stated that $\displaystyle S_m = \sum_{n=1}^{m} \frac{2}{n^2 +n}$ is convergent, which means $\displaystyle \lim_{m \to \infty} S_m = \sum_{n=1}^{m} \frac{2}{n^2+n}$ converges, but that is what you have to prove! What you said is "this series converges because it converges" - this is not a proof.
Well, I've stated that $S_m$ is convergent since m is finite and fixed. The rest of the "proof" was intended to show that this extends to the case in which n goes to infinity.
I did try my hand at the comparison test, but wasn't very successful there either. I'll give it another shot and see what I come up with.
4. If m is finite and fixed then $S_m$ is just a number. It does not depend on n.
The easiest way to prove this is by using the comparison test. I assume you have proved that $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^2}$ converges. Now,
$\frac{2}{n^2+n} \le \frac{2}{n^2} \ \forall n \in \mathbb{N}$
and so
$\displaystyle \sum_{n=1}^{\infty} \frac{2}{n^2 + n}$ converges iff $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^2}$ converges, since they both have positive terms.
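As a purely numerical sanity check (not a substitute for the comparison argument), note that the terms also telescope: $\frac{2}{n^2+n} = \frac{2}{n} - \frac{2}{n+1}$, so $S_m = 2 - \frac{2}{m+1} \to 2$. A short Python sketch confirming both observations:

```python
from fractions import Fraction

def partial_sum(m):
    """S_m = sum_{n=1}^{m} 2/(n^2 + n), computed exactly."""
    return sum(Fraction(2, n * n + n) for n in range(1, m + 1))

for m in (1, 10, 100, 1000):
    s = partial_sum(m)
    print(m, float(s), s == 2 - Fraction(2, m + 1))   # matches the telescoped closed form
# the partial sums increase towards 2, consistent with convergence
```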
5. Wow. I literally just made the connection with that particular series before coming back to see your post. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9717325568199158, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/262973/legendre-polynomials-triple-product/262983 | # Legendre Polynomials Triple Product
I have to solve the following integral:
\begin{align} \int_{-1}^{1} \left(x^2 -1\right)^3 P_k(x)\,P_l(x)\, P_m(x) \;dx \end{align} where $P_{k,l,m}$ are Legendre Polynomials
The triple product \begin{align} \int_{-1}^{1} P_k(x)\,P_l(x)\, P_m(x) \;dx = 2 \begin{pmatrix} k & l & m \\ 0 & 0 & 0 \end{pmatrix}^2 \end{align} using the special case of $3j$ symbol form \begin{align} \begin{pmatrix} k & l & m \\ 0 & 0 & 0 \end{pmatrix} &= (-1)^s \sqrt{(2s-2k)! (2s-2l)! (2s-2m)! \over (2s+1)!} {s! \over (s-k)! (s-l)! (s-m)!} \\ & \mbox{for $2s=k+l+m$ even} \\[3pt] \begin{pmatrix} k & l & m \\ 0 & 0 & 0 \end{pmatrix} &= 0 \quad\mbox{for $2s=k+l+m$ odd} \\ \end{align}
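As a sanity check, this identity can be verified symbolically with SymPy (`wigner_3j` lives in `sympy.physics.wigner`; the small indices below are arbitrary):

```python
from sympy import Symbol, integrate, legendre, simplify
from sympy.physics.wigner import wigner_3j

x = Symbol('x')

def lhs(k, l, m):
    return integrate(legendre(k, x) * legendre(l, x) * legendre(m, x), (x, -1, 1))

def rhs(k, l, m):
    return 2 * wigner_3j(k, l, m, 0, 0, 0) ** 2

for k, l, m in [(1, 1, 2), (2, 2, 2), (1, 2, 3), (1, 1, 1), (2, 2, 3)]:
    print((k, l, m), lhs(k, l, m), simplify(rhs(k, l, m)))
# e.g. (1, 1, 2) gives 4/15 both ways; the cases with k + l + m odd give 0
```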
I'm sure you should be able to solve this by doing integration by parts but can't seem to get it to work. Any tips?
So using the answer below I think you get the following for step 1 of 3
\begin{multline} \int_{-1}^{1}(x^2-1)^3 P_k P_l P_m = \overbrace{(x^2-1)^3\frac{(P_{k+1} - P_{k-1})}{2k+1} P_l P_m \Big]_{-1}^1}^\text{ = 0}\\ -\int_{-1}^{1} \frac{(P_{k+1} - P_{k-1})}{2k+1}(x^2-1)^2\Big( 6xP_l P_m \\ + (1+l) P_m(P_{l+1} - P_{l-1}) + (1+m) P_l(P_{m+1} - P_{m-1}) \Big) \; dx\\ \end{multline}
Not sure if the formula for integration works as I think the $6xP_lP_m$ term might cause problems?
-
## 2 Answers
You could integrate one of the $P_k(x)$ and take the derivative of the rest. The power of $(1-x^2)$ gets reduced by the fact that $$\partial_x P_l(x) = \frac{(1+l) [ P_{l+1}(x)-x P_l(x) ]}{x^2-1}.$$ You have to apply partial integration a few times and you will generate a bunch of Legendre triple products.
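A quick symbolic check of this identity (and of the equivalent $(x^2-1)P_l' = \frac{l(l+1)}{2l+1}(P_{l+1}-P_{l-1})$ form that comes up in the comments below), for a few small $l$:

```python
from sympy import Symbol, diff, legendre, simplify

x = Symbol('x')

for l in range(1, 6):
    lhs = (x**2 - 1) * diff(legendre(l, x), x)
    rhs1 = (l + 1) * (legendre(l + 1, x) - x * legendre(l, x))
    rhs2 = l * (l + 1) * (legendre(l + 1, x) - legendre(l - 1, x)) / (2 * l + 1)
    print(l, simplify(lhs - rhs1) == 0, simplify(lhs - rhs2) == 0)
# both variants hold identically for each l tested
```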
-
Think this way will work, thanks for the help. – Matt Dec 21 '12 at 2:18
Just wondering, where did you get the formula above? – Matt Dec 21 '12 at 15:30
thanks for info, I think it might be wrong, wikipedia has a different one (which I think is also wrong) as I worked out the following: \begin{align} P'_n = \frac{n(n+1)}{(1-x^2)(2n+1)} \left[P_{n+1} - P_{n-1} +c \right] \end{align} Which works when I do the differentiation. – Matt Dec 21 '12 at 18:07
got it from the following definition: \begin{align} \frac{d}{dx} \left[ (1-x^2) \frac{d}{ dx} P_n(x) \right] + n(n+1)P_n(x) = 0 \end{align} – Matt Dec 21 '12 at 18:16
I think the best way to approach this is as follows, note that \begin{align} (x^2 -1 ) = \frac{2\left(P_2 - 1\right)}{3} \end{align} You can then use the following definition \begin{align} P_kP_l = \sum_{m=|k-l|}^{k+l} \begin{pmatrix} k & l & m \\ 0 & 0 & 0 \end{pmatrix}^2 (2m+1)P_m \end{align} This allows the integral to be written as follows \begin{align} \int_{-1}^{1} (x^2-1)^3P_iP_jP_k \; dx &= \int_{-1}^{1} \frac{8}{27}\left(P_2^3 - 3P_2^2 + 3P_2 - 1 \right) P_i P_j P_k \; dx \end{align} The most difficult term to deal with is the $P_2^3 P_i P_j P_k$ \begin{align} P_2^3 P_i P_j P_k &= \sum_{m=0}^{4} \begin{pmatrix} 2 & 2 & m \\ 0 & 0 & 0 \end{pmatrix}^2 (2m+1)P_m P_2 P_i P_j P_k \\ &= \sum_{m=0}^{4} \begin{pmatrix} 2 & 2 & m \\ 0 & 0 & 0 \end{pmatrix}^2 (2m+1) \sum_{n=|m-2|}^{m+2} \begin{pmatrix} 2 & m & n \\ 0 & 0 & 0 \end{pmatrix}^2 (2n+1)P_n P_i P_j P_k \\ &= \sum_{m=0}^{4} \begin{pmatrix} 2 & 2 & m \\ 0 & 0 & 0 \end{pmatrix}^2 (2m+1) \sum_{n=|m-2|}^{m+2} \begin{pmatrix} 2 & m & n \\ 0 & 0 & 0 \end{pmatrix}^2 (2n+1) \sum_{l=|n-i|}^{n+i} \begin{pmatrix} n & i & l \\ 0 & 0 & 0 \end{pmatrix}^2 (2l+1) P_l P_j P_k \end{align} Which can then make use of the usual triple integral. All other terms can be solved for in a similar manner.
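Both identities used above (the Legendre expansion of $x^2-1$ and the linearization of $P_kP_l$) can be verified symbolically; a short SymPy sketch with a few arbitrarily chosen indices:

```python
from sympy import Rational, Symbol, expand, legendre, simplify
from sympy.physics.wigner import wigner_3j

x = Symbol('x')

# (x^2 - 1) = (2/3) * (P_2(x) - 1)
print(simplify(x**2 - 1 - Rational(2, 3) * (legendre(2, x) - 1)) == 0)

def linearized(k, l):
    """P_k * P_l rewritten as sum_m (3j)^2 (2m+1) P_m."""
    return sum(wigner_3j(k, l, m, 0, 0, 0) ** 2 * (2 * m + 1) * legendre(m, x)
               for m in range(abs(k - l), k + l + 1))

for k, l in [(2, 2), (1, 2), (3, 2)]:
    print((k, l), simplify(expand(legendre(k, x) * legendre(l, x) - linearized(k, l))) == 0)
```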
http://physics.stackexchange.com/questions/2268/supermassive-black-holes-with-the-density-of-the-universe?answertab=active | # Supermassive black holes with the density of the Universe
This question was inspired by the answer to the question "If the universe were compressed into a super massive black hole, how big would it be"
Assume that we have matter with a uniform density $\rho$. Some mass of this matter may form a black hole with the Schwarzschild radius:
$\large{R_S=c*\sqrt{\frac{3}{8\pi G\rho}}}$
This equation is easy to get from
$\large{R_S=\frac{2GM}{c^2}}$ and $\large{\rho=\frac{3M}{4\pi R_S^3}}$
For the density of the Universe ($9.3*10^{-27} kg/m^3$) the Schwarzschild radius of the black hole is 13.9 billion light-years, while the radius of the observable Universe is 46 billion light-years.
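A quick numerical check of that figure (rounded physical constants; the density is the value quoted above):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s
ly = 9.461e15        # metres per light-year
rho = 9.3e-27        # kg / m^3, the density quoted above

R_S = c * math.sqrt(3.0 / (8.0 * math.pi * G * rho))
print(R_S / ly / 1e9, "billion light-years")   # ~ 13.9
```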
We could be located inside such a black hole, but we don't observe its singularity or event horizon.
So why are there no supermassive black holes with the density of the Universe?
Does this mean that the whole Universe is infinite and has uniform density?
P.S. Related link - Is the Big Bang a black hole?
-
Nobody ever said that there is a singularity or an event horizon. – David Zaslavsky♦ Dec 26 '10 at 8:53
@David, the question about the singularity of such a black hole is rhetorical, of course. There is no evidence of that. The question is why? – voix Dec 26 '10 at 9:21
@voix: (since I can't post an answer) Your second equation is correct, but the third one isn't, so your first equation is not correct either. The third one assumes that the universe is a sphere, and that its radius is equal to its Schwarzschild radius; neither of these statements is correct. – Bruce Connor Dec 26 '10 at 19:56
@Bruce , third equation refers only to the part of the Universe and that part may be spherical. – voix Dec 26 '10 at 20:12
@voix: true, my mistake. I'm thinking of a couple other possibilities. Why was this closed @David ? – Bruce Connor Dec 26 '10 at 20:32
## 1 Answer
You are asking a wrong question. Here is the problem with your reasoning.
You are assuming a Schwarzschild metric and a homogenous distribution of mass. But the Schwarzschild geometry describes a vacuum spacetime. So you can't use it for a spacetime filled with matter. For a cosmological spacetime filled with matter, like our universe, the suitable metric to use would be something else, like the FRW for example.
You could only use the Schwarzschild spacetime if you assumed a sphere of some uniform density $\rho$ and vacuum outside the radius of the sphere.
Let me illustrate how things would work out then. As you can see, a particular density corresponds to a particular $R_s$, let's call it $R_s(\rho)$. So if you had a sphere of matter with a radius $R_1$ greater than $R_s(\rho)$, then you couldn't apply the formula $R_s(\rho)=c\sqrt{\frac{3}{8\pi G \rho}}$. You would have to use the Schwarzschild metric only in the vacuum region outside of the sphere. So you would have then $R_s=\frac{8\pi G\rho R_1^3}{3c^2}$. In order to see how the $R_s$ compares with $R_s(\rho)$, you can replace the density with $\rho=\frac{3c^2}{8\pi G R_s(\rho)^2}$. So you would get that the Schwarzschild radius for a sphere of uniform density $\rho$ and radius $R_1>R_s(\rho)$ is $R_s=\left(\frac{R_1}{R_s(\rho)}\right)^2R_1$, which is greater than the radius of the sphere. So the sphere is inside its Schwarzschild horizon. If on the other hand, the radius $R_1$ is smaller than $R_s(\rho)$, then the corresponding horizon would have to be inside the sphere. But inside the sphere the Schwarzschild metric doesn't apply. So it isn't necessary that there should be a horizon inside the matter distribution.
If you apply these to the universe and assume for example that the radius of the visible universe is the radius $R_1$ of the sphere, then you would have a horizon radius (using your numbers) that would be roughly 11 times the radius of the observable universe. So, the entire universe would have to be in a black hole of radius of about 500 billion light-years. So the assumption that we should see black holes with horizons of radii of 13.9 billion light-years is not correct.
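Plugging the question's numbers into $R_s=\left(\frac{R_1}{R_s(\rho)}\right)^2 R_1$ makes that last step concrete (units of billions of light-years):

```python
R1 = 46.0         # radius of the observable universe
Rs_rho = 13.9     # horizon radius matching the mean density

Rs = (R1 / Rs_rho) ** 2 * R1
print(Rs)         # ~ 504: the would-be horizon lies far outside the sphere itself
```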
If one assumes the above point of view, one could say that the universe is a white hole that is exploding.
I hope that all these are helpful and not confusing.
-
The radius in my first equation is a critical radius. Spherical uniform matter with radius more than the critical one should turn into a black hole. But if the mass of the uniform matter is large enough, it may turn into 3, 5, 10 black holes. You don't take this into account in your answer. Whole clusters of stars form from uniform matter, so why don't black holes form in my case? – voix Dec 27 '10 at 17:34
First of all, you do understand that you can’t use the Schwarzschild metric to identify a horizon inside the sphere of uniform density, right? If now you would like to treat the problem of the collapse of a sphere of uniform density and zero pressure, then the geometry is described by a Schwarzschild geometry outside and an FRW inside the sphere, with the appropriate matching conditions on the surface. Due to the symmetry of this system, the whole thing will collapse simultaneously and you will not have any fragmentation of the type that you are proposing. – Vagelford Dec 28 '10 at 11:50
So, in the example that I am using the consistent description is that you have a Schwarzschild geometry outside the radius of the observable universe, with a horizon at a radius 10 times that and the interior geometry is FRW without horizons. – Vagelford Dec 28 '10 at 11:51
Now, regarding the fragmentation issue. In order to have it there should be density perturbations and the scale of fragmentation would depend on the wavelength of the perturbations. There is a stability criterion, called the Jeans criterion that tells us what is the minimum scale of stable perturbations. That minimum scale (wavelength) depends on the density and on the speed of sound in the medium. But since you have zero pressure, you also have zero speed of sound and thus the minimum wavelength for stable perturbations is zero. That means that all perturbations are unstable and collapse. – Vagelford Dec 28 '10 at 11:54
So the fragmentation would depend on the spectrum of the perturbations and nothing else. – Vagelford Dec 28 '10 at 11:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9291899800300598, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/115940?sort=votes | ## For finite group G and field k of char=p, if P,P′ are projective k[G]-modules with [P]=[P′], is it true that P=P′ ?
That is: is it true that if projective k[G]-modules have same composition factors then they are isomorphic?
This is easy to see for char(k)=0, or if G is a composition of a p-group and a p′-group. Serre in "Linear Representations of Finite Groups" (a remark in 16.2 after Corr.2) states this as a well-known fact: "Indeed we know that the equality [P]=[P′] ... is equivalent to P=P′". But unfortunately no references.
-
I edited tags. Note too that your equal sign in the quote from Serre should be the isomorphism symbol. – Jim Humphreys Dec 10 at 13:20
## 2 Answers
Some clarifications to the question are needed. First, you are referring to Section 16.1 of Serre's book (not 16.2), where he is formulating the main results in Brauer theory. These were originally derived (as in the 1962 Curtis-Reiner text) more concretely in terms of Brauer characters, but then recast in the language of Grothendieck groups. The result you are asking about is formally stated by Serre as Corollary 2 of Theorem 35 in that same section.
Serre's parenthetic remark after Corollary 2 of Theorem 34 in 16.1 refers back to the earlier and more elementary step in his Corollary 2 of Proposition 42 in 14.4.
There are two essential steps here, based always on the fact that you have a triple $(K,A,k)$ involving a residue field $k$ of characteristic $p>0$ coming from a suitable ring $A$ whose field of fractions $K$ has characteristic 0. (The deeper results require some hypotheses on completeness, splitting fields, etc.) First you compare the Grothendieck groups of projective modules for $KG$ and $AG$ (as in Serre's Section 14). Then you build the $cde$-triangle as in Section 16 and obtain there the injectivity of the map $c$ as Geoff indicates. (The later book Methods of Representation theory I by Curtis-Reiner has a more detailed version of all this theory.)
Let me mention that working with a finite group is essential here, since a partial parallel exists for certain finite group schemes (in the guise of restricted Lie algebras) for which the matrix of Cartan invariants has determinant 0 and the behavior of projectives is more complicated.
-
Jim - yes, 16.1, my bad. I still don't get the argument. Corr. 2 of Prop. 42 in 14.4 says: P=Q if [P]=[Q] in $P_A[G]$, not in $R_A[G]$. That is: if P and Q have the same indecomposable projective factors, then they are isomorphic (which is already stated in the preceding Corr 1.). From this I can't deduce that if P and Q have the same irreducible factors, then they are isomorphic. As for using the cde-triangle: Serre uses 16.1 as a building step to prove the cde properties in Ch. 17, so I am not sure that using it here wouldn't create a circular reference. Thanks for the ref to Curtis-Reiner. – George Dec 11 at 4:00
I think what I wrote is accurate: your original question gets answered only by Corollary 2 of Theorem 35 in 16.1. (Section 14 is more elementary and preliminary.) Section 16.1 is very concise, so you have to follow up with details of Serre's proofs later on. His formalism is efficient but it takes real work to identify the main points in the proofs and their logical dependence. You might find the concrete Brauer style in the older Curtis-Reiner book to be more attractive. But the theorems are not easy in any case. Good luck. – Jim Humphreys Dec 11 at 18:28
Thanks, Jim, Geoff. Unfortunately, I am not allowed to accept both answers (or I don't know how), so accepting from Jim. – George Dec 11 at 20:11
I think only one answer can be accepted in general. – Geoff Robinson Dec 11 at 23:22
This is true for finite groups, and it is a consequence of the non-singularity of the Cartan matrix (whose determinant is a power of $p$) in the algebraically closed case. The Cartan invariant $c_{ij}$ gives the multiplicity of the $j$-th simple module as a composition factor of the $i$-th projective indecomposable. If there were two non-isomorphic projectives with the same composition factors, the Cartan matrix would certainly be singular. I believe the result may have been stated by R. Swan.
-
Geoff - thanks for the hints, I will do the homework. Actually, this should be simpler, or Serre wouldn't be referring to it in his book (which is supposed to be the introduction to these matters). Well, at least now I know that this is true. BTW, from the examples I looked at, it seems that the order of the composition factors is not fixed: if V, W are factors for P, then for the same P it can be both 0 -> V -> P -> W -> 0 and 0 -> W -> P -> V -> 0, with P still being indecomposable (but this probably should be asked as a separate question). – George Dec 10 at 13:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9506592750549316, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/199456/matrix-of-linear-transformation-range-of-transformation?answertab=votes | # Matrix of linear transformation (range of transformation)
I'm doing a bit of homework and want to check if I'm doing it right. The question is as follows:
Let $T: P_2(\mathbb{R}) \rightarrow P_3(\mathbb{R})$ be defined by $T(p)(x) = (x-1)p(x)$
We're then asked to compute the matrix of $T$ with respect to the standard basis $\{1, x, x^2\}$. I go through that and get:
$A = \begin{bmatrix}-1&0&0\\ 1&-1&0\\0&1&-1\\0&0&1\end{bmatrix}$
Is this correct? I think it is but I'm not really sure.
However my main question is regarding $range T$. How would one describe the range of $T$ without using the matrix? I tried doing it like
Let $p \in P_2(\mathbb{R})$; then
$T(p)(x) = (x-1)(a + bx + cx^2)$ for some $a,b,c \in \mathbb{R}$
$T(p)(x) = ax + bx^2 + cx^3 -a -bx -cx^2 = (a-b)x + (b-c)x^2 + cx^3 - a$
So, the range of T would be $\{ (a-b)x + (b-c)x^2 + cx^3 - a : a,b,c \in \mathbb{R}\}$?
-
## 1 Answer
Yes, your matrix is correct. Remember that $x-1$ is a factor of a polynomial $p(x)$ if and only if $p(1)=0$, so you can describe the range of $T$ as the set of cubic polynomials $p(x)$ such that $p(1)=0$. Now notice that if $p(x)=ax^3+bx^2+cx+d$, then $p(1)=a+b+c+d$, so $p(1)=0$ if and only if $a+b+c+d=0$: the range of $T$ consists precisely of the cubic polynomials whose coefficients sum to $0$.
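Both the matrix and this description of the range can be checked mechanically. A small SymPy sketch (the helper `T` and the variable names here are just illustrative):

```python
from sympy import Matrix, Poly, Symbol, expand

x = Symbol('x')

def T(p):
    return expand((x - 1) * p)

basis_in = [1, x, x**2]

# column j holds the coordinates of T(basis_in[j]) in the basis {1, x, x^2, x^3}
cols = []
for b in basis_in:
    coeffs = Poly(T(b), x).all_coeffs()[::-1]   # ascending order: 1, x, x^2, x^3
    coeffs += [0] * (4 - len(coeffs))
    cols.append(coeffs)

A = Matrix(4, 3, lambda i, j: cols[j][i])
print(A)                        # matches the matrix computed in the question
print([sum(c) for c in cols])   # each column sums to 0: every image polynomial vanishes at x = 1
```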
-
I was going over this question again and was trying to deduce a basis for rangeT. Would a basis simply be $\{(-1,0,0), (0,-1,0), (0,0,-1)\}$ from $a = -b -c -d$? – user1520427 Sep 26 '12 at 8:43
@user1520427: The range of $T$ is a subspace of $P_3(\Bbb R)$, so the elements of any basis have to be polynomials, not tuples. One basis that works, based on what I think is your underlying idea, is $\{x^3-x^2,x^3-x,x^3-1\}$. Certainly these are all in the range of $T$; if you show that they’re linearly independent, you’ll know that they’re a basis. I’d choose $\{x^3-1,x^2-1,x-1\}$, though: it looks just a little nicer to work with, though there’s really not much difference. – Brian M. Scott Sep 26 '12 at 8:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9649325609207153, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/tagged/dice+probability-distributions | # Tagged Questions
### Probability of $X$ out of $N$ dice landing on $M$
The problem is as follows: We have $N$ dice and we throw them on a table. What is the probability that $M$ will fall $X$ times? Specific example: We have $10$ dice and we throw them on a table. What ...
### Dice game modelling: Lose everything on “3”, double everything on “1” or “6”
I was recently playing a quite easy dice game: You throw a fair die: if you get a "3" the next player continues, if you get something else it is up to you to continue. If you continue and you throw ...
### what is probability of throwing $n$ dice $m$ times to get at least one 6 for each dice
We have $n$ 6-faced fair dice. At each time $n$ dice are thrown independently. I want to calculate the probability for the number of times we should throw the dice before having at least one 6 from each of ...
### Probability of getting any number if I roll the die 4 times.
We have a question to investigate a game between two players that have dice: when the dice are rolled $4$ times, what is the probability of getting any number, say $4$ or $5$? Note that the highest ...
### “Go-first” dice for $N$ players
I'm interested in sets of dice that can be used to determine who "goes first" (hence the name) in an $N$-player game; more generally, I want to determine a complete ordering of the players with a ...
### Boggle dice set letter distribution algorithm [duplicate]
Possible Duplicate: Boggle letter probability What algorithm have the creators of the word game Boggle used to come up with these dice sets? ...
### Modelling die roll sequence
I have a die with $N$ sides. It has a hidden transitive order which has to be uncovered. At the start of a trial, I throw the die until I get the highest-ranking side. I then throw the die until I get ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391098022460938, "perplexity_flag": "middle"} |
http://mathhelpforum.com/advanced-algebra/139123-group-order-p-2-q-2-solvable.html | # Thread:
1. ## Group of order p^2*q^2 is solvable?
Hi! I'm having an exam in group theory in a few days. I've lost some of my solutions and can't prove this by myself.
There was also a specific example regarding a group of order 36=2^2*3^2. When the number of Sylow 2-subgroups or 3-subgroups is one, it's simple. But I don't know how to proceed when, let's say, the number of Sylow 3-subgroups is 4.
If anyone could help me with these problems I would be happy.
-santtu
2. Originally Posted by santtu
Hi! I'm having an exam in group theory in a few days. I've lost some of my solutions and can't prove this by myself.
There was also a specific example regarding a group of order 36=2^2*3^2. When the number of Sylow 2-subgroups or 3-subgroups is one, it's simple. But I don't know how to proceed when, let's say, the number of Sylow 3-subgroups is 4.
If anyone could help me with these problems I would be happy.
-santtu
You only have to prove the existence of a non-trivial normal subgroup in a group $G\,,\,\,|G|=36$ , since then both this sbgp. and the resulting factor group are solvable and thus also $G$ is (assuming you already know/can prove that any group of order less than 36 is solvable).
Assume there are 4 Sylow 3-sbgps.: if all of them have trivial intersection we have $4\cdot 8=32$ elements of order 3 or 9, and then the other 4 elements must be the unique Sylow 2-sbgp. which is then normal.
So assume $P\cap Q\neq 1\,,\,\,P,\,Q$ two different Sylow 3-sbgps. It then must be that $|P\cap Q|=3\Longrightarrow$ the normalizer of $P\cap Q$ has at least 15 elements (all the ones in both $P,\,Q$...why?) , so it at least has order 18 (why?), and thus either this normalizer sbgp. or $P\cap Q$ are normal non-trivial sbgps. of $G$ and we're done.
Tonio
Ps. It can be proved, but it's lengthy, that any group of order 36 has at least one Sylow sbgp. normal.
http://en.wikipedia.org/wiki/On_the_Equilibrium_of_Heterogeneous_Substances | # On the Equilibrium of Heterogeneous Substances
In the history of thermodynamics, On the Equilibrium of Heterogeneous Substances is a 300-page paper written by American mathematical-engineer Willard Gibbs. It is one of the founding papers in thermodynamics, along with German physicist Hermann von Helmholtz's 1882 paper "Thermodynamik chemischer Vorgänge." Together they form the foundation of chemical thermodynamics as well as a large part of physical chemistry.[1][2]
Gibbs's Equilibrium marked the beginning of chemical thermodynamics by integrating chemical, physical, electrical, and electromagnetic phenomena into a coherent system. It introduced concepts such as chemical potential, phase rule, and others, which form the basis for modern physical chemistry. American writer Bill Bryson describes Gibbs's Equilibrium paper as "the Principia of thermodynamics".[3]
On the Equilibrium of Heterogeneous Substances was originally published in a relatively obscure American journal, the Transactions of the Connecticut Academy of Arts and Sciences, in several parts, during the years 1875 to 1878 (although most cite "1876" as the key year).[4] It remained largely unknown until translated into German by Wilhelm Ostwald and into French by Henry Louis Le Chatelier.
## Overview
Gibbs first contributed to mathematical physics with two papers published in 1873 in the Transactions of the Connecticut Academy on "Graphical Methods in the Thermodynamics of Fluids," and "Method of Geometrical Representation of the Thermodynamic Properties of Substances by means of Surfaces." His subsequent and most important publication was "On the Equilibrium of Heterogeneous Substances" (in two parts, 1876 and 1878). In this monumental, densely woven, 300-page treatise, the first law of thermodynamics, the second law of thermodynamics, and the fundamental thermodynamic relation are applied to the prediction and quantification of thermodynamic reaction tendencies in any thermodynamic system, in a visual, three-dimensional graphical language of Lagrangian calculus and phase transitions, among others.[5] As stated by Henry Louis Le Chatelier, it "founded a new department of chemical science that is becoming comparable in importance to that created by Lavoisier." This work was translated into German by W. Ostwald (who styled its author the "founder of chemical energetics") in 1891 and into French by H. le Chatelier in 1899.[6]
Gibbs's "Equilibrium" paper is considered one of the greatest achievements in physical science in the 19th century and one of the foundations of the science of physical chemistry.[2] In these papers Gibbs applied thermodynamics to the interpretation of physicochemical phenomena and showed the explanation and interrelationship of what had been known only as isolated, inexplicable facts.
Gibbs' papers on heterogeneous equilibria included:
• Some chemical potential concepts
• A Gibbsian ensemble ideal (basis of the statistical mechanics field)
• A phase rule
## Opening section
“ Die Energie der Welt ist konstant. ”
(The energy of the world is constant).
“ Die Entropie der Welt strebt einem Maximum zu. ”
(The entropy of the world tends to a maximum)
Clausius[7]
The comprehension of the laws which govern any material system is greatly facilitated by considering the energy and entropy of the system in the various states of which it is capable. As the difference of the values of the energy for any two states represents the combined amount of work and heat received or yielded by the system when it is brought from one state to the other, and the difference of entropy is the limit of all possible values of the integral:
$\int \frac{\delta Q}{T}$
in which dQ denotes the element of heat received from external sources, and T is the temperature of the part of the system receiving it, the varying values of energy and entropy characterize in all that is essential the effect producible by the system in passing from one state to another. For by mechanical and thermodynamic contrivances, supposedly theoretically perfect, any supply of work and heat may be transformed into any other which does not differ from it either in the amount of work and heat taken together or in the value of the integral:
$\int \frac{\delta Q}{T}$
But it is not only in respect to the external relations of a system that its energy and entropy are of predominant importance. As in the case of simple mechanical systems, such as are discussed in theoretical mechanics, which are capable of only one kind of action upon external systems, namely the performance of mechanical work, the function which expresses the capability of the system of this kind of action also plays the leading part in the theory of equilibrium, the condition of equilibrium being that the variation of this function shall vanish, so in a thermodynamic system, such as all material systems are, which is capable of two different kinds of action upon external systems, the two functions which express the twofold capabilities of the system afford an almost equally simple criterion for equilibrium.
## References
1. Ott, Bevan J.; Boerio-Goates, Juliana (2000). Chemical Thermodynamics – Principles and Applications. Academic Press. ISBN 0-12-530990-2.
2. Servos, John, W. (1990). Physical Chemistry from Ostwald to Pauling. Princeton University Press. ISBN 0-691-08566-8.
3. Bryson, Bill (2003). A Short History of Nearly Everything. Broadway Books. pp. 116–17,121. ISBN 0-7679-0818-X.
4. Gibbs, Willard, J. (1876). Transactions of the Connecticut Academy of Arts and Sciences, III, pp. 108-248, Oct. 1875-May 1876, and pp. 343-524, May 1877-July 1878.
5. Gibbs, J. Willard (1993). The Scientific Papers of J. Willard Gibbs - Volume One Thermodynamics. Ox Bow Press. ISBN 0-918024-77-3.
7. Clausius, R. (1865). The Mechanical Theory of Heat – with its Applications to the Steam Engine and to Physical Properties of Bodies. London: John van Voorst, 1 Paternoster Row. MDCCCLXVII.
http://electronics.stackexchange.com/questions/30053/why-voltage-lags-by-90-from-magnetic-flux | Why voltage lags by 90° from magnetic flux?
I was not sure if I should write it here or on the Physics board, but I will give it a try.
As I understand it, current (generally emf) can be generated in a system by varying magnetic flux.
For example, let's take an AC generator. A loop of wire rotates between N and S magnets.
$\Phi = B \times A \times \cos(\Omega)$, where $\Omega$ is the angle between the magnetic field $B$ and the normal of the loop of area $A$.
So the maximum flux is achieved when the normal of the loop is parallel to the magnetic field. But why is the emf at that moment 0? And why is the emf at a maximum when the flux is 0?
-
Will do, sir. Thank you. – Arturs Apr 16 '12 at 18:35
3 Answers
The rate of change of flux linkage is equal to the emf.
So your equation above should be differentiated with respect to time. In that case you will get a sine function. So that's the reason.
When you consider the flux linkage, note the additional N:
Flux Linkage $= B \times A \times N \times \cos(\Omega)$
If we take $\Omega$ as the angular velocity, then $\Omega \times t$ is the angle relative to the starting position. So differentiate with respect to $t$.
In the resulting sine wave you can clearly see that the rate of change of flux [the gradient of the flux] is at a maximum when the angle is 90 degrees. So that's the reason.
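A minimal symbolic version of this differentiation (symbol names chosen arbitrarily):

```python
from sympy import cos, diff, simplify, sin, symbols

B, A, N, w, t = symbols('B A N omega t', positive=True)

flux_linkage = N * B * A * cos(w * t)     # lambda(t) = N * B * A * cos(omega * t)
emf = -diff(flux_linkage, t)              # Faraday: emf = -d(lambda)/dt

print(emf)                                              # A*B*N*omega*sin(omega*t)
print(simplify(emf - N * B * A * w * sin(w * t)) == 0)  # True
# flux ~ cos(omega t) while emf ~ sin(omega t): a 90 degree shift, so the emf is largest
# exactly where the flux crosses zero, and zero where the flux peaks
```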
-
That was the missing piece! Thank you for the fast reply. – Arturs Apr 16 '12 at 18:34
Actually, that is incorrect. The maximum flux is the point where the magnetic field is flowing through the CENTER of the loop (i.e., pushing through the hole)
When talking about the normal of a loop we are talking about the normal of the surface (imaginary surface) area of the loop, which points NORMAL to the 2D plane of that surface area.
as for the EMF,
````EMF = -d(flux)/dt
````
Where the Flux = BA*cos(w*t) (remember w = Omega) (w*t = theta...lets call it theta mod 360)
````the emf = w*B*A*sin(w*t);
````
so when the Normal of the surface area and the magnetic field are parallel that means
````theta = (w*t mod 360) = 0
sin(theta) = 0
````
and the Flux is at a maximum because
````cos(theta) = 1 where theta = 0
````
From this you can see that when the Flux is at a peak, the abs(EMF) is at its smallest value. (I mention absolute value because the signal shifts between positive and negative of equal magnitude)
-
Really good description. Thank you Adam. – Arturs Apr 16 '12 at 18:52
please note that EMF=-d(flux linkage)/dt not just flux. – sandun dhammika Apr 16 '12 at 18:58
flux/flux-linkage are kind of interchangeable (or are used interchangeably). In most classes, Flux Linkage isn't mentioned unless you study electric drives. The best way to describe it is the TOTAL area that the magnetic field is cutting. So if you have N wire wraps CUTTING the magnetic field your flux become N*B*A*Cos(theta) – CyberMen Apr 16 '12 at 19:10
That 90º lag comes from the derivative with respect to time in Faraday's law of induction:
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad \text{(differential form)}$$
$$\oint_{\partial \Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Sigma} \mathbf{B} \cdot \mathrm{d}\mathbf{A} \qquad \text{(integral form)}$$
The left hand side of this last equation is the EMF. If those equations didn't have a derivative with respect to time, |EMF(t)| would be maximum when the apparent |B(t)| was maximum, but the (negative) derivative causes the sin(t) that there would be in EMF(t) to turn into a -cos(t), and that is where the 90º lag comes from.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9204501509666443, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/106706/approximation-of-a-non-holomorphic-function-closed | ## Approximation of a non-holomorphic function [closed]
Hi,
I am unfortunately very new to analysis and this may be a simple question-- if so, please forgive my ignorance.
The function $f(z) = z\bar{z}$ is not holomorphic (since it contains the $\bar{z}$ variable... this much I think I know...). This expression pops up in some of my work, and it is troublesome, because it prevents my functions from being analytic. If possible, I would love to replace this function with a holomorphic approximation of it, but I unfortunately do not know how to go about constructing such an approximation... Can someone provide me with a brief overview of what I might need to know?
Thanks!
-
Intuitively, the absolute value function is very non-holomorphic. A holomorphic function is locally conformal (at least where it has nonzero derivative), i.e. preserves angles locally, while the absolute value function sends the complex plane to the real line so sends all angles to $0$ or $\pi$. – Alex Becker Sep 9 at 3:52
So I doubt you'll have much luck trying to approximate it with a holomorphic function. – Alex Becker Sep 9 at 3:53
This question is not really appropriate for this site - try math.stackexchange.com instead. However, to sharpen Alex Becker's remarks, note that integrating any function holomorphic on the disc round the circle of radius 1/2 gives zero, while integrating $\overline{z}$ round the same circle gives a non-zero constant ($\pi i/2$). So any "holomorphic" approximation to $\overline{z}$ can't be a very good one. – Yemon Choi Sep 9 at 4:08
Not sure what you're doing, but trying to approximate $\bar{z}$ by a holomorphic function sounds like the wrong approach. You should reconsider what you're doing. – Deane Yang Sep 9 at 4:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9097951054573059, "perplexity_flag": "middle"} |
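A small numerical illustration of the point made in the comments above (the grid size and the sample holomorphic integrand $z^2$ are arbitrary choices):

```python
import numpy as np

n = 20000
theta = 2.0 * np.pi * np.arange(n) / n
z = 0.5 * np.exp(1j * theta)        # the circle |z| = 1/2
dz = 1j * z * (2.0 * np.pi / n)     # dz = i z dtheta

def contour_integral(f):
    return np.sum(f(z) * dz)

print(np.round(contour_integral(lambda w: w**2), 6))   # 0: holomorphic integrands vanish (Cauchy)
print(np.round(contour_integral(np.conj), 6))          # ~ 1.570796j = i*pi/2, decidedly non-zero
```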