url | text | metadata
---|---|---|
http://math.stackexchange.com/questions/70032/can-one-compare-negative-infinity-and-positive-infinity?answertab=active | # Can one compare negative infinity and positive infinity?
I have always wondered, is negative infinity less than positive infinity? Can I compare them?
-
3
You can declare $- \infty < x < \infty$ for all real numbers $x$. This gives a total order on $[-\infty,\infty]$. – Mark Schwarzmann Oct 5 '11 at 10:13
$+\infty>0 \Rightarrow \ln(+\infty)>\ln(0) \Rightarrow +\infty>-\infty$ .. end of proof :) – pedja Oct 5 '11 at 10:42
## 1 Answer
It depends what you mean by "infinity".
In standard analysis, $\infty$ appears mainly as a placeholder for "perform that otherwise limited computation as if the limit $+\infty$ was a large positive number, but don't stop at any point." For instance, $$\sum_{i=1}^\infty \frac1{i^2}$$ means (naïvely) "sum the numbers $\frac11,\frac14,\frac19,\ldots$ and just don't stop". In much the same way, we can stretch such a computation simply in both directions: $$\sum_{k=-\infty}^\infty 2^{-k^2} = \ldots + 2^{-4} + 2^{-1} + 2^0 + 2^{-1} + 2^{-4} + 2^{-9} + \ldots$$ Both of these sums are well-defined, while $\sum_{k=\infty}^\infty$ wouldn't make any sense.
When just using $\infty$ as such a placeholder, it's not really meaningful to compare anything to it. Still, it is common to write $$\sum_{i=1}^\infty \frac1{i^2} < \infty,$$ but this just means that the sum is well behaved and does not diverge to infinity as, for instance, $$\sum_{i=1}^\infty 5 = 5+5+\ldots \not< \infty$$ does. But this is not a comparison of mathematical objects.
On the other hand, it is possible to have $\infty$ as an actual mathematical object in its own right, an element of some set. Such a set is called a compactification of the real numbers. It can be defined either with one single infinity representing both $-\infty$ and $+\infty$ (as one equivalence class), in which case the result is homeomorphic to the unit circle, which does not have an ordering at all; or with distinct elements $-\infty$ and $+\infty$, in which case you do, in fact, have $-\infty<+\infty$ if you declare it to be so. That's what Mark Schwarzmann said in his comment.
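As a concrete aside (not part of the original answer), IEEE-754 floating-point arithmetic implements exactly this two-point extension $[-\infty,+\infty]$ with the declared total order $-\infty < x < +\infty$:

```python
import math

neg_inf, pos_inf = float("-inf"), float("inf")

for x in (-1e308, -1.0, 0.0, 1.0, 1e308):
    assert neg_inf < x < pos_inf       # every finite float sits strictly between the two infinities

assert neg_inf < pos_inf               # and the infinities themselves are comparable
print(sorted([pos_inf, 3.0, neg_inf, -2.5]))     # [-inf, -2.5, 3.0, inf]
print(math.isinf(pos_inf), math.isinf(neg_inf))  # True True
```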
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9560061693191528, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/105249/when-is-a-surjective-comodule-endomorphism-an-automorphism | ## When is a Surjective Comodule Endomorphism an Automorphism?
Given a Hopf algebra $H$, a left $H$-comodule $V$, and a surjective comodule endomorphism $f: V \to V$, can somebody give:
(i) a set of necessary, or sufficient, or both necessary and sufficient, conditions for $f$ to have zero kernel?
(ii) an example of such a comodule map with non-zero kernel?
Thanks in advance guys
-
2
For (ii), take $H=k$, your base field, $V$ any infinite dimensional $k$-vector space, with its obvious $H$-comodule structure, and $f$ any non-injective surjection – Mariano Suárez-Alvarez Aug 22 at 17:35
1
For (i), at the level of generality with which you wrote the question it is difficult to say something useful. One nice criterion for injectivity is that it is enough to check injectivity of the restriction of the map to the socle. – Mariano Suárez-Alvarez Aug 22 at 17:46
How do you define the scole of a comodule, as The sum of the minimal nonzero sub-comodules? – Dyke Acland Aug 22 at 18:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8463749885559082, "perplexity_flag": "middle"} |
http://mathoverflow.net/revisions/115363/list | ## Return to Answer
4 added 1 characters in body
It is possible to do that using the algorithm known as the Independent Metropolis-Hastings sampler without having to do any transformation and without having to compute the constant normalizing terms.
Assume you can sample from the density $q(x)\propto \exp(g(x))$ where $g(x)$ is a polynomial (say, you are sampling from the normal distribution) and your objective is to get a sample from the density $p(x)\propto \exp(f(x))$ where $f(x)$ is another polynomial (naturally both polynomials need to have even degree and a negative leading coefficient to ensure those densities exist). Then the algorithm will produce a Markov process whose invariant distribution is $p(x)$. Starting from some arbitrary initial value, say $X^0$, let's say that you have already $M$ draws. To get a new sample draw $X^{M+1}$, draw a random variable from $q$. Let's call that draw $X'$. Then accept or reject $X'$ with probability $$1 \wedge \exp[f(X')-f(X^M)+g(X^M)-g(X')]$$ If you have accepted, then set $X^{M+1}=X'$; otherwise set $X^{M+1}=X^M$.
Eventually that Markov process would have converged and you could consider say the last $N$ draws as such a sample from the desired distribution.
A very good introduction to that algorithm is section 7.4 of this excellent book
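A minimal Python sketch of the sampler described above. The proposal $q$ is taken to be the standard normal, and the target log-density $f(x) = -x^4 + 2x^2$ is an illustrative choice of mine, not something specified in the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: -x**4 + 2 * x**2   # log of the unnormalised target p(x) ∝ exp(f(x))
g = lambda x: -0.5 * x**2        # log of the proposal q = N(0, 1), up to a constant

M = 50_000
x = np.empty(M)
x[0] = 0.0                       # arbitrary starting value X^0
for m in range(1, M):
    x_prop = rng.standard_normal()                    # independent draw X' from q
    log_alpha = f(x_prop) - f(x[m - 1]) + g(x[m - 1]) - g(x_prop)
    if np.log(rng.uniform()) < min(0.0, log_alpha):   # accept with probability 1 ∧ exp(...)
        x[m] = x_prop
    else:
        x[m] = x[m - 1]

sample = x[M // 2:]              # keep the last draws once the chain has (roughly) converged
```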
Edit As indicated by the author of the question in a comment below, if the question is to transform a sample from one continuous distribution having density $q(x)$ into a sample from another continuous distribution having density $p(x)$, then this could be achieved in the following way.
Let $Q$ and $P$ be the CDFs associated with $q$ and $p$ respectively. Assume we have an i.i.d. sample $X_1$,...,$X_n$ from $q$; then $P^{-1}(Q(X_1))$,...,$P^{-1}(Q(X_n))$ is an i.i.d. sample from $p$ (this is straightforward to prove).
Now, the issue in your case is that $P$ and the quantile function $P^{-1}$ may not be known in closed form (even the normalizing constant of the density associated with $P$ may not be known in closed form). However, numerical integration and interpolation could work here.
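A hedged sketch of that numerical route, reusing the illustrative target above; the grid, and taking $q$ to be the standard normal, are my own assumptions:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.stats import norm

f = lambda x: -x**4 + 2 * x**2                  # unnormalised log-density of the target p
grid = np.linspace(-5, 5, 4001)                 # assumed wide enough to hold essentially all of p's mass
cdf = cumulative_trapezoid(np.exp(f(grid)), grid, initial=0.0)
cdf /= cdf[-1]                                  # the normalising constant is handled numerically

rng = np.random.default_rng(0)
x_q = rng.standard_normal(10_000)               # i.i.d. sample from q = N(0, 1), whose CDF Q is known
x_p = np.interp(norm.cdf(x_q), cdf, grid)       # P^{-1}(Q(X_i)): interpolate the numerical quantile function
```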
3 edited body; deleted 23 characters in body; added 66 characters in body
It is possible to do that using the algorithm known as the Independent Metropolis-Hastings sampler without having to do any transformation and without having to compute the constant normalizing terms.
Assume you can sample from the density $q(x)\propto \exp(g(x))$ where $g(x)$ is a polynomial (say, you are sampling from the normal distribution) and your objective is to get a sample from the density $p(x)\propto \exp(f(x))$ where $f(x)$ is another polynomial (naturally both polynomials need to have even degree and a negative leading coefficient to ensure those densities exist). Then the algorithm will produce a Markov process whose invariant distribution is $p(x)$. Starting from some arbitrary initial value, say $X^0$, let's say that you have already $M$ draws. To get a new sample draw $X^{M+1}$, draw a random variable from $q$. Let's call that draw $X'$. Then accept or reject $X'$ with probability $$1 \wedge \exp[f(X')-f(X^M)+g(X^M)-g(X')]$$ If you have accepted, then set $X^{M+1}=X'$; otherwise set $X^{M+1}=X^M$.
Eventually that Markov process would have converged and you could consider say the last $N$ draws as such a sample from the desired distribution.
A very good introduction to that algorithm is section 7.4 of this excellent book
Edit As indicated by the author of the question in a comment below, if the question is to transform a sample from one continuous distribution having density $q(x)$ into a sample from another continuous distribution having density $p(x)$, then this could be achieved in the following way.
Let $Q$ and $P$ be respectively the CDFs associated with $q$ and $p$. Assume we have an i.i.d. sample $X_1$,...,$X_n$ from $q$; then $P^{-1}(Q(X_1))$,...,$P^{-1}(Q(X_n))$ is an i.i.d. sample from $p$ (this is straightforward to prove).
Now, the issue in your case is that $P$ or the quantile function may not be known in closed form (even the normalizing constant of the density associated with it may not be known in closed form). However, numerical integration and interpolation could work here.
2 added 786 characters in body
It is possible to do that using the algorithm known as the Independent Metropolis-Hastings sampler without having to do any transformation and without having to compute the constant normalizing terms.
Assume you can sample from the density $q(x)\propto \exp(g(x))$ where $g(x)$ is a polynomial (say, you are sampling from the normal distribution) and your objective is to get a sample from the density $p(x)\propto \exp(f(x))$ where $f(x)$ is another polynomial (naturally both polynomials need to have even degree and a negative leading coefficient to ensure those densities exist). Then the algorithm will produce a Markov process whose invariant distribution is $p(x)$. Starting from some arbitrary initial value, say $X^0$, let's say that you have already $M$ draws. To get a new sample draw $X^{M+1}$, draw a random variable from $q$. Let's call that draw $X'$. Then accept or reject $X'$ with probability $$1 \wedge \exp[f(X')-f(X^M)+g(X^M)-g(X')]$$ If you have accepted, then set $X^{M+1}=X'$; otherwise set $X^{M+1}=X^M$.
Eventually that Markov process would have converged and you could consider say the last $N$ draws as such a sample from the desired distribution.
A very good introduction to that algorithm is section 7.4 of this excellent book
Edit As indicated by the author of the question in a comment below, if the question is to transform a sample from one continuous distribution having density $q(x)$ into a sample from another continuous distribution having density $p(x)$, then this could be achieved in the following way.
Let $Q$ and $P$ be respectively the CDFs associated with $q$ and $p$. Assume we have an i.i.d. sample $X_1$,...,$X_n$ from $q$, then $P(Q^{-1}(X_1))$,...,$P(Q^{-1}(X_n))$ is an i.i.d. sample from $p$ (this is straightforward to prove).
Now, the issue in your case is that $P$ may not be known in closed form (even the normalizing constant of the density associated with it may not be known in closed form). However, numerical integration could work here.
1
It is possible to do that using the algorithm known as the Independent Metropolis-Hastings sampler without having to do any transformation and without having to compute the constant normalizing terms.
Assume you can sample from the density $q(x)\propto \exp(g(x))$ where $g(x)$ is a polynomial (say, you are sampling from the normal distribution) and your objective is to get a sample from the density $p(x)\propto \exp(f(x))$ where $f(x)$ is another polynomial (naturally both polynomials need to have even degree and a negative leading coefficient to ensure those densities exist). Then the algorithm will produce a Markov process whose invariant distribution is $p(x)$. Starting from some arbitrary initial value, say $X^0$, let's say that you have already $M$ draws. To get a new sample draw $X^{M+1}$, draw a random variable from $q$. Let's call that draw $X'$. Then accept or reject $X'$ with probability $$1 \wedge \exp[f(X')-f(X^M)+g(X^M)-g(X')]$$ If you have accepted, then set $X^{M+1}=X'$; otherwise set $X^{M+1}=X^M$.
Eventually that Markov process would have converged and you could consider say the last $N$ draws as such a sample from the desired distribution.
A very good introduction to that algorithm is section 7.4 of this excellent book | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 98, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9581085443496704, "perplexity_flag": "head"} |
http://en.wikipedia.org/wiki/Row_vector | # Row vector
In linear algebra, a row vector or row matrix is a 1 × m matrix, i.e. a matrix consisting of a single row of m elements:[1]
$\mathbf x = \begin{bmatrix} x_1 & x_2 & \dots & x_m \end{bmatrix}.$
The transpose of a row vector is a column vector:
$\begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}.$
The set of all row vectors forms a vector space which acts like the dual space to the set of all column vectors, in the sense that any linear functional on the space of column vectors (i.e. any element of the dual space) can be represented uniquely as a dot product with a specific row vector.
## Notation
Row vectors are sometimes written using the following non-standard notation:
$\mathbf x = \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}.$
## Operations
• Matrix multiplication involves the action of multiplying each row vector of one matrix by each column vector of another matrix.
• The dot product of two vectors a and b is equivalent to multiplying the row vector representation of a by the column vector representation of b:
$\mathbf{a} \cdot \mathbf{b} = \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}.$
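A quick NumPy check of the row-times-column product above (the numerical values are arbitrary):

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0]])       # 1 x 3 row vector
b = np.array([[4.0], [5.0], [6.0]])   # 3 x 1 column vector

print(a @ b)                          # [[32.]] -- a row times a column gives a 1 x 1 matrix
print(a.T)                            # the transpose of the row vector is a column vector
print(np.dot([1, 2, 3], [4, 5, 6]))   # 32 -- the same dot product on flat arrays
```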
## Preferred input vectors for matrix transformations
Frequently a row vector presents itself for an operation within n-space expressed by an n by n matrix M:
v M = p.
Then p is also a row vector and may present to another n by n matrix Q:
p Q = t.
Conveniently, one can write t = p Q = v MQ telling us that the matrix product transformation MQ can take v directly to t. Continuing with row vectors, matrix transformations further reconfiguring n-space can be applied to the right of previous outputs.
In contrast, when a column vector is transformed to become another column under an n by n matrix action, the operation occurs to the left:
p = M v and t = Q p ,
leading to the algebraic expression QM v for the composed output from v input. The matrix transformations mount up to the left in this use of a column vector for input to matrix transformation. The natural bias to read left-to-right, as subsequent transformations are applied in linear algebra, stands against column vector inputs.
Nevertheless, using the transpose operation these differences between inputs of a row or column nature are resolved by an antihomomorphism between the groups arising on the two sides. The technical construction uses the dual space associated with a vector space to develop the transpose of a linear map.
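Both conventions, and the transpose antihomomorphism that links them, can be checked numerically; a small NumPy sketch with randomly chosen matrices (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal((1, 3))           # a row vector input
M = rng.standard_normal((3, 3))
Q = rng.standard_normal((3, 3))

t = (v @ M) @ Q                           # row convention: maps act on the right
assert np.allclose(t, v @ (M @ Q))        # the composite transformation is MQ, read left to right

# Column convention: the same maps act on the left through their transposes,
# and the composition order reverses -- the antihomomorphism (MQ)^T = Q^T M^T.
t_col = Q.T @ (M.T @ v.T)
assert np.allclose(t_col, (M @ Q).T @ v.T)
assert np.allclose(t_col, t.T)
print("row and column conventions agree up to transposition")
```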
For an instance where this row vector input convention has been used to good effect see Raiz Usmani,[2] where on page 106 the convention allows the statement "The product mapping ST of U into W [is given] by:
$\alpha (ST) = (\alpha S) T = \beta T = \gamma$."
(The Greek letters represent row vectors).
Ludwik Silberstein used row vectors for spacetime events; he applied Lorentz transformation matrices on the right in his Theory of Relativity in 1914 (see page 143). In 1963 when McGraw-Hill published Differential Geometry by Heinrich Guggenheimer of the University of Minnesota, he uses the row vector convention in chapter 5, "Introduction to transformation groups" (eqs. 7a,9b and 12 to 15). When H. S. M. Coxeter reviewed[3] Linear Geometry by Rafael Artzy, he wrote, "[Artzy] is to be congratulated on his choice of the 'left-to-right' convention, which enables him to regard a point as a row matrix instead of the clumsy column that many authors prefer."
## Notes
1. Raiz A. Usmani (1987) Applied Linear Algebra Marcel Dekker ISBN 0824776224. See Chapter 4: "Linear Transformations"
## References
See also: Linear algebra#Further reading
• Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0
• Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7
• Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8
• Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
• Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
• Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8494521975517273, "perplexity_flag": "middle"} |
http://en.wikipedia.org/wiki/Combinatorial_explosion | # Combinatorial explosion
For other uses, see Combinatorial explosion (communication).
In mathematics a combinatorial explosion describes the effect of functions that grow very rapidly as a result of combinatorial considerations.[1]
Examples of such functions include the factorial function and related functions. Pathological examples of combinatorial explosion include functions such as the Ackermann function.
## Example in computing
Combinatorial explosion can occur in computing environments in a way analogous to communications and multi-dimensional space. Imagine a simple system with only one variable, a boolean called A. The system has two possible states, A = true or A = false. Adding another boolean variable B will give the system four possible states, A = true and B = true, A = true and B = false, A = false and B = true, A = false and B = false. A system with n booleans has $2^n$ possible states, while a system of n variables each with Z allowed values (rather than just the 2 (true and false) of booleans) will have $Z^n$ possible states.
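For illustration, a short Python snippet that enumerates these state spaces (the particular values of n and Z are arbitrary):

```python
from itertools import product

n = 3
bool_states = list(product([True, False], repeat=n))
print(len(bool_states))                          # 2**n == 8 states for n booleans

Z, n = 5, 4
print(len(list(product(range(Z), repeat=n))))    # Z**n == 625 states for n variables with Z values each
```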
The possible states can be thought of as the leaf nodes of a tree of height n, where each node has Z children. This rapid increase of leaf nodes can be useful in areas like searching, since many results can be accessed without having to descend very far. It can also be a hindrance when manipulating such structures.
Consider a class hierarchy in an object-oriented language. The hierarchy can be thought of as a tree, with different types of object inheriting from their parents. If different classes need to be combined, such as in a comparison (like A < B) then the number of possible combinations which may occur explodes. If each type of comparison needs to be programmed then this soon becomes intractable for even small numbers of classes. Multiple inheritance can solve this, by allowing subclasses to have multiple parents, and thus a few parent classes can be considered rather than every child, without disrupting any existing hierarchy.
For example, imagine a hierarchy where different vegetables inherit from their ancestor species. Attempting to compare the tastiness of each vegetable with the others becomes intractable since the hierarchy only contains information about genetics and makes no mention of tastiness. However, instead of having to write comparisons for carrot/carrot, carrot/potato, carrot/sprout, potato/potato, potato/sprout, sprout/sprout, they can all inherit from a separate class of tasty whilst keeping their current ancestor-based hierarchy, then all of the above can be implemented with only a tasty/tasty comparison.
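A minimal Python sketch of this idea; the class and attribute names simply mirror the vegetables example and are not taken from any real code base:

```python
class Vegetable:
    """Ancestor-based hierarchy: knows about genetics, says nothing about taste."""

class Tasty:
    """Mixin supplying the single tasty/tasty comparison."""
    def __init__(self, tastiness):
        self.tastiness = tastiness
    def __lt__(self, other):
        return self.tastiness < other.tastiness

class Carrot(Vegetable, Tasty): pass
class Potato(Vegetable, Tasty): pass
class Sprout(Vegetable, Tasty): pass

print(Carrot(7) < Potato(5))   # False -- one comparison rule covers every pair of vegetables
```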
## Example in arithmetic
Suppose we take the factorial for n:
$n! = (n)(n-1)...(2)(1)$
Then 1! = 1, 2! = 2, 3! = 6, and 4! = 24. However, we quickly get to extremely large numbers, even for relatively small n. For example, $100! = 9.33262154 \times 10^{157}$, a number so large that it cannot be displayed on most calculators.
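A quick check with Python's arbitrary-precision integers, which agrees with the value quoted above:

```python
import math

for n in (1, 2, 3, 4, 10):
    print(n, math.factorial(n))        # 1, 2, 6, 24, 3628800

s = str(math.factorial(100))
print(len(s))                          # 158 digits
print(f"{s[0]}.{s[1:9]}e{len(s) - 1}") # 9.33262154e157, matching the value quoted above
```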
## References
1. Krippendorff, Klaus. "Combinatorial Explosion". Web Dictionary of Cybernetics and Systems. PRINCIPIA CYBERNETICA WEB. Retrieved 29 November 2010. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8977388739585876, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/23798/how-do-i-write-the-energy-of-a-constant-uniform-2d-charge-distribution | # How do I write the energy of a constant, uniform 2D charge distribution?
Let's consider a 2D electromagnetic field defined in a square domain $[0,\Lambda]^2$, with periodic boundary conditions, with a constant charge distribution, uniform all over the aforementioned domain:
$$\mathscr{L}=-\frac{1}{4} F_{\mu \nu} F^{\mu \nu} + j_\mu A^\mu$$ $$j_\mu=(0,k,k) \hspace{26pt} k \in \mathbb{R}$$
I have to calculate the contribution to the energy of the system given by the charge distribution, in the limit of small $k$.
I can't figure out how to write down the energy for the charge distribution in terms of $k$. I always end up with formulas involving the stress-energy tensor, which in turn depends upon $F_{\mu \nu}$, which in turn depends upon the actual dynamics of the system. My intuition, along with analogies with some similar problems, tells me that a simple (maybe approximate) elegant answer should be given in terms of $k^2$.
I have tried writing down the partition function for the system in the path integral formalism, performing the gaussian integral and evaluating the term $j_\mu (K^{-1})^{\mu \nu} j_\nu$, but that does not seem to help.
For more fun one can also consider the case where the photon has mass: $\mathscr{L}'=\mathscr{L}+m A_\mu A^\mu$ in the limit of $m \approx \Lambda$, so that Yukawa potential of the interaction for the gapful photon is still quite similar to the Coulomb one.
-
1
– David Zaslavsky♦ Apr 15 '12 at 22:31
David: well, my problem is that I can't write down the energy for the charge distribution in terms of $k$. I always end up with formulas involving the stress-energy tensor, which in turn depends upon $F_{\mu \nu}$, which in turn depends upon the actual dynamics of the system. My intuition, along with analogies with some similar problems, tells me that a simple (maybe approximate) elegant answer should be given in terms of $k^2$. – zakk Apr 15 '12 at 23:10
(That's not homework strictly speaking... I came up with this problem analyzing some other almost uncorrelated topic, I don't know about the etiquette here, anyway I have no problem tagging is as such!) – zakk Apr 15 '12 at 23:13
1
(2 comments up) OK, well, the content of that comment would be great to incorporate into your question! The best questions explicitly state what they're asking, like "How can I write down the energy in terms of $k$?" You could also put something like that in the question title. (1 comment up) Are you doing this problem because you need the answer, or to learn the method? In the latter case, the homework tag would be appropriate, even if it's not an actual homework assignment. It's not really a big deal; your question just sounded like it might be a HW question when I read it. – David Zaslavsky♦ Apr 15 '12 at 23:48
1
Great, that helps! I made one small change afterwards, hopefully you don't mind. – David Zaslavsky♦ Apr 16 '12 at 0:01
show 1 more comment | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436642527580261, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/98093/why-doesnt-induction-extend-to-infinity-re-fourier-series/98115 | # Why doesn't induction extend to infinity? (re: Fourier series)
While reading some things about analytic functions earlier tonight it came to my attention that Fourier series are not necessarily analytic. I used to think one could prove that they are analytic using induction
1. Let $P(n)$ be some statement parametrized by the natural number $n$ (in this case: the $n$th partial sum of the Fourier series is analytic)
2. Show that $P(0)$ is true
3. Show that $P(n-1)\Rightarrow P(n)$
4. (Invalid) conclusion: $P(n)$ continues to be true as we take the limit $n\to\infty$*
Why exactly is the conclusion not valid here? It seems very strange that even though $P(n)$ is true for any finite $n$, it ceases to be valid when I remove the explicit upper bound on $n$. Are there circumstances under which I can make an argument of this form?
Example of invalid proof: Define the truncated Fourier series $F_n(x)$ as the partial sum
$$F_n(x) = \sum_{k=0}^{n} A_k\sin\biggl(\frac{kx}{T}\biggr) + B_k\cos\biggl(\frac{kx}{T}\biggr)$$
where $A_k$ and $B_k$ are the Fourier coefficients for some arbitrary function $f$. Using the facts that $\sin(t)$ and $\cos(t)$ are analytic, and that any linear combination of analytic functions is analytic:
1. $P(n)$ is the statement "$F_n(x)$ is analytic"
2. $F_0(x)$ is clearly analytic because it is a linear combination of sine and cosine functions
3. $F_n(x)$ can be written as the linear combination
$$F_{n}(x) = F_{n-1}(x) + A_n\sin\biggl(\frac{nx}{T}\biggr) + B_n\cos\biggl(\frac{nx}{T}\biggr)$$
So if $F_{n-1}(x)$ is analytic, $F_n(x)$ is analytic.
4. $F(x) \equiv \lim_{n\to\infty} F_n(x)$ is analytic. But $F(x)$ is the Fourier series for $f$; therefore, the Fourier series for $f$ is analytic.
*I'm assuming that $P(n)$ is a statement about some sequence which is parametrized by $n$ and for which taking the limit as $n\to\infty$ is meaningful
-
What does $\lim_{n\to\infty}P(n)$ mean? Boolean statements do not form a metric space, at least not one that makes sense in this context. The problem is not with $\lim_{n\to\infty}P(n)$ (as this is not what you are looking at) but that just because every term in a sequence has a certain property (which you can prove by induction), we cannot assume that the limit has this property (nor do I see any reason to think we could). – Alex Becker Jan 11 '12 at 8:22
– Rajesh D Jan 11 '12 at 8:28
@Alex: yeah, I know I was being a little loose with the notation. By saying "$\lim_{n\to\infty}P(n)$ is true" I didn't literally mean taking the limit of the boolean statement, but rather that $P(n)$ continues to be true in the limit as $n$ goes to infinity. Of course $P(n)$ needs to be some sort of statement for which it is meaningful to take that limit. As for your last sentence, I think you've identified my issue: it seems very natural to me that if every term in a sequence has a property, then so does the limit, and I'm looking for an understanding of why that isn't true. – David Zaslavsky Jan 11 '12 at 8:46
@David: You may sort out the issue without taking any example, but I think reading about uniform and nonuniform convergence of function sequences will do a lot of good. – Rajesh D Jan 11 '12 at 9:07
## 3 Answers
Here is a quote from B. Russell's Introduction to mathematical philosophy, pages 27-28, that I think describes well this limitation of induction:
Mathematical induction affords, more than anything else, the essential characteristic by which the finite is distinguished from the infinite. The principle of mathematical induction might be stated popularly in some such form as "what can be inferred from next to next can be inferred from first to last." This is true when the number of intermediate steps between first and last is finite, not otherwise. Anyone who has ever watched a goods train beginning to move will have noticed how the impulse is communicated with a jerk from each truck to the next, until at last even the hindmost truck is in motion. When the train is very long, it is a very long time before the last truck moves. If the train were infinitely long, there would be an infinite succession of jerks, and the time would never come when the whole train would be in motion. Nevertheless, if there were a series of trucks no longer than the series of inductive numbers..., every truck would begin to move sooner or later if the engine persevered, though there would always be other trucks further back which had not yet begun to move.
There are contexts in which a statement $P(n)$ can be proved for all $n\in\mathbb N$ by induction, and has a counterpart $P(\infty)$ that is false. In other contexts, $P(\infty)$ may be true. But even then, induction on $\mathbb N$ does not prove the $P(\infty)$ case.
Getting back to limits of functions, note for example that:
• A finite sum of continuous functions is continuous.
• A pointwise convergent series of continuous functions need not be continuous.
• But, a uniformly convergent series of continuous functions is continuous.
So in this case, going from finite sums to infinite series requires new tools, different types of convergence, to obtain the desired properties. As for real analytic functions, I don't know what can be said along these lines. For complex analytic functions there are nicer results, such as the fact that a locally uniformly convergent sequence of complex analytic functions is complex analytic. In the real case, to give a stark contrast, every continuous function on a bounded interval is a uniform limit of polynomials (as analytic as you can get), but there are continuous functions that are differentiable nowhere. Similarly, a continuously differentiable function of period $2\pi$ is the uniform limit of its Fourier series, but continuously differentiable functions need not even be twice differentiable, let alone analytic.
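A tiny numerical illustration of the contrast in the bullet points above, using the standard example $f_n(x) = x^n$ on $[0,1]$ (my choice of example, not the answerer's):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)
for n in (1, 5, 50, 500):
    print(n, np.round(x ** n, 4))   # each x**n is continuous (indeed analytic) on [0, 1]

# The pointwise limit is 0 for x < 1 and 1 at x = 1 -- a discontinuous function,
# so the convergence cannot be uniform: continuity is not preserved by this limit.
```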
-
A trivial case where $P(n)$ is true for all $n\in\mathbf N$ but $P(\infty)$ is false is the statement "$n$ is finite".
-
1
+1; no, that is not the trivial case but the main point of induction. The assumption for induction is that the statement is true for the finite case. Once the statement is no longer about a finite case, the assumption is false and any result, true or false, can be derived. – Arjang Jan 11 '12 at 11:45
An example where "$P(n)$ for all $n$" does not imply $P(\infty)$, not from the realm of analysis:
Let $P(n)$ be "the set $\cup_{k=1}^n [\frac1k, 1]$ is closed." Clearly it is true for every positive integer $n$, since the union is $[\frac1n,1]$.
But $\cup_{k=1}^\infty [\frac1k, 1] = (0, 1]$ is not closed, so $P(\infty)$ is false.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9617719650268555, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/35655/measure-on-real-grassmannians | ## Measure on real Grassmannians
OK, so I'm reading about this nice measure you can define on a (real) Grassmannian on Wikipedia. Basically, and to save you the trip through the link, consider the Haar measure $\theta$ on $O(n)$, fix a space V in your Grassmannian. Then for any subset A, the measure of A is $$\gamma(A)=\theta(g\in O(n) \mid gV \in A).$$
Fair enough. Two things Wikipedia does not really tell me, though:
1. Where does this construction originate? I would imagine something like this to be fairly folklore, but it would sure be nice if someone has a reference.
2. Does anyone know of especially interesting applications of this measure? Since the Wikipedia article cruelly lacks context, I would really like to see this idea in action.
Thanks in advance!
-
## 4 Answers
A nice application is a Crofton-like formula for the codimension $k=\dim V$ submanifolds of $S^{n-1}\subset\mathbb{R}^n$. If $X$ is such a submanifold, then its $n-k-1$ dimensional measure is the average number of points of $V\cap X$, $V\in\mathrm{Gr}_{n,k}$, with respect to the said measure, multiplied by half the measure of $S^{n-k-1}$ (which has 2 intersection points for almost all $V$).
There also is an affine version with the grassmannian of $k$-dimensional affine subspaces of (euclidean) $\mathbb{R}^n$ and codimension $k$ submanifolds of $\mathbb{R}^n$ (the original Crofton formula is the case $n=2$, $k=1$).
Appropriate keywords would be integral geometry, and perhaps also geometric measure theory.
-
This measure is the unique $O(n)$ invariant measure on the Grassmannian up to a multiplication by a scalar. The following lecture note explains invariant measures on homogeneous spaces. One application is in harmonic analysis on homogeneous spaces, see for example: the following review article
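For concreteness, this unique invariant measure is straightforward to sample from: the column span of an $n\times k$ standard Gaussian matrix has an $O(n)$-invariant distribution, hence (by the uniqueness above) is distributed according to $\gamma$. A small NumPy sketch, with the dimensions chosen arbitrarily:

```python
import numpy as np

def random_plane(n, k, rng):
    """Draw a k-plane in R^n from the O(n)-invariant measure on the Grassmannian.

    The span of an n x k standard Gaussian matrix is invariant in distribution
    under O(n); orthonormalising it therefore samples the invariant measure.
    Returns an n x k matrix whose orthonormal columns span the plane.
    """
    g = rng.standard_normal((n, k))
    q, _ = np.linalg.qr(g)
    return q

rng = np.random.default_rng(0)
V = random_plane(4, 2, rng)
print(np.round(V.T @ V, 12))   # identity matrix: the columns are an orthonormal basis
```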
-
I am sorry but the first link is invalid. – Leonid Petrov Aug 15 2010 at 14:22
1
@Leonid Petrov: Just remove the trailing period from the linked URL in David Bar Moshe's comment, i.e. www-math.mit.edu/~dav/integration.ps – gspr Aug 15 2010 at 14:28
1
For one application, an explicit curve {C(t) : t in [0,oo)}, dense in a Grassmannian of 2-planes in n-space, is the basis for the animation technique in statistical computer graphics known as the Grand Tour. It's important to ensure that as t -> oo, the curve C spends time in any open set U proportional to the invariant measure* of U. * Though the invariant measure on a Grassmannian is unique up to a scalar multiple, the invariant metric is not in the sole case of 2-planes in 4-space. This oriented Grassmannian's metric is the product of two round 2-spheres whose radii may be in any ratio. – Daniel Asimov Aug 15 2010 at 15:45
Thanks Daniel. This sounds like a rather intriguing application, and I don't imagine I could have stumbled upon it by myself. – Thierry Zell Aug 15 2010 at 21:37
David: your lecture notes have been very enlightening. Thanks! – Thierry Zell Aug 16 2010 at 13:20
It's easy to describe the metric that gives rise to this measure: define a map from the Grassmannian of $k$-planes in $\mathbb{R}^n$ to the set of $n$ by $n$ matrices by associating to a $k$-plane $V$ the orthogonal projection $\pi_V$ onto $V$. This embeds the Grassmannian as a real algebraic subvariety of the space of $n$ by $n$ matrices (characterized as the set of symmetric matrices $\pi$ with trace $k$ such that $\pi^2=\pi$) and there is a natural choice of metric given by $d(V,W)=|\pi_V-\pi_W|$ where $|\cdot|$ denotes the sup norm on the space of $n$ by $n$ matrices. It follows from the definition that the metric is $O(n)$-invariant and therefore gives rise to an $O(n)$-invariant measure (up to scalars the one you ask about).
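A small NumPy sketch of this embedding, reading $|\cdot|$ as the operator norm (one natural interpretation of "sup norm" here); the two example planes are my own choice:

```python
import numpy as np

def projection(basis):
    """Orthogonal projection onto the span of the orthonormal columns of `basis`."""
    return basis @ basis.T

def grassmann_distance(V, W):
    """d(V, W) = |pi_V - pi_W| with the operator (spectral) norm."""
    return np.linalg.norm(projection(V) - projection(W), ord=2)

V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # the xy-plane in R^3
W = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # the xz-plane in R^3

P = projection(V)
print(np.allclose(P @ P, P), np.isclose(np.trace(P), 2))   # pi^2 = pi and trace = k
print(grassmann_distance(V, W))                            # 1.0
```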
-
First of all, a general fact. Any transitive homogeneous space $X$ of a compact group $K$ has a unique $K$-invariant probability measure. Existence: take the image $\nu$ of the normalized Haar measure $m$ on $K$ under the map $g\mapsto gx_0$ (that's the construction you refer to). The measure $\nu$ is well-defined, since the mass of $m$ is finite, it does not depend on $x_0\in X$ by right invariance of $m$, and is $K$-invariant by left invariance of $m$. Uniqueness: take an arbitrary $K$-invariant measure $\nu'$ on $X$, and consider its convolution $m\ast\nu'$ with the measure $m$ (i.e., the image of the product of $m$ and $\nu'$ under the map $(g,x)\mapsto gx$). Then, on one hand $m\ast\nu'=\nu'$ by $K$-invariance of $\nu'$, on the other hand $m\ast\nu'=\nu$ by the above construction.
Thus, since the Grassmannian in question has a transitive compact group of automorphisms, it carries a "natural" invariant measure. So that "platonically" it is always there - like, for instance, the Riemannian volume on a Riemannian manifold (by the way, as mentioned before, any invariant Riemannian metric on the Grassmannian produces the measure in question in this way).
However, there is one subtlety here which has so far remained unnoticed. In order to define the Grassmannian one needs a linear structure, whereas the orthogonal group $O(n)$ is defined in terms of the Euclidean structure. Therefore, the "canonical" measure we are talking about is only canonical with respect to the given Euclidean structure on the linear space $V=R^n$. So, if we look at the problem from this point of view, we obtain a map which assigns to any Euclidean structure on $V$ a probability measure on the Grassmannian $Gr_k(V)$. In fact, this measure depends only on the projective class of the Euclidean structure (i.e., on the corresponding similarity structure), which are parameterized by the Riemannian symmetric space $S=SL(n,R)/SO(n)$ (equivalently, one can say that we consider only the Euclidean structures on $V$ with the same volume form, whence $SL$ instead of $GL$).
Thus, we have a map $x\mapsto\nu_x$ from $S$ to the space $P(Gr_k)$ of probability measures on $Gr_k$. One can show that this map is an injection, so that it can be used in order to compactify the symmetric space $S$ by taking its closure in the weak$^*$ topology of $P(Gr_k)$. This is an example of a so-called Satake-Furstenberg compactification, which can be defined for an arbitrary non-compact Riemannian symmetric space. In the case of the space $S=SL(n,R)/SO(n)$ all such compactifications are obtained by considering rotation invariant measures on the flag space of $V$ and its equivariant quotients (in particular, Grassmannians). In the general case the role of the flag space is played by the so-called Furstenberg boundary, which is the quotient of the semi-simple Lie group by its minimal parabolic subgroup. The most recent reference for all this is the book by Borel and Ji.
The simplest non-compact symmetric space is the hyperbolic plane. In this case the Furstenberg boundary (the associated "flag space") is just the boundary circle in the disk model. Each point of the hyperbolic plane determines a unique probability measure on the boundary circle invariant with respect to the rotations around this point. These measures appear in the classical Poisson formula for bounded harmonic functions in the unit disk (usually it is written in terms of just a single measure corresponding to the Euclidean center of the disk; the other measures appear in the guise of their Radon-Nikodym derivatives with respect to this one, which is precisely the Poisson kernel).
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 78, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285386204719543, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/83/how-is-squeezed-light-produced | # How is squeezed light produced?
Ordinary laser light has equal uncertainty in phase and amplitude. When an otherwise perfect laser beam is incident onto a photodetector, the uncertainty in photon number will produce shot noise with poisson statistics.
However, laser light may be transformed into a 'squeezed state', where the uncertainty is no longer equally divided between the two quadratures, resulting in a reduction of shot noise. How is this done?
-
## 3 Answers
Squeezing of laser light generally involves a non-linear interaction, where the nature of the interaction depends on the intensity of the light that is present. An easy to understand example is frequency doubling, which takes two photons from a pump laser, and sends out one photon of twice the frequency.
You can think of the input beam as a stream of photons with some fluctuation in the "spacing" of the photons along the beam. That is, on average you will receive, say, one photon per some unit of time, but sometimes you get two, and sometimes none.
If you send this beam into a nonlinear crystal to do frequency doubling, the doubling will occur only in those instants when you get two photons in one unit of time. In that case, the two photons are removed from the original beam, and produce one photon in the frequency-doubled output beam.
If you look at the transmitted light left behind in the input beam, you will find lower fluctuations in the intensity, because all of the two-photon instants have been removed. Thus, the transmitted beam is "amplitude-squeezed." It's not quite as obvious that the frequency-doubled beam has lower intensity fluctuations, but it, too is amplitude squeezed, because you get photons only at the times when you had two photons in the original beam, and it's exceedingly unlikely that you would get two of those in very close succession (or four photons from the original beam in one instant). So you have a lower intensity in the doubled beam, and also lower fluctuations.
So, for example, your input beam might give the following sequence of photon numbers in one-unit time steps:
1112010112110120
The input beam after the doubling crystal will look like:
1110010110110100
and the doubled output beam will look like:
0001000001000010
Both of those have lower fluctuations than the initial state.
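A toy simulation of this cartoon picture — Poisson-distributed photon counts per time bin, with each pair in a bin converted into one frequency-doubled photon — shows that both output beams end up sub-Poissonian. The model and its parameters are illustrative assumptions only, not a description of real nonlinear optics:

```python
import numpy as np

rng = np.random.default_rng(0)
input_beam = rng.poisson(lam=1.0, size=100_000)   # photons per unit time in the pump beam

pairs = input_beam // 2                # bins holding at least two photons contribute a pair
transmitted = input_beam - 2 * pairs   # what is left behind in the original beam
doubled = pairs                        # photons emitted in the frequency-doubled beam

for name, beam in (("input", input_beam), ("transmitted", transmitted), ("doubled", doubled)):
    fano = beam.var() / beam.mean()    # Fano factor: 1 for Poissonian (shot-noise) statistics
    print(f"{name:12s} mean = {beam.mean():.3f}   var/mean = {fano:.3f}")
# The transmitted and doubled beams both give var/mean < 1, i.e. reduced intensity fluctuations.
```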
-
Squeezed light can be generated from light in a coherent state or vacuum state by using certain optical nonlinear interactions.
For example, an optical parametric amplifier with a vacuum input can generate a squeezed vacuum with a reduction in the noise of one quadrature components by the order of 10 dB. A lower degree of squeezing in bright amplitude-squeezed light can under some circumstances be obtained with frequency doubling. Squeezing can also arise from atom-light interactions.
References: http://www.squeezed-light.de/body.html#generation
-
Squeezing can be defined in terms of the ratio of the variances of a quadrature operator.
What does this mean?
Say you are working in the coherent state basis; now you choose to write the photon annihilation operator as a sum of two quadratures as follows: $$\hat{x}=(\hat{a}e^{-i\phi }+\mathrm{h.c.})/2, \qquad \hat{y}=(\hat{a}e^{-i\phi }-\mathrm{h.c.})/(2i)$$ Working in the Heisenberg picture, we can define squeezing to be the ratio of variances of each of these operators at different values of the parameter chosen. In harmonic generation, these operators are usually parametrized by propagation distance $\zeta$, i.e. you ask the question "What is the value of squeezing after the light fields propagate by $\zeta$?" You are free to parametrize in time as well.
The point is, your mathematical picture should have something to do with your experiment. In an experiment, $\langle\Delta\hat{x}^2\rangle,\langle\Delta\hat{y}^2\rangle$ take on the meaning of photon number squeezing and phase squeezing. My understanding is that this realization came about by experimental verification.
If you set the initial phase $\phi =0$, then you obtain a canonical decomposition of the $\hat{a}$ operator. For any other value of phase, you need to perform a heterodyne measurement to recover information about squeezing in both quadratures.
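A numerical sketch of these definitions in a truncated Fock space, comparing the vacuum with a squeezed vacuum generated by the squeeze operator $S=\exp[\tfrac{1}{2}(\xi^*\hat{a}^2-\xi\hat{a}^{\dagger 2})]$. The truncation size and the squeezing parameter $r$ are arbitrary choices, and this is only an illustration of the quadrature-variance definition, not a model of any particular experiment:

```python
import numpy as np
from scipy.linalg import expm

N = 60                                           # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # annihilation operator in the number basis
ad = a.conj().T

def quad_variances(psi, phi=0.0):
    x = (a * np.exp(-1j * phi) + ad * np.exp(1j * phi)) / 2
    y = (a * np.exp(-1j * phi) - ad * np.exp(1j * phi)) / 2j
    def var(op):
        mean = psi.conj() @ op @ psi
        return (psi.conj() @ (op @ op) @ psi - mean**2).real
    return var(x), var(y)

vac = np.zeros(N)
vac[0] = 1.0

r = 0.5                                          # squeezing parameter (real xi = r)
S = expm(0.5 * (r * a @ a - r * ad @ ad))        # squeeze operator
squeezed = S @ vac

print(quad_variances(vac))        # ~ (0.25, 0.25): equal uncertainty in both quadratures
print(quad_variances(squeezed))   # ~ (0.25*e^{-2r}, 0.25*e^{+2r}): noise moved between quadratures
print(np.exp(-2 * r) / 4, np.exp(2 * r) / 4)
```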
There are other interesting questions one can ask about invariants of this system, the experimental meaning behind rotating/changing basis, reconstructing the quantum state via the Wigner formalism, etc.
Hope this helps.
PS: Please bear in mind that this answer is based on my limited understanding. I'm sure somebody else can chime in with a more accurate/detailed answer.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9297526478767395, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/tagged/model-categories+homotopy-theory | # Tagged Questions
2answers
77 views
### Cylinder object in the model category of chain complexes
Let $\text{Ch}⁺(R)$ be the category of non-negative chain complexes of $R$-modules where $R$ is a commutative ring. What is a cylinder object, in the sense of model categories, for a given complex ...
0answers
102 views
### Closed model categories in the sense of Quillen [1969] vs the modern sense
The modern definition of (closed) model category differs in two ways from Quillen's 1969 definition: Model categories are now required to be complete and cocomplete, whereas Quillen only asked for ...
2answers
65 views
### The empty set in homotopy theoretic terms (as a simplicial set/top. space)
I am currently confused about the empty set in terms of its path components and how this fits into the Quillen adjunction between topological spaces and simplicial sets. Probably, one of my ...
0answers
95 views
### A fibrant-objects structure on $\bf Top$
One can define (Paragraph 1.5, page 10) a fibrant-object structure on a suitable cartesian closed category of topological spaces $\bf Top$, called the $\pi_0$-fibrant structure: A ...
1answer
92 views
### Why does the definition of homotopy cartesian involve factorisations
Setup: A diagram $\begin{matrix} X&{\rightarrow}&Y\\ \downarrow{}&&\downarrow{f}\\ U&{\rightarrow}&V \end{matrix}$ in a (proper) model category is called homotopy cartesian if ...
1answer
76 views
### Contractible homotopy fibre for CW complexes, categorial construction of the homotopy inverse
Let $f:X\to Y$ be a map of topological spaces. Assume further that the homotopy fibre is contractible. We get a long exact sequence on the homotopy groups and if $X$ and $Y$ are connected $f$ is a ...
2answers
148 views
### Kan fibrations and surjectivity
I have a basic question on the usual model structure on simplicial sets. What is the relation between being a Kan (trivial maybe ?) fibration and surjectivity ? Surjectivity here means either ...
1answer
134 views
### Do we implicitly consider model categories to be locally small?
Do we implicitly consider model categories to be locally small? I have the impression (but am not sure) that many references on model categories assume that all the categories are locally small, but ...
0answers
74 views
### How are injective model structures cofibrantly generated?
I have a question about the injective model structure on functor categories. As background : If $\mathcal{M}$ is a combinatorial model category and $\mathcal{C}$ is a small category, then there are ...
1answer
134 views
### The Notion of “A Homotopy Theory”
Sometimes (specifically in this case I'm looking at Charles Rezk's "A Model for the Homotopy Theory of Homotopy Theory") it seems that people refer to the homotopy category of a model category as a ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8881480097770691, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/167675/prove-that-there-exists-analytic-f-such-that-fz-1-barz-on-the-boundar | # Prove that there exists analytic $f$ such that $f(z) = 1/\bar{z}$ on the boundary
I'm doing some self-study in complex analysis, and came to the following question:
Let $D(a,1) \subset \mathbb{C}$ be the disk of radius $1$ with center at $a \in \mathbb{C}$, and let $\partial D(a,1)$ be the boundary of $D(a,1)$. Prove that $|a| <1$ if and only if there exists a function $f$ analytic on $D(a,1)$ and continuous up to $\partial D(a,1)$ such that $f(z) = 1/\bar{z}$ for $z \in \partial D(a,1)$.
I don't know what area of the theory to try to apply here. I know that $|a| <1$ iff $\; 0 \in D(a,1)$, and I know that the function $1/\bar{z}$ is nowhere analytic. However, I can't see how to get to the desired conclusion.
Thanks.
-
2
Note that $\bar z=\bar a + \overline{z-a}$, and on the boundary $z-a$ has unit length. Therefore $\overline{z-a}=\frac{1}{z-a}$ on the boundary (but only there). This should help you construct $f$ -- that is, the "only if" direction. For "if", one approach would be to combine the same construction with something like Cauchy's theorem. – Henning Makholm Jul 7 '12 at 1:08
## 1 Answer
If you know about harmonic functions, then you can use the maximum principle on the real and imaginary parts of $f(z) - {1 / \bar{z}}$ to show that if $|a| > 1$, then if such an $f(z)$ existed it would have to be ${1 / \bar{z}}$ for all $z$.
If $|a| < 1$, I'd use an explicit construction like Henning Makholm suggests.
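Following Henning Makholm's hint, one explicit candidate is $f(z) = \dfrac{z-a}{\bar{a}(z-a)+1}$, whose only pole lies at $z = a - 1/\bar{a}$, outside the closed disk exactly when $|a| < 1$. A quick numerical sanity check (the particular value of $a$ is an arbitrary choice of mine):

```python
import numpy as np

a = 0.3 + 0.4j                          # any a with |a| < 1

def f(z):
    # On |z - a| = 1 we have conj(z - a) = 1/(z - a), so
    # 1/conj(z) = 1/(conj(a) + conj(z - a)) = (z - a)/(conj(a)*(z - a) + 1).
    return (z - a) / (np.conj(a) * (z - a) + 1)

theta = np.linspace(0, 2 * np.pi, 7, endpoint=False)
z = a + np.exp(1j * theta)              # points on the boundary circle
print(np.max(np.abs(f(z) - 1 / np.conj(z))))   # ~1e-16: f agrees with 1/conj(z) on the boundary

pole = a - 1 / np.conj(a)               # the only zero of the denominator
print(abs(pole - a))                    # 1/|a| = 2.0 > 1, so f is analytic on the closed disk D(a, 1)
```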
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9469544887542725, "perplexity_flag": "head"} |
http://nrich.maths.org/7057/solution | # Weekly Challenge 33: Crazy Cannons
##### Stage: 5 Short Challenge Level:
Put the first cannon at the origin $(0, 0)$ and the second cannon at the point $(D, 0)$.
Using a constant acceleration of $-g$ in the $y$-direction and $0$ in the $x$-direction it is a simple matter to write down the positions of each cannon ball at a time $t> T$ if we make use of the formula $s=ut+\frac{1}{2}at^2$.
\begin{eqnarray}
(x_1, y_1)&=&\left(100\cos(45^\circ)t, 100\sin(45^\circ)t-\frac{1}{2}gt^2\right)\cr
(x_2, y_2)&=&\left(D-100\cos(30^\circ)(t-T), 100\sin(30^\circ)(t-T)-\frac{1}{2}g(t-T)^2\right)
\end{eqnarray}
As with all mechanics problems, the first part involves a careful setup of the equations. Once I have checked these carefully (... OK, that's done...) we can proceed with the algebra to resolve the equations.
Since I know that the two cannon balls strike each other the plan of attack is to equate the two $x$ and $y$ coordinates. I find that
$$50 \sqrt{2}t = D-50\sqrt{3}(t-T)$$
and
$$50\sqrt{2}t-5t^2=50(t-T)-5(t-T)^2$$
After some rearrangement, the second of these equations gives me
\begin{eqnarray}
\left(10(\sqrt{2}-1)-2T\right)t &=& -T^2-10T\cr
\Rightarrow t = \frac{T^2+10T}{2T-10(\sqrt{2}-1)}
\end{eqnarray}
For a collision to occur we must have $t> 0$, which implies that
$$T> 5(\sqrt{2}-1).$$
Thus, there is a minimum value of $T$ (which might be greater than $5(\sqrt{2}-1)$; it is not less than this value). Now, for a collision to occur in the air the $y$ coordinate at the point of collision must be positive. The expression for the first cannon ball quickly gives us the inequality
$$t< 10\sqrt{2}.$$
This gives us a more complicated inequality for $T$ as
$$\frac{T^2+10T}{2T-10(\sqrt{2}-1)}< 10\sqrt{2}$$
Rearranging we see that
$$T^2+10(1-2\sqrt{2})T+100(2-\sqrt{2})< 0$$
Values of $T$ which satisfy this equation are those lying between the two roots
$$T_{1, 2} = \frac{10(2\sqrt{2}-1)\pm\sqrt{\bigl(10(1-2\sqrt{2})\bigr)^2-4\cdot 100(2-\sqrt{2})}}{2}$$
Thus,
$$10(\sqrt{2}-1) < T< 10\sqrt{2}$$
I used a spreadsheet to plot the values of $D$ against $T$ over this permissible range (plot omitted).
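A quick numerical check of the algebra above in Python (assuming $g = 10\ \mathrm{m\,s^{-2}}$, consistent with the $5t^2$ terms):

```python
import numpy as np

sqrt2, sqrt3 = np.sqrt(2), np.sqrt(3)

def collision_time(T):
    """Time t at which the two heights agree, from the rearranged y-equation."""
    return (T**2 + 10 * T) / (2 * T - 10 * (sqrt2 - 1))

for T in (5.0, 8.0, 12.0):                       # values inside 10(sqrt(2)-1) < T < 10 sqrt(2)
    t = collision_time(T)
    D = 50 * sqrt2 * t + 50 * sqrt3 * (t - T)    # separation for which the x-coordinates also match
    y1 = 50 * sqrt2 * t - 5 * t**2               # height of the first cannon ball at time t
    y2 = 50 * (t - T) - 5 * (t - T) ** 2         # height of the second cannon ball at time t
    print(f"T = {T:5.1f}   t = {t:6.2f}   D = {D:8.1f}   y1 = {y1:7.2f}   y2 = {y2:7.2f}")
# y1 and y2 agree and stay positive, and t < 10*sqrt(2), confirming a mid-air collision for these T.
```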
http://mathoverflow.net/questions/23478/examples-of-common-false-beliefs-in-mathematics/26707 | ## Examples of common false beliefs in mathematics. [closed]
The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested about the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.
Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are
(i) a bounded entire function is constant; (ii) sin(z) is a bounded function; (iii) sin(z) is defined and analytic everywhere on C; (iv) sin(z) is not a constant function.
Obviously, it is (ii) that is false. I think probably many people visualize the extension of sin(z) to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.
A second example is the statement that an open dense subset U of R must be the whole of R. The "proof" of this statement is that every point x is arbitrarily close to a point u in U, so when you put a small neighbourhood about u it must contain x.
Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.
-
I have to say this is proving to be one of the more useful CW big-list questions on the site... – Qiaochu Yuan May 6 2010 at 0:55
The answers below are truly informative. Big thanks for your question. I have always loved your post here in MO and wordpress. – To be cont'd May 22 2010 at 9:04
wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? that would make for a highly educative and entertaining read. – S. Sra Sep 20 2010 at 12:39
It's a thought -- I might consider it. – gowers Oct 4 2010 at 20:13
Meta created meta.mathoverflow.net/discussion/1165/… – quid Oct 8 2011 at 14:27
## 169 Answers
The following false belief enjoyed a certain success in the '70. (See R.S.Palais, Critical point theory and the minimax principle for an account.)
A second countable, Hausdorff, Banach manifold is paracompact.
Regular is necessary, otherwise there are counterexamples!
-
By googling one sees that each of the following statements has a significant number of believers:
(1) the vector space {0} has no basis,
(2) the empty set is a basis of {0} by convention,
(3) the statements "{0} has no basis" and "the empty set is a basis of {0}" are equivalent,
(4) the statements "{0} has no basis" and "the empty set is a basis of {0}" are NOT equivalent,
(5) the statement "the empty set is a basis of {0}" is an immediate consequence of the definitions of the terms involved.
I think that we'll all agree that the 5 beliefs are not ALL true. My personal religion is to believe in (4) and (5). I don't think I'll ever understand the arguments in favor of (1), (2) or (3).
-
I feel like there are a lot of areas in mathematics in which the empty set is interpreted in a certain way (for example, the empty product is one, the empty sum is zero, the empty set has one map into any non-empty set, etc). Given each of these particular situations locally, I might agree that it is a convention in each case. However, given the ubiquity of such "conventions," one might think that there is a uniform description of what the empty set really "means" in these contexts. If this becomes the case, then I might argue for (5), which would follow from this conception of the empty set. – David Corwin Jul 7 2010 at 23:48
A common misbelief for the exponential of matrices is `$AB=BA \Leftrightarrow \exp(A)\exp(B) = \exp(A+B)$`. While the one direction is of course correct: `$AB=BA \Rightarrow \exp(A)\exp(B) = \exp(A+B)$`, the other direction is not correct, as the following example shows: `$A=\begin{pmatrix} 0 & 1 \\ 0 & 2\pi i\end{pmatrix}, B=\begin{pmatrix} 2 \pi i & 0 \\ 0 & -2\pi i\end{pmatrix} $` with `$AB \neq BA \text{ and} \exp(A)=\exp(B) = \exp(A+B) = 1$`.
-
A more elementary, and I would bet more common, mistake is to believe that exp(A+B)=exp(A) exp(B) with no hypotheses on A and B. – David Speyer Sep 27 2010 at 13:40
Related to the mistake mentionned by David, the fact that the solution of a vector ODE $x'(t)=A(t)x(t)$ should be $$\left(\exp\int_0^tA(s)ds\right)x(0).$$ – Denis Serre Oct 20 2010 at 10:31
A common belief of students in real analysis is that if $$\lim_{x\to x_0}f(x,y_0),\qquad\lim_{y\to y_0}f(x_0,y)$$ exist and are both equal to $l$, then the function has limit $l$ at $(x_0,y_0)$. It is easy to show counter-examples. It is more difficult to show that the belief $$\lim_{t\to 0}f(x_0+ht,y_0+kt)=l,\quad\forall\;(h,k)\neq(0,0)\quad\Rightarrow\quad\lim_{(x,y)\to(x_0,y_0)}f(x,y)=l$$ is also false. For completeness's sake (presumably anybody who ever taught calculus has seen it, but it's easily forgotten) the standard counterexample is $$f(x,y)=\frac{xy^2}{x^2+y^4}$$ at $(0,0)$.
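To spell out why the standard counterexample works (a quick check added here): along any straight line through the origin one has $$f(ht,kt)=\frac{ht\,(kt)^2}{(ht)^2+(kt)^4}=\frac{hk^2\,t}{h^2+k^4t^2}\xrightarrow[t\to0]{}0\quad(h\neq 0),$$ and $f(0,kt)=0$ identically, so every directional limit at the origin equals $0$; yet along the parabola $x=y^2$, $$f(y^2,y)=\frac{y^2\cdot y^2}{y^4+y^4}=\frac12\quad\text{for all }y\neq0,$$ so $f$ has no limit at $(0,0)$.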
-
As is well known, if $V$ is a vector space and $S, T \subset V$ are subspaces, then $S \cup T$ is a subspace iff $S \subset T$ or viceversa. However, $S \cup T \cup U$ can be a subspace even if no two spaces are contained in each other (think finite fields...)
-
But only finite fields... – darij grinberg Oct 19 2010 at 8:42
Draw the graph of a continuous function $f$ (from $\mathbb{R}$ to $\mathbb{R}$). Now draw two dashed curves: one which everywhere a distance $\epsilon$ above the graph of $f$ and one which is everywhere a distance $\epsilon$ below the graph of $f$. Then the open $\epsilon$-ball around $f$ (with respect to the uniform norm) is all functions which fit strictly between the two dashed curves.
-
Surely this is true if you are talking about the closed ball, and only just barely false for the open ball (and if we were talking about functions from $[a,b]$ to $\mathbb{R}$ it would be true)? Or else I am one of those with the false belief... – Nate Eldredge Oct 10 2010 at 18:26
By definition, an asymptote is a line that a curve keeps getting closer to but never touches. The teaching of this false belief at an elementary level is standard and nearly universal. Everybody "knows" that it is true. A tee-shirt has a clever joke about it. In the course of describing the function $f(x) = \dfrac{5x}{36 + x^2}$, I mentioned about an hour ago before a class of about 10 students that its value at 0 is 0 and that it has a horizontal asymptote at 0. One of them accused me of contradicting myself. What of $y = \dfrac{\sin x}{x}$? And even with simple rational functions there are exceptions, although there the curve can touch or cross the asymptote only finitely many times. And $3 - \dfrac{1}{x}$ gets closer to 5 as $x$ grows, and never reaches 5, so by the widespread false belief there would be a horizontal asymptote at 5.
-
For this to be a false definition, it would have to be a definition in the first place. And this means you have to define a "curve" first, and then define "get closer" and "touch". – Laurent Moret-Bailly Mar 6 2011 at 16:01
@Laurent: It's hard to imagine a comment more irrelevant to what happens in classrooms than yours. – Michael Hardy Mar 7 2011 at 4:38
It happens to be the literal meaning of the word asymptote "not together falling". You could say that it is a bad choice of name, but for hyperbolas it worked just fine and then it was mercilessly generalized. – thei Apr 8 2011 at 14:35
A sequence $\{a_n\}$ of rational numbers has a limit $A$ in $\mathbb{R}$ and a limit $B$ in $\mathbb{Q}_p$. Then $A$ is rational iff $B$ is rational.
-
Or: if a sequence has a rational limit in Q_p and in Q_r, then they're the same. – Qiaochu Yuan May 5 2010 at 4:16
There are cases that people know that a certain naive mathematical thought is incorrect but largely overestimate the amount by which it is incorrect. I remember hearing on the radio somebody explaining: "We make five experiments where the probability for success in every experiment is 10%. Now, a naive person will think that the probability that at least one of the experiment succeed is five times ten, 50%. But this is incorrect! the probability for success is not much larger than the 10% we started with."
Of course, the truth is much closer to 50% than to 10%.
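For the record, the exact figure behind that remark: with five independent experiments, $$P(\text{at least one success})=1-(1-0.1)^5=1-0.9^5=1-0.59049\approx 0.41,$$ which is indeed much closer to $50\%$ than to $10\%$.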
(Let me also mention that there are various common false beliefs about mathematical terms: NP stands for "not polynomial" [in fact it stands for "Nondeterministic Polynomial" time]; the word "Killing" in Killing form is an adjective [in fact it is based on the name of the mathematician "Wilhelm Killing"] etc.)
-
And the Killing field has nothing to do with Pol Pot. – Nate Eldredge May 5 2010 at 14:40
Unfortunately I often slip up in class and say that the Killing vector field $T$ kills the metric term (well, I use the verb kills when a differential operator hits something and makes it zero, because, you know, bad terms are always "the enemy"). I'm not sure how much damage I did to the students' impressions... – Willie Wong May 5 2010 at 17:19
"Kills" is one of those terms I hear mathematicians use surprisingly often. The other one is "this guy." I never really understood the prevalence of either. – Qiaochu Yuan May 6 2010 at 7:38
"Guy" is a pretty standard English colloquialism for "person"; combine this with humans' tendency to anthropomorphize and this usage is understandable. (Though we shouldn't anthropomorphize mathematical objects, because they hate that.) – Nate Eldredge May 6 2010 at 14:51
In the only lecture I saw by David Goss he started with "guy", quickly went to something like "uncanny fellow" and then stayed with "sucker" for most of the talk. I don't know what those poor Drinfeld modules had done to him the day before :-) – Peter Arndt May 19 2010 at 12:24
I just realized yesterday that, given $A \to C, B \to C$ in an abelian category, the kernel of $A \oplus B \to C$ is not the direct sum of the kernels of $A \to C, B \to C$.
-
"A 'random' number field has large class number"
I've heard this belief quite a few times. Usually random means taking a not-too-small degree (7?) and then somehow taking integer coefficients (around 10,000?).
But in fact class number tend to be much smaller than one expects. Usually they are logarithmic in the size of the discriminant.
The main reasons for the belief are the common examples of fields given in undergraduate and early graduate courses - imaginary quadratic fields and cyclotomic fields. In more advanced courses students see abelian extensions and CM-fields, which also have special arithmetic properties that make their class groups somewhat larger. In the courses I have taken the actual size of 'random' number fields was not addressed, and, say, the Cohen-Lenstra heuristics were not mentioned.
-
A common false belief is that all Gödel sentences are true because they say of themselves they are unprovable. See Peter Milne's "On Goedel Sentences and What They Say", Philosophia Mathematica (III) 15 (2007), 193–226. doi:10.1093/philmat/nkm015
-
Here's one from basic set theory. Let $k$ be a cardinal and consider the operation "adding $k$", meaning
$$l \mapsto k+l$$
on cardinals. We know that this operation "stabilizes" to the identity after $k$, that is, for any $l>k$, we have $l+k = l$. Similarly, the "multiplying by $k$" operation,
$$l \mapsto l \cdot k,$$
stabilizes to the identity after $k$.
Everyone also knows that if $l$ is an infinite cardinal then $l^2$ is equipotent to $l$, and more generally $l^n$ is equipotent to $l$ for every natural number $n$. I.e. all the finite power functions stabilize to the identity at $\omega$.
Well, obviously "exponentiation by $\omega$" also stabilizes at some point, right? Like, $l^\omega$ is equal to $l$ for sufficiently large $l$? Look, we probably already have the stabilization point at $2^\omega$.
Right?
-
Victor, I held this belief for a good while when first learning set theory. I tried proving it a couple of times and failed, but I was in that stage just after I'd gotten the hang of basic cardinality arguments and they all seemed simple, so I figured it was just a matter of small details. – Pietro KC Jun 10 2010 at 9:01
But it turns out that k^l is intimately linked with the cofinality of k, which is the length of the shortest unbounded sequence in k. For example, cof(omega) = omega, since sequences of length less than omega are finite, and thus bounded in omega. Similarly, cof(aleph_1) is aleph_1, since any countable sequence in aleph_1 is bounded. It's not immediately obvious that some cardinal k has cof(k) < k, but aleph_omega does! Anyway, the relevant theorem is that k^cof(k) > k, so there are arbitrarily large k s.t. k^omega > k. – Pietro KC Jun 10 2010 at 9:06
Before reading about it, I really thought that if $f \colon [0,1] \times [0,1] \to [0,1]$ is a function with the following properties:
1. for any $x \in [0,1]$ the function $f_x\colon [0,1] \to [0,1]$ defined by $f_x(y)=f(x,y)$ is Lebesgue measurable, and also the function $f^y \colon [0,1]\to[0,1]$ defined by $f^y(x)=f(x,y)$ is Lebesgue measurable, for all $y \in [0,1]$;
2. both $\varphi(x)=\int_0^1 f_x d\mu$ and $\psi(y)=\int_0^1 f^y d\mu$ are Lebesgue measurable.
Then the two iterated integrals $$\int_0^1\varphi(x)dx \mbox{ and } \int_0^1\psi(y)dy$$ should be equal. This is false (see Rudin's "Real and Complex Analysis", pag. 167), at least if you assume the continuum hypothesis.
-
I really like this example from Rudin's book. Do you know if there exist such an example that does not use the continuum hypothesis (or if it's even possible to find one)? – Malik Younsi Jul 28 2010 at 13:39
I don't know, but this could be a good questions for MO! – Ricky Jul 28 2010 at 14:28
Here's one I was reminded recently during lunch in the common room.
A maximal abelian subalgebra of a semisimple Lie algebra is a Cartan subalgebra.
This is true for compact real forms of semisimple Lie algebras, but fails in general. The missing condition is that the subalgebra should equal its normaliser.
-
Complex variables: "An entire function that is onto and locally one-to-one is globally one-to-one."
Counterexample: `$f(z) := \int_0^z \exp(\zeta^2)\,d\zeta$`
I'll leave the proof that this is indeed a counterexample as a pleasant exercise.
(I believe this example is due to Lawrence Zalcman.)
-
1- A very common mistake that first-year students (but not a single mathematician) make is to think that a transitive and symmetric relation on a set is reflexive. This is false: the empty relation is transitive and symmetric, but it is not reflexive on any non-empty set. Of course there are also lots of non-trivial examples.
2- Another common mistake is to believe that the statement "a countable union of countable sets is again countable" can be proved without the axiom of choice (AC). Many people give the proof of this statement without mentioning the axiom of choice. Indeed, in his holy book Algebra, Lang proves this statement just by choosing an enumeration of each countable set and continuing without mentioning AC.
-
For big-list questions, it's usually best to post independent answers as separate answers. – Nate Eldredge Dec 2 2010 at 15:13
+1 for #2. Baby Rudin is another offender. And many authors use so-called "diagonalization tricks" for proving compactness theorems like Arzela-Ascoli and Prohorov, which typically reduce to the compactness of $[0,1]^\mathbb{N}$. – Nate Eldredge Dec 2 2010 at 15:21
If $E$ is a contractible space on which the (Edit: topological) group $G$ acts freely, then $E/G$ is a classifying space for $G$.
A better, but still false, version:
If $E$ is a free, contractible $G$-space and the quotient map $E\to E/G$ admits local slices, then $E/G$ is a classifying space for $G$.
(Here "admits local slices" means that there's a covering of $E/G$ by open sets $U_i$ such that there exist continuous sections $U_i \to E$ of the quotient map.)
The simplest counterexample is: let $G^i$ denote $G$ with the indiscrete topology (Edit: and assume $G$ itself is not indiscrete). Then G acts on $G^i$ by translation and $G^i$ is contractible (for the same reason: any map into an indiscrete space is continuous). Since $G^i/G$ is a point, there's a (global) section, but it cannot be a classifying space for $G$ (unless $G={1}$). The way to correct things is to require that the translation map $E\times_{E/G} E \to G$, sending a pair $(e_1, e_2)$ to the unique $g\in G$ satisfying $ge_1 = e_2$, is actually continuous.
Of course the heart of the matter here is the corresponding false belief(s) regarding when the quotient map by a group action is a principal bundle.
-
(*) "Let $(I,\leq)$ be a directed ordered set, and $E=(f_{ij}:E_i\to E_j)_{i\geq j}$ be an inverse system of nonempty sets with surjective transition maps. Then the inverse limit `$\varprojlim_I\,E$` is nonempty."
This is true if $I=\mathbb{N}$ ("dependent choices"), and hence more generally if $I$ has a countable cofinal subset. But surprisingly (to me), those are the only sets $I$ for which (*) holds for every system $E$. (This is proved somewhere in Bourbaki's exercises, for instance).
Of course, other useful cases where (*) holds are when the $E_i$'s are finite, or more generally compact spaces with continuous transition maps.
-
It took me a bit too long to realize that these two beliefs are contradictory:
• Period 3 $\Rightarrow$ chaos: if a continuous self-map on the interval has a period-3 orbit, then it has orbits of all periods.
• The black dots on each horizontal slice of this picture above $x=a$ show the location of the periodic points of the logistic map $f_a(y) = ay(1-y)$:
You can clearly see a 3-cycle in the light area towards the right; yet we know that if there is a 3-cycle in that slice then there must be a cycle of any period in that slice... so where are they?
(The other cycles are there of course, but they are repelling and hence are not visible. You can see artifacts from these repelling cycles near the period-doubling bifurcations in this picture)
-
Here are mistakes I find surprisingly sharp people make about the weak$^{*}$ topology on the dual of $X,$ where $X$ is a Banach space.
-It is metrizable if $X$ is separable.
-It is locally compact by Banach-Alaoglu.
-The statement $X$ is weak`$^{*}$` dense in the double dual of $X$ proves that the unit ball of $X$ is weak$^{*}$ dense in the unit ball of the double dual of $X.$
The first two are in fact never true if $X$ is infinite dimensional. While both statements in the third claim are true, the second one is significantly stronger, but a lot of people believe you can get it from the first by just "rescaling the elements" to have norm $\leq 1.$ (Although the proof of the statements in the third claim is not hard). The difficulty is that if $X$ is infinite dimensional then for any $\phi$ in the dual of $X,$ there exists a net $\phi_{i}$ in the dual of $X$ with $\|\phi_{i}\|\to \infty$ and $\phi_{i}\to \phi$ weak$^{*},$ so this rescaling trick cannot be uniformly applied. Really these all boil down to the following false belief:
-The dual of $X$ has a non-empty norm bounded weak$^{*}$ open set.
Again when $X$ is infinite dimensional this always fails.
-
I think $M(T)$ is not metrizable in the weak$^{*}$ topology, and in fact my claim that this fails for every infinite dimensional Banach space i also think is true. The rough outline of the proof I saw was this: 1. If $X^{*}$ is weak$^{*}$ metrizable, then a first countabliity at the origin argument implies that $X^{*}$ has a translation invariant metric given the weak$^{*}$ topology. 2. One can characterize completeness topologically for translation-invariant metrics, and see directly that if $X^{*}$ had a translation-invariant metric given the weak$^{*}$ topology it would be complete. – Benjamin Hayes Oct 12 2011 at 3:42
The cost of multiplying two $n$-digit numbers is of order $n^2$ (because each digit of the first number has to be multiplied with each digit of the second number).
A lot of information is found on http://en.wikipedia.org/wiki/Multiplication_algorithm .
The first faster (and easily understandable) algorithm was http://en.wikipedia.org/wiki/Karatsuba_algorithm with complexity $n^{log_2 3} \sim n^{1.585}$.
Basic idea: to multiply two $n$-digit numbers written as $x_1x_2$ and $y_1y_2$ (where each letter denotes an $n/2$-digit block, so $x_1x_2$ stands for $x_1\cdot 10^{n/2}+x_2$), calculate $x_1 \cdot y_1$, $x_2\cdot y_2$ and $(x_1+x_2)\cdot(y_1+y_2)$, and note that these three products suffice to recover the result, instead of the four products $x_i\cdot y_j$.
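A minimal recursive sketch of this idea (my own illustration, for nonnegative integers split on decimal digits as in the description; not an optimized implementation):
```
def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers using three recursive half-size products."""
    if x < 10 or y < 10:                       # base case: a single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2     # number of low-order digits to split off
    x1, x2 = divmod(x, 10**m)                  # x = x1*10^m + x2
    y1, y2 = divmod(y, 10**m)                  # y = y1*10^m + y2
    a = karatsuba(x1, y1)                      # x1*y1
    b = karatsuba(x2, y2)                      # x2*y2
    c = karatsuba(x1 + x2, y1 + y2) - a - b    # equals x1*y2 + x2*y1, using one extra product
    return a * 10**(2*m) + c * 10**m + b

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```
Each call makes three recursive multiplications on half-size inputs, which is exactly where the $n^{\log_2 3}$ bound comes from.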
-
It would be better if these misconceptions would come with explanations how things really are... – darij grinberg Apr 10 2011 at 18:28
Along these lines: there is a widespread misapprehension that multiplication is the same thing as a multiplication algorithm (whichever one the speaker learned in elementary school). – Thierry Zell Apr 10 2011 at 19:25
At least it's better than people thinking multiplication is constant-time. :P – Harry Altman Apr 10 2011 at 19:35
"the quadratic variation of a Brownian motion between $0$ and $T$ is equal to $T$"
What is true is that if $\mathcal{D}^N$ is a nested sequence of partitions of $[0,T]$ (with mesh size going to $0$), then the quadratic variation of a Brownian motion along these partitions converges towards $T$, almost surely. If we define the quadratic variation of a continuous function $f$ as we would like to, $$Q(f,[0,T]) = \sup_{0=t_0<\cdots<t_n=T} \sum_k |f(t_{k+1})-f(t_k)|^2,$$ then the Brownian paths have almost surely infinite quadratic variation.
This was something I had never noticed until I read the wonderful book "Brownian motion" by Peter Morters and Yuval Peres.
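Here is a small added simulation of the first statement, using a discrete random-walk approximation of Brownian motion and computing the sum of squared increments along nested dyadic partitions:
```
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 2**20                        # time horizon and finest grid size
dW = rng.normal(0.0, np.sqrt(T / N), N)  # independent Brownian increments on the finest grid
W = np.concatenate(([0.0], np.cumsum(dW)))

for k in range(4, 21, 4):                # dyadic partitions with 2**k intervals
    step = N // 2**k
    incr = np.diff(W[::step])
    print(f"2^{k:2d} intervals: sum of squared increments = {np.sum(incr**2):.4f}")
```
The printed sums cluster around $T=1$, while the supremum over all partitions appearing in the displayed definition of $Q(f,[0,T])$ is almost surely infinite.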
-
As a student, I thought (for quite a while) that our textbook had stated that tensoring commutes with taking homology groups. It wasn't until calculating the homology groups of the real projective plane over rings Z and Z/2Z that I realized my mistake.
-
Two very common errors I see in (bad) statistics textbooks are
(i) zero 3rd moment implies symmetry (though generally stated in terms of "skewness", where skewness has just been defined as a scaled third moment)
(ii) the median lies between the mean and the mode
(I have seen a bunch of related errors as well.)
Another one I often see is some form of claim that the t-statistic goes to the t-distribution (with the usual degrees of freedom) in large samples from non-normal distributions.
Even if we take as given that the samples are drawn under conditions where the central limit theorem holds, this is not the case. I have even seen (flawed) informal arguments given for it.
What does happen is (given some form of the CLT applies) Slutzky's theorem implies that the t-statistic goes to a standard normal as the sample size goes to infinity, and of course the t-distribution also goes to the same thing in the limit - but so, for example, would a t-distribution with only half the degrees of freedom - and countless other things would as well.
The first two errors are readily demonstrated to be false by simple counterexample, and to convince people that they don't have the third usually only requires pointing out that the numerator and denominator of the t-statistic won't be independent if the distribution is non-normal, or any of several other issues, and they usually realize quite quickly that you can't just hand-wave this folk-theorem into existence.
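A hypothetical simulation sketch (not part of the original answer) of how far the t-statistic can be from the nominal $t_{n-1}$ reference when sampling from a skewed population at a moderate sample size:
```
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 10, 100_000
x = rng.exponential(scale=1.0, size=(reps, n))             # skewed population with mean 1
t = (x.mean(axis=1) - 1.0) / (x.std(axis=1, ddof=1) / np.sqrt(n))

q = stats.t.ppf(0.05, df=n - 1)                            # nominal 5% lower-tail cutoff
print("empirical P(T < t_{0.05, n-1}) =", np.mean(t < q))  # typically well away from 0.05
```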
-
Consider the following well-known result: Let $(E,\leq)$ be an ordered set. Then the following are equivalent: (i) Every nonempty subset of $E$ has a maximal element. (ii) Every increasing sequence in $E$ is stationary.
It is immediate that (i) implies (ii). To prove the converse, one assumes that (i) is false and then "constructs step by step" a strictly increasing sequence.
The common mistake (which I have seen in textbooks) is to describe the latter construction as a proof by induction. In fact, the construction uses the axiom of choice (or at least the dependent choice axiom).
(As a special case, I don't think ZF can prove that every PID is a UFD.)
-
"If a field $K$ has characteristic 0 and $G$ is a group, then all $KG$-modules are completely reducible."
True for finite groups but very false in general.
-
Inversion is an automorphism of a group. ('Cause it, like, preserves the conjugacy classes and all that...)
-
I don't know how common this is, but I've noticed it half an hour ago in some notes I had written: If $J$ is a finitely generated right ideal of a not necessarily commutative ring $R$, and $n$ is natural, then $J^n$ is finitely generated, isn't it?
No, it isn't. For an example, try $R=\mathbb Z\left\langle X_1,X_2,X_3,...\right\rangle$ (ring of noncommutative polynomials) and $J=X_1R$.
-
A degree $k$ map $S^n\to S^n$ induces multiplication by $k$ on all the homotopy groups $\pi_m(S^n)$.
(Not sure if this is a common error, but I believed it implicitly for a while and it confused me about some things. If you unravel what degree $k$ means and what multiplication by $k$ in $\pi_m$ means, there's no reason at all to expect this to be true, and indeed it is false in general. It is true in the stable range, since $S^n$ looks like $\Omega S^{n+1}$ in the stable range, "degree k" can be defined in terms of the H-space structure on $\Omega S^{n+1}$, and an Eckmann-Hilton argument applies.)
-
If $n$ is even and $x \in \pi_{2n-1}(S^n)$ and $f$ a degree $k$ map and $H$ the Hopf invariant, then $H(f_* (x)) = k^2 H(x)$. A related misbelief: if $M$ is a framed manifold and $N\to$M a finite cover, of degree $d$. Then the framed bordism classes satisfy $[N]=d [M]$. Completely wrong. – Johannes Ebert Apr 14 2011 at 9:04
http://mathhelpforum.com/trigonometry/52519-solved-verify-equation-identity.html | # Thread:
1. ## [SOLVED] Verify that the equation is an identity.
$Sin 4\alpha = 4 Sin\alpha Cos\alpha Cos 2\alpha$
2. Originally Posted by something3k
$Sin 4\alpha = 4 Sin\alpha Cos\alpha Cos 2\alpha$
start on the left hand side and use the double angle formula for sine: $\sin 2 \alpha = 2 \sin \alpha \cos \alpha$
note that $\sin4 \alpha = \sin [2(2 \alpha)]$
3. so would that equal $4 \sin\alpha \cos\alpha?$
4. Originally Posted by something3k
so would that equal $4 \sin\alpha \cos\alpha?$
no, follow the rule: sin(2A) = 2*sinA*cosA
5. Hello, something3k!
Identity: $2\!\cdot\!\sin\theta\!\cdot\!\cos\theta \:=\:\sin2\theta$
$\sin4\alpha \:= \:4\!\cdot\!\sin\alpha\!\cdot\!\cos\alpha\!\cdot\! \cos2\alpha$
Or start on the right side . . .
$4\!\cdot\!\sin\alpha\!\cdot\!\cos\alpha\!\cdot\!\cos2\alpha \;=\;2\cdot\underbrace{2\!\cdot\!\sin\alpha\!\cdot\!\cos\alpha}_{\text{This is }\sin2\alpha}\cdot\cos2\alpha$
$\phantom{4\!\cdot\!\sin\alpha\!\cdot\!\cos\alpha\!\cdot\!\cos2\alpha}\;=\; 2\!\cdot\!\sin2\alpha\!\cdot\!\cos2\alpha$
$\phantom{4\!\cdot\!\sin\alpha\!\cdot\!\cos\alpha\!\cdot\!\cos2\alpha}\;=\;\sin4\alpha$
6. ahh i seee, thanks a lot you guys i am thinking a little too hard.
http://en.wikipedia.org/wiki/Voltage_gain | # Gain
In electronics, gain is a measure of the ability of a circuit (often an amplifier) to increase the power or amplitude of a signal from the input to the output, by adding energy to the signal converted from some power supply. It is usually defined as the mean ratio of the signal output of a system to the signal input of the same system. It may also be defined on a logarithmic scale, in terms of the decimal logarithm of the same ratio ("dB gain"). A gain greater than one (zero dB), that is, amplification, is the defining property of an active component or circuit, while a passive circuit will have a gain of less than one.
Thus, the term gain on its own is ambiguous. For example, "a gain of five" may imply that either the voltage, current or the power(wattage) is increased by a factor of five, although most often this will mean a voltage gain of five for audio and general purpose amplifiers, especially operational amplifiers, but a power gain for radio frequency amplifiers. Furthermore, the term gain is also applied in systems such as sensors where the input and output have different units; in such cases the gain units must be specified, as in "5 microvolts per photon" for the responsivity of a photosensor. The "gain" of a bipolar transistor normally refers to forward current transfer ratio, either hFE ("Beta", the static ratio of Ic divided by Ib at some operating point), or sometimes hfe (the small-signal current gain, the slope of the graph of Ic against Ib at a point).
The term has slightly different meanings in two other fields. In antenna design, antenna gain is the ratio of power received by a directional antenna to power received by an isotropic antenna. In laser physics, gain may refer to the increment of power along the beam propagation direction in a gain medium, and its dimension is m−1 (inverse meter) or 1/meter.
## Logarithmic units and decibels
### Power gain
Power gain, in decibels (dB), is defined by the 10 log rule as follows:
$\text{Gain}=10 \log \left( {\frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}}\right)\ \mathrm{dB}$
where Pin and Pout are the input and output powers respectively.
A similar calculation can be done using a natural logarithm instead of a decimal logarithm, and without the factor of 10, resulting in nepers instead of decibels:
$\text{Gain} = \ln\left( {\frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}}\right)\, \mathrm{Np}$
### Voltage gain
When power gain is calculated using voltage instead of power, making the substitution ($P=V^2/R$), the formula is:
$\text{Gain}=10 \log{\frac{(\frac{{V_\mathrm{out}}^2}{R_\mathrm{out}})}{(\frac{{V_\mathrm{in}}^2}{R_\mathrm{in}})}}\ \mathrm{dB}$
In many cases, the input and output impedances are equal, so the above equation can be simplified to:
$\text{Gain}=10 \log \left( {\frac{V_\mathrm{out}}{V_\mathrm{in}}} \right)^2\ \mathrm{dB}$
and then the 20 log rule:
$\text{Gain}=20 \log \left( {\frac{V_\mathrm{out}}{V_\mathrm{in}}} \right)\ \mathrm{dB}$
This simplified formula is used to calculate a voltage gain in decibels, and is equivalent to a power gain only if the impedances at input and output are equal.
### Current gain
In the same way, when power gain is calculated using current instead of power, making the substitution ($P = I^2R$), the formula is:
$\text{Gain}=10 \log { \left( \frac { {I_\mathrm{out}}^2 R_\mathrm{out}} { {I_\mathrm{in}}^2 R_\mathrm{in} } \right) } \ \mathrm{dB}$
In many cases, the input and output impedances are equal, so the above equation can be simplified to:
$\text{Gain}=10 \log \left( {\frac{I_\mathrm{out}}{I_\mathrm{in}}} \right)^2\ \mathrm{dB}$
and then:
$\text{Gain}=20 \log \left( {\frac{I_\mathrm{out}}{I_\mathrm{in}}} \right)\ \mathrm{dB}$
This simplified formula is used to calculate a current gain in decibels, and is equivalent to the power gain only if the impedances at input and output are equal.
The "current gain" of a bipolar transistor, hFE or hfe, is normally given as a dimensionless number, the ratio of Ic to Ib (or slope of the Ic-versus-Ib graph, for hfe).
In the cases above, gain will be a dimensionless quantity, as it is the ratio of like units (Decibels are not used as units, but rather as a method of indicating a logarithmic relationship). In the bipolar transistor example it is the ratio of the output current to the input current, both measured in Amperes. In the case of other devices, the gain will have a value in SI units. Such is the case with the operational transconductance amplifier, which has an open-loop gain (transconductance) in Siemens (mhos), because the gain is a ratio of the output current to the input voltage.
### Example
Q. An amplifier has an input impedance of 50 ohms and drives a load of 50 ohms. When its input ($V_\mathrm{in}$) is 1 volt, its output ($V_\mathrm{out}$) is 10 volts. What is its voltage and power gain?
A. Voltage gain is simply:
$\frac{V_\mathrm{out}}{V_\mathrm{in}}=\frac{10}{1}=10\ \mathrm{V/V}.$
The units V/V are optional, but make it clear that this figure is a voltage gain and not a power gain. Using the expression for power, $P = V^2/R$, the power gain is:
$\frac{V_\mathrm{out}^2/50}{V_\mathrm{in}^2/50} = \frac{V_\mathrm{out}^2}{V_\mathrm{in}^2}=\frac{10^2}{1^2}=100\ \mathrm{W/W}.$
Again, the units W/W are optional. Power gain is more usually expressed in decibels, thus:
$G_{dB}=10 \log G_{W/W}=10 \log 100=10 \times 2=20\ \mathrm{dB}.$
A gain of factor 1 (equivalent to 0 dB) where both input and output are at the same voltage level and impedance is also known as unity gain.
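The arithmetic above as a small helper (an illustrative snippet; remember that the 20 log rule gives a power gain only when the input and output impedances are equal):
```
import math

def voltage_gain_db(v_in: float, v_out: float) -> float:
    """Voltage gain in dB via the 20 log rule."""
    return 20.0 * math.log10(v_out / v_in)

def power_gain_db(p_in: float, p_out: float) -> float:
    """Power gain in dB via the 10 log rule."""
    return 10.0 * math.log10(p_out / p_in)

print(voltage_gain_db(1.0, 10.0))                      # 20.0 dB, as in the worked example
print(power_gain_db(1.0**2 / 50.0, 10.0**2 / 50.0))    # also 20.0 dB, since the impedances match
```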
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C".
http://jdh.hamkins.org/maximalityprinciple/ | # A simple maximality principle
Posted on September 25, 2011 by Joel David Hamkins
• J. D. Hamkins, “A simple maximality principle,” J. Symbolic Logic, vol. 68, iss. 2, pp. 527-550, 2003.
````@article{Hamkins2003:MaximalityPrinciple,
AUTHOR = {Hamkins, Joel David},
TITLE = {A simple maximality principle},
JOURNAL = {J. Symbolic Logic},
FJOURNAL = {The Journal of Symbolic Logic},
VOLUME = {68},
YEAR = {2003},
NUMBER = {2},
PAGES = {527--550},
ISSN = {0022-4812},
CODEN = {JSYLA6},
MRCLASS = {03E35 (03E40)},
MRNUMBER = {1976589 (2005a:03094)},
MRREVIEWER = {Ralf-Dieter Schindler},
DOI = {10.2178/jsl/1052669062},
URL = {http://projecteuclid.org/getRecord?id=euclid.jsl/1052669062},
month = {June},
eprint = {math/0009240},
}````
In this paper, following an idea of Christophe Chalons, I propose a new kind of forcing axiom, the Maximality Principle, which asserts that any sentence $\varphi$ holding in some forcing extension $V^{\mathbb{P}}$ and all subsequent extensions $V^{\mathbb{P}*\mathbb{Q}}$ holds already in $V$. It follows, in fact, that such sentences must also hold in all forcing extensions of $V$. In modal terms, therefore, the Maximality Principle is expressed by the scheme $(\Diamond\square\varphi)\to\square\varphi$, and is equivalent to the modal theory S5. In this article, I prove that the Maximality Principle is relatively consistent with ZFC. A boldface version of the Maximality Principle, obtained by allowing real parameters to appear in $\varphi$, is equiconsistent with the scheme asserting that $V_\delta$ is an elementary substructure of $V$ for an inaccessible cardinal $\delta$, which in turn is equiconsistent with the scheme asserting that ORD is Mahlo. The strongest principle along these lines is the Necessary Maximality Principle, which asserts that the boldface MP holds in $V$ and all forcing extensions. From this, it follows that $0^\sharp$ exists, that $x^\sharp$ exists for every set $x$, that projective truth is invariant by forcing, that Woodin cardinals are consistent and much more. Many open questions remain.
http://mathoverflow.net/questions/91439?sort=newest | ## Two groups acting on a set.
Suppose we are given a set S of points on which two different groups G and G' (given by sets of generating permutations) act. Is there an efficient algorithm for finding generators of the largest pair of subgroups H and H' of G and G' whose actions on S coincide?
-
I don't understand this question. We have two different groups $H$ and $H'$ acting on the same set $S$. What does it mean for these actions to "coinicide"? – Steven Landsburg Mar 17 2012 at 2:38
@Steven: $H$ may coincide with $H'$ even if $G\ne G'$. In general $H$ and $H'$ can act with kernels $N, N'$ and the actions of $H/N$ and $H'/N'$ may coincide (i.e. these two permutation groups may be the same). – Mark Sapir Mar 17 2012 at 2:43
In fact the question asks for an algorithm of finding the intersection of two subgroups of $S_n$ (I assume the set to be finite). Each subgroup is given by generating permutations. I guess the problem is NP-hard (at least). – Mark Sapir Mar 17 2012 at 2:49
I couldnt understand why this question has a vote to close and then by accident my stubby finger hit to vote to close when I was checking the reason. Sorry. This is a good question. I vote to undo my accidental vote. – Benjamin Steinberg Mar 17 2012 at 16:59
Benjamin Steinberg: The vote to close was mine, for the reason given in my comment above. I see now that the question is both meaningful and interesting, though I continue to believe that the wording makes it unnecessarily obscure. – Steven Landsburg Mar 17 2012 at 19:38
## 2 Answers
This paper studies the (easier) problem of checking if the intersection of two subgroups of $S_n$ is trivial. In particular, it is shown that the graph isomorphism problem polynomially reduces to that problem. The converse reduction is not known. Thus the problem of checking that the intersection is trivial is at least as hard as the graph isomorphism problem. It is difficult but not known to be NP-hard.
Update Also look at this ICM talk by Babai.
-
It is known that the following three computational problems for subgroups $G$ of $S_n$ are polynomially equivalent:
1. Computing (generators of) the centralizer $C_G(g)$ of an element $g \in S_n$ (and also testing $g,h \in S_n$ for conjugacy in $G$).
2. Computing (generators of) the setwise stabilizer in $G$ of a subset of the set of size $n$ on which $S_n$ acts (and also testing two such subsets for being in the same orbit under $G$).
3. Computing (generators of) the intersection of $G$ with another subgroup $H \le S_n$.
As Mark says, these are all at least as difficult as graph isomorphism.
The proofs are clever but basically elementary and interesting, so I recommend them! One reference is:
E.M. Luks, "Permutation groups and polynomial-time computation", in L. Finkelstein and W.M. Kantor (eds.), Groups and Computation, DIMACS Series in Discrete Mathematics and Theoretical Computer Science vol. 11, American Math. Soc., 139-176, 1993.
I have just noticed that Luks has a recently published book with the same title, which I have not seen yet.
Added later: It should also be mentioned that the implementations of the above algorithms in GAP and Magma involve backtrack searches, and so are potentially exponential, but in practice they run fast for most examples of moderate degree.
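For very small degrees one can of course just enumerate. The following naive sketch (an addition of mine, exponential in general, so useful only as a sanity check against the library routines mentioned above) closes each generating set under composition and intersects the resulting element sets; encoding permutations of $\{0,\dots,n-1\}$ as tuples of images is a convention chosen here, not taken from the references.
```
def compose(p, q):
    """(p*q)(i) = p(q(i)) for permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(len(p)))

def closure(gens):
    """All elements of the subgroup generated by `gens` (breadth-first search)."""
    n = len(gens[0])
    elems = {tuple(range(n))}
    frontier = list(elems)
    while frontier:
        new = {compose(g, h) for g in gens for h in frontier}
        frontier = list(new - elems)
        elems.update(frontier)
    return elems

# two subgroups of S_4: G = <(0 1), (2 3)> and H = the Klein four-group of double transpositions
G = closure([(1, 0, 2, 3), (0, 1, 3, 2)])
H = closure([(1, 0, 3, 2), (2, 3, 0, 1)])
print(len(G), len(H), len(G & H))   # 4 4 2: the intersection is {identity, (0 1)(2 3)}
```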
http://mathhelpforum.com/calculus/144366-possible-values-complex-integral.html | # Thread:
1. ## Possible values for a complex integral
Hello,
How many possible values are there for the following integral-
$\oint_{C}\frac{dz}{(z-z_1)(z-z_2)..(z-z_n)}$
(I'm guessing the answer is not n, but don't see why)
Should also add that C is a closed contour that does not go through any of the points $z_i$
2. Looks like $2^n$ to me if the poles are simple and residues do not cancel one or more of each other.
3. Originally Posted by dudyu
Hello,
How many possible values are there for the following integral-
$\oint_{C}\frac{dz}{(z-z_1)(z-z_2)..(z-z_n)}$
(I'm guessing the answer is not n, but don't see why)
Should also add that C is a simple closed contour that does not go through any of the points $z_i$
Otherwise there are infinitely many possible values!
4. Thanks for the replies.
Well, assuming poles are simple and residues don't cancel each other, I take it there are 2 possible values per point $z_i$ . Why's that?
5. Just start drawing circles around each combination of poles, and include one circle that does not enclose any of them; keep in mind I would have gotten this one wrong, as per Bruno up there.
6. Got it. Thanks very much!
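An added remark, not part of the original thread, making the count explicit. Assume the $z_i$ are distinct, so that all poles are simple, and that $C$ is a simple closed contour. By the residue theorem, $$\oint_{C}\frac{dz}{(z-z_1)\cdots(z-z_n)}=2\pi i\sum_{z_j\ \text{inside }C}\;\prod_{k\neq j}\frac{1}{z_j-z_k},$$ so the value depends only on which subset of the poles lies inside $C$. There are $2^n$ subsets, hence at most $2^n$ possible values (fewer if different subsets happen to give the same sum), which is the count given above.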
http://mathoverflow.net/questions/85046/interesting-applications-of-martingale-brown-motion-diffusion-percolation-theo | ## Interesting applications of [Martingale/Brown motion/diffusion/percolation ] theory?
This question is motivated by an exercise called "The Star-ship Enterprise's Problem" in Williams's book "probability with martingales", it can be stated as follows:
Suppose the control system on the spaceship has gone wonky. All that one can do is to set a distance to be travelled. The spaceship will then move that distance in a randomly chosen direction, then stop. The object is to get into the Solar system, a ball of radius $r$. Initially, the spaceship is at a distance $R(>r)$ from the sun. It can be proven with the help of martingale theory that the probability
$$P(\text{the spaceship gets into Solar system })\leq r/R$$ You can find one proof here.
So I wonder: are there other examples in probability theory that are interesting enough (of course "interesting" is subjective), can be easily formulated and understood by ordinary people, and are also nice applications of martingale/Brownian motion/diffusion/percolation theory?
Here I add another well-known example: the equidistribution problem in number theory, which can be solved by ergodic theory. It has a nice formulation in terms of the reflection of a billiard ball on a table; see Hardy and Wright's book "An Introduction to the Theory of Numbers".
-
Brownian scaling follows from the fact that the square of a simple random walk minus the number of steps is a martingale. – Steve Huntsman Jan 6 2012 at 16:31
community wiki? – Kevin O'Bryant Jan 6 2012 at 19:28
Martingales, Brownian motion, diffusions and percolation are some of the major workhorses in contemporary probability theory. This question is essentially, "Interesting applications of probability theory?" which is absolutely too general for the site. I recommend you reformulate this into a much more precise question. What specific applications do you have in mind? – Tom LaGatta Jan 6 2012 at 22:30
@Tom LaGatta "...Any question of interest to a wide class of mathematicians (such as this one) has a right to be posted here. – Tom LaGatta Jan 15 2010 at 22:11" :):) – Alexander Chervov Jan 7 2012 at 17:28
The thing that makes the question puzzling is that while there are fairly natural connections between martingales, Brownian motion, and diffusions, percolation is really apparently unrelated to the others (which is not to say that, for example, one cannot find martingale techniques used in percolation theory...) – mfolz Jan 8 2012 at 8:32
## 3 Answers
There are some more examples in Williams' book; my favorite is the "abracadabra" problem, which I state like this.
Pick a random number in $[0,1)$ and look at its decimal expansion: the expected number of digits you need to examine before finding the first occurrence of "12183" is strictly less than the expected number needed to find "12381". Most everyone finds this surprising!
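A sketch of the martingale ("fair casino") computation behind this, added for completeness: for i.i.d. uniform decimal digits, the expected waiting time for a pattern equals the sum of $10^{\ell}$ over all lengths $\ell$ for which the length-$\ell$ prefix of the pattern equals its length-$\ell$ suffix. The pattern 12381 ends with the digit 1, which is also its first digit, so $$E[\tau_{12381}]=10^5+10^1=100010,\qquad E[\tau_{12183}]=10^5=100000,$$ and "12183" is indeed found sooner on average, though only slightly.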
-
In computer graphics to generate textures like mountains, clouds, one can use things similar to trajectories of Brownian motion. To my taste these pictures are quite nice:
http://www.gameprogrammer.com/fractal.html
Or search for the "RMD = random midpoint displacement" algorithm; there are plenty of pages on the web.
Going into more mathematical detail: in one dimension RMD can generate fractional Brownian bridges. However, the 2-dimensional process generated by RMD is not 2-d fractional Brownian motion (since it is NOT rotation invariant), though this may not be important.
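A minimal one-dimensional midpoint-displacement sketch (added here; the roughness parameter $H$ and the other names are illustrative choices, not taken from the linked page):
```
import numpy as np

def midpoint_displacement(levels: int, H: float = 0.5, seed: int = 0) -> np.ndarray:
    """1-D random midpoint displacement; returns 2**levels + 1 heights."""
    rng = np.random.default_rng(seed)
    pts = np.zeros(2)                        # the two endpoints, at height 0
    scale = 1.0
    for _ in range(levels):
        mid = (pts[:-1] + pts[1:]) / 2       # midpoints of the current segments
        mid += rng.normal(0.0, scale, mid.size)
        out = np.empty(pts.size + mid.size)
        out[0::2], out[1::2] = pts, mid      # interleave old points and displaced midpoints
        pts = out
        scale *= 2.0 ** (-H)                 # shrink the random displacement at each level
    return pts

profile = midpoint_displacement(levels=8)
print(profile.shape)                         # (257,) -- a jagged "mountain ridge" profile
```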
-
The Gambler's Ruin Problem is a nice motivator for martingale techniques (the wikipedia solution is really a martingale solution in disguise, but not totally rigorous -- it can be made so by using the Optional Stopping Theorem for martingales).
http://physics.stackexchange.com/questions/9523/far-field-intensity-from-scattering-of-small-particles/13294 | # Far-field intensity from scattering of small particles
Howdy, I'm building a simulation for looking at the light field underwater. In order to verify my simulation, I'm looking for some data showing the far-field intensity that comes from single scattering from many small particles in suspension. I suspect Mie theory plays a part here, but I'm having a hard time finding some results, rather than doing all the derivations myself.
In other words, I want to know the power distribution on a plane after a beam of light has been scattered by a bunch of small particles through a volume. I know Oregon Medical has a nice online simulation that produces scattering phase functions (http://omlc.ogi.edu/calc/mie_calc.html), but that doesn't give me the power on a plane - only the scattering profile from individual particles. I'm fine with only a single scattering result.
I want to do initial verification using a fixed particle size. Having a hard time finding a reference with this data. Help?
-
## 4 Answers
The main problem with a rigorous solution to such a scattering problem is that the computations are extremely demanding. Just imagine you have a wavelength $\lambda$ of some $400$ nm to $700$ nm for visible light (the spectrum image from the original answer is not reproduced here).
Now, to do physically meaningful simulations, you will need a sub-wavelength lattice, which makes any computational cell above, say, $10\,\mu \mathrm{m}^3$ inaccessible, since you would already have on the order of one million grid points.
## Approximative Approaches
But of course there can be ways out of it if you are willing to make some approximations which will largely depend on the characteristics of the particles you are looking at. It is best to assume that we only have spherical particles since we can apply Mie theory in this case.
### Large Particles
First of all, let us consider particles which are much larger than the wavelength. Then the radius $R$ times the wave vector $k=2\pi/\lambda$ is much bigger than one, $$kR\gg1,$$ which basically means that one observes reflection at a plane interface. You can implement these particles using geometrical optics (mixed with Fresnel reflection if you like), since nothing really wave-like will happen (the ray-optics illustration from the original answer is not reproduced here).
### Small Particles
Second, suppose the particles are much smaller than the wavelength, $$kR\ll1\,.$$ Then everything that is observed is a sum of dipolar responses of the particles, the so-called Rayleigh scattering. In this case,
the intensity of light scattered by a single small particle from a beam of unpolarized light of wavelength $\lambda$ and intensity $I_0$ is given by:
$$I=I_0(1+\cos^2\theta)\frac{(kR)^6}{2(kr)^2}\left(\frac{n_p^2-1}{n_p^2+2}\right)^2$$
where I have chosen the variables to be consistent with the terminology used here; $r$ is the distance to the object, $\theta$ is the scattering angle and $n_p$ is the sphere's refractive index. (The original answer includes an image of such a situation with some metal particles also having quadrupolar excitation.)
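The formula above as a small function (an added sketch in SI units; the incident intensity is normalized to $1$ and the example parameters are arbitrary):
```
import numpy as np

def rayleigh_intensity(I0, wavelength, radius, distance, n_p, theta):
    """Scattered intensity from a single small sphere (kR << 1), unpolarized incident light."""
    k = 2.0 * np.pi / wavelength
    lorentz = (n_p**2 - 1.0) / (n_p**2 + 2.0)
    return I0 * (1.0 + np.cos(theta)**2) * (k*radius)**6 / (2.0 * (k*distance)**2) * lorentz**2

# example: a 20 nm particle with n_p = 1.33, observed at 90 degrees from 1 m away
for lam in (450e-9, 650e-9):
    print(f"lambda = {lam*1e9:.0f} nm:", rayleigh_intensity(1.0, lam, 20e-9, 1.0, 1.33, np.pi/2))
```
The $k^4\propto\lambda^{-4}$ dependence is visible in the output: the blue value comes out roughly $(650/450)^4\approx 4$ times the red one.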
### A Mean Field Approach - Effective Permittivity
If you have a lot of these small objects, you may use the Clausius-Mossotti relation which gives you an effective permittivity $\epsilon_p=n_p^2$ depending on the concentration of the particle in some volume: $$\epsilon_{eff} = \epsilon_p + \frac{n\alpha}{1-\frac{n\alpha}{3\epsilon_p}}$$ where $\alpha$ is the polarizability of the sphere, for details see e.g. Electromagnetic mixing formulas and applications by Sihvola. This would be something like a mean-field approach. You can make some very neat effects using this effective approach since it allows you to calculate a continuous refraction around some particle streams under water.
However, if the particles size is in the order of the wavelength, $$kR\approx 1$$ then you may have to take higher multipole moments into account which may be a very demanding task.
For much more on the subject I would recommend Bohren & Huffman's classic Absorption and Scattering of Light by Small Particles.
Sincerely
-
Very nice answer. – Colin K Aug 8 '11 at 21:04
Thank you @Colin :) – Robert Filter Aug 9 '11 at 9:14
Excellent answer, @Robert. I posted an answer showing the experimental data I found and was ultimately able to use, but I'm choosing your response as the best answer. – gallamine Aug 10 '11 at 13:51
@gallamine: Thank you very much. You may also cross-check the information in the thesis to those given in this answer. There shouldn't be too much contradiction :) Greets – Robert Filter Aug 11 '11 at 8:50
Well, the Oregon Medical site does give you (almost) the power on a plane. There's a linear plot of magnitude vs. angle that you can convert to a plane using x=arctan(angle). On the other hand, calculating Mie scattering is rather simple. Just check Bohren & Huffman, "Absorption and Scattering of Light by Small Particles", where they give the explicit formulas and several approximations.
Anyway, I was wondering if what you're trying to do is right. Are you trying to calculate the transmission through clear, calm water? Because in that case, you are using an incorrect approach. Mie scattering is the electromagnetic solution for a single spherical (or elliptical) particle. It works ok if you have many particles, but not too many. If multiple scattering (waves that scatter on more than one particle) becomes important, it's not longer valid. I'm not sure what is the right approach for liquids or solids, but calculating individual particles isn't. Possibly mean-field theory or effective medium, depending on what are you exactly after.
-
I'll check out the book. For my simulation, I'm computing the power distribution on a plane from a laser propagating through scattering medium. Looking at the Oregon Medical site, it's unclear what the input light field looks like for their computations. Do you happen to know? I'm using Mie scattering to verify my Monte Carlo-type model. The actual simulation will be for underwater light scattering. I was trying to choose something easy to check against first. – gallamine May 24 '11 at 18:48
I was able to find experimental and simulated data for the plane intensity from multiple scattering of small (1, 5 and 10 $\mu$m) spheres in the thesis of Edouard Berrocal. His thesis can be downloaded here.
-
Seems an interesting work, especially the introduction of some dynamics into the system by an assumed random walk which lead to the Monte-Carlo approach. – Robert Filter Aug 11 '11 at 8:55
For particles that are small compared to the light wavelength, start with the Wikipedia article on Rayleigh scattering. Use a Monte Carlo method and search for: raytrace, photon mapping, rendering underwater, etc. In the simulation of light, a whole field of searching and reading is waiting for you.
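To give a flavour of what such a Monte Carlo simulation does under the hood, here is a toy single-photon random walk in Python (a bare-bones sketch: real renderers such as LuxRender use a proper Rayleigh or Henyey-Greenstein phase function and also track absorption, both of which are skipped here):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_direction():
    """Uniformly distributed direction on the unit sphere."""
    cos_t = 2.0 * rng.random() - 1.0
    phi = 2.0 * np.pi * rng.random()
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

def photon_walk(mu_s, n_scatter=50):
    """Toy photon random walk: free path lengths follow Beer-Lambert statistics
    with scattering coefficient mu_s (per metre); each scattering event is isotropic."""
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])        # photon launched along +z
    for _ in range(n_scatter):
        step = -np.log(1.0 - rng.random()) / mu_s   # exponential free path
        pos = pos + step * direction
        direction = random_direction()
    return pos

# mean distance from the source after 50 scattering events, with mu_s = 1 per metre
print(np.mean([np.linalg.norm(photon_walk(1.0)) for _ in range(1000)]))
```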
There are a lot of free packages; one that is unbiased, physically based, very complete and open source is LuxRender. Look there for the 'fog' and 'dust' topics.
Rendering the view from under water, looking up, is even more complicated because the waves make it much more difficult to simulate. Sometimes a fractal approach is used to render the wavy nature of the ocean surface. Rendering the atmosphere in some sunsets also needs Rayleigh scattering. There are also approaches that simulate fog, flames and dust with approximations that are easier to implement although not physically correct.
I think that you will not need to account for polarization (missing in LuxRender). Mie theory is for particles that are larger relative to the wavelength, spheroids like rain drops, etc.
The current version of LuxRender is able to use the graphics card computation power to boost the simulation (OpenCL in a decent card).
http://math.stackexchange.com/questions/207684/integrating-a-power-series?answertab=active | # Integrating a power series
I am not a pure mathematician, so I would appreciate some help from people who have done analysis! Can there be a function which is analytic (which I believe just means expressible as a power series) for which the term-by-term integral does not converge or somehow goes wrong? If so, what are the requirements on the function so that this does not happen?
Also, if we don't do the integration term by term, would it always be true that the integral is well-defined/exists wherever the power series of the function "works"?
-
## 1 Answer
A power series $\sum_i a_ix^i$ has radius of convergence at least $r$ whenever $a_nr^n$ has at most sub-exponential growth as $n\to\infty$, that is if $a_nr^n=o(\lambda^n)$ for every $\lambda>1$. Taking a formal primitive gives $\sum_{i>0} \frac{a_{i-1}}ix^i$, and it is easy to see that $\frac{a_{n-1}}nr^n=o(\lambda^n)$ if and only if $a_nr^n=o(\lambda^n)$, so the radius of convergence is unchanged by taking a primitive.
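For a concrete illustration (a standard example, not specific to the argument above): the geometric series $\sum_{i\ge 0} x^i$ has radius of convergence $1$, and its term-by-term primitive $\sum_{i\ge 1} x^i/i$ again has radius $1$, because the extra factor $\frac1i$ only changes the coefficients sub-exponentially. What can change is the behaviour on the boundary: the primitive converges at $x=-1$ while the original series does not.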
-
http://stats.stackexchange.com/questions/tagged/distributions+normality | # Tagged Questions
2answers
215 views
### Simple question about the asymptotics of estimators
Consider any arbitrary estimator called $\hat{M}$ (e.g., regression coefficient estimator or specific type of correlation estimator, etc) that satisfies the following asymptotic property: ...
2answers
454 views
### Can we see shape of normal curve somewhere in nature?
I do not want to know if some phenomena in nature have normal distribution, but whether we can somewhere see shape of normal curve as we can see it for example in Galton box. See this figure from ...
5answers
734 views
### How to determine whether data is slightly or extremely non-normally distributed?
I'm a PhD student doing research on regression analysis. My question is how to determine whether the data is slightly, moderately or extremely non-normally distributed?
3answers
239 views
### Is there any test for a null hypothesis of non-normality?
I'm currently looking for a test having for null hypothesis that the sample does not come from observing a normally distributed random variable. In other words, I'd like to know if there's a test ...
1answer
234 views
### What happens if you reject normality of residuals when estimating with least squares?
What happens if you reject normality of residuals when estimating with least squares? How important is it to have normality of the residuals?
4answers
3k views
### Interpretation of Shapiro test
I'm pretty new to statistics and I need your help. I have a small sample looking as follows: H4U 0.269 0.357 0.2 0.221 0.275 0.277 0.253 0.127 0.246 I run the Shapiro test ...
2answers
1k views
### Testing normality
I have a large dataset (500000 data, V1 column include all the data). x <- read.csv("mydata.csv", header=F) hist(x) Which gives: Looking at the data, I ...
3answers
150 views
### Approximating $Pr[n \leq X \leq m]$ for a discrete distribution
What's the best way to approximate $Pr[n \leq X \leq m]$ for two given integers $m,n$ when you know the mean $\mu$, variance $\sigma^2$, skewness $\gamma_1$ and excess kurtosis $\gamma_2$ of a ...
5answers
15k views
### How to perform a test using R to see if data follows normal distribution
I have a data set with following structure: a word | number of occurrence of a word in a document | a document id How can I perform a test for normal ...
5answers
3k views
### If the t-test and the ANOVA for two groups are equivalent, why aren't their assumptions equivalent?
I'm sure I've got this completely wrapped round my head, but I just can't figure it out. The t-test compares two normal distributions using the Z distribution. That's why there's an assumption of ...
9answers
1k views
### How do I figure out what kind of distribution represents this data on ping response times?
I've sampled a real-world process: network ping times. The "round-trip time" is measured in milliseconds. Results are plotted in a histogram: Ping times have a minimum value, but a long upper tail. ...
4answers
244 views
### Modeling success rate with gaussian distribution
In many papers I see data representing a rate of success (i.e a number between 0 and 1) modeled as a gaussian. This is clearly a sin (the range of variation of the gaussian is all of R), but how bad ...
1answer
242 views
### Approximating density function for a non-normal distribution
My question is actually quite short, but I'll have to start by describing the context since I am not sure how to directly ask it. Consider the following "game": We have a segment of length n ("large ...
1answer
12k views
### What is the difference between the Shapiro-Wilk test of normality and the Kolmogorov-Smirnov test of normality?
What is the difference between the Shapiro-Wilk test of normality and the Kolmogorov-Smirnov test of normality? When will results from these two methods differ?
7answers
4k views
### What is normality?
In many different statistical methods there is an "assumption of normality". What is "normality" and how do I know if there is normality?
http://physics.stackexchange.com/questions/tagged/research-level?page=2&sort=newest&pagesize=15 | # Tagged Questions
The research-level tag applies to questions that arise in graduate and post-secondary work. These questions often require domain-specific knowledge and could not be answered from a general source or may be beyond the level typically covered by Wikipedia and other popular sources. research-level ...
0answers
167 views
### Quasi 1D insulators with strong spin-orbital interaction
We know that the spin-1 chain realizes the Haldane phase which is an example of symmetry protected topological (SPT) phases (ie short-range entangled phases with symmetry). The Haldane phase is ...
2answers
445 views
### Majorana zero mode in quantum field theory
Recently, Majorana zero mode becomes very hot in condensed matter physics. I remember there was a lot of study of fermion zero mode in quantum field theory, where advanced math, such as index ...
10answers
1k views
### What is spontaneous symmetry breaking in QUANTUM systems?
Most descriptions of spontaneous symmetry breaking, even for spontaneous symmetry breaking in quantum systems, actually only give a classical picture. According to the classical picture, spontaneous ...
1answer
119 views
### How can you distinguish between projections of quantum states?
Consider this problem in quantum cryptography: We have two pure states $\phi_1,\phi_2$ as input and constants $0 \leq \alpha <\beta \leq 1$, where "Yes instances" are those for which ...
1answer
176 views
### The bijective correspondence between a symmetric polynomial and edge excitation of the fractional quantum hall droplet
I am recently reading Xiao-Gang Wen's paper (http://dao.mit.edu/~wen/pub/edgere.pdf) on edge excitation for fractional quantum hall effect. On page 25, he claimed that it is easy to show that there ...
2answers
280 views
### What causes a Phase-Transition
A phase transition occurs when, for example, heat is applied continuously to a liquid and after a certain time it converts into a gas. How does this process work in detail? Is there a chain reaction ...
1answer
585 views
### What is the relationship between string net theory and string / M-theory?
I've just learned from this one of Prof. Wen's answers that there exists a theory called string net theory. Since I've never heard about this before, it piques my curiosity, so I'd like to ask some ...
3answers
437 views
### Is there any quantum-gravity theory that has flat space-time and gravitons?
Many quantum-gravity theories are strongly interacting. It is not clear if they produce the gravity as we know it at low energies. So I wonder, is there any quantum-gravity theory that a) is a well ...
1answer
269 views
### Graphene Moebius Strip
I'm referring to the paper: PHYSICAL REVIEW B 80, 195310 (2009), "Möbius graphene strip as a topological insulator", Z. L. Guo, Z. R. Gong, H. Dong, and C. P. Sun. The paper is also available as a ...
0answers
135 views
### Toda equations and surface operator
I would like to know the reason why the equation (14) in the paper by Yamada is called the Toda equation. \begin{equation} \left[\frac12\sum_{i=1}^N\left(y_i\frac{\partial}{\partial ...
2answers
196 views
### Branch-point twist fields and operator insertions on a Riemann manifold
I am having trouble understanding how Eq (2.6) in this paper (PDF) $$Z[\mathcal{L},\mathcal{M}_{n}]\propto\langle\Phi(u,0)\tilde{\Phi}(v,0)\rangle_{\mathcal{L}^{(n)},\mathbb{R}^{2}}$$ generalizes to ...
1answer
110 views
### precise definition of “moduli space”
I'm curious what the precise definition of the moduli space of a QFT is. One often talks about the classical moduli space, which then can get quantum corrections. Does this mean the quantum moduli ...
2answers
189 views
### Poincare Symmetry in QFT
Given that spacetime is not affine Minkowskispace, it does of course not possess Poincare symmetry. It is still sensible to speak of rotations and translations (parallel transport), but instead of ...
0answers
109 views
### What is a Hilbert space filter?
In a recent paper, Side-Channel-Free Quantum Key Distribution, by Samuel L. Braunstein and Stefano Pirandola. Phys. Rev. Lett. 108, 130502 (2012). doi:10.1103/PhysRevLett.108.130502, ...
2answers
91 views
### fitting free QFTs into the Haag-Kastler algebraic formulation
Has the free Klein-Gordon quantum field theory been fitted into the Haag-Kastler algebraic framework? (Actually, John Baez told me "yes", and he should know.) If so, can you describe the basic ...
1answer
158 views
### partial trace with sparse matrices
Let $\rho_{ABCD}$ be a sparse matrix of 4 systems, each in a $d$-dimensional Hilbert space. For $d<7$ I am able to perform the partial trace $\rho_{AD}$ in a reasonable time (a few seconds) using the ...
3answers
88 views
### QED as a Wightman theory of observable fields? With a collision theory?
[Note: I'm using QED as a simple example, despite having heard that it is unlikely to exist. I'm happy to confine the question to perturbation theory.] The quantized Aᵘ and ψ fields are non-unique ...
3answers
99 views
### What is the physical difference between states and unital completely positive maps?
Mathematically, completely positive maps on C*-algebras generalize positive linear functionals in that every positive linear functional on a C*-algebra $A$ is a completely positive map of $A$ into ...
0answers
55 views
### Why/When can the gauge superfield and/or chiral superfield kinetic term in $(2,2)$ SUSY be ignored?
This is in reference to the argument given towards the end of page $61$ of this review paper. There for the path-integral argument to work the author clearly needed some argument to be able to ignore ...
2answers
117 views
### Could motives aid in the study of the Navier-Stokes equations?
Recently, mathematicians and theoretical physicists have been studying Quantum Field Theory (and renormalization in particular) by means of abstract geometrical objects called motives. Amongst these ...
1answer
286 views
### Geometric picture behind quantum expanders
A $(d,\lambda)$-quantum expander is a distribution $\nu$ over the unitary group $\mathcal{U}(d)$ with the property that: a) $|\mathrm{supp} \ \nu| =d$, b) $\Vert \mathbb{E}_{U \sim \nu} U \otimes ...
1answer
257 views
### About the definition/motivation/properties of the twisted chiral superfield in ${\cal N}=2$ theories in $1+1$ dimensions
The following is in the context of the ${\cal N}=2$ supersymmetry in $1+1$ dimensions - which is probably generically constructed as a reduction from the ${\cal N}=1$ case in $3+1$ dimensions. In ...
0answers
97 views
### From vertex function to anomalous dimension
In a $d$ dimensional space-time, how does one argue that the mass dimension of the $n-$point vertex function is $D = d + n(1-\frac{d}{2})$? Why is the following equality assumed or does one prove ...
1answer
49 views
### States diagonal in the tensor product of Bell states.
Bell-diagonal states are 2-qubit states that are diagonal in the Bell basis. Since those states lie in $\mathbb{C}^{2} \otimes \mathbb{C}^{2}$, the Peres-Horodecki criterion is a sufficient condition ...
1answer
68 views
### Asymptotic Completeness, generalized free fields, and the relationship of thermodynamics with infinity
Asymptotic completeness is a strong constraint on quantum field theories that rules out generalized free fields, which otherwise satisfy the Wightman axioms. If we were to take a limit of a list of ...
2answers
56 views
### Heuristics for definitions of open and closed quantum dynamics
I've been reading some of the literature on "open quantum systems" and it looks like the following physical interpretations are made: Reversible dynamics of a closed quantum system are represented ...
1answer
22 views
### Time Evolution of a Manifold Embedding
Given a smooth manifold $\mathcal{M}$ with a simplicial complex embedding $\mathsf{S}$, what specific tools or methods can be used to give an analysis of the time evolution of the manifold given some ...
1answer
29 views
### Gravitating sigma models
I am looking for a review or book on sigma models in (super)gravity theories, which arise from dimensional reduction.
1answer
74 views
### $\pm$ (light-cone?) notation in supersymmetry
I would like to know what is exactly meant when one writes $\theta^{\pm}, \bar{\theta}^\pm, Q_{\pm},\bar{Q}_{\pm},D_{\pm},\bar{D}_{\pm}$. {..I typically encounter this notation in literature on ...
1answer
39 views
### Spectrum of Free Strings
As far as I understand, both in bosonic and superstring theory one considers initially a free string propagating through D-dimensional Minkowskispace. Regardless of what quantization one uses, at the ...
1answer
72 views
### What is the connection between extra dimensions in Kaluza-Klein type theories and those in string theories?
This follows to some extent from a question I asked previously about the flaws of Kaluza-Klein theories. It appears to me that Kaluza-Klein theories attach additional dimensions to spacetime that are ...
2answers
73 views
### Gauge invariant scalar potentials
If $\Phi$ is a multi-component scalar field which is transforming in some representation of a gauge group say $G$ then how general a proof can one give to argue that the potential can only be a ...
2answers
102 views
### Why aren't the spin-3/2 fields in the (3/2,0)+(0,3/2) representation?
Why is it that spin-$\frac 32$ fields are usually described to be in the $(\frac 12, \frac 12)\otimes[(\frac 12,0)\oplus(0,\frac 12)]$ representation (Rarita-Schwinger) rather than the $(\frac ...
1answer
201 views
### A certain gluon scattering amplitude
I am stuck with this process of calculating the tree-level scattering amplitude of two positive helicity (+) gluons of momentum say $p_1$ and $p_2$ scattering into two gluons of negative (-) helicity ...
1answer
184 views
### Gauge invariance and the form of the Rarita-Schwinger action
In Weinberg Vol. I, section 5.9 (in particular p. 251 and surrounding discussion), it is explained that the smallest-dimension field operator for a massless particle of spin-1 takes the form of a field ...
1answer
25 views
### Low-energy gluodynamics as a string
Does anyone know of a (most likely heuristic) derivation of the use of the string sigma model action to model the soft gluonic interactions between color charges? I'm familiar with the classic ...
1answer
40 views
### Functional relations for Kochen-Specker proofs
Many proofs of the Kochen-Specker theorem use some form of the following argument (from Mermin's "Simple Unified Form for the major No-Hidden-Variables Theorems" ) [I]f some functional relation ...
6answers
105 views
### Multiqubit state tomography by performing measurement in the same basis
For a $n$-qubit state $\rho$ we perform all projective measurement consisting of one-particle measurements in the same basis, that is, p_{i_1i_2\ldots i_n}(\theta,\varphi) = \text{Tr}\left \{ \rho ...
1answer
109 views
### Monte Carlo integration over space of quantum states
I am currently facing the problem of calculating integrals that take the general form $\int_{R} P(\sigma)d\sigma$ where $P(\sigma)$ is a probability density over the space of mixed quantum states, ...
2answers
163 views
### Equivalence of definitions of ADM Mass
ADM Mass is a useful measure of a system. It is often defined (Wald 293) $$M_{ADM}=\frac{1}{16\pi} \lim_{r \to \infty} \oint_{s_r} (h_{\mu\nu,\mu}-h_{\mu\mu,\nu})N^{\nu} dA$$ Where $s_r$ is two ...
1answer
452 views
### Entanglement in time
Quantum entanglement links particles through time, according to this study that received some publicity last year: New Type Of Entanglement Allows 'Teleportation in Time,' Say Physicists at The ...
0answers
131 views
### Magnetic monopole and electromagnetic field quantization procedure
From the Maxwell's equations point of view, existence of magnetic monopole leads to unsuitability of the introduction of vector potential as $\vec B = \operatorname{rot}\vec A$. As a result, it was ...
1answer
65 views
### Some more questions about the BCFW reduction
This question is a continuation of this previous question of mine and I am continuing with the same notation. One claims that one can actually split this $n$-gluon amplitude such that there is just ...
1answer
34 views
### Tip of a spreading wave-packet: asymptotics beyond all orders of a saddle point expansion
This is a technical question coming from mapping of an unrelated problem onto dynamics of a non-relativistic massive particle in 1+1 dimensions. This issue is with asymptotics dominated by a term ...
3answers
890 views
### Are elementary particles actually more elementary than quasiparticles?
Quarks and leptons are considered elementary particles, while phonons, holes, and solitons are quasiparticles. In light of emergent phenomena, such as fractionally charged particles in fractional ...
0answers
71 views
### Instantons and Borel Resummation
As explained in Weinberg's The Quantum Theory of Fields, Volume 2, Chapter 20.7 Renormalons, instantons are a known source of poles in the Borel transform of the perturbative series. These poles are ...
3answers
88 views
### Analyticity and Causality in Relativity
A few weeks ago at a conference a speaker I was listening to made a comment to the effect that a function (let's say scalar) cannot be analytic because otherwise it would violate causality. He didn't ...
0answers
98 views
### Measure of Lee-Yang zeros
Consider a statistical mechanical system (say the 1D Ising model) on a finite lattice of size $N$, and call the corresponding partition function (as a function of, say, real temperature and real ...
1answer
129 views
### The difference between projection operators and field operators in QFT?
Is there a good reference for the distinction between projection operators in QFT, with an eigenvalue spectrum of $\{1,0\}$, representing yes/no measurements, the prototype of which is the Vacuum ...
1answer
94 views
### Renormalization of the R-charge?
In general I would like to know what is known, or what the standard references are, about R-charge renormalization in supersymmetric theories. When does it renormalize, and what is expected or known to be ...
http://mathematica.stackexchange.com/questions/tagged/interactive?sort=faq | # Tagged Questions
The interactive tag has no wiki summary.
3answers
9k views
### Get a “step by step” evaluation in Mathematica
Is it possible in Mathematica to get a step-by-step evaluation of some functions; that's to say, outputting not only the result but all the stages that have led to it? Example : Let's say I want to ...
1answer
241 views
### How to create interrelated sliders?
Say I want a slider that controls the value of $x$ and another slider that controls the value of $2x$, how would I go about it?
2answers
153 views
### Restrict Sensitivity of Locators in LocatorPane
We can restrict the movement of locators in a LocatorPane as follows: In the following example, the first locator's movement is confined to the x-axis and the ...
3answers
340 views
### Extracting the coordinate of a particular point of interest from a ListPlot
Is there a way to obtain the coordinate of a point of interest in a ListPlot? As an example, I have a list containing many sets of 2D coordinates and the plot ...
1answer
225 views
### Want Interactive PlotRange of Graphics[{}]
I am trying to do: ...
http://mathoverflow.net/questions/23323?sort=votes | ## How to compute irreducible representation of Lie algebra in the framework of BBD
We know Beilinson-Bernstein established the following famous equivalence:
$D-mod_{G/B}\rightarrow U(g)-mod_{\lambda}$, where $G$ is an algebraic group, $B$ is a Borel subgroup, $G/B$ is the flag variety of the finite-dimensional Lie algebra $g$, and $\lambda$ is a central character.
This equivalence means that one can study representations of the Lie algebra $g$ via D-modules. But how?
My question: Is there machinery in the framework of BBD to construct irreducible representations of $g$ explicitly?
I am aware that there is the Riemann-Hilbert correspondence describing the correspondence between perverse sheaves and holonomic D-modules. It seems that it is possible to know the irreducible objects in the category of perverse sheaves. (I guess in this case we will know the irreducible representations corresponding to holonomic modules.) But even in this case, I did not find an appropriate reference. I wonder whether somebody has computed some concrete examples, such as the flag variety of $sl_2$ (which is $P^1$).
Further question: I also want to know the answer in the affine Lie algebra case (Frenkel-Gaitsgory established an analogue of the BB-equivalence for the affine Lie algebra at the critical level). Does this work give a new class of irreducible representations of the affine Lie algebra?
REMARK: What I want to know is the advantage of using D-module theory to construct representations (if we can). For example, for a general Lie algebra $g$ we consider the flag variety $X$ and then the category $D-mod_{X}$; how does one use algebraic-geometric machinery on this category to construct irreducible representations of $g$ explicitly? (I would like to know whether there is any construction in BBD which can describe the representations.)
-
Your initial formulation of an "equivalence" has no role for $\lambda$ on the left side, so needs some modification. It helps here to cite a particular source. – Jim Humphreys May 3 2010 at 12:50
You should replace D-mod(G/B) with $\lambda$-monodromic differential operators, or weakly $\lambda$-equivariant D-modules on G/N. Also, I may be missing out on recent progress, but I think you should specify that G is semisimple and $\lambda$ is a regular dominant weight. Finally, I don't see why the D appears in BBD. – S. Carnahan♦ May 3 2010 at 14:34
Maybe he also wants to use perverse sheaves, in which case we need D for "Deligne". – Ben Webster♦ May 3 2010 at 17:28
## 3 Answers
I'm far from being an expert on BBD and related algebraic geometry, but the question about "construction" of irreducible representations in the infinite dimensional context has to be approached with great care. Although direct constructions are possible in a few special cases, the main problems for Lie groups or their Lie algebras usually require an indirect approach.
For example, in the BGG category (say with integral weights) there is an easy construction of Verma modules using induction methods; the formal character is also easy to exhibit. But the unique simple quotient cannot in general be constructed even by sophisticated methods. Instead you try to imitate the BGG approach to the finite dimensional character formulas: in effect, express the unknown formal character as a $\mathbb{Z}$-linear combination of the known Verma characters. This can provide an effective algorithm based on the partial ordering of weights. The integral coefficients here still need to be found. Kazhdan-Lusztig predicted these in terms of recursively computable polynomial values at 1, but so far only the geometric methods of Beilinson-Bernstein or Brylinski-Kashiwara have been able to prove this.
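For concreteness, in one common normalization (for a regular integral antidominant weight $\lambda$, with $M_y=M(y\cdot\lambda)$ and $L_w=L(w\cdot\lambda)$) the prediction reads $$\operatorname{ch} L_w = \sum_{y\le w} (-1)^{\ell(w)-\ell(y)}\, P_{y,w}(1)\, \operatorname{ch} M_y,$$ where the $P_{y,w}$ are the Kazhdan-Lusztig polynomials of the Weyl group; parametrizations and sign conventions vary from source to source.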
Similar predictions are made by Lusztig in prime characteristic for representations of semisimple algebraic groups, but only proved for large primes (and with no definite prediction for primes less than the Coxeter number). Analogues for quantum enveloping algebras at a root of unity have by now been attacked successfully using the characteristic 0 geometric methods. In the Lie group direction, Vogan and others have pressed further with partial success in the spirit of the KL Conjecture. But all of these problems are extremely difficult. At the end of the day, very few concrete constructions of irreducible representations are found. Usually one settles for some kind of "character" information. While the classical Borel-Weil theorem inspires some of the later moves, the story gets much more complicated.
Concerning the added "further question", my remarks apply equally well to affine Lie algebras in most cases. But the situation there for the critical level or for the somewhat parallel finite dimensional modular theory mentioned above is less settled. There has been a lot of recent progress in both cases, for example in the modular theory by work of Bezrukavnikov, Mirkovic, Rumynin. In all cases, the results obtained by localization or other geometric methods lead mainly to multiplicity formulas and recursive computation of characters rather than explicit construction of simple modules. But in finite dimensional cases, even their dimensions have been elusive.
ADDED: Going back to the questions asked, the work of Beilinson-Bernstein and others does not directly construct new representations. But it's an essential part of the working out of character formulas for various classes of irreducible representations, translated into composition factor multiplicity problems for induced modules such as Verma modules. This is where the reformulation of problems in the language of algebraic geometry and sheaf theory has been important for representation theory. No algebraic method is known (or expected) for giving an explicit construction of the infinite dimensional irreducibles. Characterizing them abstractly as quotients of Verma modules or the like gives very little information about the characters and can't be viewed as a construction.
-
Hello, I want to try to answer the first question you pose (at least I can answer the question I believe you are asking) -- sorry it's so far past when you asked it, but perhaps (if you know the answer yourself now) it will be useful to someone else to just have this information up anyway? It sounds like you are asking "what is the D-module version of IC-sheaves?" In the BGG setting, Beilinson-Bernstein localization gives an equivalence between $B$-equivariant $D_{G/B}$-modules and highest weight $\mathcal{U}(\mathfrak{g})$-modules with trivial infinitesimal character. We can twist the D-module picture I am about to describe to obtain modules with other infinitesimal characters. Irreducible $B$-equivariant $D_{G/B}$-modules are constructed as follows: Let $Q$ be a $B$-orbit on $G/B$ and $i: Q\to G/B$ its inclusion. Let $\tau$ be the structure sheaf of $Q$ (we can replace $B$ by more general groups in which case $\tau$ must range over all irreducible connections on $Q$, by which I mean specifically "line bundle with connection" so that we are working with $D_Q$-modules). The D-module direct image $i_+ \tau$ has a unique irreducible submodule $L(\tau)$, and this construction in fact establishes a bijection between orbits $Q$ (more generally, pairs $(Q,\tau)$) and irreducible objects $L(\tau)$. Under Riemann-Hilbert, the $L(\tau)$ are precisely the IC sheaves, and upon taking global sections we recover irreducible representations. The sheaves $i_+\tau$ have the co-Verma modules as their global sections and you can even recover the (co-)BGG resolution using D-module constructions alone (it's the Cousin resolution -- see Hartshorne's Residues and Duality).
A good reference which has this all written down in the context of Harish-Chandra modules (and for arbitrary twists) is the paper "Localization and standard modules for real semi-simple Lie groups I: The duality theorem" -- the BGG version of the entire paper should work exactly the same but with $B$ instead of $K$ and "co-Verma" instead of "co-standard."
-
To make your reference explicit, the paper is by Hecht, Milicic, Schmid, Wolf (Invent. Math. 90, 1987), online at gdz.sub.uni-goettingen.de But I still don't understand in what sense any of this machinery "constructs" irreducible representations of the Lie algebra explicitly as Shizhuo asks. The original questions for Lie algebras (or Lie groups) focus on computation of suitable "characters", for which the geometric categories mainly provide the combinatorial setting needed to carry out indirect recursions. – Jim Humphreys Apr 1 2011 at 19:19
Thanks for the link -- I was unsure of where to find the paper online. And of course you are right that I haven't constructed irreducible representations so much as identified their D-module counterpart under localization, but based on the OP's comments about perverse sheaves and Riemann-Hilbert, this seemed to me to be what he was actually asking for. Apologies all around if that is not the case! – S Kitchen Apr 9 2011 at 5:48
There could be different ways to give meaning to the phrase "explicit construction".
In an algebro-geometric sense, an explicit construction comes from the more classical Borel-Weil-Bott theorem, of which BBD is an abstract generalization. There are a number of proofs in the literature, e.g. the one by Jacob Lurie (on his home page).
According to the BWB, you can get the (finite-dimensional) representation by taking the global sections of one of the equivariant bundles $\mathcal O(\lambda)$.
Another way to construct the representation would be to start with some simple $\mathfrak g$-modules and combine them to get your representation. In this way, BBD helps by establishing a correspondence between simple equivariant D-modules and Verma modules. Therefore, the resolution for a bundle $\mathcal O(\lambda)$ corresponds to a construction in the category of $\mathfrak g$-modules, the one called the Bernstein-Gelfand-Gelfand resolution, giving rise to the Weyl character formula.
As an example, the $\mathfrak{sl}_2$ modules correspond to equivariant D-modules on a $\mathbb P^1$, which has two cells. Therefore, a BGG resolution for an $\mathfrak{sl}_2$-module has two terms. Since a Verma module for $\mathfrak{sl}_2$ with an integer weight $\lambda$ has (I think) exactly one vector of each weight $\lambda' \le \lambda$, you can picture it as a ray on a weight lattice; the picture then becomes [segment] = [ray] - [ray].
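In formulas (writing weights as integers, so the positive root is $\alpha=2$): $\operatorname{ch} M(\lambda) = e^{\lambda}+e^{\lambda-2}+e^{\lambda-4}+\cdots$ is the "ray", and for a dominant integral weight $n$ the "segment" is $\operatorname{ch} L(n) = \operatorname{ch} M(n) - \operatorname{ch} M(-n-2) = e^{n}+e^{n-2}+\cdots+e^{-n}$, which is exactly the two-term BGG resolution and the Weyl character formula for $\mathfrak{sl}_2$.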
-
Thank you. But there are two problems: 1. What Borel-Weil gives is just the finite dimensional representations; using this construction we do not need the D-module story. 2. One can use Verma modules directly, constructing irreducible quotients to get representations; it is not necessary to introduce D-module theory either. Maybe I should make the question clearer: what I want to know is in which situations we almost cannot avoid D-module theory (which means that using D-module theory we can construct representations much more efficiently)? – Shizhuo Zhang May 3 2010 at 7:36
I don't think you'll get anything this way that you couldn't get by Verma modules... although that, again, depends on what types of constructions you're looking for. Perhaps you should really split the question into different ones... – Ilya Nikokoshev May 3 2010 at 19:58
http://mathoverflow.net/questions/111999?sort=oldest | ## Local fractional derivative that doesn’t vanish on differentiable functions
Riemann-Liouville fractional derivative is a nonlocal fractional derivative that doesn't vanish in general on differentiable functions. Kolwankar-Gangal fractional derivative is local but vanishes on any differentiable function. Is there some local fractional derivative that doesn't vanish on differentiable functions in general and for which $$D^{\alpha} x^{n \alpha} = \frac{\Gamma(n\alpha+1)}{\Gamma((n-1)\alpha+1)} x^{(n-1)\alpha}$$ holds for any $x > 0$?
-
## 2 Answers
I'm not sure, but maybe you could investigate the Yang local fractional derivative?
-
Not a translation-invariant one!
Indeed, we would have:
$$\frac{\Gamma(2)}{\Gamma(\frac{3}{2})}\sqrt{x+1} = D^{1/2}(x+1) = D^{1/2}x + D^{1/2}1 = \frac{\Gamma(2)}{\Gamma(\frac{3}{2})}\sqrt{x} + \frac{\Gamma(1)}{\Gamma(\frac{1}{2})}\frac{1}{\sqrt{x}}$$
which is obviously false.
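(A quick numerical check of the displayed contradiction, at the arbitrary test point $x=1$:)

```python
from math import sqrt, gamma

x = 1.0
lhs = gamma(2) / gamma(1.5) * sqrt(x + 1)                                 # forced by translation invariance
rhs = gamma(2) / gamma(1.5) * sqrt(x) + gamma(1) / gamma(0.5) / sqrt(x)   # forced by linearity and the power rule
print(lhs, rhs)   # roughly 1.596 versus 1.693, so the two sides disagree
```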
-
http://math.stackexchange.com/questions/tagged/topological-vector-spaces?sort=votes&pagesize=15 | # Tagged Questions
The study of vector spaces with a topology that makes the maps which sum two vectors and which multiply a vector by a scalar continuous. It's a natural generalization of normed spaces.
3answers
575 views
### When do weak and original topology coincide?
Let $X$ be a topological vector space with topology $T$. When is the weak topology on $X$ the same as $T$? Of course we always have $T_{weak} \subset T$ by definition but when is $T \subset ...
2answers
603 views
### When is a notion of convergence induced by a topology?
I'm interested in sufficient conditions for a notion of sequential convergence to be induced by a topology. More precisely: Let $V$ be a vector space over $\mathbb{C}$ endowed with a notion $\tau$ of ...
3answers
446 views
### Topology on the general linear group of a topological vector space
Let $K$ be a topological field. Let $V$ be a topological vector space over $K$ (if it makes things convenient, you may assume it is finite dimensional). Naive Question: Is there a canonical way of ...
1answer
200 views
### Is the standard structure of a topological vector space on reals unique?
The standard structure of a topological vector space on the reals is the one given by the metric d(x,y)=|x-y| on the vector space $\mathbb{R},$ with the field of scalars $\mathbb R$ with the standard topology. I ...
0answers
256 views
### Differential forms on fuzzy manifolds
This post will take a bit to set up properly, but it is an easy read (and most likely easy to answer); in any event, please bear with me. Question In the usual setting of open subsets of ...
1answer
148 views
### If weak topology and weak* topology on $X^*$ agree, must $X$ be reflexive?
Let $X$ be a Banach space and suppose that the weak topology on $X^*$ agrees with the weak* topology on $X^*$. Must $X$ be reflexive? To prove the contrapositive, it will suffice to assume that $X$ ...
1answer
304 views
### Isomorphisms of Fréchet Spaces
What is the proper notion of an isomorphism between Fréchet spaces? Obviously it should be a linear map. I'm just worried about the analytic structure. Should one be able to order the seminorms on ...
1answer
256 views
### Semi-Norms and the Definition of the Weak Topology
When I was searching for the definition of the weak topology, I found two different definitions. One defines the weak topology in terms of a family of semi-norms, while the other defines it in terms ...
1answer
430 views
### Hahn-Banach theorem: 2 versions
I have a question regarding the Hahn-Banach Theorem. Let the analytical version be defined as: Let $E$ be a vector space, $p: E \rightarrow \mathbb{R}$ be a sublinear function and $F$ be a subspace of ...
2answers
550 views
### If $A$ and $B$ are compact, then so is $A+B$.
This is an exercise in Chapter 1 from Rudin's Functional Analysis. Prove the following: Let $X$ be a topological vector space. If $A$ and $B$ are compact subsets of $X$, so is $A+B$. My guess: ...
3answers
397 views
### Do continuous linear functions between Banach spaces extend?
Just wondering... Let $E$, $G$ be Banach spaces, let $U\subset E$ be a subset of $E$, and let $f:U\rightarrow G$ be a continuous linear function. Can $f$ be extended to a continuous linear function on ...
2answers
694 views
### Dual of a dual cone
Any hint on how to prove the following please: Let $K$ be a convex cone, and $K^*$ its dual cone. Prove that $K^{**}$ is the closure of $K$. Thanks!
1answer
168 views
### Do the notions of weak and weak* convergence coincide for $\ell^1(\mathbb{N})$?
As my friends and I were studying for our real analysis final exam yesterday, we were playing with various examples and found ourselves asking this question: The space $\ell^1(\mathbb{N})$ is the ...
1answer
72 views
### Learning Aid for Basic Theorems of Topological Vector Spaces in Functional Analysis
I am teaching myself the basics of functional analysis (e.g. topological vector spaces), and frankly I am starting to get a migraine sorting out/organizing in my head all of the ...
1answer
116 views
### Rotation of $\mathbb{R}^3$ by using quaternion
Express the rotation of $\mathbb{R}^3$ by $\frac{\pi}{3}$ about the $x=y=z$ axis by using quaternions and identifying $\mathbb{R}^3$ with $(i,j,k)$-space. Thoughts: From my point of view, every ...
1answer
218 views
### Contractibility of convex set
Suppose that $\Omega$ is a convex open subset of an infinite dimensional vector space $E$ such that $\Omega$ is not contained in any finite dimensional subspace of $E$. Let $Q_m\subset \Omega$ denote ...
2answers
465 views
### Does every $\mathbb{R},\mathbb{C}$ vector space have a norm?
Is there a canonical way to define on any vector space over $\mathbb{K}=\mathbb{R},\mathbb{C}$ a norm ? (Or, if there isn't, can someone give me an example of a vector space over $\mathbb{K}$ that is ...
3answers
133 views
### Can continuity of inverse be omitted from the definition of topological group?
According to Wikipedia, a topological group $G$ is a group and a topological space such that $$(x,y) \mapsto xy$$ and $$x \mapsto x^{-1}$$ are continuous. The second requirement follows from the ...
1answer
205 views
### Generic topology on a vector space?
For a (possibly infinite-dimensional) vector space $V$, I thought about the following topology $\tau$: Let $O \in \tau$ if every $x \in O$ has the property that for every $v \in V$, there is an ...
1answer
304 views
### The dual of a Fréchet space.
Let $\mathcal{F}$ be a Fréchet space (locally convex, Hausdorff, metrizable, with a family of seminorms ${\|~\|_n}$). I've read that the dual $\mathcal{F}^*$ is never a Fréchet space, unless ...
2answers
68 views
### Alternate definition for boundedness in a TVS
Let $X$ be a topological vector space over $\mathbb R$ or $\mathbb C$. A subset $B\subset X$ is defined to be bounded if for any open neighborhood $N$ of $0$ there is a number $\lambda>0$ ...
0answers
111 views
### Isomorphism between spaces of sections.
Let $M$ be a compact manifold and let $E_i \xrightarrow{\Large \pi_i} M$, $i = 1, 2$, be two (real or complex) vector bundles of the same rank $k$ over $M$. Assume we have metrics $g_1, g_2$ on $E_1$, ...
4answers
101 views
### Books on locally convex topological vector spaces
My friend asked me for a good book about locally convex topological vector spaces. I'm not familiar with this. Could you give me some good references on it?
3answers
145 views
### Is $C([0,1])$ a compact space?
Is $C([0,1])$ (I guess with the max-norm) a compact space? I have to know that because I want to apply Arzela-Ascoli.
2answers
93 views
### Question about Topological Vector Spaces
Let $E$ be a Topological Vector Space and $U$ a bounded set of $E$ with $0\in U$, i.e. given any neighborhood $W$ of the origin, there exist $\alpha>0$ such that $\alpha U\subset W$. Is it true ...
2answers
158 views
### Local base of a topological vector space
I would like to prove that if $B$ is local base for a topological vector space $X$, then every member of $B$ contains the closure of some member of $B$. I would appreciate if somebody can guide me ...
1answer
79 views
### local convexity of $L_p$ spaces
Wiki says: The spaces $L_p([0, 1])$ for $0 < p < 1$, equipped with the F-norm, are not locally convex, since the only convex neighborhood of zero is the whole space. Why is this so? ...
1answer
242 views
### Sequential and topological duals of test function spaces
Given a test function space, in particular $\mathcal{S}=\mathcal{S}(\mathbb{R}^n)$ (the Schwartz space) or $\mathcal{D}=\mathcal{D}(\mathbb{R}^n)$ (the space of compactly supported smooth test ...
1answer
558 views
### Convex functions and families of affine functions
I know that the supremum of a family of affine functions is convex. Just wondering if it is true (and if so how one proves) that the converse -- any $C^1$ convex function is the supremum of some ...
1answer
88 views
### Pseudonormable Product Spaces
I want to prove that a product $\prod_{i\in I}X_i$ of topological vector spaces is pseudonormable only if a finite number of the factor spaces are also pseudonormable and the rest have the trivial ...
1answer
114 views
### Bounded and compact sets in a subspace of $\mathbb R^{\mathbb N}$
Let $$X= \{u=(u_1, u_2, \ldots): u_n \ne 0 \text{ only for a finite number of terms}\}\subseteq\mathbb R^\mathbb N,$$ with the topology inherited from $\mathbb R^\mathbb N$ (the "pointwise ...
0answers
98 views
### Evaluation map is not continuous always.
Let $E$ be a not normable locally convex space, define $$F: E'\times E\to \mathbb R$$ $$(f,e)\to f(e)$$ I have to show that $F$ is not continuous when $E'\times E$ is given product topology. I was ...
4answers
125 views
### Question on Topological vector space 1
I have numbered this question as (1) because I will be posting a series of questions about things I don't understand. I hope it's allowed. I want to prove the following: If $X$ is a topological vector ...
2answers
156 views
### “The two notions of boundedness coincide for locally convex spaces”
From Wiki The boundedness condition for linear operators on normed spaces can be restated. An operator is bounded if it takes every bounded set to a bounded set, and here is meant the more ...
1answer
1k views
### “Every linear mapping on a finite dimensional space is continuous”
From Wiki Every linear function on a finite-dimensional space is continuous. I was wondering what the domain and codomain of such linear function are? Are they any two topological vector ...
2answers
222 views
### Example of a topological vector space
I have the following question: give an example of a topological vector space $E$ with subspace $M$ and $N$, such that $E = M \oplus N$ algebraically, but not topologically (so $E \ncong M \sqcup N$). ...
2answers
215 views
### If you know the convergent sequences, how do you know the open sets?
I have a homework problem which I feel should be simple but is actually surprisingly tricky. This is why I love math sometimes.... Let $X$ be a normed linear space. Suppose $\|\cdot\|_1$ and ...
2answers
184 views
### Closed Bounded but not compact Subset of a Normed Vector Space
Consider $\ell^\infty$ the vector space of real bounded sequences endowed with the sup norm, that is $||x|| = \sup_n |x_n|$ where $x = (x_n)_{n \in \Bbb N}$. Prove that $B'(0,1) = \{x \in l^\infty ...
1answer
108 views
### Confused by proof in Rudin Functional Analysis, metrization of topological vector space with countable local base
I'm working through Rudin's Functional Analysis, and I am confused by a step in his proof for Theorem 1.24, which states that if X is a topological vector space with a countable local base, then there ...
1answer
106 views
### Connected components that are relatively open in $\sigma(T)$
Let $T$ be an bounded linear operator on a Banach space $X$. Suppose the spectrum of $T$, $\sigma(T)$ has infinitely many connected components, then $\sigma(T)$ must contain infinitely many ...
1answer
164 views
### Constructing a countable family of seminorms in a metrizable LCS.
Here's some context before my question. Let $\mathbb{V}$ be a topological vector space, which is Hausdorff and such that its topology is generated by some arbitrary family of seminorms ...
1answer
249 views
### Finding the topological complement of a finite dimensional subspace
I know that for any finite dimensional subspace $F$ of a banach space $X$, there is always a closed subspace $W$ such that $X=W\oplus F$, that is, any finite dimensional subspace of a banach space is ...
1answer
194 views
### Uniqueness of the derivative in locally convex topological vector space
I need a hint of proof of uniqueness of the derivative in locally convex topological vector space (it's asserted in Lang's "Introduction to differentiable manifolds"). Define derivative of a function ...
1answer
186 views
### Why This Map is Closed?
Consider the following definition of closed maps, defined in the book Nonlinear Programming by Bazaraa et al.: Let $X$ and $Y$ be nonempty closed sets in $\mathbb{R}^p$ and $\mathbb{R}^q$, ...
1answer
146 views
### Finest topology on a space of banach space operators
let $X$ be some Banach space. Let $L(X)$ be the set of continuous operators on $X$ into $X$. Let $(\tau_i)_i$ be a set of topologies on $L(X)$ s.t. $L(X)$ is topological vector space (i.e. addition ...
1answer
211 views
### Is any Banach space a dual space?
Let $X$ be a Banach space. Is there always a normed vector space $Y$ such that $X$ and $Y^*$ are isometric or isomorphic as topological vector spaces (that is, there exists a linear homeomorphism ...
1answer
56 views
### Openness of linear mapping 2
I quote a previously asked question: Let $X$ be a topological vector space over the field $K$, where $K=\mathbb{R}$ or $K=\mathbb{C}$, and let $f\colon X\rightarrow K^n$ ($n \in ...
1answer
121 views
### Existence of balanced neighborhoods in a topological vector space
I'm wondering about the following: Let $X$ be a topological vector space. Then one could pick balanced neighborhoods $W$ and $U$ of $0$ such that $\overline{U} + \overline{U} \subset W$, ...
1answer
229 views
### How to define the derivative of Radon measures
Let $M$ be the positive borel measures on a hausdorff topological space $X$, which are finite on compacts sets $--$ i.e. the real cone of radon measures. I am given a definition of a derivative of ...
0answers
104 views
### Curiosities about the content of a rare book: Topological Vector Spaces by A. Grothendieck
The book is a celebrated and highly influential book by A. Grothendieck, published in 1954 in French, and for various reasons it has been out of print since 1973. I am very much interested ...
http://www.physicsforums.com/showthread.php?p=4172167 | Physics Forums
## Question about Bloch's theorem
In my textbook, they work through an example of Bloch's theorem, solving it pretty generally. They first solve the problem of the wave function for a single one of the potential bumps (the potential structure that's being repeated), where the potential everywhere except the bump is 0. So they have a wavefunction something like this in the zero-potential regions:
$$\psi(x) = Ae^{iKx} + Be^{-iKx}$$
And the potential repeats every $a$. Then they say, due to Bloch's theorem, we know that
$\psi(x + a) = e^{ika}\psi(x)$
"for appropriate k" (note, lowercase k). I get what the uppercase K is (the wavevector of the wavefunction, and that, but what's the lowercase k? And how is it determined?
Thanks!
The (lowercase) ##k## is the so-called crystal momentum. Say your system contains ##N## such copies of the potential profile that is being repeated after every ##a## distance. Then using the Born-von Karman boundary conditions you will get ##k = m\frac{2\pi}{Na}## where ##m## goes from ##0## to ##N-1##. And as you may know ##m=0## and ##m=N## are equivalent. As for the relation between ##K## and ##k## I can only think of a mathematical one. You can look at the mathematical relation in this document: http://faraday.ee.emu.edu.tr/eeng245/KronigPenney.pdf The ##\alpha## and ##\beta## (defined in eq. (4) and (5)) are the different values of ##K## in the region without and with bumps respectively. Now, these ##\alpha##'s and ##\beta##'s are given as a function of ##\varepsilon## in eq. (19) and (20) (##\varepsilon## is defined in just the next line). The relation between ##\alpha## and ##\beta## and ##k## is shown in eq. (25) and (26) (this is compactly written in eq. (27)). Now, eq. (27) represents a transcendental equation which has solutions only for specific values of ##E## and ##k##. The fact that only certain values of ##E## and ##k## are permitted can be graphically visualized in Figure 2, where the LHS and RHS of eq. (27) are plotted. In other words, only the portion of the oscillating function inside the two horizontal lines is a valid. If you recall that ##\varepsilon## (##=E_0/V##) is nothing but scaled energy. Therefore in range of ##E##'s where the oscillating function goes outside the window there is a gap in energy. Now if we resurrect our ##k## from the jungle of formulas and substitutions, you can make a plot of ##E## vs. ##k## and get Figure 3. The gap that you saw in the previous figure can also be observed here.
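As a quick check of where that quantization comes from (a standard argument, spelled out here rather than taken from the post above): imposing the Born-von Karman condition ##\psi(x+Na)=\psi(x)## on a Bloch state, which satisfies ##\psi(x+Na)=e^{ikNa}\psi(x)##, forces
$$e^{ikNa}=1 \quad\Longrightarrow\quad k=m\,\frac{2\pi}{Na},\qquad m=0,1,\ldots,N-1,$$
which is exactly the discrete set of crystal momenta quoted above.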
Quote by PhysTech The (lowercase) ##k## is the so-called crystal momentum. [...]
Ok, I see the connection between ##K## (and therefore ##E## of the electron) and ##k##, but what does that actually signify? Like, suppose I put an electron of energy ##E## into the periodic potential. This determines ##K## through ##E = \frac{\hbar^2 K^2}{2m}##, and let's say that this ##K## is in an allowed energy band. Then it determines ##k## through that relation. But what is this ##k##? What's going on in the crystal that relates to this ##k##? Is it the wave vector of the phonons that accompany this electron or something? I'm very confused.
Thanks!
Yes, you can associate ##k## with the wave vectors of phonons. Lattice vibrations give an intuitive feel for what ##k## represents in that specific context. However, ##k## can be defined much more generally.
Similar to how you have quantum numbers ##(n,l,m,s)## (principal, azimuthal, magnetic, spin) in an atomic system, you can define a new set of quantum number ##(n,k,s)## (band index, crystal momentum, spin) in a crystalline system. The logic behind introducing new quantum number ##k## can be seen by looking at the three different representations of the plot of ##E## vs. ##k##. You can refer to Figure 6.1 of:
http://www.springer.com/cda/content/...539-p174100757
You can notice that part (a) is similar to Figure 3 of the previous document (except here they have included the negative ##k## regions as well). The gap at the points ##n\pi/a## (where ##n## is any integer) can be seen as the forbidden states which resulted from the periodic potential. You can notice that, besides the gaps, the plot is a parabola. This parabola corresponds to the dispersion of a free electron ##E_k = \hbar^2k^2/2m##. In other words, the physical momentum of the electron is given by ##p = \hbar k##.
But now, if we move the branches of the plot that are green, red, purple along the ##k## axis by an amount ##2\pi/a## we would get part (b). Let us now introduce another quantum number known as the band index ##n##. The blue, green, red, purple branches have band indices 1, 2, 3, 4 respectively. Then note that I can write the physical momentum of the electron as ##p = \hbar k + \hbar (n-1)G## where ##G = 2\pi/a## is the reciprocal lattice vector (here ##k## is a quantity defined to be limited to ##-\pi/a \le k < \pi/a##. It doesn't matter where you put the ##\le## since ##-\pi/a## and ##\pi/a## are equivalent). Note: we are still working under the assumption that there are no gaps. After introducing the gaps the above inequality will not hold exactly. Alternatively, I could just define the regular wave vector as ##p = \hbar k^\prime## such that ##k^\prime = k + (n-1)G##. Then we have ##E_{k^\prime} = \hbar^2 k^{\prime 2}/2m##. Once we know what the band index is, there is no need to keep track of ##k^\prime##. In other words, ##k^\prime## carries a lot of redundancy. For a given band index, only the quantity ##k## can uniquely determine ##p## and as a result ##E##. This ##k## is the so-called crystal momentum. It is a property of the lattice since it is defined to lie within ##-G/2 < k \le G/2##. This region in k-space is also known as the first Brillouin zone (also referred to as just the Brillouin zone).
Aside: Another way to state the redundancy with using ##k^\prime## is to show that the Bloch eigenstates are equivalent if you translate along the ##k## axis by ##G##. You can replace ##k \rightarrow k+G## and plug it into the expression for the Bloch eigenstate. You will see that you will simply pick up a phase factor of ##exp(iGx)##. And as you may know, in quantum mechanics, two states differing by a phase factor are equivalent.
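Spelling that aside out with the explicit Bloch form (a sketch, using the standard convention ##\psi_k(x)=e^{ikx}u_k(x)## with ##u_k## lattice-periodic, which the post above does not write down explicitly): replacing ##k\to k+G## gives
$$\psi_{k+G}(x)=e^{i(k+G)x}u_{k+G}(x)=e^{ikx}\left[e^{iGx}u_{k+G}(x)\right],$$
and since ##e^{iG(x+a)}=e^{iGx}## (because ##Ga=2\pi##), the factor in brackets is again lattice-periodic. So ##k## and ##k+G## label states of exactly the same Bloch form, which is the redundancy described above.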
Unfortunately, all I did was make some more mathematical arguments to define what ##k## is. But hopefully some of the physical arguments that motivated the definition of ##k## are much more insightful than noticing ##k## as just some intermediate variable in your calculation. Evidently, in the latter case one may wonder if ##k## is some arbitrary mathematical artifact. Well, in the end you could say that ##k## is an artificial construction. But as you pointed out before, you can gain some physical intuition of ##k## when you think of phonon modes. If you're interested, you can refer to the book:
http://www.amazon.com/Wave-Propagati.../dp/0486600343
to get some more insight. The good thing about this book is that you approach this whole idea of crystal momentum and Brillouin zones from a completely different perspective.
Hey, thank you so much for the help. I have a related, more practical question now. I wanted to do a problem with a repeating square well potential. Just a simple ##V(x) = U_0## for ##0<x<a##, ##V = 0## for ##a<x<2a## or something. I thought to use a way in my book to solve it, but the solution I found used something totally different (perturbation theory). In my book they show a general way of solving these -- you look at a single period of the periodic potential. You then view the general solution as a linear combo of the equation of a particle scattering from the left, and a particle scattering from the right: ##\psi = A\psi_l + B\psi_r##, ##\psi_l = e^{iKx} + re^{-iKx}## for x in the free space to the left of the the potential and ##\psi_l = te^{iKx}## for x in the free space to the right of the potential (and vice versa for ##\psi_r##). ##K## is just related to the energy of the electron through ##E = \hbar^2 K^2/2m##. Then they use Bloch's theorem and say ##\psi(x+a) = e^{ika}\psi(x)## where ##a## is the periodicity and k is "some k" (clearly, the one we've been talking about). From here it's assumed that you can solve your single period scattering problem for ##t##, and including boundary conditions they get a relation between ##k##, ##K##, and ##t## of the form ##cos(Ka + \delta)/|t| = ka## (where ##\delta## is just the phase of complex ##t##). This gives that famous picture that shows the origin of the allowed energy bands (that I of course can't find now). The question asked me, for a given ##k## (small k), find the energy gap between the first and second bands. To me, this just meant, solve for t (not hard, classic 1D scattering problem), plug it into that equation, and find the two lowest values of ##K## that satisfy it, and find the difference between their corresponding energies. Does that make sense? Thank you, sorry for the long winded reply.
This approach more or less seems like the one adopted by the link I pasted in my first post: http://faraday.ee.emu.edu.tr/eeng245/KronigPenney.pdf I don't think that's making use of perturbation theory; this seems like an exact calculation. If you look at the nearly-free electron model then you will find the use of perturbation theory. You can, however, determine the energy gap between the first and second bands even using the above method (the nearly-free electron perturbative method is much simpler though). Once you have found ##t## you can solve for the values ##K## and ##k## that satisfy the cosine equation and make a plot of ##E## vs. ##k## similar to Fig. 3 of the above link. From now on let me start making references to that figure since the plot that you'll get will be similar to this one. In Fig. 3 you can observe that the first branch (or band) exists in the interval 0 to ##\pi##, the second one exists in the interval ##\pi## to ##2\pi## and so on. Therefore, the energy gap between the first and second bands is the (vertical spacing) energy spacing between the points at ##\pi## in those two branches (bands).
Quote by PhysTech This approach more or less seems like the one adopted by the link I pasted in my first post: http://faraday.ee.emu.edu.tr/eeng245/KronigPenney.pdf [...]
Ah, I meant that the other solution I found to this problem was perturbation, but I wanted to do it without it.
Actually, I made a mistake before; the equation relating ##k##, ##K##, and ##t## should be: ##\frac{cos(Ka + \delta)}{|t|} = cos(ka)## (where ##t## is going to be a function of ##K## usually).
The graph I was talking about in my book is like this:
The top and bottom lines are just 1 and -1 to show that the values of K that make the curve go above and below them (respectively) can't possibly make that equation true, so those ranges are the band gaps.
So, just to make it explicit so I'm sure: If we want to find the band gap for one distinct value of ##k##, I would draw a line across this graph at ##\pm cos(ka)## and find the intersection with the curve. This would give me the values of ##K## corresponding to this ##k##, and then these are easily turned into ##E##'s, like so:
Is that correct?
Thank you for all the help!
No, that is not the correct ##\Delta K## (##= K_1 - K_2##). That ##\Delta K## corresponds to range of energies ##\Delta E = \hbar^2 K_1^2/2m - \hbar^2 K_2^2/2m## that are allowed in the dispersion relation. You want to determine the ##\Delta E## for which the states are forbidden. That region will correspond to consecutive intersections of the red curve with the cyan line only or the purple line only; NOT the region between the intersection of the red curve with the purple line followed by the intersection of the red curve with the cyan line. But then what value of ##k## should you use? For each ##k## you will get a different ##\Delta K## (or ##\Delta E##). The band gap is defined as the ##k## for which ##\Delta E## in minimum. In this example, this point will occur at the edge of the Brillouin zone. In this example, ##k = 1/2 \times 2\pi/b##, where ##b## is the size of your unit cell. Since you mentioned that the (barrier region) ##V(x) = V_0## region has a width ##a## and the ##V(x)=0## region also has width ##a##, your unit cell size will be ##b=2a##. Therefore you should find the gap at ##k = \pm \pi/2a##. You should recheck your expression (more specifically your RHS where I think you might have ##\cos(2ka)## instead of ##\cos(ka)##). For ##k = \pm \pi/2a## the RHS will be zero. In other words, your two horizontal lines will collapse onto each other.
Quote by PhysTech No, that is not the correct ##\Delta K## (##= K_1 - K_2##). [...]
Ahhh of course, that makes sense. Sorry, stupid of me. So the first ##\Delta K## that determines the ##\Delta E## should be:
So, you're saying that the ##k## at which ##\Delta E## is a minimum will be at the ##k## that makes the argument of the RHS biggest, i.e., 1.
I realized I actually made a mistake in copying the problem to simplify it -- in the book a whole period is ##a##, while the way I wrote the potential here it's ##2a## like you pointed out.
So wait, this should give you an exact (numerical) answer if I do it right? In that case, is that the value that the approximations and lower-order corrections to the energy from the 'nearly free electron' perturbation method should approach? I'd like to do this out and confirm it if that's the case.
Thank you so much!
Yes, the ##\Delta K## you've shown in the updated figure is correct. Yes, if you do the numerical calculations correctly you should get the exact solution. Now, comparing the exact solution to the perturbative one is a little tricky. In the Kronig-Penney model, which is discussed in the document I sent earlier (more like pasted the link to it), you treat the periodic potential as strong. I must admit that I did not read the expression ##\cos(Ka+\delta)/|t|=\cos(ka)## (from your second last post) very carefully. I thought that it was just the simplified form of the expression given in that document. Could you please give me the reference to the book you got it from? I'm beginning to think that your expression has some more assumptions embedded in it (in order to bring it to that simplified form). Anyways, for now let us simply assume that we're using the expression in that document. If that is the case, then the perturbative treatment will fail! The Kronig-Penney model solved in that document is applicable to a strong potential. In other words the total energy (##E##) of the electron will be comparable to the amplitude (##V_0##) of the periodic potential. In order for the nearly-free electron picture to work, we need the total energy (or kinetic energy) ##E\,(\text{or}\,E_\text{kin}) >> V_0##
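For reference, the nearly-free-electron result being alluded to here (standard degenerate perturbation theory at the zone boundary, valid only in the weak-potential regime ##E \gg V_0##; the symbol ##V_G## below denotes the Fourier component of the periodic potential at the reciprocal lattice vector ##G## and is not taken from the posts above): the two degenerate free-electron states at ##k=\pm G/2## split as
$$E_\pm=\frac{\hbar^2}{2m}\left(\frac{G}{2}\right)^2\pm|V_G|,$$
so the lowest gap is ##E_+-E_-=2|V_G|##. Comparing this with the exact gap read off from the transcendental equation only makes sense when that weak-potential assumption holds.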
Quote by PhysTech Yes, the ##\Delta K## you've shown in the updated figure is correct. [...]
Hey, it's from Ashcroft and Mermin. Here are the pages in question: http://imgur.com/a/87HlI
http://mathoverflow.net/questions/98403/computing-chern-classes-for-products-of-varieties/98481 | ## Computing chern classes for products of varieties
I'm currently facing the problem of computing Chern classes for varieties, more precisely for products of such varieties.
Let $C_i$ be a variety in $\mathbb{CP}^2$ given by the Weierstraß $\wp$-map. I want to construct a product of 3 such varieties. Nothing fancy, just $C_1\times C_2 \times C_3$, and calculate its Chern classes.
I'm specifically interested in the second Chern class of the tangent bundle and of some vector bundle. However, I'm having real trouble actually starting the calculation.
In the case of a single variety in $\mathbb{CP}^2$ I could have used the splitting principle and the fact that the normal bundle is a line bundle to calculate the total Chern class.
However, in the case of 3 such varieties, the first problem that arises is that I don't even know what space the product lies in. According to the Segre embedding I'd say $\mathbb{CP}^{26}$, but that seems a bit high. Perhaps $\mathbb{CP}^4$ would suffice? $\mathbb{CP}^{2+2+2}$? However, this would only help me with the tangent bundle.
Could anyone give me some pointers on how to calculate the total Chern class in such a case, or some reference where it is done in a similar case? Thanks!
-
I think you are confused about terminology. You say you are interested in a toric variety in $\mathbb{P}^2$ given by the Weierstrass $\wp$ map. The map $\wp$ parametrizes a cubic curve. This is a genus $1$ curve, not a toric variety. (It is, topologically, a torus.) Are you interested in are chern classes of tangent bundles to products of cubic curves? If so, the answer is very easy -- they're all zero. This is because the tangent bundle is a trivial bundle. If you do care about toric varieties after all, there are good answers, but I'm not sure they're what you want. – David Speyer May 30 at 19:59
Thanks for clarifying! I edited my post accordingly. – Michael Kissner May 30 at 20:50
## 3 Answers
It appears that you are assuming that your varieties $C_{i}$ are smooth (you seem to assume that since you are talking about the tangent bundle). In this case each $C_{i}$ is an elliptic curve (I guess, this is what you meant by "toric") and so $C_{1}\times C_{2}\times C_{3}$ is a three dimensional abelian variety. Since it is a group its tangent bundle is trivial and all of its Chern classes are zero. For any other bundle, it will depend on how the bundle is defined. If you have more information about your other bundle it should be easy to figure out the answer.
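Written out as a one-line check (using only the facts stated in this answer): since $T_{C_i}\cong\mathcal{O}_{C_i}$ for each elliptic curve, the tangent bundle of the product splits as
$$T_{C_1\times C_2\times C_3}\;\cong\;\mathrm{pr}_1^{*}T_{C_1}\oplus \mathrm{pr}_2^{*}T_{C_2}\oplus \mathrm{pr}_3^{*}T_{C_3}\;\cong\;\mathcal{O}^{\oplus 3},$$
so $c(T_{C_1\times C_2\times C_3})=1$, and in particular $c_2=0$.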
-
For the product of three varieties in $\mathbb {CP}^2$, the Segre map gives an embedding into $\mathbb {CP}^{26}$. This may be unsatisfying. Since the variety in question has dimension $3$, an argument similar to the one that shows all curves embed in $\mathbb {CP}^3$ should give an embedding into $\mathbb {CP}^7$.
But that's not the right way to go about this. Instead, you should use the fact that the tangent bundle of the product of three varieties/manifolds is the direct sum of the pullbacks of the tangent bundles of the varieties. Total Chern classes are multiplicative on direct sums, so you only need to consider the tangent bundles of the varieties in $\mathbb {CP}^2$. But it seems like you understand those.
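To make that last step concrete for a smooth plane cubic $C\subset\mathbb{P}^2$ (a sketch, assuming the normal bundle $N_C\cong\mathcal{O}_C(3)$ and writing $h$ for the restriction of the hyperplane class): the Whitney formula applied to $T_{\mathbb{P}^2}|_C\cong T_C\oplus N_C$ gives
$$c_1(T_C)=c_1(T_{\mathbb{P}^2})\big|_C-c_1(N_C)=3h-3h=0,$$
consistent with the cubic being a genus-one curve with trivial tangent bundle.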
Aren't all toric varieties in $\mathbb {CP}^2$ except $\mathbb {CP}^1$ singular?
How to calculate the Chern class of the vector bundle depends on what form you have that vector bundle in. I don't believe there's a simple theory of vector bundles on toric varieties, the same way there is a simple theory of line bundles - otherwise there would be a simple theory of vector bundles on $\mathbb {CP}^n$, when in fact there are open problems!
-
@Will Although the theory of vector bundles on toric varieties is not as simple as that of line bundles, there is a very good theory. See the expository sections of arxiv.org/abs/math/0605537 – David Speyer May 30 at 20:45
As I say in my comment above, I don't think you are actually interested in Chern classes of toric varieties. But, if you are:
For a smooth proper toric variety $X$, the $k$-th Chern class of the tangent bundle is represented by the sum of the codimension $k$ toric subvarieties. For example, if $X = \mathbb{P}^2$, then $c_1$ is $3 \cdot [\mathrm{line}]$ and $c_2$ is $3 \cdot [\mathrm{point}]$. I'm having trouble finding you a reference for this, because everyone wants to prove more complicated things! But it isn't hard to prove directly. Let coordinates on the torus be $(t_1, \ldots, t_n)$, for $t_i \in \mathbb{C}^{\ast}$. Let $\theta_i$ be the tangent vector field $t_i \partial/(\partial t_i)$. These extend to sections of $T_{\ast} X$. We'll explicitly take $n-k+1$ sections of $T_{\ast} X$, written as linear combinations of the $\theta_i$ and compute where they become linear dependent.
Let's do $k=n$ first. So we have one section of $T_{\ast} X$, written as $\sum a_i \theta_i$. Let's work in the neighborhood of a fixed point of $X$. Without loss of generality, we may assume that the $t_i$ are local coordinates at that fixed point. So we want to know where $\sum a_i t_i (\partial/\partial t_i)$ vanishes. Well, it's exactly where all the components vanish, so where $a_1 t_1 = a_2 t_2 = \cdots = a_n t_n =0$. Assuming all the $a_i$ are nonzero, this is precisely where $t_1 = \cdots = t_n = 0$. In other words, at the fixed point. The local computation goes the same at every fixed point -- so $c_{n}(T_{\ast} X)$ is represented by the sum of the torus fixed points. Sanity check: The top Chern class of the tangent bundle is the Euler characteristic, and the Euler characteristic of a toric variety is equal to the number of torus fixed points.
Now let's do $k=1$. So we want to take $n$ sections, of the form $\sum_{i=1}^n a_{ij} \theta_i$, and figure out where they fail to be linearly independent. Again, compute in the neighborhood of a torus fixed point. We want $\det (a_{ij} t_i) =0$, or $\det (a_{ij})\, t_1 t_2 \cdots t_n = 0$. If we have chosen the constants $a_{ij}$ generically, this is the same as $t_1 t_2 \cdots t_n = 0$, so the union of the coordinate planes through the fixed point. Again, taking the union over all charts, the sections become linearly dependent on the union of the toric divisors.
I won't do the general case, but it isn't much harder than these two.
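As a cross-check of the $\mathbb{P}^2$ example by a different route (the standard Euler sequence computation, independent of the toric argument above): from $0\to\mathcal{O}\to\mathcal{O}(1)^{\oplus 3}\to T\mathbb{P}^2\to 0$ one gets
$$c(T\mathbb{P}^2)=(1+H)^3=1+3H+3H^2,$$
so $c_1=3\cdot[\mathrm{line}]$ and $c_2=3\cdot[\mathrm{point}]$, matching the count of toric subvarieties.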
-
http://mathoverflow.net/questions/86610/the-difference-between-a-handle-decomposition-and-a-cw-decomposition | The difference between a handle decomposition and a CW decomposition
Let $M$ be a compact finite-dimensional smooth manifold. I have a question about the relationship between the statements that a Morse function induces a handle decomposition for $M$, and that it induces a CW decomposition for $M$.
A Morse function induces a handle decomposition
Denote by $X(M;f;s)$ the manifold $M$ with an $s$--handle attached by $f\colon\,(\partial D^s)\times D^{n-s}\to M$.
Theorem: Let $f$ be a $C^\infty$ function on $M$ with no critical points on $f^{-1}[-\epsilon,\epsilon]$ except $k$ nondegenerate ones on $f^{-1}(0)$, all of index $s$. Then $f^{-1}[-\infty,\epsilon]$ is diffeomorphic to $X(f^{-1}[-\infty,-\epsilon];f_1,\ldots,f_k;s)$ (for suitable $f_i$).
Historical note: This was stated by Smale in 1961, with proof outline. Milnor's Morse Theory, Theorem 3.2 states and proves a weaker, homotopy version of the theorem, where there is only one handle in play. I asked about a proof of this theorem in this MO question, and it turned out that the first complete proof appeared in Palais, simplified later by Fukui [Math. Sem. Notes Kobe Univ. 3 (1975), no. 1, paper no. X, pp. 1-4]. There's an alternative proof given in Appendix C to Madsen-Tornehave.
Discussion: Roughly, the theorem states that passing a critical point of a Morse function corresponds to attaching a handle. Thus, a Morse function induces a handle decomposition for $M$.
A Morse function induces a CW decomposition
Let $f$ be a Morse function on $M$. Choosing a complete Riemannian metric on $M$ determines a stratification of $M$ into cells $D(p)$ (the unstable (descending) manifold for a critical point $p$ of $f$) in which two points lie in the same stratum if they are on the same unstable manifold. Each $D(p)$ is homeomorphic to an open cell, but the closure $\overline{D(p)}$ can be complicated.
Theorem: The union of compactified unstable manifolds $\bigcup \overline{D(p)}$ gives a CW decomposition of $M$ that is homeomorphic to $M$.
Historical note: A nice discussion of this theorem may be found in Bott's excellent Morse Theory Indomitable, page 104. Milnor's Morse Theory derives a homotopy version of this statement (Theorem 3.5) from the homotopy version of the statement that a Morse function induces a handle decomposition (Theorem 3.2). The theorem seems to have been first proven by Kalmbach, and was recently strengthened to give the explicit characteristic maps by Lizhen Qin (understanding his papers is the motivation for my question).
The two statements given above look to me as though they should be very similar, especially since the homotopy version of the second follows directly from the homotopy version of the first in Milnor's book. But briefly searching through the literature makes it seem that they are virtually independent: papers proving one aren't even cited in papers proving the other, and the proofs look to me to be entirely unconnected. I don't understand why, probably because I'm having difficulty breaking free from the intuitive picture of the proof in Milnor's book, which works fine up to homotopy.
Question: Can you give an example, or intuition, for a case in which one of the above theorems is difficult but the other is easy? Is there an example for a compact finite-dimensional manifold with a Morse function such that the handlebody decomposition can be read straight off the Morse function, but the reading off the CW decomposition takes substantial extra work? Or the converse?
Stated differently, where does the "up to homotopy proof" on page 23 of Milnor conceptually collapse when we are working up to diffeomorphism instead of up to homotopy?
-
In the 2nd highlighted theorem you quote, shouldn't "that is diffeomorphic to $M$' be erased? Stating that it's a CW-decomposition of $M$ should be enough. CW-complexes don't have smooth structures so it's not clear what the latter part means. – Ryan Budney Jan 25 2012 at 11:28
I suppose I view the CW-structure on $M$ to be the less natural thing. To construct an explicit CW-decomposition on $M$ you either have to do something like modify the flow-lines for the gradient of $f$, or do some kind of blow-up procedure to turn the (naturally discontinuous) cellular attaching maps into continuous maps. Neither option is particularly pleasant. The handle decomposition is rather natural provided $f$ has some modest restrictions on it. But to answer your question it depends on what "reading off" means. There's substantially less information in a CW-decomposition so... – Ryan Budney Jan 25 2012 at 11:38
generally that takes less effort to generate, especially if all you care about is the homotopy-equivalent CW-complex, rather than putting the structure on $M$ itself. – Ryan Budney Jan 25 2012 at 11:42
Note that a handle decomposition can be turned around, in a way that corresponds to replacing $f$ by $-f$. But the same is not true for a CW structure up to homotopy. – Tom Goodwillie Jan 25 2012 at 14:05
The second Theorem is true for a generic Riemannian metric only. The standard 2-torus in $R^3$, height function and the standard induced metric is a counterexample (the boundary of the upper 1-cell does not belong to the 0-cell). So in general we have only a decomposition into a disjoint union of balls, not a CW-structure. – Petya Jan 25 2012 at 20:13
2 Answers
The second of the theorems you quoted is considerably harder to prove. The gist of the proof is as follows. Consider the closure $\overline{D(p)}$ of $D(p)$ in $M$. Then Lizhen Qin proves that it admits a resolution in the sense of semi-algebraic geometry. More precisely he constructs a compact space $\widehat{D(p)}$ and a continuous surjective map $\pi: \widehat{D(p)}\to\overline{D(p)}$ with the following properties.
$\bullet$ The space $\widehat{D(p)}$ is homeomorphic to a closed ball of dimension equal to the Morse index of $p$.
$\bullet$ The restriction of $\pi$ to the interior of $\widehat{D(p)}$ induces a homeomorphism onto $D(p)$.
The theorem requires that the gradient flow satisfy the Morse-Smale transversality condition, whereas no such requirement is needed for the handle decomposition theorem. Moreover, the result is very sensitive to the behavior of the gradient flow near the critical points. In such a region the flow is a linear flow given by a symmetric matrix, the Hessian of $f$ at that particular point. If the eigenvalues are $\pm 1$ things are fine. For different eigenvalues things can go horribly wrong.
In Chap. 8 of my paper Tame flows I show that under appropriate conditions on the eigenvalues of the Hessians at the critical points the Morse-Smale condition is equivalent to the requirement that the stratification by unstable manifolds be a Whitney regular stratification. Moreover I give examples and pictures describing how the Whitney regularity is destroyed if the spectra of the Hessians do not satisfy those constraints.
Another very good reference for these topics is Burghelea-Friedlander-Kappeler survey arXiv: 1101.0778. Burghelea has an alternate and much simpler argument for Lizhen Qin's result, and the paper arXiv: 1101.0778 is much more readable than Qin's.
In the shameless-plug department, I ought to mention the recent 2nd edition of my book An Invitation to Morse Theory. In Chapter 4 I discuss at length these issues without the tameness assumption.
-
Liviu, are you sure that Burghelea et. al. produce the CW decomposition in the reference you cited? I could not find it mentioned there. In fact, I did a search of the file for the term "CW" and it is not to be found there. Also, my impression was that Burghelea and company only handle the case of a metric which is standard near the critical point set. – John Klein Jan 25 2012 at 16:57
It's not in their paper, I agree. In that paper they produce the compact space $\widehat{D}(p)$ and the projection $\pi$ so that the preimage of $D(p)$ in $\widehat{D(p)}$ is open dense. Their construction of $\widehat{D(p)}$ is identical to Qin's but the presentation is cleaner. As Qin mentions in his paper these facts alone imply immediately (via some nontrivial results in topology) that $\widehat{D}(p)$ is a topological disk. Qin gives an alternate proof that avoids these topological results. I find this part of his proof hard to digest. – Liviu Nicolaescu Jan 25 2012 at 17:07
Thanks Liviu. I presume that you mean "some nontrivial results in topology" refers to the $h$-cobordism theorem (or maybe the Poincare conjecture). Lizhen wanted to avoid using such a big weapon, so he found an alternative proof--a proof which uses more elementary tools. – John Klein Jan 25 2012 at 21:20
Yep. That's what I meant. – Liviu Nicolaescu Jan 26 2012 at 9:39
Another reference I would like to mention is
Sharko, V.V. Functions on manifolds, Translations of Mathematical Monographs, Volume 131. American Mathematical Society, Providence, RI (1993). Algebraic and topological aspects, Translated from the Russian by V. V. Minachin.
He really does use some aspects of the crossed complex related to the handlebody decomposition, rather than the CW-filtration. We have suggested this as a line of possible development in our book "Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids" (EMS Tract 15, 2011) since we work there with filtered spaces, and you get such from a Morse function on a manifold. I feel there is more to do there, using for example the tensor product technology explained in our book. This tensor product does reflect the usual cell decomposition of the product $E^m \times E^n$, $m,n \geqslant 0$, where $E^m, E^n$ have cell decompositions with 3 cells.
-
This sounds very interesting! – Daniel Moskovich Jan 28 2012 at 13:37
http://math.stackexchange.com/questions/228807/finding-factor-with-primes?answertab=votes | Finding factor with primes
If $p$ is a prime other than $2$, express the general prime factor of $2^p-1$ in terms of $p$ and some other integer.
-
1 Answer
Let prime $q\mid (2^p-1)\implies ord_q2\mid p$
If $ord_q2=1,2^1\equiv 1\pmod q\implies q\mid (2-1)$ which is impossible.
So, $ord_q2=p$, and $ord_q2\mid\phi(q)\implies p\mid\phi(q)\implies p\mid(q-1)$
For prime $q>2$, $q-1$ is even as $q$ must be odd.
So, $2\mid (q-1)$ and $p\mid(q-1)$ together give $lcm(2,p)\mid (q-1)$
But $lcm(2,p)=2p$ as $p$ is odd.
So, $2pk=q-1\implies q=2pk+1$ for some natural number $k$
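A small numerical check of this conclusion (added for illustration, not part of the original answer): for $p=11$,
$$2^{11}-1=2047=23\cdot 89,\qquad 23=2\cdot 11\cdot 1+1,\qquad 89=2\cdot 11\cdot 4+1,$$
so both prime factors are indeed of the form $2pk+1$.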
-
Hey, I've just started studying number theory. When you refer to $\phi(q)$ do you mean the Euler $\phi$-function, which counts the number of integers smaller than $q$ which have a gcd of $1$ with $q$? Also, what does $ord_q2|p$ actually represent? – user48133 Nov 4 '12 at 11:58
http://mathhelpforum.com/advanced-algebra/149961-spectral-radius-sum-commuting-matrices.html | # Thread:
1. ## Spectral radius of sum of commuting matrices
If two matrices A and B commute, when is it true that rho(A+B) = rho(A)+rho(B), i.e. that the spectral radius of A+B is equal to the sum of the two spectral radii of A and B?
2. What if $B=-A$?
3. Thanks, with B = -A I guess it holds trivially, is there any other set of conditions that ensures this?
4. Why does it hold? The spectral radius of the zero matrix is zero; so if your formula held, every matrix would have a zero spectral radius.
5. shoot! of course
but is it true at least that if A and B commute then rho(A+B)<=rho(A)+rho(B)? I think yes. My question is really, when does this hold with equality.
6. Originally Posted by chrysi
but is it true at least that if A and B commute then rho(A+B)<=rho(A)+rho(B)? [...]
Sorry for reading your question too fast.
It might be true!
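One hedged observation that closes part of the gap (a sketch, assuming complex matrices): commuting matrices can be simultaneously triangularized, so the eigenvalues of $A+B$ are sums $\lambda_i+\mu_i$ of suitably paired eigenvalues of $A$ and $B$, which does give $\rho(A+B)\le\rho(A)+\rho(B)$. Equality can still fail, e.g. for
$$A=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad B=\begin{pmatrix}0&0\\0&1\end{pmatrix},$$
where $AB=BA$ and $\rho(A)=\rho(B)=1$ but $\rho(A+B)=1<2$. It does hold, for instance, when $B=cA$ with $c\ge 0$, since then $\rho(A+B)=(1+c)\rho(A)=\rho(A)+\rho(B)$.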
http://math.stackexchange.com/questions/283278/set-notation-for-infinite-subsets | # Set notation for infinite subsets.
In set notation, how can one express an infinite set of subsets where each subset has exactly two elements $\{an-1,\ an+1\}$, where $a$ is a constant, $n\ge1$, and the $n$ value for each subset is one more than that of the previous subset? Example: $\{ \{a\cdot 1-1,\ a\cdot 1+1\},~\{a\cdot 2-1,\ a\cdot 2+1\},~\{a\cdot 3-1,\ a\cdot 3+1\},~\ldots \}$
-
## 1 Answer
What about $\{\{an-1, an+1\}\ |\ n \in \mathbb{N}\setminus\{0\}\}$? Alternatively, for $n \in \mathbb{N}\setminus\{0\}$ you could define $A_n = \{an-1, an+1\}$ and the set you're interested in is $\{A_n\ |\ n \in \mathbb{N}\setminus\{0\}\}$.
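For instance (with a made-up value of the constant just to illustrate the notation), taking $a=3$ gives $\{A_n\ |\ n \in \mathbb{N}\setminus\{0\}\}=\{\{2,4\},\{5,7\},\{8,10\},\ldots\}$.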
-
Thanks, what about the part about $n$ for each subset being one more than the $n$ value for the previous subset? – Babiker Jan 21 at 6:28
A set has no order, so there is no sense of the previous subset. I've edited my answer so that it is clear that $n$ is a positive integer. Does that answer your question? – Michael Albanese Jan 21 at 6:40
Yes, thank you. – Babiker Jan 21 at 6:45
http://cs.stackexchange.com/questions/7062/why-is-the-unary-representation-of-a-number-exponentially-larger-than-a-base-k-r | # Why is the unary representation of a number exponentially larger than a base k representation of it?
According to a book I am reading, the unary representation of a number is exponentially larger than a base $k$ representation of it. I, however, feel that the unary representation should scale linearly with the input.
After all, 1 is 1, 2 is 11, 3 is 111, and so on, right? Wouldn't that be linear?
-
## 1 Answer
The base $k$ representation of the number $n$ takes $\log_{k}n$ bits, whereas the unary representation takes $n$, and of course $n = k^{\log_{k}n}$
The gap is more apparent with larger numbers, if we take our normal base 10 system, then writing 1000 takes 4 decimal digits, but writing it in unary takes 1000 ones.
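To pin the growth rate down (a short derivation in the same spirit as the answer): if the base-$k$ representation of $n$ has $d$ digits, then $n\ge k^{d-1}$, so the unary representation has length at least
$$k^{d-1},$$
which is exponential in the input length $d$. That is the sense in which the book's statement is meant: unary length is linear in the value $n$, but exponential in the length of the base-$k$ input.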
-
Thanks! I wonder how to prove though that the base $k$ representation of the number $n$ takes $\log_{k}n$ bits... the example with large numbers in base 10 does make sense though. – John Hoffman Dec 1 '12 at 1:11
A sketch of why it takes a logarithmic amount of space is to get the first digit, you divide your number by your base $k$, the remainder gives you your least significant digit, then you divide the quotient by $k$ again, getting the next digit as the remainder, and so on. So the length of the number $n$ representation base $k$ is the number of times you can divide $n$ by $k$ and get a non-zero quotient. I'll stick an example in the next comment. – Luke Mathieson Dec 1 '12 at 1:29
Say we have 25, and we want it in base 2. We divide by 2, and get quotient 12, remainder 1, so our base 2 number has 1 as its first bit. Then divide 12 by 2, get quotient 6, remainder 0, so our number-in-progress is 01. Compressing the notation a little now; 6/2 = 3r0 -> 001 : 3/2 = 1r1 -> 1001 : 1/2 = 0r1 -> 11001. At this point our quotient is down to zero, so we're done. This took 5 steps, which is $\lceil \log_{2}25\rceil$ (we need the ceiling function as we can't have a fraction of a digit - there's some other small technicalities too, exact powers take $1+log_{k}n$ digits). – Luke Mathieson Dec 1 '12 at 1:37
Thank you, that makes sense! – John Hoffman Dec 1 '12 at 1:50
http://math.stackexchange.com/questions/124320/counterexamples-in-convergence-in-mx-and-c-0x | # Counterexamples in convergence in $M(X)$ and $C_0(X)$.
I'm trying to get a good handle on analysis counterexamples as they relate to convergence in $M(X)$ and $C_0(X)$. Awhile back there was an excellent discussion of pointwise convergence, convergence in $L^p$ norm, weak convergence in $L^p$ and convergence in measure, here.
How about a similar set of counterexamples for $M(X)$ and $C_0(X)$? Here we have the notions of vague convergence, weak* convergence, convergence in norm (where the norm of a complex Radon measure is its total variation).
a) $\mu_n\to 0$ vaguely, but $\|\mu_n\|=|\mu_n|(X)\nrightarrow 0$.
b) $\mu_n\to 0$ vaguely, but $\int f\ d\mu_n\nrightarrow \int f\ d\mu$ for some bounded measurable $f$ with compact support.
c) $\mu_n\ge 0$ and $\mu_n\to 0$ vaguely, but $\mu_n((-\infty,x])\nrightarrow \mu((-\infty,x])$ for some $x\in\mathbb{R}$.
d) $\{f_n\}\in C_0(X)$ converges weakly to some $f$, but not pointwise.
Any links to conceptual ways of internalizing these different notions of convergence, in addition to providing counterexamples, would be greatly appreciated.
Thanks.
EDIT: Let me define the notions of convergence as Folland does:
Vague convergence means convergence with respect to the vague topology on $M(X)$, which is also known as the weak* topology on $M(X)$, which means that $\mu_\alpha\to \mu$ iff $\int f\ d\mu_\alpha\to \int f\ d\mu$ for all $f\in C_0(X)$.
Weak convergence means convergence on $X$ with respect to the topology generated by $X^*$.
The norm on $M(X)$ is given by the total variation, so that $\|\mu\|=|\mu|(X)$, so that convergence with respect to this norm means that $|\mu_n-\mu|(X)\to 0$.
Incidentally I find it awkward working with the total variation in such discussions of convergence because there is no clear geometry to work with as far as I can tell; the definition is rather too abstract for me at the moment.
-
Can you give your definitions for the various modes of convergence? Authors sometimes vary in their usage. Anyway, as a hint, consider $\delta_{x_n}$, a measure with a unit point mass at some $x_n \in \mathbb{R}$, and consider what happens as $x_n \to x$ or $x \to \pm \infty$. – Nate Eldredge Mar 25 '12 at 18:10
Thanks, Nate. I added the definitions to the question - I am using Folland as my guide through analysis. I actually considered that point mass measure, possibly from some other resource, but I can't find a way to make it work with the definitions. – Eric Gregor Mar 25 '12 at 20:01
So let's start with part (a). Let $\mu_n = \delta_n$ be a point mass at $n$. I claim that this sequence has the desired properties. Can you verify this from the definitions? If not, where do you get stuck? – Nate Eldredge Mar 25 '12 at 23:46
The total variation of a complex Radon measure is the positive measure $|\nu|$, where $d|\nu|=|f| d\mu$ for $\mu$ a positive measure, where by the Riesz Representation theorem $f$ is in $C_0(X)$. So if $\mu_n=\delta_n\to 0$ as $n\to\infty$, s.t. $|\mu_\alpha|(X)\to |\mu|(X)$ this means that for some $f\in C_0(X)$, $f(n)=|\mu_\alpha|=\int |f_\alpha|\ d\mu \nrightarrow \int |f|\ d\mu=|\mu|(X)=0$, but $\delta_n\to 0$ vaguely since for all $f\in C_0(X)$ for $n$ large $\int f\mu_n$ is small. How is that? – Eric Gregor Mar 26 '12 at 2:59
Your definition of total variation seems a little confused, you might want to review the definition. However it is worth noting that for a positive measure $|\mu| = \mu$. So in fact we have $\lVert \delta_n \rVert = 1$ for all $n$. It is also worth knowing that for a complex Radon measure $\mu$, $\lVert \mu \rVert = \sup\{\int f \,d\mu : f \in C_0(X), \lVert f \rVert_\infty \le 1\}$, i.e. the total variation norm is the operator norm in $C_0(X)^*$. – Nate Eldredge Mar 26 '12 at 3:07
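Putting the hint together for part (a) (a sketch, taking $X=\mathbb{R}$ and $\mu_n=\delta_n$ as in the comments): for every $f\in C_0(\mathbb{R})$,
$$\int f\, d\mu_n=f(n)\longrightarrow 0,$$
so $\mu_n\to 0$ vaguely, while $\lVert\mu_n\rVert=|\delta_n|(\mathbb{R})=1\nrightarrow 0$.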
http://math.stackexchange.com/questions/215773/variance-of-ratio-of-two-random-variables | # Variance of Ratio of Two Random Variables
Suppose we have two random variables $X$ and $Y$ with means $\mu_x, \mu_y$ and variances $\sigma_{X}^{2}$ and $\sigma_{Y}^{2}$. How would we derive $\text{Var} \left(\frac{X}{Y} \right)$?
Edit. $X$ and $Y$ are normally distributed.
-
do you have the distribution of $X$ and $Y$? – Seyhmus Güngören Oct 17 '12 at 17:50
## 2 Answers
As soon as the distribution of a random variable $Z$ has positive continuous density at zero, $\frac1Z$ is not integrable.
In the case at hand, $\frac1Y$ is not integrable. Since $X$ is independent of $Y$, $\frac{X}Y$ is not integrable either, a fortiori the variance of $\frac{X}Y$ does not exist, except in the degenerate case when $\sigma_Y^2=0\ne\mu_Y$.
To show the first assertion, consider $Z$ with density at least $\varepsilon\gt0$ on the interval $(-z,z)$. Then $$\mathbb E\left(\frac1{|Z|}\right)\geqslant\int_{-z}^z\frac\varepsilon{|t|}\,\mathrm dt=+\infty.$$
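For the concrete case in the question (a standard fact, stated here only for orientation): if $X$ and $Y$ are independent standard normals, then $Z=X/Y$ has the Cauchy density
$$f_Z(z)=\frac{1}{\pi(1+z^2)},$$
for which $\mathbb E|Z|=\infty$, so neither the mean nor the variance of $X/Y$ exists.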
-
can one define a density for $X/Y$ on the extended real axis? Does it make sense? – Seyhmus Güngören Oct 17 '12 at 18:10
@SeyhmusGüngören There is nothing to further define, the density of X/Y does exist, on the non extended real axis (since X/Y is almost surely finite). – Did Oct 17 '12 at 20:50
To illustrate the problem let's look at a little R code. I'll define a routine that samples $n$ times from a normal distribution (calling the results $X$) and $n$ times from another normal distribution (calling the result $Y$) and then returns the variance of $Z=X/Y$.
````
f <- function(n) {
  X <- rnorm(n)  # '<-' is R's assignment operator; rnorm() draws n standard normal samples
  Y <- rnorm(n)  # another n independent standard normal samples
  Z <- X / Y     # element-wise ratio; each entry is Cauchy-distributed
  var(Z)         # sample variance of the ratios
}
````
Let's look at the output for a few different random samples:
````> f(1e6)
[1] 14135397
> f(1e6)
[1] 706438.6
> f(1e6)
[1] 5685218
> f(1e6)
[1] 11334216
> f(1e6)
[1] 2090359
````
You can see that we're getting results from as low as 700,000 up to more than 14,000,000. The variance is completely dominated by large values of $Z$, corresponding to values of $Y$ near zero. This is what non-integrability looks like "in practice".
-
This is what non-integrability looks like. $X/Y$ has a density. – Nate Eldredge Oct 28 '12 at 3:39
Thanks Nate; corrected. – Chris Taylor Oct 28 '12 at 8:16
http://math.stackexchange.com/questions/1311/are-there-more-rational-numbers-than-integers/1316 | # Are there more rational numbers than integers?
I've been told that there are precisely the same number of rationals as there are of integers. The set of rationals is countably infinite, therefore every rational can be associated with a positive integer, therefore there are the same number of rationals as integers. I've ignored sign-related issues, but these are easily handled.
To count the rationals, consider sets of rationals where the denominator and numerator are positive and sum to some constant. If the constant is 2 there's 1/1. If the constant is 3, there's 1/2 and 2/1. If the constant is 4 there's 1/3, 2/2 and 3/1. So far we have counted out 6 rationals, and if we continue long enough, we will eventually count to any specific rational you care to mention.
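That informal counting can even be written as an explicit formula (a sketch, for positive fractions $p/q$ listed in the order just described, without reducing to lowest terms): the fraction $p/q$ receives the index
$$N(p,q)=\frac{(p+q-1)(p+q-2)}{2}+p,$$
so $N(1,1)=1$, $N(1,2)=2$, $N(2,1)=3$, $N(1,3)=4$, $N(2,2)=5$, $N(3,1)=6$, matching the six fractions counted above.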
The trouble is, I find this very hard to accept. I have two reasons. First, this logic seems to assume that infinity is a finite number. You can count to and number any rational, but you cannot number all rationals. You can't even count all positive integers. Infinity is code for "no matter how far you count, you have never counted enough". If it were possible to count to infinity, it would be possible to count one step less and stop at count infinity-1 which must be different to infinity.
The second reason is that it's very easy to construct alternative mappings. Between zero and one there are infinitely many rational numbers, between one and two there are infinitely many rational numbers, and so on. To me, this seems a much more reasonable approach, implying that there are infinite rational numbers for every integer.
But even then, this is just one of many alternative ways to map between ranges of rationals and ranges of integers. Since you can count the rationals, you can equally count stepping by any amount for each rational. You can use 1..10 for the first rational and 11..20 for the second etc. Or 1..100 and 101..200 etc, or 1..1000 and 1001..2000 etc. You can map a finite range of integers of any size to each rational this way and, since there is no finite upper bound to the stepping amount, you could argue there are potentially infinite integers for every single rational.
So... can anyone convince me that there is a single unambiguous correct answer to this question? Are there more rational numbers than integers, or not?
EDIT
Although I've already accepted an answer, I'll just add some extra context.
My reason for questioning this relates to the Hilbert space-filling curve. I find this interesting because of applications to multi-dimensional indexing data structures in software. However, I found Hilbert's claim that the Hilbert curve literally filled a multi-dimensional space hard to accept.
As mentioned in a comment below, a one meter line segment and a two meter line segment can both be seen as sets of points, but (by the logic in answers below) those two sets are both the same size (cardinality). Yet we would not claim the two line segments are both the same size. The lengths are finite and different. Going beyond this, we most certainly wouldn't claim that the size of any finite straight line segment is equal to the size of a one-meter-by-one-meter square.
The Hilbert curve reasoning makes sense now - the set of points in the curve is equal to the set of points in the space it fills. Previously, I was thinking too much about basic geometry, and couldn't accept the size of a curve as being equal to the size of a space. However, this isn't based on a fallacious counting-to-infinity argument - it's a necessary consequence of an alternative line of reasoning. The two constructs are equal because they both represent the same set of points. The area/volume/etc of the curve follows from that.
-
2
Second to last paragraph: you can also argue that there are potentially infinite integers for every single integer. – Qiaochu Yuan Jul 31 '10 at 20:17
@Qiaochu Yuan - that had occured to me, but I thought trying to argue that there are more integers than integers or visa versa was well down the road to insanity ;-) – Steve314 Jul 31 '10 at 20:51
Re edit: There's a confusion here between two distinct concepts of cardinality and measure. The cardinality of the set of points on a one-metre line segment and on a two-metre line is the same, but they have different measure (length, in this case). Similarly, the Hilbert space-filling curve fills all the points, but being a curve, it has measure 0 relative to the square it fills (it has length, but no area). The confusion arises because "size" is used loosely to refer to either concept. – ShreevatsaR Jul 31 '10 at 21:32
@ShreevatsaR - yes, that's my point. Why should "size" mean "cardinality of the set"? Simple answer - it's the only way to get a meaningful answer. But if you approach the issue worrying about curves and areas, it's hard not to see a different sense of the word "size". – Steve314 Jul 31 '10 at 21:45
– Noah Snyder Apr 11 at 0:48
## 6 Answers
Mathematicians have very precise definitions for terms like "infinite" and "same size". The single unambiguous correct answer to this question is that using the standard mathematical definitions, the rationals have the "same size" as the integers.
First, here are the definitions:
1. Define "0" = emptyset, "1" = {0}, "2" = {0,1}, "3" = {0,1,2}, etc. So, the number "n" is really a set with "n" elements in it.
2. A set A is called "finite" iff there is some n and a function f:A->n which is bijective.
3. A set A is called "infinite" iff it is not finite. (Note that this notion says nothing about "counting never stops" or anything like that.)
4. Two sets A and B are said to have the "same size" if there is some function f:A->B which is a bijection. Note that we do NOT require that ALL functions be bijections, just that there is SOME bijection.
Once one accepts these definitions, one can prove that the rationals and integers have the same size. One just needs to find a particular bijection between the two sets. If you don't like the one you mentioned in your post, may I suggest the Calkin-Wilf enumeration of the rationals? (Simply search for "Calkin Wilf counting rationals"; the first .pdf has what I'm talking about.)
Of course, these give bijections between the naturals (without 0) and the rationals, but once you have a bijection like this, it's easy to construct a bijection from the integers to the rationals by composing with a bijection from the naturals to the integers.
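For the curious, that Calkin-Wilf enumeration can be written down in a few lines. The sketch below is only an illustration (it uses Newman's recurrence $q \mapsto 1/(2\lfloor q\rfloor - q + 1)$, a standard closed form for this enumeration, and nothing beyond the Python standard library):

```python
from fractions import Fraction
from itertools import islice

def calkin_wilf():
    """Yield every positive rational exactly once, in Calkin-Wilf order."""
    q = Fraction(1)
    while True:
        yield q
        # Newman's recurrence: the next term is 1 / (2*floor(q) - q + 1)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

print(list(islice(calkin_wilf(), 8)))
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3),
#  Fraction(3, 2), Fraction(2, 3), Fraction(3, 1), Fraction(1, 4)]
```

Sending the n-th positive integer to the n-th term of this stream is one concrete bijection of the kind the definitions above ask for.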
-
1
Excellent answer. I hadn't really considered that this was set theory (someone else added that tag). Now, I can see that this is the only way to interpret relative "size" that makes sense in this context. Thanks. – Steve314 Jul 31 '10 at 20:08
My last class in math was in the German high school equivalent, 37 years ago, so I'm a bit rusty ;-) -- However I found Jason's explanation convincing for me. -- Now I've got this idea: could one apply a concept of "density" upon both sets of numbers (integers and rationals) and somehow prove that "while both are of same size, the rationals have a higher density"? [Of course it would all depend on a stringent definition for "density"... but maybe such a thing/concept/idea already exists in math and number theory?? -- I would be interested to know if that is the case. – Kurt Pfeifle Jul 31 '10 at 21:08
Sure. One can study the topology of the rational numbers and the integers as subsets of the real line, and they are very different. The rational numbers are, in the technical sense, dense (their closure is R), and the integers are discrete. That's one sense in which the rationals are more dense than the integers. – Qiaochu Yuan Aug 1 '10 at 5:21
Another interpretation of density might be Lebesgue measure. Funnily enough, the integers and the rationals both have measure zero, as does any countable subset of R. – Qiaochu Yuan Aug 1 '10 at 5:22
1
I think one of the main difficulties here is that there are many notions of "size". Off the top of my head, there are the number of elements of a set, the measure as a subset of R^n, and whether or not something has high density. The problem is that mathematically, all 3 of these concepts diverge, while our everyday experience (or at least mine) tells us the 3 should be (roughly) the same. – Jason DeVito Aug 1 '10 at 18:18
In mathematics a set is called infinite if it can be put into a 1-1 correspondence with a proper subset of itself, and finite if it is not infinite. (I know it seems crazy to have the concept of infinite as primitive and finite as a derived notion, but it's simpler to do this, since otherwise you must assume that the integers exist before saying that a set is finite)
As for your remarks:
- With your method (if you don't forget to throw out fractions like 4/6, which is equal to 2/3) you actually counted the rationals, since for each number you have a function which associates it to a natural number. It's true that you cannot count ALL rationals, or all integers; but you cannot draw a whole straight line either, can you?
- With infinite sets you may build infinitely many mappings, but you just need a single 1-1 mapping to show that two sets have the same size.
-
The straight line argument is significant. I cannot draw a 1 meter line segment by plotting a finite number of points, and the same for a 2 meter line segment. The cardinalities of the sets of points for these two lines I can (now, given other answers here) accept as equal. However, very few people would argue that the size of a 1 meter line segment is equal to the size of a 2 meter line segment. This isn't irrelevant since a co-ordinate system is simply a bijection of number-tuples to points in some space. There's more than one meaning of "size" IOW, but only one can answer my question. – Steve314 Jul 31 '10 at 20:32
size matters :-), but I was thinking of an infinite straight line. – mau Jul 31 '10 at 20:44
that's why I specifically said "line segment". – Steve314 Jul 31 '10 at 20:46
You may not be very satisfied with this answer, but I'll try to explain anyway.
Countability. We're not really talking about whether you can "count all of the rationals", using some finite process. Obviously, if there is an infinite number of elements, you cannot count them in a finite amount of time using any reasonable process. The question is whether there is the same number of rationals as there are positive integers; this is what it means for a set to be "countable" --- for there to exist a one-to-one mapping from the positive integers to the set in question. You have described such a mapping, and therefore the rationals are "countable". (You may disagree with the terminology, but this does not affect whether the concept that it labels is coherent.)
Alternative mappings. You seem to be dissatisfied with the fact that, unlike the case of a finite set, you can define an injection from the natural numbers to the rationals which is not surjective --- that you can in fact define a more general relation in which each integer is related to infinitely many rationals, but no two integers are related to the same rational numbers. Well, two can play at that game: you can define a relation in which every rational number is related to infinitely many integers, and no two rationals are related to the same integers! Just define the relation that each positive rational a/b (in lowest terms) is related to all numbers which are divisible by $2^a$ but not $2^{a+1}$, and by $3^b$ but not $3^{b+1}$; or more generally by $2^{ka}$ and $3^{kb}$ respectively, for any positive integer k. (There are, as you say, sign issues, but these can be smoothed away.)
You might complain that the relation I've defined isn't "natural". Perhaps you have in mind the fact that the integers are a subset of the rationals --- a subgroup, in fact, taking both of them as additive groups --- and that the factor group ℚ/ℤ is infinite. Well, this is definitely interesting, and it's a natural sort of structure to be interested in. But it's more than what the issue of "mere cardinality" is trying to get at: set theory is interested in size regardless of structure, and so we don't restrict to maps which have one or another kind of "naturalness" about them. Of course, if you are interested in mappings which respect some sort of structure, you can build theories of size based on that: this is what is done in measure theory (with measure), linear algebra (with dimension), and indeed group theory (with index). So if you don't like cardinality as set theorists conceive it, you can look at more structured measures of size that you find more interesting!
Immediate predecessors. A somewhat unrelated (but still important) complaint that you make is this: "If it were possible to count to infinity, it would be possible to count one step less and stop at count infinity-1 which must be different to infinity." The question is: why would you necessarily be able to stop at 'infinity minus one'? This is true for finite collections, but it does not necessarily hold that anything which is true of finite collections is true also for infinite ones. (In fact, obviously, some things necessarily will fail.) --- This is important if you study ordinals, which mirrors the process of counting itself in some ways (labelling things as being "first", "second", "third", and so forth), because of the concept of a limit ordinal: the first "infinitieth" element of a well-ordering doesn't have any immediate predecessors! Again, you are free to say that these are concepts that you are not interested in exploring personally, but this does not mean that they are necessarily incoherent.
To summarize: the set theorists measure "the size of a set" using a simple definition which doesn't care about structure, and which may violate your intuitions if you like to take the structure of the integers (and the rational numbers) very seriously, and also want to preserve your intuitions about finite sets. There are two solutions to this: try to stretch your intuition to accommodate the ideas of the set theorists, or study a different branch of math which you find more interesting!
-
On the "alternative mappings" I gave extremes in both directions - one rational to many integers as well as the other way around. On predecessors, I'm basically restating the classic argument for why infinity is not a number - ie because it doesn't behave as a number. As for your implication that I'm not up to handling set theory - I've coped with it perfectly well when I've needed to. In this case, I didn't realise I was dealing with set theory. BTW - at 39 years old, I am not looking to study anything formally. Don't assume everyone who asks about math is a student please. – Steve314 Jul 31 '10 at 20:43
(1) Whoa man, I never said you're "not up to handling set theory" --- I just suggested that if you find their definitions to be not the ones you care about, there are other areas. I have basically this attitude towards higher cardinalities myself: I understand them, I'm just not sure why we should care, when we can't even prove whether or not the continuum is the smallest uncountable cardinal. (2) It really depends on what you mean by "a number"; why should that property be necessary? (3) I only wrote my answer based on your question, which was typical of students learning about cardinality. – Niel de Beaudrap Jul 31 '10 at 21:32
OK, sorry for the oversensitivity there. In my case, I suspect my confusion is typical of programmers who only occasionally worry about computer science and math. Set theory is OK, but I don't remember the details off the top of my head, and in general I don't need to deal with the infinite. Even the cardinality of the set of integers is usually, to me, a little over 4 billion. – Steve314 Jul 31 '10 at 21:49
The cardinality of the set of rationals is the same as the cardinality of the integers is the same as the cardinality of the natural numbers.
When we count a finite set of elements, we are constructing a one-one map from the set onto a finite initial segment of the natural numbers. If we want to know if two finite sets have the same cardinality (are equi-cardinal) we can either: 1) count both sets and see if we get the same number, or 2) attempt to construct a one-one map from one set onto the other. If we can construct the map aimed at in (2), then the sets are equi-cardinal.
Generalizing that procedure from the finite sets to arbitrary sets, we get that for any two sets, the sets have the same cardinality (are equi-cardinal) if there exists a bijection (a one-one map between the sets that is onto the target rather than merely into). For the finite case, if there is a one-one map that is a bijection, all one-one maps are bijective. That is not the case for infinite sets, which is the root of your second concern.
To address that second concern, consider the map from the negative integers to the positive integers which maps each negative integer to its absolute value. The existence of that map shows that the two sets are equi-cardinal. We can, of course, construct one-one maps from the negative integers to the positive integers that are into rather than onto. (Consider the map that takes each negative integer to its product with -2.) But, the existence of these alternative maps doesn't affect the fact that there is at least one bijection between the sets, and that is all it takes for those sets to be equi-cardinal.
As for your first concern, I don't see why you think the procedure assumes that "infinity is a finite number". What it involves is specifying a mapping function from one set to the other that is one-one and onto. That attempt can certainly fail, as Cantor's Diagonalization Argument that the cardinality of a set is always strictly less than the cardinality of its power set shows. (A relevant application of that technique is the well-known proof that the cardinality of the reals is greater than the cardinality of the natural numbers.)
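For reference, the diagonal step just mentioned is short; this is the standard statement, recorded here only as a reminder. Given any function $f$ from a set $A$ to its power set $\mathcal{P}(A)$, consider
$$D = \{a \in A : a \notin f(a)\}.$$
If $D = f(a_0)$ for some $a_0 \in A$, then $a_0 \in D \iff a_0 \notin f(a_0) = D$, a contradiction. So no such $f$ is surjective, and $A$ has strictly smaller cardinality than $\mathcal{P}(A)$.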
-
My concern about infinity was that it seemed to me that you could only justify one particular bijection as "the" bijection if there were a finite number of items in each set. If you can't count them all, you can't show that the last from one set maps to the last one from the other set. I guess it's really not a separate reason, but just an aspect of my "but there are other bijections" fallacy. – Steve314 Jul 31 '10 at 20:23
Actually, there being other bijections isn't a fallacy. Each one-to-one mapping implies the cardinalities are the same. There would only be a problem if you insisted that the existence of other kinds of injections, other than the bijections, was an issue. – Niel de Beaudrap Aug 1 '10 at 4:20
You can think about it a different way. Consider the set of real numbers between 0 and 1, and then the set of real numbers between 0 and 2.
By intuition, it seems that the set of real numbers between 0 and 2 has double the size of the set between 0 and 1. However, this is not the case, because the two sets have the same cardinality.
Consider the function $f(x) = 2x$. Every real between 0 and 1 is bijected to a real between 0 and 2. Therefore the sets are of the same size.
-
This in itself isn't convincing. There are alternative bijections, therefore it looks ambiguous at best. And the word "cardinality" is just a word - naming something doesn't make logical difficulties go away. Jason DeVitos answer was better - it's a consequence of the definition of "size" used, in which the existence of any bijection is sufficient. If I ask "why choose that particular bijection", the answer is "because it exists". – Steve314 Jul 31 '10 at 20:16
Since one can construct the rationals from a ratio of integers, from here one will see that there will be more rational numbers than integers (there are more 'options' to form numbers, since each one is a fraction of two integers, excepting an integer over zero). Likewise I suspect that the natural numbers have a smaller count than the integers, since you can construct the integers using the natural numbers and -1 times the natural numbers, along with zero. Correct me if I'm wrong, but I think this is all valid.
-
2
You are wrong. Did you read any of the answers that were already posted? Did you read the Wikipedia article on countable sets? Or the Wikipedia article on cardinality for that matter? – Zev Chonoles♦ Apr 11 at 0:43
– Steve314 Apr 11 at 1:24
Starting from integers, your extra degree of freedom can derive new values - but that's just saying that the set of rationals is a strict superset of the set of integers. That gives a sense in which the set of rationals is larger, but that sense is not the same as set cardinality - the two versions of size are equivalent for finite sets, but not (at least in general) for infinite sets. The strict-superset ordering isn't even fully defined, and as most mathematicians consider "size" and "cardinality" synonyms, using "size" for something else causes confusion and grumpiness. – Steve314 Apr 11 at 1:25
http://physics.stackexchange.com/questions/tagged/symmetry-breaking+higgs-mechanism | # Tagged Questions
1 answer, 73 views
### How to find the Higgs coupling with a mixing matrix?
It is known that the couplings to the Higgs are proportional to the mass for fermions; $$g_{hff}=\frac{M_f}{v}$$ where $v$ is the VEV of the Higgs field. I'm trying to figure out why this is true ...
1 answer, 244 views
### Spontaneous symmetry breaking in SU(5) GUT?
At the end of this video lecture about grand unified theories, Prof. Susskind explains that there should be some kind of an additional Higgs mechanism at work, to break the symmetry between the ...
3 answers, 222 views
### Higgs Boson: The Big Picture
First, please pardon the ignorance behind this question. I know a fair amount of math but almost no physics. I'm hoping someone can give me a brief "big picture" explanation of how physicists were ...
3 answers, 249 views
### How come a photon acts like it has mass in a superconducting field?
I've heard the Higgs mechanism explained as analogous to the reason that a photon acts like it has mass in a superconducting field. However, that's not too helpful if I don't understand the latter. ...
2 answers, 127 views
### Do particles gain mass only at energy levels found during the big bang?
I am trying to make sure my understanding is correct. At energies and temperatures found during the big bang (or at CERN recently), the Higgs mechanism comes into effect. When it does, there is a ...
2 answers, 1k views
### Why do we need Higgs field to re-explain mass, but not charge?
We already had definition of mass based on gravitational interactions since before Higgs. It's similar to charge which is defined based on electromagnetic interactions of particles. Why did Higgs ...
http://quant.stackexchange.com/questions/7026/analyzing-the-angle-between-vector-of-weights-and-vector-of-returns-in-mean-vari/7035 | # Analyzing the angle between vector of weights and vector of returns in mean-variance optimization
I am using the paper "A Sharper Angle on Optimization" by Golts and Jones (2009) as a basis for my (minor) masters thesis in mathematical finance. The paper focuses on the mean-variance analysis of Markowitz but instead turns attention to the vector geometry of the returns vector and vector of resultant portfolio weights. As it is a working paper, most of the concepts are not elaborated on well enough to make sense or for one to implement by him/herself. The paper may be accessed on this link: http://ssrn.com/abstract=1483412.
One of the ideas I am struggling with is the angle between the returns vector and vector of weights and how this angle can be related to the condition number of the covariance matrix. The authors then employ robust optimization techniques to control this angle (i.e. minimize it) to obtain more intuitive investment portfolios.
The authors state that the angle between the returns and positions vector, call it $\omega$, is bounded from below as: $\cos(\omega)=\frac{\alpha^{T}\Sigma^{-1}\alpha}{\sqrt{\alpha^{T}\alpha}\sqrt{\alpha^{T}\Sigma^{-2}\alpha}} \geq \frac{\theta_{\max}\theta_{\min}}{(\theta_{\max}^{2}+\theta_{\min}^{2})/2}$
where $\alpha$ is the vector of returns and $\Sigma$ is the covariance matrix with spectral decomposition given by $\Sigma=Q^{T}\mbox{diag}(\theta_{1}^{2},...,\theta_{n}^{2})Q$ where $\theta_{1}^{2} \geq \theta_{2}^{2} \geq ... \geq \theta_{n}^{2} > 0$ are the eigenvalues in decreasing order and where we let $\theta_{\max}^{2}=\theta_{1}^{2}$ and $\theta_{\min}^{2}=\theta_{n}^{2}$.
If anyone has any ideas on how the authors may have arrived at this, as well as what it means graphically, I would really appreciate it.
Many thanks in advance!
-
1
how is this related to finance? – Freddy Jan 19 at 2:34
@Freddy: We are dealing with the Markowitz mean-variance optimization setup. Sorry if that was not clear. – Geraldine Bailey Jan 19 at 10:49
– Richard Jan 27 at 21:35
@Richard: Thank you! Will take a look at it. – Geraldine Bailey Jan 28 at 0:09
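One route to the stated bound (offered only as a sketch of a possible derivation, not as the authors' actual argument) is the Kantorovich inequality. Substituting $z=\Sigma^{-1/2}\alpha$ turns the cosine into
$$\cos(\omega)=\frac{z^{T}z}{\sqrt{(z^{T}\Sigma z)\,(z^{T}\Sigma^{-1}z)}},$$
and Kantorovich's inequality for the positive definite matrix $\Sigma$, whose extreme eigenvalues are $\theta_{\min}^{2}$ and $\theta_{\max}^{2}$,
$$(z^{T}\Sigma z)(z^{T}\Sigma^{-1}z)\leq\frac{(\theta_{\max}^{2}+\theta_{\min}^{2})^{2}}{4\,\theta_{\max}^{2}\theta_{\min}^{2}}\,(z^{T}z)^{2},$$
then gives
$$\cos(\omega)\geq\frac{2\,\theta_{\max}\theta_{\min}}{\theta_{\max}^{2}+\theta_{\min}^{2}}=\frac{\theta_{\max}\theta_{\min}}{(\theta_{\max}^{2}+\theta_{\min}^{2})/2}.$$
Writing $\kappa=\theta_{\max}^{2}/\theta_{\min}^{2}$ for the condition number of $\Sigma$, the bound reads $\cos(\omega)\geq 2\sqrt{\kappa}/(1+\kappa)$, which makes explicit how the angle between returns and positions can open up as the covariance matrix becomes ill-conditioned.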
http://mathpages.blogspot.com/2008/05/algorithm-for-solving-sudoku-puzzles.html | # Math Pages Blog
God used beautiful mathematics in creating the world.
## Monday, May 5, 2008
### An algorithm for solving Sudoku puzzles
In the previous post, Impossible Sudoku, I said that there exists a general algorithm that allows one to solve any Sudoku puzzle. In this post I will present this algorithm and the solutions to the Sudoku puzzles I posted in the previous post.
First of all, in order to talk about solving Sudoku we need to define what a proper Sudoku is - a proper Sudoku puzzle is a puzzle that has a unique solution. It is important to notice that in this definition there is no requirement that a proper Sudoku can be solved - the only thing we require is that there exists a way, and it is the only one, to put the missing numbers in the puzzle without breaking the nine-in-a-row/column and nine-in-a-square rules.
In this post I will talk only about proper Sudoku puzzles. There are 6,670,903,752,021,072,936,960 such Sudoku by the way...
Now that we have this definition we can talk about solutions. Let's prove that all proper Sudoku puzzles can be solved:
A Sudoku puzzle is considered to be solvable if there exists a finite series of steps that will produce a filled grid. To prove that for all proper Sudoku puzzles such a series exists we will need to use linear algebra. If you are not familiar with it, please skip this paragraph. Let's define the following vector space:
$V=\{(a_{1},a_{2},\ldots,a_{81}) \mid a_{i}\in Z_{9}\}$
This is an 81-dimensional vector space, where $Z_{9}$ denotes the integers mod 9. Now we can talk about a function from the set of all Sudoku puzzles to V. This function is not a nice one - it moves all the 1s in the puzzle to 0 and the 9s to 8, so it loses information. However, this is not important. What is important is that for a proper Sudoku its solution is moved by the function to a unique vector in V. Because V is a vector space of finite dimension it must have a basis. The standard basis of V is clearly (1,0,0,...,0),.....,(0,0,0,...,1). Let's select one proper Sudoku. The solution of our Sudoku can therefore be expressed as a finite linear combination of these vectors. The problem is that each of them can appear more than once in the combination - we don't want this to happen. Therefore we will sum all the appearances of the same vector - for example, if (1,0,0,...,0) appears 5 times in the combination we will write (5,0,0,...,0) instead of it. It is very easy to see that the puzzle we chose can also be written as a sum of part of the linear combination we got in the previous step - so let's sum that part to get the puzzle. What we now have is the following equation:
$U+v_{1}+\cdots+v_{k}=w$
In this equation U is our original puzzle and w is the solution. The $v_{i}$ are the remaining vectors in the linear combination. Each of them represents a number and a cell, so adding one of them is filling a cell. We already saw that k is a finite number, so indeed there exists a finite series of steps that solves a given proper Sudoku puzzle.
You are probably wondering why I wrote this proof - isn't it obvious that a Sudoku puzzle can be solved? Well, yes. But if it is obvious, why not prove it? After the proof it is no longer obvious - it is just a plain fact.
Now that we have shown that there is a solution let's talk about how to find it. The algorithm consists of three main steps that are repeated until a solution is reached:
Step one - Scanning:
Cross-hatching: The scanning of rows to identify which line in a region may contain a certain numeral by a process of elimination. The process is repeated with the columns. It is important to perform this process systematically, checking all of the digits 1–9. It is usually faster to check for a digit in all of the grid at once than to check square by square.
Column check: After cross-hatching it is usually a good idea to use elimination on columns. It is done in the same manner as with squares.
Step two - Logic:
This step consists of attempts to make logical conclusions. For example, in the puzzle above you don't know where 4 is in the center square. But you know that it must be in the right column of this square. You can use this information to conclude that it cannot be in the right column of the top-middle square. In this case this is enough to find 4 in that square.
Step three - Guess:
If you don't know you can always guess. In this step it is important not to make mistakes - otherwise it will not work.
If you want to make a guess you will need to find a good digit to guess. The best option is to find a cell (or row/column) with two candidate numbers. After finding such a cell make a guess and then repeat steps one and two.
If you get an impossibility at some point then your guess was wrong - and because there were only two candidate numbers it is the second number that should be in that cell.
If after repeating steps one and two you need to make another guess, repeat step three - if your second guess is incorrect it means that your previous guess requires the second candidate in the cell you chose for the second guess.
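To make step three concrete, here is a minimal guess-and-backtrack solver in Python. It is a sketch of mine, not the program discussed in the comments below; empty cells are represented by 0, and grid is a list of nine lists of nine integers.

```python
def candidates(grid, r, c):
    """Digits that can legally be placed in cell (r, c) of a 9x9 grid."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    """Fill grid in place; return True if a solution was found."""
    # Pick the empty cell with the fewest candidates (the "two candidate numbers" idea above).
    best = None
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                cands = candidates(grid, r, c)
                if best is None or len(cands) < len(best[2]):
                    best = (r, c, cands)
    if best is None:
        return True                  # no empty cells left: the grid is solved
    r, c, cands = best
    for d in cands:                  # make a guess...
        grid[r][c] = d
        if solve(grid):
            return True              # ...and keep it if it leads to a solution
    grid[r][c] = 0                   # impossibility reached: undo the guess and backtrack
    return False
```

The scanning and logic steps above mostly serve to shrink the candidate lists, so that in practice very little guessing is actually needed.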
Notes:
1. Easy and medium puzzles can be solved using the first step only. Hard puzzles require step two, and very hard ones step three.
2. Don't overuse step three - it makes the game much less fun.
3. For step three you will need to keep track of the numbers you wrote after the guess. The Gnome-Sudoku I mentioned in the previous post has an inbuilt tracker function for cases like this.
4. It is possible to add another step to the algorithm - marks. It basically consists of recording all the candidates for a certain cell on the puzzle itself. I don't think that it is needed, and it is time consuming.
And now, as promised - solutions to the puzzles from the previous post:
Posted by Anatoly
#### 5 comments:
brundinbar said...
I solved a similar problem; the solver I've created starts with "logical" solving - it checks rows, columns, squares, etc. After all the possibilities are exhausted, the second section comes up - backtracking :)
Anatoly said...
Hello Brundibar,
Thanks for the link. I am going to write a separate post about your program, it certainly looks useful.
Mucha said...
Hi, it's been some time since I programmed the application, so I can't remember all the steps in detail. I'd like to refactor the program soon; then I'll describe the steps more clearly.
Have a nice day,
Brundibar
woerner1 said...
http://math.stackexchange.com/questions/213/mathematical-subjects-you-wish-you-learned-earlier/1035 | # Mathematical subjects you wish you learned earlier
I am learning geometric algebra, and it is incredible how much it helps me understand other branches of mathematics. I wish I had been exposed to it earlier.
Additionally I feel the same way about enumerative combinatorics.
What are some less popular mathematical subjects that you think should be more popular?
-
1
What is "geometric algebra"? – Kevin Lin Jul 28 '10 at 19:16
4
– Jonathan Fischoff Jul 28 '10 at 19:26
5
I fear this is going to degenerate into "List subjects that you like." – Nate Eldredge Aug 17 '10 at 1:04
5
@Nate: I agree. Every time I understand something in mathematics (i.e., every good day I have) I wish I understood it much earlier, of course. This includes material from subjects I first "learned" long ago... – Pete L. Clark Feb 4 '11 at 22:19
Everything I like. – Jonas Teuwen Feb 5 '11 at 11:52
## 19 Answers
Category theory and algebraic geometry.
I spent a lot of time in undergrad studying things that were kinda nifty, but way too classical to be of any use/interest beyond "fun math". When I got to grad school, category theory was assumed and made some of my courses much harder than they should've been.
In the words of Ravi Vakil, "algebraic geometry should be learned slowly over a number of years". I currently NEED algebraic geometry, so I don't have this number of years. I wish I would've started that a long time ago. Additionally, both of these topics would've helped me learn the things I was thinking about anyways, in particular commutative algebra.
-
What is a good way to get an overview of algebraic geometry and the problems it solves? – Jonathan Fischoff Jul 22 '10 at 20:10
8
For a general overview of ALL of alg geom, I have no idea. But here are some specifics. If you care about enumerative algebraic geometry, tinyurl.com/28bkunk . If you care about real solutions to algebraic systems, arxiv.org/abs/0907.1847 . If you care about representation theory, tinyurl.com/2dkss6h . If you care about mirror symmetry or string theory, arxiv.org/abs/math/0601041 . If you care about number theory, math.mit.edu/~poonen/782/782notes.pdf . If you care about PDE's or analysis in general, math.stanford.edu/~dbaskin/sdgs-microlocal.pdf . :D – BBischof Jul 22 '10 at 22:46
2
– BBischof Jul 23 '10 at 3:46
4
So we have recommendations for algebraic geometry and geometric algebra. :) – Oscar Cunningham Sep 17 '10 at 12:50
@BBischof Lots of caring. – Gustavo Bandeira Dec 16 '12 at 11:46
Though I'm sure it's not unpopular, I don't think many people learn it early: Group Theory. It's a real nice area with a lot of cool math and some neat applications (like cryptography).
-
As a physicists I totally support that answer. – Lagerbaer Feb 4 '11 at 19:28
A firm understanding of Group Theory radically changes the way you define problem/solution space in Computer Science. – awashburn Feb 17 at 23:11
Lattices and order theory. While these concepts are so ubiquitous, they seem to be banned from mathematics courses. Also, if you know something about order theory, many concepts from category theory turn out to be quite familiar. (E.g. view a poset as a category, then product resp. coproduct become infimum resp. supremum, the slice and coslice category are up and down set, etc.)
-
I wish I'd understood the importance of inequalities earlier. I wish I'd carefully gone through the classic book Inequalities by Hardy, Littlewood, and Poyla early on. Another good book is The Cauchy-Schwarz Masterclass.
You can study inequalities as a subject in their own right, often without using advanced math. But they're critical techniques for advanced math.
-
1
Can you elaborate a bit about its importance? My only acquaintance with inequalities is back in high school olympiad stuff. Perhaps I'm not an analysis guy, I never really used those olympiad stuff after I entered university. – Soarer Jul 28 '10 at 20:14
1
Inequalities are everywhere in analysis. Often you'll bootstrap a simple inequality, one you may have seen in high school, into a sophisticated inequality. For example, you might take ab <= (a^2 + b^2)/2 and parlay that into a theorem about operators on Banach spaces. – John D. Cook Jul 28 '10 at 20:51
One of the major reasons a lot of students struggle in undergraduate analysis is because they don't have command of basic inequalities. Proving limits rigorously is VERY confusing without this skill. I know I'M sorry I didn't learn it before that. – Mathemagician1234 Jan 29 at 5:20
I wish I'd learned logic much, much earlier. Obviously young students couldn't handle much depth, but at least a basic introduction to a few concepts would be nice. Just understanding the concept of axioms and deductive rules would put all of math into some perspective. When I finally understood that math was constructed with formal definitions and proofs (or, for example, that there was more than one useful way to axiomatize), I felt I'd been kept in the dark my whole life, doing something (math) that I had absolutely no understanding of.
-
Graph Theory is a fantastic field. First, the fact that abstract concepts can be readily visualized makes it engaging for new students. And second, I believe it provides solid foundations into mathematical thinking like proofs, and engages the student to explore other related fields.
-
I don't know what level of mathematics you are referring to, but here's my opinion after recently finishing my university's undergraduate curriculum.
Firstly, I would like to second Jan Gorzny's reply of Group Theory.
Second, I wish that I had learned linear algebra earlier. The topic usually has two semesters: matrix algebra and then an early proof-based introduction to vector spaces and linear transformations. The real work in this topic can't begin until after both of these classes are completed.
I also wish I had been exposed to topology earlier than I was. Of course there are two "standard" approaches here, and I suppose the approach that introduces general topology before advanced calculus would have better suited my tastes.
Here is a good book that may give some more insight into the heavily debated area where your question lies: Thomas Garrity, All the Mathematics You Missed But Need to Know for Graduate School
-
3
Nobody learns linear algebra in two semesters. Hoffman and Kunze or bust. – user126 Jul 21 '10 at 1:17
12
where by "nobody," Harry, you mean "most math majors in the United States who take a class covering any kind of abstract linear algebra." No need for "I've learnt [x subject] [faster/better/with harder books]" on this eminently reasonable question and answer. – Jamie Banks Jul 21 '10 at 1:25
I have taken more than two semesters of Linear Algebra, followed by a reading course using HK (a great book). But there is still much, much more to this subject - only accessible after one can make it through a book such as HK. – Tom Stephens Jul 21 '10 at 1:38
Thanks Tom the book looks interesting. I'm always looking for non-rigorous heuristics to mathematical subjects. – Jonathan Fischoff Jul 21 '10 at 2:51
@Tom I was initially very excited by that book, but later grew to dislike it. I found that in fact, much of what is in there is less relevant than other important things. Additionally, the style of presentation leaves much to be desired. I was sad when I realized that it didn't help me much. Instead I would recommend the Berkeley problems book. That material is absolutely essential, and by solving problems you can be sure your solid on the topics. – BBischof Jul 22 '10 at 14:38
Theory of computation, information theory and logic/foundations of mathematics are very interesting topics. I wish I had known them earlier. They are not unpopular (almost every university has a bunch of ToC people in its CS department...), but many math majors I know have never touched them.
They show you the limits of mathematics, computation and communication.
Logic shows there are things that can't be proved from a set of axioms even if they are true: Gödel's incompleteness theorem. There are other interesting theorems in the foundations of mathematics, like the independence of the continuum hypothesis from ZFC.
Theory of computation showed me things that are not computable, and problems that take exponential time or exponential space no matter what kind of algorithm you come up with.
Information theory establishes the minimum amount of information required to reconstruct some other information. It pops up in unexpected places. There is a proof that there are infinitely many primes using information theory (sorry, I can't find it; I can only tell you it exists. I might find it later).
-
4
I've removed the article link, as it does not belong on a site about mathematics (and because I think the rest of the answer has value, more so without the link, and the point of a CW format is to compile the best answers possible, which can mean substantially changing or combining answers). If anyone disagrees with this decision or wishes to discuss taking such action, please do it over on meta. – Jamie Banks Jul 21 '10 at 7:09
Information theory is typically only relevant to the comp sci sort, but I strongly agreed with the foundation/logic of mathematics/Godel's Incompleteness theorem. – Noldorin Jul 21 '10 at 8:08
1
@Katie, I agree with your uptake of the CW format, but not with your stand on the legitimacy of the link. The article obviously does not really deal with predicting the future, at least not in a constructive manner. The axiom of choice does have some interesting non-intuitive results which seem like "prediction", for example: xorshammer.com/2008/08/23/set-theory-and-weather-prediction – Tomer Vromen Jul 21 '10 at 17:02
4
I'm confused by the removal of the link with the rationale that it does not belong on a math site. The link is to an article which was recently published in the American Mathematical Monthly, which is, I believe, of all American periodicals devoted exclusively to math, the one with the largest circulation. What was found to be objectionable about this article? – Pete L. Clark Jul 29 '10 at 7:20
Statistics is a topic in which I am still weak, yet it is very useful to me. I learned it so late, and that's why I am still poor at Statistics.
-
I wish I'd learned about special functions earlier. The subject is a treasure trove of results that were commonly known a century ago but now few people know.
-
I don't really think that graph theory is a "less popular mathematical subject," but I certainly wish I had been exposed to it earlier.
-
I did mathematics as an undergrad, and I thought that differential equations were boring and pointless. Type of diff eq -> existence and uniqueness proofs for solutions -> rinse and repeat. Yawn.
But now I find my lack of knowledge of differential equations is hampering my learning some interesting parts of physics that I'd like to know more about...
-
My answer is: fundamental concepts and methods of both first order logic and set theory. I really wish I learned them much earlier, since all mathematics is based on them.
-
Non-linear Dynamics and Chaos!
-
Not unpopular, but I wish I had studied the theory of Rings and Fields, and basic Topology earlier, because in my opinion both these branches appear in many interesting subjects studied relatively early in one's undergraduate studies. For example, the whole concept of the minimal polynomial of a linear transformation $T$ is more intuitive (at least for me) when viewed as the generator of the ideal of polynomials $P$ such that $P(T)=0$. Metric Spaces (studied in the introductory Topology course at my university) appear as early as in calculus, and are generally a basis for many definitions there. Also, many algebraic structures can be viewed as topologies, which can sometimes give new insight or assist in proofs (a favourite of mine is a topological proof of the infinitude of the primes, which can be found in Proofs from the Book).
-
Topology was the first real math subject I learned. And I struggled through a text before I had taken a proofs and logic class, so it was also how I learned to write proofs. Now taking my second real analysis course and it's a breeze. – AnonymousCoward Feb 5 '11 at 2:29
Information theory.
Incredibly deep field - it will have you perceive the world in a completely new way.
-
Riemann's Explicit Prime Counting Formula: $$\pi_{0}(x) = \operatorname{R}(x) - \sum_{\rho}\operatorname{R}(x^{\rho}) - \frac1{\ln x} + \frac1\pi \arctan \frac\pi{\ln x}$$ and all the theory of Dirichlet functions behind it...
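(Here $\operatorname{R}$ denotes Riemann's function, usually defined by $$\operatorname{R}(x)=\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\operatorname{li}\left(x^{1/n}\right),$$ where $\mu$ is the Möbius function and $\operatorname{li}$ the logarithmic integral, and the sum in the formula above runs over the nontrivial zeros $\rho$ of the zeta function.)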
-
Integrals.
My high school didn't cover them, so in 1Y uni, it was all new to me.
-
Linear Algebra for sure. I would also add Statistics, though I would rather learn both well and in good depth than necessarily early.
Would second the answer on Logic.
-
http://mathhelpforum.com/advanced-algebra/99289-show-existence-complementary-submodule.html | Thread:
1. show existence of complementary submodule
I think I understand the first 2 parts of the hints but the last part of this question just confuses me =/ Prove that if N is a submodule of an R-module M such that the quotient M/N is a free R-module, then there exists a complementary submodule to N in M. Lecturer's hint: let X be a subset of M such that (x + N | x in X) is a basis of M/N, define a function B: M/N --> M such that B(x + N) = x for all x in X, and show that im(B) is complementary to N.
2. Originally Posted by gtkc
First note that $B$ is well-defined (and extends to an $R$-module homomorphism) because $X$ maps to a basis of the free module $M/N.$ Now let $\text{Im}(B)=L$ and $z \in N \cap L.$ So $z=B(u),$ for some $u \in M/N.$ But $u=\sum c_j(x_j + N)=\sum c_j x_j + N,$ for some $x_j \in X, \ c_j \in R.$ Thus $z=B(u)=\sum c_j x_j \in N.$ Therefore $u=\sum c_j x_j + N=0,$ and since the $x_j + N$ are linearly independent, all $c_j=0$ and hence $z=\sum c_j x_j = 0.$ So we proved that $N \cap L = \{0 \}.$ We only now need to prove $M=N+L$: let $y \in M.$ Then $y + N \in M/N$ and hence $y+N=\sum r_jx_j + N,$ for some $x_j \in X, \ r_j \in R.$ Thus $y - \sum r_j x_j = a \in N$ and so $y=a+ \sum r_j x_j = a+ B(\sum r_jx_j + N) \in N + L.$ Hence $M \subseteq N + L.$ The other direction of the inclusion is trivial. This completes the proof of $M=N \oplus L.$
http://mathoverflow.net/questions/79442?sort=oldest | ## Number of distinct values taken by x^x^…^x with parentheses inserted in all possible ways
For what positive x's is the number of distinct values taken by x^x^...^x with parentheses inserted in all possible ways not represented by the sequence A000081? Is it exactly the set of positive algebraic numbers? Is it a superset of positive algebraic numbers? Is it countable? Is $2^{\sqrt 2}$ or $\log_2 3$ in the set?
-
Off-hand, I'd be quite surprised if the generic number isn't attained in the case $x = 3$. Am I missing something? – Todd Trimble Oct 29 2011 at 2:34
1
The case $x=3$ gives oeis.org/A003018 which differs from oeis.org/A000081 starting from 7th term. – Vladimir Reshetnikov Oct 29 2011 at 2:53
2
Wow, that's really surprising! Can you tell me which two parenthesizations in the case $x = 3$ coincide? – Todd Trimble Oct 29 2011 at 3:07
10
3^(3^(3^3) * 3 * 3 * 3) = 3^(3^(3 * 3 * 3) * 3^3). [I've written these using products in the exponent, which of course can be rewritten into iterated exponentiations in various equivalent orders] – Sridhar Ramesh Oct 29 2011 at 5:50
6
The same phenomenon occurs for any natural number, of course: b^(b^(b^b) * the product of b many bs) = b^(b^(the product of b many bs) * b^b). So every natural number b fails to be generic for parenthesization of x^x^x... with 4 + b many copies of x. [Paraphrased from "The Nesting and Roosting Habits of The Laddered Parenthesis", by R. K. Guy and J. L. Selfridge] – Sridhar Ramesh Oct 29 2011 at 5:51
## 1 Answer
The answer to the second question is "no". Consider the unique solution $x > 0$ to the equation $x^x = 3$. By the Gelfond-Schneider theorem, this number is transcendental. But we have
$$((x^x)^x)^x = x^{x^3} = x^{(x^{(x^x)})}$$
so that two of the parenthesizations coincide. So evidently this set contains transcendental numbers. Lots of other solutions can be similarly generated (e.g., solve $x^{(x^x)} = 4$).
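Since the coincidence is easy to doubt at a glance, here is a quick numerical sanity check (only an illustration in plain Python, solving $x^x = 3$ by bisection):

```python
from math import log

def f(x):
    return x * log(x) - log(3)   # x^x = 3  is equivalent to  x*ln(x) = ln(3)

lo, hi = 1.0, 2.0                # f(1) < 0 < f(2), and x*ln(x) is increasing here
for _ in range(200):             # plain bisection
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2                # x is approximately 1.8254550...

left = ((x ** x) ** x) ** x      # ((x^x)^x)^x
right = x ** (x ** (x ** x))     # x^(x^(x^x))
print(x, left, right)            # both come out near 38.9, equal up to rounding error
```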
-
A similar example I find interesting is to take the unique positive solution of $x^x=x+1$. Then you can easily check that $x^{(x^{(x^x)})} =(x^x)^{(x^x)}$ using the defining equation and the usual exponentiation rules. Although the transcendentality doesn't seem obvious anymore ... – Dejan Govc Nov 2 2011 at 21:30
http://www.cfd-online.com/W/index.php?title=Parallel_computing&diff=12747&oldid=4298
# Parallel computing
### From CFD-Wiki
(Difference between revisions)
| | | | |
|---|---|---|---|
| Tsaad (Talk | contribs) () | | Peter (Talk | contribs) m (Reverted edits by Lusiasa (Talk) to last version by Peter) | |
| (24 intermediate revisions not shown) | | | |
| Line 4: | | Line 4: | |
| | Parallel computing is defined as the simultaneous use of more than one processor to execute a program. This formal definition holds a lot of intricacies inside. For instance, given a program, one cannot expect to run this program on a 1000 processors without any change to the original code. The program has to have instructions to guide it to run in parallel. Since the work is shared or distributed amongst "different" processors, data has to be exchanged now and then. This data exchange takes place using different methods depending on the type of parallel computer used. For example, using a network of PCs, a certain protocol has to be defined (or installed) to allow the data flow between PCs. The sections below describe some of the details involved. | | Parallel computing is defined as the simultaneous use of more than one processor to execute a program. This formal definition holds a lot of intricacies inside. For instance, given a program, one cannot expect to run this program on a 1000 processors without any change to the original code. The program has to have instructions to guide it to run in parallel. Since the work is shared or distributed amongst "different" processors, data has to be exchanged now and then. This data exchange takes place using different methods depending on the type of parallel computer used. For example, using a network of PCs, a certain protocol has to be defined (or installed) to allow the data flow between PCs. The sections below describe some of the details involved. |
| | | | |
| - | == Types of Parallel Computers == | + | == Types of parallel computers == |
| | There are two fundamental types of parallel computers | | There are two fundamental types of parallel computers |
| | *A single computer with multiple internal processors, known as a ''Shared Memory Multiprocessor''. | | *A single computer with multiple internal processors, known as a ''Shared Memory Multiprocessor''. |
| | | + | ** [http://www.cfd-online.com/Wiki/GPGPU GPGPU] enabled graphics cards are ''Shared Memory Multiprocessors''. |
| | *A set of computers interconnected through a network, known as a ''Distributed Memory Multicomputer''. | | *A set of computers interconnected through a network, known as a ''Distributed Memory Multicomputer''. |
| | Each of these can be referred to as a Parallel Computer. In this section we briefly discuss the architecture of the above systems. | | Each of these can be referred to as a Parallel Computer. In this section we briefly discuss the architecture of the above systems. |
| | | | |
| - | === Shared Memory Multiprocessor === | + | === Shared memory multiprocessor === |
| | A conventional computer consists of a processor and a memory readily accessible by any instruction the processor is executing. The shared memory multiprocessor is a natural extension of the single processor where multiple processors are connected to multiple memory modules such that each memory location has a single address space throughout the system. This means that any processor can readily have access to any memory location without any need for copying data from one memory to another. | | A conventional computer consists of a processor and a memory readily accessible by any instruction the processor is executing. The shared memory multiprocessor is a natural extension of the single processor where multiple processors are connected to multiple memory modules such that each memory location has a single address space throughout the system. This means that any processor can readily have access to any memory location without any need for copying data from one memory to another. |
| | [[Image:ParallelComputing Shared Memory Multiprocessor.gif|frame|Shared Memory Multiprocessor]] | | [[Image:ParallelComputing Shared Memory Multiprocessor.gif|frame|Shared Memory Multiprocessor]] |
| Line 21: | | Line 22: | |
| | # Short life: upgrade is limited | | # Short life: upgrade is limited |
| | | | |
| - | === Distributed Memory Multicomputer === | + | === Distributed memory multicomputer === |
| | The distributed memory multicomputer or message passing multicomputer consists of connecting independent computers via an interconnection network as shown in the figure below. Inter-processor communication is achieved through sending messages explicitly from each computer to another using a message passing library such as MPI (Message Passing Interface). In such a setup, as each computer has its own memory address space. A processor can only access its own local memory. To access a certain value residing in a different computer, it has to be copied by sending a message to the desired processor. The message passing multicomputer will physically scale easier than a shared memory multiprocessor, i.e. it can more easily be extended by adding more computers to the network. | | The distributed memory multicomputer or message passing multicomputer consists of connecting independent computers via an interconnection network as shown in the figure below. Inter-processor communication is achieved through sending messages explicitly from each computer to another using a message passing library such as MPI (Message Passing Interface). In such a setup, as each computer has its own memory address space. A processor can only access its own local memory. To access a certain value residing in a different computer, it has to be copied by sending a message to the desired processor. The message passing multicomputer will physically scale easier than a shared memory multiprocessor, i.e. it can more easily be extended by adding more computers to the network. |
| | [[Image:ParallelComputing_Distributed_Memory_Multicomputer.gif|frame|Distributed Memory Multicomputer]] | | [[Image:ParallelComputing_Distributed_Memory_Multicomputer.gif|frame|Distributed Memory Multicomputer]] |
| Line 31: | | Line 32: | |
| | A '''Slave''' Processor refers to any one of the computers on the network that is not a master. | | A '''Slave''' Processor refers to any one of the computers on the network that is not a master. |
| | | | |
| - | ==Measuring Parallel Performance== | + | ==Measuring parallel performance== |
| | There are various methods that are used to measure the performance of a certain parallel program. No single method is usually preferred over another since each of them, as will be seen later on, reflects certain properties of the parallel code. | | There are various methods that are used to measure the performance of a certain parallel program. No single method is usually preferred over another since each of them, as will be seen later on, reflects certain properties of the parallel code. |
| | | | |
| Line 37: | | Line 38: | |
| | In the simplest of terms, the most obvious benefit of using a parallel computer is the reduction in the running time of the code. Therefore, a straightforward measure of the parallel performance would be the ratio of the execution time on a single processor (the sequential version) to that on a multicomputer. This ratio is defined as the speedup factor and is given as <br> | | In the simplest of terms, the most obvious benefit of using a parallel computer is the reduction in the running time of the code. Therefore, a straightforward measure of the parallel performance would be the ratio of the execution time on a single processor (the sequential version) to that on a multicomputer. This ratio is defined as the speedup factor and is given as <br> |
| | <math>S(n)=\frac\mbox{Execution time using one processor}{\mbox{Execution time using N processors}}=\frac{t_s}{t_n}</math> <br> | | <math>S(n)=\frac\mbox{Execution time using one processor}{\mbox{Execution time using N processors}}=\frac{t_s}{t_n}</math> <br> |
| - | where <math>t_s</math> is the execution time on a single processor and <math>t_s</math> is the execution time on a multicomputer. | + | where <math>t_s</math> is the execution time on a single processor and <math>t_n</math> is the execution time on a parallel computer. |
| | | | |
| | S(n) therefore describes the scalability of the system as the number of processors is increased. The ideal speedup is n when using n processors, i.e. when the computations can be divided into equal duration processes with each process running on one processor (with no communication overhead). Ironically, this is called ''embarrassingly parallel computing''! | | S(n) therefore describes the scalability of the system as the number of processors is increased. The ideal speedup is n when using n processors, i.e. when the computations can be divided into equal duration processes with each process running on one processor (with no communication overhead). Ironically, this is called ''embarrassingly parallel computing''! |
| | | | |
| | In some cases, superlinear speedup (S(n)>n) may be encountered. Usually this is caused by either using a suboptimal sequential algorithm or some unique specification of the hardware architecture that favors the parallel computation. For example, one common reason for superlinear speedup is the extra memory in the multiprocessor system. | | In some cases, superlinear speedup (S(n)>n) may be encountered. Usually this is caused by either using a suboptimal sequential algorithm or some unique specification of the hardware architecture that favors the parallel computation. For example, one common reason for superlinear speedup is the extra memory in the multiprocessor system. |
| | | + | |
| | | + | The speedup of any parallel computing environment obeys the Amdahl's Law. |
| | | + | |
| | | + | Amdahl's law states that if ''F'' is the fraction of a calculation that is sequential (i.e. cannot benefit from parallelisation), and (1 − ''F'') is the fraction that can be parallelised, then the maximum speedup that can be achieved by using ''N'' processors is |
| | | + | |
| | | + | :<math>\frac{1}{F + (1-F)/N}</math>. |
| | | + | |
| | | + | In the limit, as ''N'' tends to [[infinity]], the maximum speedup tends to 1/''F''. In practice, price/performance ratio falls rapidly as ''N'' is increased once (1 − ''F'')/''N'' is small compared to ''F''. |
| | | + | |
| | | + | As an example, if ''F'' is only 10%, the problem can be sped up by only a maximum of a factor of 10, no matter how large the value of ''N'' used. For this reason, [[parallel computing]] is only useful for either small numbers of [[processor]]s, or problems with very low values of ''F'': so-called [[embarrassingly parallel]] problems. A great part of the craft of [[parallel programming]] consists of attempting to reduce ''F'' to the smallest possible value. |
| | | | |
| | === Efficiency === | | === Efficiency === |
| Line 56: | | Line 67: | |
| | <math>cost=\frac{Nt_s}{S(n)}=\frac{t_s}{E(n)}</math><br> | | <math>cost=\frac{Nt_s}{S(n)}=\frac{t_s}{E(n)}</math><br> |
| | | | |
| - | === Performance of CFD Codes === | + | === Performance of CFD codes === |
| | The method used to assess the performance of a parallel CFD solver is becoming a topic for debate. While some implementations use a fixed number of outer iterations to assess the performance of the parallel solver regardless of whether a solution has ben obtained or not, other implementors use a fixed value for the residual as a basis for evaluation. Ironically, a large amount of implementors do not mention the method used in their assessment! | | The method used to assess the performance of a parallel CFD solver is becoming a topic for debate. While some implementations use a fixed number of outer iterations to assess the performance of the parallel solver regardless of whether a solution has ben obtained or not, other implementors use a fixed value for the residual as a basis for evaluation. Ironically, a large amount of implementors do not mention the method used in their assessment! |
| | | | |
| Line 65: | | Line 76: | |
| | The problem becomes more complicated when an algebraic multigrid solver is used. Depending on the method used in implementing the AMG solver, the maximum number of AMG levels in the parallel version will usually be less than that of the sequential version which raises the issue that one is not comparing the same algorithm. From an engineering point of view, the main concern is to obtain a valid solution for a given problem in a reasonable amount of time and thus, a user will not actually perform a sequential run and then a parallel run; rather, she will require the code to use as many AMG levels as possible. | | The problem becomes more complicated when an algebraic multigrid solver is used. Depending on the method used in implementing the AMG solver, the maximum number of AMG levels in the parallel version will usually be less than that of the sequential version which raises the issue that one is not comparing the same algorithm. From an engineering point of view, the main concern is to obtain a valid solution for a given problem in a reasonable amount of time and thus, a user will not actually perform a sequential run and then a parallel run; rather, she will require the code to use as many AMG levels as possible. |
| | | | |
| - | == Message Passing == | + | == Message passing == |
| | In a distributed memory environment, Message Passing is a protocol used to exchange messages or copy data from one memory location to another (where each memory belongs to a different computer). One of the most popular protocols is called the Message Passing Interface, '''MPI'''. | | In a distributed memory environment, Message Passing is a protocol used to exchange messages or copy data from one memory location to another (where each memory belongs to a different computer). One of the most popular protocols is called the Message Passing Interface, '''MPI'''. |
| | | | |
| - | === Peer to Peer Communication === | + | === Peer to peer communication === |
| | Peer to Peer (P2P) communication, as the name designates, occurs when one processor communicates with another processor at one time. Only these two processors are involved in the communication. There are two fundamental operations that take place in a P2P communication: | | Peer to Peer (P2P) communication, as the name designates, occurs when one processor communicates with another processor at one time. Only these two processors are involved in the communication. There are two fundamental operations that take place in a P2P communication: |
| | *A send operation | | *A send operation |
| Line 78: | | Line 89: | |
| | [[Image:ParallelComputing_Blocking_Communication.jpg|Blocking Communication]] | | [[Image:ParallelComputing_Blocking_Communication.jpg|Blocking Communication]] |
| | | | |
| - | ==== Non-Blocking ==== | + | ==== Non-blocking ==== |
| | A non-blocking message is the opposite of a blocking message where a processor performs a send or a receive operation and immediately returns (to the next instruction in the code) without caring whether the message has been received or not. Such a communication is shown in the figure below.<br> | | A non-blocking message is the opposite of a blocking message where a processor performs a send or a receive operation and immediately returns (to the next instruction in the code) without caring whether the message has been received or not. Such a communication is shown in the figure below.<br> |
| | [[Image:ParallelComputing_Non_Blocking_Communication.jpg|Blocking Communication]] | | [[Image:ParallelComputing_Non_Blocking_Communication.jpg|Blocking Communication]] |
| Line 89: | | Line 100: | |
| | So, as a general rule, when two processors know where to send and from who to receive, a blocking operation can be used. For example, when the master processor is distributing the initial data in the mesh, a blocking operation can be used here. | | So, as a general rule, when two processors know where to send and from who to receive, a blocking operation can be used. For example, when the master processor is distributing the initial data in the mesh, a blocking operation can be used here. |
| | | | |
| - | === Collective Communication === | + | === Collective communication === |
| | | + | In collective communication, all the processors are involved with some kind of send and/or receive operations. However, this is not done by explicitly using send or receive operations since MPI provides an interface for collective communication. The mostly used collective communication routines are |
| | | + | *Broadcast |
| | | + | *Gather |
| | | + | *Reduce |
| | | + | |
| | ==== Broadcast ==== | | ==== Broadcast ==== |
| - | ==== Collect ==== | + | A broadcast operation of consists of broadcasting or sending a message from a root processor to all other processors.<br> |
| | | + | [[Image:ParallelComputing_Broadcast_Operation.jpg|Broadcast Operation]] |
| | | + | |
| | | + | ==== Gather ==== |
| | | + | A gather operation of consists of gathering values from a group processors and doing something with them. For example, the Master processor might want to gather the solution from each processor to put them in one final array. |
| | | + | |
| | ==== Reduce ==== | | ==== Reduce ==== |
| | | + | In a reduce operation, the result is a reduction of values on all processors to a single value on a single processor using an algebraic/boolean operation such as a sum, a minimum, a maximum etc…<br> |
| | | + | [[Image:ParallelComputing_Gather_Reduce_Operation.jpg|Broadcast Operation]] |
| | | + | |
| | | | |
| - | == Reference == | | |
| | | | |
| | | + | == References == |
| | | + | #{{reference-book|author=Wilkinson, Barry and C. Michael Allen|year=1999|title=Parallel Programming : Techniques and Applications Using Networked Workstations and Parallel Computers|rest=ISBN 0136717101, 1st Ed., Prentice Hall, Upper Saddle River, N.J.}} |
| | | | |
| - | {{stub}} | + | == Resources == |
| | | + | * [http://pleasemakeanote.blogspot.com/2008/06/parallel-computing-with-mpi-roundup.html MPI Tutorial] |
| | | + | * [http://www.idris.fr/data/cours/parallel/mpi/choix_doc.html Course about MPI] (in French) |
## Introduction
Ever heard of "Divide and Conquer"? Ever heard of "Together we stand, divided we fall"? This is the whole idea of parallel computing. A complicated CFD problem involving combustion, heat transfer, turbulence, and a complex geometry needs to be tackled. The way to tackle it is to divide it and then conquer it. The computers unite their efforts to stand up to the challenge!
Parallel computing is defined as the simultaneous use of more than one processor to execute a program. This formal definition holds a lot of intricacies inside. For instance, given a program, one cannot expect to run this program on a 1000 processors without any change to the original code. The program has to have instructions to guide it to run in parallel. Since the work is shared or distributed amongst "different" processors, data has to be exchanged now and then. This data exchange takes place using different methods depending on the type of parallel computer used. For example, using a network of PCs, a certain protocol has to be defined (or installed) to allow the data flow between PCs. The sections below describe some of the details involved.
## Types of parallel computers
There are two fundamental types of parallel computers:
• A single computer with multiple internal processors, known as a Shared Memory Multiprocessor.
  • GPGPU-enabled graphics cards are Shared Memory Multiprocessors.
• A set of computers interconnected through a network, known as a Distributed Memory Multicomputer.
Each of these can be referred to as a Parallel Computer. In this section we briefly discuss the architecture of the above systems.
### Shared memory multiprocessor
A conventional computer consists of a processor and a memory readily accessible by any instruction the processor is executing. The shared memory multiprocessor is a natural extension of the single processor where multiple processors are connected to multiple memory modules such that each memory location has a single address space throughout the system. This means that any processor can readily have access to any memory location without any need for copying data from one memory to another.
Figure: Shared Memory Multiprocessor
Programming a shared memory multiprocessor is attractive for programmers because of the convenience offered by data sharing. However, care must be taken when altering values at a given memory location, since cached copies of such variables also have to be updated for any processor using that data. Furthermore, simultaneous access to memory locations has to be controlled carefully.
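To make the point about controlled access concrete, here is a minimal shared-memory sketch in C using OpenMP (illustrative only, not part of the original article; the loop and its bounds are made up). The reduction clause gives each thread a private partial sum, so the shared result is never updated in an unsynchronized way:

```c
/* Illustrative OpenMP sketch (hypothetical example, not from the article).
 * Compile with e.g.: gcc -fopenmp example.c */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    double sum = 0.0;
    int i;

    /* Each thread accumulates a private partial sum; OpenMP combines the
     * partial sums safely at the end of the loop, so no two threads ever
     * update the shared variable at the same time. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 1; i <= 1000000; i++)
        sum += 1.0 / ((double)i * (double)i);

    printf("sum = %.6f (computed with up to %d threads)\n",
           sum, omp_get_max_threads());
    return 0;
}
```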
The major disadvantages of the shared memory multiprocessor are summarized in the following:
1. Difficult to implement hardware able to achieve fast access to all shared memory locations
2. High cost: design and manufacturing complexities
3. Short life: upgrade is limited
### Distributed memory multicomputer
The distributed memory multicomputer or message passing multicomputer consists of connecting independent computers via an interconnection network as shown in the figure below. Inter-processor communication is achieved through sending messages explicitly from each computer to another using a message passing library such as MPI (Message Passing Interface). In such a setup, each computer has its own memory address space, and a processor can only access its own local memory. To access a value residing on a different computer, the value has to be copied over by sending a message to the desired processor. The message passing multicomputer scales physically more easily than a shared memory multiprocessor, i.e. it can more readily be extended by adding more computers to the network.
Figure: Distributed Memory Multicomputer
Programming a message passing multicomputer requires the programmers to provide explicit calls to message passing routines in their programs, which is sometimes error prone. However, as recent progress and research in parallel computing has shown, message passing does not cause any insurmountable problem. Special mechanisms are not needed for controlling access to data, since the data is copied from one computer to another. The most compelling reason for using message passing multicomputers is their direct applicability to existing computer networks: a single new computer with a processor operating k times faster than each of the k processors in an old multiprocessor would of course be preferable, especially if it cost less than the multiprocessor, but in practice it is usually cheaper to buy N commodity processors and connect them through a network.
In a distributed memory system a Master Processor refers to any one of the computers, which actually acts as the job manager (orchestrator) that distributes jobs amongst the other computers. All the pre-processing and post-processing is done on the master processor.
A Slave Processor refers to any one of the computers on the network that is not a master.
## Measuring parallel performance
There are various methods that are used to measure the performance of a certain parallel program. No single method is usually preferred over another since each of them, as will be seen later on, reflects certain properties of the parallel code.
### Speedup
In the simplest of terms, the most obvious benefit of using a parallel computer is the reduction in the running time of the code. Therefore, a straightforward measure of the parallel performance would be the ratio of the execution time on a single processor (the sequential version) to that on a multicomputer. This ratio is defined as the speedup factor and is given as
$S(n)=\frac{\mbox{Execution time using one processor}}{\mbox{Execution time using N processors}}=\frac{t_s}{t_n}$
where $t_s$ is the execution time on a single processor and $t_n$ is the execution time on a parallel computer.
S(n) therefore describes the scalability of the system as the number of processors is increased. The ideal speedup is n when using n processors, i.e. when the computations can be divided into equal duration processes with each process running on one processor (with no communication overhead). Ironically, this is called embarrassingly parallel computing!
In some cases, superlinear speedup (S(n)>n) may be encountered. Usually this is caused by either using a suboptimal sequential algorithm or some unique specification of the hardware architecture that favors the parallel computation. For example, one common reason for superlinear speedup is the extra memory in the multiprocessor system.
The speedup of any parallel computing environment obeys Amdahl's law.
Amdahl's law states that if F is the fraction of a calculation that is sequential (i.e. cannot benefit from parallelisation), and (1 − F) is the fraction that can be parallelised, then the maximum speedup that can be achieved by using N processors is
$\frac{1}{F + (1-F)/N}$.
In the limit, as N tends to infinity, the maximum speedup tends to 1/F. In practice, price/performance ratio falls rapidly as N is increased once (1 − F)/N is small compared to F.
As an example, if F is only 10%, the problem can be sped up by only a maximum of a factor of 10, no matter how large the value of N used. For this reason, parallel computing is only useful for either small numbers of processors, or problems with very low values of F: so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce F to the smallest possible value.
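As a small numerical illustration of Amdahl's law (added here as a sketch, not part of the original article; the fraction F = 0.10 is just an example value), the following C program evaluates the bound above and shows the speedup saturating near 1/F:

```c
/* Illustrative only: evaluating the Amdahl bound 1 / (F + (1-F)/N). */
#include <stdio.h>

double amdahl_speedup(double F, int N)
{
    return 1.0 / (F + (1.0 - F) / (double)N);
}

int main(void)
{
    int N;
    for (N = 1; N <= 1024; N *= 4)
        printf("F = 0.10, N = %4d  ->  maximum speedup = %6.2f\n",
               N, amdahl_speedup(0.10, N));
    /* As N grows, the speedup approaches 1/F = 10, as stated above. */
    return 0;
}
```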
### Efficiency
The efficiency of a parallel system describes the fraction of the time that is being used by the processors for a given computation. It is defined as
$E(n)=\frac{\mbox{Execution time using one processor}}{\mbox{Execution time using N processors}\times N}=\frac{t_s}{Nt_n}$
which yields the following
$E(n)=\frac{S(n)}{N}$
For example, if E = 50%, the processors are being used half of the time to perform the actual computation.
### Cost
The cost of a computation in a parallel environment is defined as the product of the number of processors used times the total execution time
$cost = Nt_n$
The above equation can be written as a function of the efficiency by using the fact that $t_n=\frac{t_s}{S(n)}$, which yields
$cost=\frac{Nt_s}{S(n)}=\frac{t_s}{E(n)}$
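For concreteness, a tiny C sketch (the timings and processor count are hypothetical) computing speedup, efficiency and cost from measured run times according to the formulas above:

```c
/* Illustrative only: S(n), E(n) and cost from (made-up) measured times. */
#include <stdio.h>

int main(void)
{
    double t_s = 120.0;   /* sequential execution time [s]       */
    double t_n = 7.5;     /* execution time on N processors [s]  */
    int    N   = 32;

    double S    = t_s / t_n;        /* speedup    S(n) = t_s / t_n */
    double E    = S / (double)N;    /* efficiency E(n) = S(n) / N  */
    double cost = (double)N * t_n;  /* cost = N * t_n              */

    printf("S = %.2f, E = %.2f, cost = %.1f processor-seconds\n", S, E, cost);
    return 0;
}
```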
### Performance of CFD codes
The method used to assess the performance of a parallel CFD solver is becoming a topic for debate. While some implementations use a fixed number of outer iterations to assess the performance of the parallel solver, regardless of whether a solution has been obtained or not, other implementors use a fixed value for the residual as a basis for evaluation. Ironically, a large number of implementors do not mention the method used in their assessment!
The reason for this discrepancy is that the first group (who uses a fixed number of outer iterations) believes that the evaluation of the parallel performance should be done using exactly the same algorithm, which justifies the use of a fixed number of outer iterations. This can be acceptable from an algorithmic point of view.
The other group (who uses a fixed value for the maximum residual) believes that the evaluation of the parallel performance should be done using the converged solution of the problem, which justifies the use of the maximum residual as a criterion for performance measurement. This is acceptable from an engineering point of view and from the user point of view. In all cases, the parallel code will be used to seek a valid solution! Now if the number of outer iterations turns out to be the same as that of the sequential version, so much the better!
The problem becomes more complicated when an algebraic multigrid solver is used. Depending on the method used in implementing the AMG solver, the maximum number of AMG levels in the parallel version will usually be less than that of the sequential version which raises the issue that one is not comparing the same algorithm. From an engineering point of view, the main concern is to obtain a valid solution for a given problem in a reasonable amount of time and thus, a user will not actually perform a sequential run and then a parallel run; rather, she will require the code to use as many AMG levels as possible.
## Message passing
In a distributed memory environment, Message Passing is a protocol used to exchange messages or copy data from one memory location to another (where each memory belongs to a different computer). One of the most popular protocols is called the Message Passing Interface, MPI.
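As a concrete starting point (an illustrative sketch, not part of the original text), a minimal MPI program in C looks as follows; every process learns its own rank and the total number of processes in the communicator:

```c
/* Minimal MPI example. Compile with mpicc, run with e.g. mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many of us? */

    printf("Process %d of %d is alive\n", rank, size);

    MPI_Finalize();
    return 0;
}
```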
### Peer to peer communication
Peer to Peer (P2P) communication, as the name designates, occurs when one processor communicates with another processor at one time. Only these two processors are involved in the communication. There are two fundamental operations that take place in a P2P communication:
• A send operation
• A receive operation
P2P communication can be performed using either a blocking or a non-blocking method.
#### Blocking
A blocking message occurs when one of the processors performs a send operation and does not return (i.e. does not execute any following instruction) unless it is sure that the message buffer can be reclaimed.
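A minimal sketch of a blocking exchange (illustrative values; rank 0 sends one double to rank 1, so at least two processes are assumed):

```c
/* Illustrative blocking point-to-point communication with MPI. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double x = 0.0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        x = 3.14;
        /* Returns only when the send buffer x may safely be reused. */
        MPI_Send(&x, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Returns only after the message has actually been received. */
        MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %f\n", x);
    }

    MPI_Finalize();
    return 0;
}
```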
#### Non-blocking
A non-blocking message is the opposite of a blocking message where a processor performs a send or a receive operation and immediately returns (to the next instruction in the code) without caring whether the message has been received or not. Such a communication is shown in the figure below.
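The same exchange written with the non-blocking (immediate) calls, again as an illustrative sketch: MPI_Isend and MPI_Irecv return immediately, and the communication is completed later with MPI_Wait.

```c
/* Illustrative non-blocking point-to-point communication with MPI. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double x = 0.0;
    MPI_Request req;
    MPI_Status  status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        x = 3.14;
        MPI_Isend(&x, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        /* ... useful computation can overlap the transfer here ...    */
        MPI_Wait(&req, &status);   /* x must not be reused before this */
    } else if (rank == 1) {
        MPI_Irecv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &status);
        printf("rank 1 received %f\n", x);
    }

    MPI_Finalize();
    return 0;
}
```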
#### Advice
Both of the above communication methods have their own set of advantages and disadvantages.
For a blocking communication, one is almost certain that a given message will be received at its destination; however, the major problem with such a communication is that it requires the allocation of additional buffer memory, which is not always available for large messages. On the other hand, a non-blocking (immediate) communication does not have this problem (since the message waits until there is enough memory for it to be sent), but one is not always certain that the message will be received at its destination. Of course, designing a parallel program is not a game of luck. Both methods can be used successfully if they are carefully implemented in the code. For instance, in a parallel CFD code based on domain decomposition, there will be an inter-processor communication at some point. The best way to do this communication is to use a non-blocking method, as the blocking method will end up in a deadlock (depending on the topology of the partitions).
So, as a general rule, when two processors know where to send and from whom to receive, a blocking operation can be used. For example, when the master processor is distributing the initial data of the mesh, a blocking operation can be used.
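To sketch the domain-decomposition exchange mentioned above (illustrative only; a 1-D decomposition in which each rank trades one boundary value with its neighbours), posting the non-blocking receives and sends first and then waiting on all of them avoids the deadlock that naive paired blocking sends could produce:

```c
/* Illustrative halo exchange for a 1-D domain decomposition. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, nreq = 0;
    double send_val, recv_left = 0.0, recv_right = 0.0;
    MPI_Request req[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    send_val = (double)rank;   /* stand-in for a boundary value */

    if (rank > 0) {            /* exchange with left neighbour  */
        MPI_Irecv(&recv_left, 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD, &req[nreq++]);
        MPI_Isend(&send_val,  1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD, &req[nreq++]);
    }
    if (rank < size - 1) {     /* exchange with right neighbour */
        MPI_Irecv(&recv_right, 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD, &req[nreq++]);
        MPI_Isend(&send_val,   1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD, &req[nreq++]);
    }
    MPI_Waitall(nreq, req, MPI_STATUSES_IGNORE);

    printf("rank %d got left=%f right=%f\n", rank, recv_left, recv_right);
    MPI_Finalize();
    return 0;
}
```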
### Collective communication
In collective communication, all the processors are involved in some kind of send and/or receive operations. However, this is not done by explicitly using send or receive operations, since MPI provides an interface for collective communication. The most commonly used collective communication routines are
• Broadcast
• Gather
• Reduce
#### Broadcast
A broadcast operation consists of broadcasting or sending a message from a root processor to all other processors.
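A minimal illustration in C/MPI (the two parameter values are made up):

```c
/* Illustrative broadcast: the root (rank 0) sends the same parameters
 * to every other process. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double params[2] = {0.0, 0.0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {          /* e.g. a tolerance and a relaxation factor */
        params[0] = 1.0e-6;
        params[1] = 0.7;
    }
    MPI_Bcast(params, 2, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    printf("rank %d now has params %g and %g\n", rank, params[0], params[1]);
    MPI_Finalize();
    return 0;
}
```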
#### Gather
A gather operation consists of gathering values from a group of processors and doing something with them. For example, the Master processor might want to gather the solution from each processor to put them into one final array.
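A minimal illustration in C/MPI (the local value is just a stand-in for a piece of the solution):

```c
/* Illustrative gather: the master (rank 0) collects one value
 * from every process into a single array. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    double local, *all = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local = 10.0 * rank;                      /* stand-in local result   */
    if (rank == 0)
        all = malloc(size * sizeof(double));  /* needed on the root only */

    MPI_Gather(&local, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (i = 0; i < size; i++)
            printf("value from rank %d: %f\n", i, all[i]);
        free(all);
    }
    MPI_Finalize();
    return 0;
}
```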
#### Reduce
In a reduce operation, the result is a reduction of values on all processors to a single value on a single processor using an algebraic/boolean operation such as a sum, a minimum, a maximum etc…
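A minimal illustration in C/MPI, reducing a made-up local residual to a single global maximum on the root:

```c
/* Illustrative reduction: combine local residuals into one global value. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double local_res, global_res = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local_res = 1.0 / (rank + 1);   /* stand-in for a locally computed residual */
    MPI_Reduce(&local_res, &global_res, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global maximum residual = %f\n", global_res);
    MPI_Finalize();
    return 0;
}
```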
## References
1. Wilkinson, Barry and C. Michael Allen (1999), Parallel Programming : Techniques and Applications Using Networked Workstations and Parallel Computers, ISBN 0136717101, 1st Ed., Prentice Hall, Upper Saddle River, N.J..
## Resources
• MPI Tutorial: http://pleasemakeanote.blogspot.com/2008/06/parallel-computing-with-mpi-roundup.html
• Course about MPI (in French): http://www.idris.fr/data/cours/parallel/mpi/choix_doc.html
http://math.stackexchange.com/questions/120091/how-do-we-get-the-result-of-the-summation-sum-limits-k-1n-k-cdot-2k?answertab=oldest | # How do we get the result of the summation $\sum\limits_{k=1}^n k \cdot 2^k$? [duplicate]
Possible Duplicate:
Formula for calculating $\sum_{n=0}^{m}nr^n$
Can someone explain step by step how to derive the following identity? $$\sum_{k=1}^{n} k \cdot 2^k = 2(n \cdot 2^n - 2^n + 1)$$
What formulas related to this one do you know? – Davide Giraudo Mar 14 '12 at 15:39
I recall the basic ones, that we are expected to know: sum of k = 1 to n of k is n(n+1) / 2 and geometric series formula ie. sum of n = 0 to m of a* r ^n – rrazd Mar 14 '12 at 15:42
Isn't this a multi duplicate? – Did Mar 14 '12 at 23:21
## 3 Answers
If you know $\sum_{k=0}^n a^k=\frac{a^{n+1}-1}{a-1}$, take the derivative of both sides with respect to $a$, multiply by $a$, and set $a=2$.
Induction is a surefire way to prove it with elementary methods. With calculus and some formulas, we can approach this problem by evaluating a derivative in two different ways, as follows:
Quotient rule: $$\rm \frac{d}{dx}\frac{x^{n+1}-1}{x-1} =\frac{(n+1)\,x^{n}(x-1)-(x^{n+1}-1)(1)}{(x-1)^2} \tag{1}$$
Geometric formula:
$$\rm \frac{d}{dx}\frac{x^{n+1}-1}{x-1}=0+1x^0+2x^1+\cdots+n\,x^{n-1} \tag{2}$$
Equate equation $(1)$ with equation $(2)$, multiply both sides by $\rm x$ and then set $\rm x=2$.
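Carrying the last step out explicitly (added for completeness): at $x=2$ the right-hand side of $(1)$ equals $(n+1)2^{n}-2^{n+1}+1$, so multiplying by $x=2$ gives
$$\sum_{k=1}^{n} k\,2^{k} \;=\; 2\bigl((n+1)2^{n}-2^{n+1}+1\bigr) \;=\; n\,2^{n+1}-2^{n+1}+2 \;=\; 2\,(n\,2^{n}-2^{n}+1),$$
which is exactly the identity asked for.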
Your last term in $(2)$ should be $nx^{n-1}$ instead – Kirthi Raman Mar 14 '12 at 16:07
Although you got great answers, I will add another way to look at the expression you are looking for
Let us denote
$$S = \sum_{k=1}^{n} k \cdot 2^k$$
This can be expanded as
$$\begin{align*} S &= 2 + 2\cdot 2^2 + 3\cdot 2^3 + \cdots + n\cdot 2^n\\ 2S &= \hspace{26pt}1\cdot 2^2+2\cdot 2^3 + \cdots + (n-1)\cdot 2^n+n\cdot 2^{n+1} \end{align*}$$
If you notice, by multiplying by $2$ and writing under similar terms and then subtracting, we get
$$\begin{align*} S &= n\cdot 2^{n+1} - \left(2+2^2+2^3+\cdots+2^n \right) = n\cdot 2^{n+1}-2^{n+1}+2\\ &= 2(n\cdot 2^n-2^n+1) \end{align*}$$
http://mathoverflow.net/questions/12819?sort=newest | ## Proof of Bondy and Chvátal Theorem
A graph is Hamiltonian if and only if its closure is Hamiltonian.
I am looking for a simple (i.e. short) proof of the theorem, that I can use as part of an article on topological sorting.
I've not been able to find one in the literature I have access to. Any help would be appreciated.
## 2 Answers
Let $G=G_0, G_1, G_2,\ldots$ be a sequence of graphs where each $G_i$ is formed by performing a single closure step on $G_{i-1}$: that is, an edge $uv$ is added to $G_{i-1}$ when the non-adjacent vertices $u$ and $v$ together have at least $n$ neighbors. If any graph in this sequence is Hamiltonian, let $k$ be the minimum index such that $G_k$ is Hamiltonian. Then I claim that $k=0$. For, otherwise, let $uv$ be the edge added to form $G_k$ from $G_{k-1}$ and let $C$ be a Hamiltonian cycle in $G_k$; note that $C$ must use the edge $uv$, since otherwise $C$ would already be a Hamiltonian cycle in $G_{k-1}$. There are $n-1$ other edges in $C$, and at least $n$ edges going out from $u$ and $v$ together, so by the pigeonhole principle there exists an edge $pq$ in $C$ (with the vertex labeling chosen so that $u$ is clockwise of $v$ and $p$ is clockwise of $q$) such that $G_{k-1}$ contains the edges $up$ and $vq$. But then $C + up + vq - pq - uv$ is a Hamiltonian cycle in $G_{k-1}$, contradicting the minimality of $k$.
This is perfect. Thank you. – LBushkin Jan 24 2010 at 6:06
Let me first restate the theorem in explicit terms.
Notation Read $p\bowtie q$ as "$p$ is adjacent to $q$", and $p\not\bowtie q$ as "$p$ is not adjacent to $q$".
Theorem Let $G$ be a graph of order $n > 2$. Let $v_1$ and $v_2$ be distinct vertices with $v_1\not\bowtie v_2$ and $\deg v_1 + \deg v_2 \ge n$. Then $G$ is Hamiltonian iff $G' = G + v_1 v_2$ is Hamiltonian.
Proof Only the direction "$G'$ Hamiltonian $\Rightarrow$ $G$ Hamiltonian" needs an argument, since the other direction is immediate. Assume $G'$ is Hamiltonian but $G$ is not. A Hamiltonian cycle of $G'$ must use the edge $v_1 v_2$ (otherwise $G$ itself would be Hamiltonian), so deleting that edge leaves a Hamiltonian path $(p_1,\ldots,p_n)$ in $G$ connecting $v_1$ (at $p_1$) to $v_2$ (at $p_n$) and visiting all of $G$'s vertices. If $p_k\bowtie p_1$ then $p_{k-1} \not\bowtie p_n$, for otherwise $(p_1,p_k,p_{k+1},\ldots,p_n,p_{k-1},p_{k-2},\ldots,p_1)$ would be a Hamiltonian cycle of $G$. Thus every neighbor $p_k$ of $p_1$ excludes $p_{k-1}$ as a possible neighbor of $p_n$, so $\deg p_n \le n - (1 + \deg p_1)$, i.e. $\deg p_1 + \deg p_n < n$, a contradiction.
http://mathoverflow.net/questions/79397/reference-request-for-equivariant-cohomology-of-g/79413 | ## Reference request for equivariant cohomology of G [closed]
Possible Duplicate:
What is the equivariant cohomology of a group acting on itself by conjugation?
Let $G$ be a compact Lie group. Where can one read about the equivariant cohomology $H_G^*(G)$, where $G$ acts on itself by the adjoint action? A study of a concrete example (like SU(2)) would already be useful for me. Thanks!
## 1 Answer
This question has already been asked (and answered) here http://mathoverflow.net/questions/20671/what-is-the-equivariant-cohomology-of-a-group-acting-on-itself-by-conjugation
http://mathoverflow.net/questions/121485?sort=newest | ## How to tell if a second-order curve goes below the $x$ axis?
Suppose we have a second-order curve in general form:
(1) $a_{11}x^{2}+2a_{12}xy+a_{22}y^{2}+2a_{13}x+2a_{23}y+a_{33}=0$.
I'd like to know if there is a simple condition that ensures that the curve has at least one point on or below the $x$ axis, i.e. that the left-hand side of (1) is nonpositive.
In the trivial case that the curve is a parabola, the discriminant being nonnegative is just such a condition. But what happens in the general case?
## 2 Answers
We may regard the left-hand side of the equation of the curve as a quadratic polynomial in $x$. If $D(y)$ is its discriminant (with respect to $x$), then $D(y)\ge 0$ iff there exists a point on the curve with second coordinate $y$. Solve this inequality for $y$ and check whether its minimal solution is negative:)))
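To make this concrete (assuming $a_{11}\neq 0$, so that (1) really is quadratic in $x$), writing (1) as $a_{11}x^{2}+2(a_{12}y+a_{13})x+(a_{22}y^{2}+2a_{23}y+a_{33})=0$ gives
$$\tfrac{1}{4}D(y)=(a_{12}y+a_{13})^{2}-a_{11}\bigl(a_{22}y^{2}+2a_{23}y+a_{33}\bigr),$$
which is itself a quadratic in $y$, so the set $\{y : D(y)\ge 0\}$ can be determined explicitly.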
Solve for $y$ in the form $y= A(x) \pm \sqrt{B(x)}$ and estimate. More abstract versions are just variants of this.
Since my parameters are themselves complicated functions, I was hoping to avoid this... – Felix Goldberg Feb 11 at 16:03
Perhaps you should provide this --- and any other relevant information --- in the body of the question. – Gerry Myerson Feb 11 at 22:30
http://mathoverflow.net/questions/22420/a-class-of-determinants-associated-to-catalan-like-hankel-determinants | ## A class of determinants associated to Catalan-like Hankel determinants
The following matrices are related to some Catalan-like Hankel matrices. My question is whether direct computations of determinants of such matrices (i.e. without recourse to Hankel determinants) can be found in the literature (except the simple cases p=0 or p=1 and k=0).
Let $H_n^{(p)} (k,c)$ be defined by $H_n^{(0)} (k,c) = \left( {h(k,i,j)} \right)_{i,j = 0}^{n - 1}$ where $h(k,i,j) = 1$ if $i + j = k + 2l$ for a nonnegative integer $l$ and $\left| {j - i} \right| \le k$, and $h(k,i,j) = 0$ else.
For $p > 0$ let $H_n^{(p)} (0,c) = cH_n^{(p - 1)} (0,c) + H_n^{(p - 1)} (1,c)$ and
$H_n^{(p)} (k,c) = H_n^{(p - 1)} (k - 1,c) + cH_n^{(p - 1)} (k,c) + H_n^{(p - 1)} (k + 1,c)$.
To indicate the connection with Hankel determinants consider e.g. the aerated sequence $(a(n))=(1,0,1,0,2,0,5,0,14,0,\ldots)$ of Catalan numbers (the Catalan numbers interleaved with zeros). Then $\det \left( {a(i + j + p)} \right)_{i,j = 0}^{n - 1} = \det H_n^{(p)} (0,0).$
Have you tried looking in Christian Krattenthaler's surveys of determinants (and also in his articles on Catalan numbers)? – Wadim Zudilin Apr 24 2010 at 11:20
http://physics.stackexchange.com/questions/tagged/supersymmetry?page=4&sort=newest&pagesize=15 | # Tagged Questions
The supersymmetry tag has no wiki summary.
1answer
256 views
### AGT conjecture and WZW model
In 2009 Alday, Gaiotto and Tachikawa conjectured an expression for the Liouville theory conformal blocks and correlation functions on a Riemann surface of genus g and n punctures as the Nekrasov ...
3answers
290 views
### Using supersymmetry outside high energy/particle physics
Are there applications of supersymmetry in other branches of physics other than high energy/particle physics?
1answer
86 views
### What evidence do we have for S-duality in N=4 Super-Yang-Mills?
Do we have anything resembling a proof*? Or is it just a collection of "coincidences"? Also, do we have evidence from lattice gauge theory computations? *Of course I'm not talking about a proof in ...
2answers
172 views
### How to prove quantum N=4 Super-Yang-Mills is superconformal?
I'm especially interested in elegant illuminating proofs which don't involve a lot of straightforward technical computations Also, does a non-perturbative proof exist?
2answers
299 views
### Kähler potential vs full effective potential
In evaluating the vacuum structure of quantum field theories you need to find the minima of the effective potential including perturbative and nonperturbative corrections where possible. In ...
2answers
160 views
### Generalized Complex Geometry and Theoretical Physics
I have been wondering about some of the different uses of Generalized Complex Geometry (GCG) in Physics. Without going into mathematical detail (see Gualtieri's thesis for reference), a Generalized ...
1answer
57 views
### N=2 SSM without a Higgs
In arXiv:1012.5099, section III, the authors describe a supersymmetric extension to the standard model in which there is no Higgs sector at all, in the conventional sense. The up-type Higgs is a ...
1answer
25 views
### Scaling solutions in context of Denef - Moore
My question is based on the paper Split states, entropy enigma, holes, halos. What are the scaling solutions discussed on page 49 of the paper ? It is stated that the equations \${\sum_{j, i\neq ...
1answer
242 views
### Basic Grassmann/Berezin Integral Question
Is there a reason why $\int\! d\theta~\theta = 1$ for a Grassmann integral? Books give arguments for $\int\! d\theta = 0$ which I can follow, but not for the former one.
1answer
50 views
### Local Fermionic Symmetry
That is perhaps a bit of an advertisement, but a couple of collaborators and myself just sent out a paper, and one of the results there is a little bit surprising. We found (in section 6E) a fermionic ...
2answers
142 views
### Topological twists of SUSY gauge theory
Consider $N=4$ super-symmetric gauge theory in 4 dimensions with gauge group $G$. As is explained in the beginning of the paper of Kapustin and Witten on geometric Langlands, this theory has 3 ...
3answers
93 views
### Paper listing known Seiberg-dual pairs of N=1 gauge theories
Is there a nice list of known Seiberg-dual pairs somewhere? There are so many papers from the middle 1990s but I do not find comprehensive review. Could you suggest a reference? Seiberg's original ...
2answers
339 views
### Stablising the Higgs without SUSY
Should the Higgs be found at the LHC, but no supersymmetry (assuming for the sake of argument that the LHC be capable of eliminating all versions of SUSY that are motivated by solving the hierarchy ...
2answers
268 views
### BPS states : Mathematical definition
First of all, let me congratulate the theoretical physics community for this site. I am a mathematics student with very little background in phyiscs. The question I want to ask is: What is the proper ...
2answers
64 views
### Uniqueness of supersymmetric heterotic string theory
Usually we say there are two types of heterotic strings, namely $E_8\times E_8$ and $Spin(32)/\mathbb{Z}_2$. (Let's forget about non-supersymmetric heterotic strings for now.) The standard argument ...
1answer
44 views
### Which is the coupling between the photon and the SU(2)xU(1) gauginos, before symmetry breaking?
The photon field is the non chiral piece of SU(2)xU(1), independently of symmetry breaking or not, isn't it? But before symmetry breaking, each gauge boson has only a chiral gaugino as ...
1answer
154 views
### Question about the parity of the ghost number operator in BRST quantization
Given a Lie algebra $[K_i,K_j]=f_{ij}^k K_k$, and ghost fields satisfying the anticommutation relations $\{c^i,b_j\}=\delta_j^i$, the ghost number operator is then $U=c^ib_i$ (duplicate indices are ...
1answer
324 views
### Fundamental particles with spin > 1
I am in undergraduate quantum mechanics, and the TA made an off-hand comment that currently no one knows how to describe fundamental particles with spin > 1 without supersymmetry. I was curious and ...
1answer
30 views
### Dual Pairs in Four Dimensions
Following the conversation here, I am wondering if anyone knows of an example of dual pair with 4-dimensional N=1 SUSY which relates a non-Abelian gauge theory on one side to a theory with a ...
1answer
259 views
### Does the ruling out of TeV scale SUSY breaking disfavor grand unification?
One of the arguments in favor of TeV scale SUSY breaking is that it leads to the appropriate running of the gauge coupling strengths leading to grand unification, i.e. $k_Y = \frac{5}{3}$ instead of ...
2answers
129 views
### Does 4D N = 3 supersymmetry exist?
Steven Weinberg's book "The Quantum Theory of Fields", volume 3, page 46 gives the following argument against N = 3 supersymmetry: "For global N = 4 supersymmetry there is just one supermultiplet ... ...
1answer
31 views
### SuperHiggs Mechanism on different Backgrounds & Compactifications
I've been studying Bagger & Giannakis paper on the SuperHiggs Mechanism found here. The paper shows how SUSY is broken by a $B_{\mu\nu}$ gauge field background restricted to $T^3$ in \$M^7\times ...
1answer
188 views
### Vassiliev Higher Spin Theory and Supersymmetry
Recently there is renewed interest in the ideas of Vassiliev, Fradkin and others on generalizing gravity theories on deSitter or Anti-deSitter spaces to include higher spin fields (utilizing known ...
1answer
269 views
### in SUSY, does WW scattering unitarisation needs the higgs boson?
One of the arguments of LHC "win-win situation" is that the scattering of W particles needs to include new terms to preserve unitatity begond 500 GeV or so. In the SM, this is realized by the higgs ...
0answers
112 views
### Treatment of sbottoms in prospino
Could someone please explain the details of the following "propaganda plot" from the prospino website? There is one curve for stop pair production $\tilde t \bar {\tilde t}$, and one for general ...
1answer
190 views
### Can Fermionic symmetries be fully integrated into geometric deformation complexes or symplectic reduction?
How should a geometer think about quotienting out by a Fermionic symmetry? Is this a formal concept? A strictly linear concept? A sheaf theoretic concept? How does symplectic reduction work with odd ...
1answer
190 views
### Is there a SQCD gluino string, similar to the gluon string?
A gluon string is a particular kind of open string terminated in two particles which are the sources for the field. Is it possible to have a similar arrangement with gluinos? At first glance, it seems ...
1answer
288 views
### Why must gluinos be spin 1/2 instead of 3/2?
Is there some condition in the N=1 SUSY algebra telling that the spin of the superpartners of gauge bosons (either for colour or for electroweak) must be less than the spin of the gauge boson? I am ...
3answers
1k views
### What are the mathematical problems in introducing Spin 3/2 fermions?
Can the physics complications of introducing spin 3/2 Rarita-Schwinger matter be put in geometric (or other) terms readily accessible to a mathematician?
1answer
156 views
### About unitarity and R-charge in 2+1 superconformal field theory
How does unitarity require that every scalar operator in a $2+1$ SCFT will have to have a scaling dimension $\geq \frac{1}{2}$ ? Why is an operator with scaling dimension exactly equal to ...
0answers
122 views
### Argument for quantum theoretic conformality of $\cal{N}=2$ super-Chern-Simon's theory in $2+1$ dimensions -Part 2
This is in continuation to what I was asking here earlier - Argument for quantum theoretic conformality of $\cal{N}=2$ super-Chern-Simon's theory in $2+1$ dimensions Or one can look at this ...
1answer
163 views
### Argument for quantum theoretic conformality of $\cal{N}=2$ super-Chern-Simon's theory in $2+1$ dimensions
I am using the standard symbols of $V_\mu$ for the gauge field, $\lambda$ for its fermionic superpartner and $F$ and $D$ be scalar fields which make the whole thing a $\cal{N}=2$ vector/gauge ...
2answers
327 views
### Alejandro Rivero's correspondence: diquarks and mesons as superpartners of quarks and leptons
The idea of “hadronic supersymmetry” originated in the mid-1960s and derives from the observation that baryons and mesons have similar Regge slopes, as if antiquarks and diquarks are superpartners. ...
1answer
197 views
### Superpartner for the stress-energy tensor
I would like to understand what is meant when one introduces a generator $G(z)$ as the superpartner of the energy-momentum tensor $T(z)$. How does one decide that this $G(z)$ should have a ...
1answer
205 views
### Parametrisation of general MSSM/SUSY based on collider experiment observables
The full MSSM contains 120 parameters. In SUSY searches, one usually picks a model like MSUGRA which makes a few assumptions and only has 5 free parameters like $m_0$, $m_{1/2}$, .... Now, I'm ...
0answers
1k views
### Superfields and the Inconsistency of regularization by dimensional reduction
Question: How can you show the inconsistency of regularization by dimensional reduction in the $\mathcal{N}=1$ superfield approach (without reducing to components)? Background and some references: ...
2answers
238 views
### Sterile Neutrinos as Dark Matter
There has been recent activity by astrophysicists to determine whether a fourth flavor of neutrino, a sterile neutrino, exists. It would likely be more massive than electron, muon or tau neutrinos. ...
1answer
327 views
### Seiberg Witten theory
I'm currently reading the Seiberg-Witten paper on $N=2$ supersymmetric Yang Mills pure gauge theory (i.e. no hypermultiplets). I have the following question: How does one understand that the metric ...
1answer
160 views
### Why are the third generation superpartners lighter than the other sfermions in MSUGRA
In the MSUGRA breaking scenario, the stop particle typically appears at energies reachable at the LHC. Other sfermions, notably the partners of up, down, strange and charm are assumed to be degenerate ...
1answer
426 views
### Definition and difference between the R-symmetry and the $U(1)_R$ internal symmetry
For a general ${\cal N}$ the R-symmetry group is $U({\cal N})$ but for the ${\cal N}=2$ case why is it $SU(2)$ ? I guess it is again different for ${\cal N}=4$. How does one understand this? One ...
2answers
314 views
### Neutralino Dark Matter Detection
Assuming supersymmetry exists and a neutralino is stable, it's often seen as a leading dark matter candidate. What would be expected from the interaction of a neutralino and its anti-particle? Has ...
1answer
354 views
### BPS sectors in $\cal{N}=4$ SYM
I am familiar with the idea of a BPS bound as in a lower limit on the mass of supermultiplets given by a certain function of the central charge and when I think of $\cal{N}=4$ SYM I see a complicated ...
1answer
190 views
### Are non-supersymmetric GUTs ruled out due to lack of precise gauge coupling unification?
Does there exist any good proposal on how the gauge coupling unification can be fixed in non-supersymmetric GUTs? If not, can we assert that non-supersymmetric GUTs have been experimentally ruled out? ...
1answer
262 views
### The superconformal algebra
How does one derive the superconformal algebra? Especialy how to argue the existence of the operator $S$ which doesn't exist either in either the supersymmetric algebra or the conformal algebra? ...
2answers
427 views
### Is there a maximum number of types of elementary particles?
Doing a Google search i found a paper called The maximum number of elementary particles in a super symmetric extension of the standard model. It claims in the abstract that the upper bound is 84 (i ...
1answer
123 views
### Why is there more analysis of short multiplets compared to long multiplets?
In theories with extended supersymmetry, both short and long multiplets exist. For some reason or other, short multiplets are studied more often. Why? What's wrong with long multiplets?
0answers
342 views
### An unfamiliar way of writing supersymmetry transformations
This question is in relation to this recent paper. I would like to know how the so called supersymmetry transformations at the start of page 27 or at the end of page35 (equation 8.4) or at the end ...
2answers
644 views
### What does “soft” in “soft symmetry breaking” mean?
For example it is stated that if supersymmetry breaking is soft then stability of gauge hierarchy can be still maintained.
1answer
314 views
### Decay of SUSY particles
In discussion of LHC searches for SUSY particles, physicists seem to assume they will decay quickly to the lightest SUSY particle which then remains stable (at least within the time it takes to leave ...
### Katz and Vafa's work on F-theory
I would like to know about the larger picture, current state and future prospects of the sequence of papers that were written by Sheldon Katz and Cumrun Vafa on F-theory. (Freddy Cachazo was also a ...
http://math.stackexchange.com/questions/103296/nomenclature-of-random-variables-x-0-y-0-same-as-x-0-cap-y-0 | # Nomenclature of random variables $\{X=0, Y=0\}$ same as $\{X=0\}\cap \{Y=0\}$?
Just a small doubt: my exercises keep switching their nomenclature on this small detail, and I always end up writing the other version.
Let $X,Y$ be random variables. Is $\{X=0, Y=0\}$ the same as $\{X=0\}\cap \{Y=0\}$?
Another example. Let $N$ be the number of Users on a webpage. Two files are available for download, one with 200 kb and another with 400 kb size.
$$\begin{align} X_n(w) := w_n = \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\} \end{align}$$
I want to express, at least one user downloaded the 200 kb file. Here's how I expressed it $\{X_1 + X_2 + \cdots + X_n \geq 1\}$. Would this be ok? The book expressed it as $\{X_1=1\}\cup\{X_1=3\}\cup \cdots \cup\{X_n=1\}\cup\{X_n=3\}$.
Another thing to express: no user downloaded the 200 kb file. I expressed it as $|\{X_k=1, 1 \leq k \leq N\}|=0$. The book as $\{X_1 \neq 1\}\cap \cdots \cap \{X_n \neq 1\}$. Would my solution be ok?
I'm always in doubt when I'm allowed to use symbols like $+$ and $|\mathrm{modulo}|$ (to get the number of elements). Is this generally always allowed? Many thanks in advance!
Thanks in advance guys!
-
## 2 Answers
$\{X=0,Y=0\}$ and $\{X=0\}\cap\{Y=0\}$ are the same thing. Both notations refer to $$\{\omega\in\Omega : X(\omega)=0\ \ \&\ \ Y(\omega)=0\} = \{\omega\in\Omega : X(\omega)=0\}\cap\{\omega\in\Omega : Y(\omega)=0\}.$$
Your notation saying $$\begin{align} X_n(w) := w_n = \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\} \end{align}$$ seems confused. I suspect maybe you meant $$\begin{align} \Omega = \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\}, \end{align}$$ although even that may differ from what's appropriate if you're bringing in $n$ different random variables. Your later notation makes it look as if what the author of the book had in mind is that $X_k$ is the number of kb downloaded by the $k$th user, for $k=1,\ldots,n$. Just what $w$ is, you're not clear about, and at this point I'm wondering if you're confusing $w$ with $\omega$. Probably what is needed is this:
$$\begin{align} \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\}^n \end{align}$$ i.e. the $n$th power of that set of four elements. This is the set of all $n$-tuples where each component of an $n$-tuple is one of these four elements. Then, when $\omega$ is any such $n$-tuple, $X_k(\omega)$ is its $k$th component, which is one of those four elements.
For example, if $n=3$, so there are three users, then $$\begin{align} \Omega = \{ & (0,0,0), (0,0,1), (0,0,2), (0,0,3), (0,1,0), (0,1,1), (0,1,2), (0,1,3),\ldots\ldots\ldots \\ \\ & \ldots\ldots\ldots, (3,3,3) \}, \end{align}$$ with $64$ elements. If, for example, $\omega=(2,3,0)$, then $X_2(\omega)=3$.
-
Your second example is incorrect: if no user downloaded the 200K file, but at least one user downloaded the 400K file, we will still have $X_1 + \dots + X_n \ge 1$.
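A brute-force check makes the difference concrete. This is only an illustrative sketch; it assumes the reading that $X_k$ is the $k$-th user's download label in $\{0,1,2,3\}$, and the choice $n=3$ is arbitrary.

```python
from itertools import product

n = 3
Omega = list(product(range(4), repeat=n))   # all 4**n outcomes (64 for n = 3)

# "at least one user downloaded the 200 kb file": some X_k is 1 or 3
book_event = {w for w in Omega if any(x in (1, 3) for x in w)}

# the proposed event {X_1 + ... + X_n >= 1}
sum_event = {w for w in Omega if sum(w) >= 1}

print(book_event <= sum_event)              # True: the book's event implies the sum event
print((2, 0, 0) in sum_event - book_event)  # True: sum >= 1 although nobody took the 200 kb file
```

So the two events agree only in one direction, which is exactly the objection above.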
-
thanks, well observed! but the notation I used would be ok? – Clash Jan 28 '12 at 22:33
http://physics.stackexchange.com/questions/28957/hbar-the-angular-momentum-and-the-action/28959 | $\hbar$, the angular momentum and the action
Is there anything interesting to say about the fact that $\hbar$, the angular momentum and the action have the same units or is it a pure coincidence?
-
– asmaier May 25 '12 at 10:06
– anna v May 25 '12 at 13:07
3 Answers
The dimensions of
1. the Planck constant $\hbar$,
2. the action $S$, and
3. the angular momentum,
are constrained by the following important facts:
1. A conjugated pair of two observables is quantum mechanically related to the Planck constant $\hbar$ via a Heisenberg uncertainty relation.
2. A conjugated pair of two variables is classically related to the action $S$ via Noether's Theorem, cf. e.g. this Phys.SE post. Listen e.g. to Richard Feynman approximately 50 minutes into this Youtube video.
3. The conjugated variable to an angular momentum is an angle (angular position), which is usually treated as dimensionless.
-
Correction to the answer(v2): The word conjugated should be conjugate. – Qmechanic♦ Dec 4 '12 at 17:31
Let me try to answer using different words but with the same spirit as Qmechanic.
It is surely not a coincidence that $\hbar, S, \vec J$ have the same units. First of all, $\hbar$ is the quantum of the angular momentum or the quantum of the action, a universal constant that determines the strength of the quantum effects. So if you adopt one of these two definitions, you explain why $\hbar$ has the same units as either $S$ or $\vec J$ (only one of them) and reduce the question to the question why the angular momentum and the action have the same units.
It's not hard to see why the angular momentum and the action have the same units: both may be written as $p\cdot x$, dimensionally speaking. The (orbital) angular momentum is defined as $\vec r \times \vec p$; the commutator $xp-px=i\hbar$, which you may include in the comparison as well, has the units of position times momentum; and the action has the same units because it has the units of the Lagrangian times time, $Lt$, which are the same as the units of the Hamiltonian times time, $Ht$, and because $p\dot x$ appears in the sum $L+H$, it's clear that $Lt$ has to have units of $px$, too.
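As a sanity check on this dimensional argument, here is a tiny bookkeeping sketch; dimensions are written as $(\mathrm{kg},\mathrm{m},\mathrm{s})$ exponent tuples, and the helper names are just illustrative.

```python
def dim(kg=0, m=0, s=0):
    return (kg, m, s)

def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

length, time, mass = dim(m=1), dim(s=1), dim(kg=1)
velocity = mul(length, dim(s=-1))
momentum = mul(mass, velocity)              # kg m / s
energy   = mul(momentum, velocity)          # kg m^2 / s^2

action           = mul(energy, time)        # Lagrangian (energy) times time
angular_momentum = mul(momentum, length)    # p times x
hbar             = dim(kg=1, m=2, s=-1)     # J s in SI base units

assert action == angular_momentum == hbar
print("action, angular momentum and hbar all have dimension kg m^2 / s")
```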
Because the strength of quantum effects is determined by $\hbar$ that has the same units as the action $S$ or the angular momentum $\vec J$, it follows that both $S/\hbar$ and $\vec J/\hbar$ are dimensionless: they have no units.
Both of these facts have a robust and important explanation in the foundations of quantum mechanics. The action divided by the reduced Planck's constant is what appears in the exponent in Feynman's path integral, $${\mathcal A}_{i\to f} = \int {\mathcal D}\phi\cdot \exp(iS[\phi]/\hbar)$$ and the exponents have to be dimensionless, of course. From this Feynman approach, you could determine that the constant measuring the strength of quantum effects has the same units as the action.
Analogously, you may say a similar thing about the angular momentum. The reason is that the operators $J_x/\hbar$ and $J_y/\hbar$ have a commutator $$[\frac{J_x}{\hbar}, \frac{J_y}{\hbar}] = i \frac{J_z}{\hbar}$$ equal simply to the last component of $\vec J/\hbar$, without any extra coefficients. So these three operators generate a flawless $SU(2)$ or $SO(3)$ "Lie algebra" in the unitless mathematical normalization. (Well, mathematicians would also include the $i$ into each generator so that there would be even no prefactor of $i$ on the right hand side.) For this reason, the eigenvalues of $J_z$ are quantized: they are inevitably multiples of $\hbar/2$. We may say that $\hbar/2$ is the elementary quantum of the angular momentum. (The orbital angular momentum is a multiple of $\hbar$ without the factor of 1/2.)
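The $SU(2)$ statement can be checked directly in the smallest representation. A short numerical sketch for spin-1/2, with $J_i=\frac{\hbar}{2}\sigma_i$; the numerical value of $\hbar$ is there only for illustration.

```python
import numpy as np

hbar = 1.054571817e-34  # J s

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Jx, Jy, Jz = (hbar / 2) * sx, (hbar / 2) * sy, (hbar / 2) * sz

# [J_x/hbar, J_y/hbar] = i J_z/hbar, with no extra coefficients
comm = (Jx / hbar) @ (Jy / hbar) - (Jy / hbar) @ (Jx / hbar)
assert np.allclose(comm, 1j * Jz / hbar)

# eigenvalues of J_z are multiples of hbar/2 (here +/- hbar/2)
assert np.allclose(np.sort(np.linalg.eigvalsh(Jz)), [-hbar / 2, hbar / 2])
print("SU(2) commutator and hbar/2 quantization check out for spin-1/2.")
```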
Just with some knowledge of Noether's theorem that links conservation laws and symmetries, one could have been able to guess – before he learned the full quantum mechanics – that the angular momentum should be related to generators of rotations. Because angles of rotations are dimensionless, the generators have to be dimensionless as well which means that quantum mechanics must contain a constant whose units are the same as those of the angular momentum so that it is possible to construct a dimensionless $\vec J/\hbar$ out of them.
It is somewhat difficult to find a more "direct" relationship between the angular momentum and the action, despite their having the same units. In particular, the angular momentum is quantized, a multiple of $\hbar/2$ as I mentioned. On the contrary, the action $S$ is continuous. As the Feynman path integral shows, the action $S$ is actually only meaningful in quantum mechanics up to shifts by multiples of $2\pi\hbar$. Such shifts don't change the exponential. So the angular momentum only allows the integer (or half-integer) values; on the other hand, the action only cares about the fractional parts! So the action and the angular momentum are never really "the same thing" in any sense, despite their identical units. After all, the angular momentum is a pseudovector (a particular set of conserved quantities in rotationally symmetric theories) while the action is the ultimate spacetime scalar defining a theory and invariant under everything.
-
Although the answers so far to this question are very interesting and informative, I think that from an analytical point of view, your question is not quite sensible.
In a mathematical structure, one could argue that there are no "coincidences"; everything is related through the fundamental basis. Now in practice, the answers explain why "$\hbar$", "angular momentum" and "the action $S$" are related. But if "mass $m$", "position $x$" and "momentum $p$" had the same units, then there would also be an explanation for that, because these are parts of a physical theory, put into mathematical terms.
So if you ask "Is there anything interesting to say about the fact that ℏ, the angular momentum and the action have the same units or is it a pure coincidence?" (and you do), then the answer is "Yes.", optionally followed by an elaboration of the mathematical structure of the theory, a search for a common denominator.
-
Alternatively, you could interpret my question as: "Is QM built so that $\hbar$, $S$ and $J$ have the same units and is it necessary?" – Isaac May 25 '12 at 13:16
@Isaac: I don't understand. The first question "so that...have the same units" is already implied in your formulation and is only a matter of checking any source online, and in the second question I don't know what "necessary" is supposed to mean. If you already consider QM, then it's obviously necessary, and if you don't, then nothing is really necessary. – Nick Kidman May 25 '12 at 13:45
I obviously know that my question is not surely sensible; my question implicitely begins by "suppose there could be a reason". You are only discussing the sense of my question, which is clearly not the point (though I am thankful that you have answered). Why would we not speak about the sense of life, then? By "necessary", I could mean: "Has it been tried to build a kind of QM without that characteritic?" or something in that kind. I'm always trying to be synthetic when I ask a question in physics. – Isaac May 25 '12 at 19:26
@Isaac: I don't know the context of the question "Why would we not speak about the sense of life, then?" here. Also, I don't know what it means to synthetically ask a question. – Nick Kidman May 25 '12 at 19:45
English is not my first language, so I beg your pardon if I do not always choose the right words; by synthetic, I mean that I try to ask my questions with the fewest words possible. Then, the sentence about life is an answer to yours, "nothing is necessary", which I think is very naive, as I have just explained. I suggest we leave it at that. – Isaac May 25 '12 at 19:59
http://math.stackexchange.com/questions/252217/how-to-solve-fracdydt-1f2tytt-geqslant-0-where-f-is-bounded/252234 | # How to solve: $\frac{dy}{dt}=(1+f^{2}(t))y(t);t\geqslant 0$,where $f$ is bounded continuous function on $[0,\infty)$.
I came across this problem which says: Consider the equation: $$\frac{dy}{dt}=(1+f^{2}(t))y(t); \quad t\geqslant 0,$$ where $f$ is bounded continuous function on $[0,\infty)$.Then which of the following options is correct?
(a) This equation admits a unique solution $y(t)$ and further $\lim_{t\to\infty}y(t)$ exists and is finite,
(b) This equation admits 2 linearly independent solutions,
(c) This equation admits a bounded solution for which $\lim_{t\to\infty}y(t)$ does not exist,
(d) this equation admits a unique solution and further, $\lim_{t\to\infty}y(t)=\infty$.
I have taken $f(t)$ to be $1/(1+t)$ so that $f$ is bounded and continuous on the aforementioned interval, and then, applying the given conditions, I see that option (d) holds true.
$$\int \frac{dy}{y}=\int (1+f^{2}(t))dt=\int (1+\frac{1}{(1+t)^{2}})dt=t-1/(1+t)+a.$$ Hence, $y(t)=ce^{t}e^{-1/(1+t)}$, where $c=e^{a}$. Now we put in the value of $c$ and see that $y$ approaches infinity as $t$ tends to infinity.
Am I correct? Is there any better way to approach the problem? Any kind of help will be highly appreciated. Thanks everyone in advance for your time.
-
I edited some "\$" signs you forgot. – macydanim Dec 6 '12 at 10:42
@macydanim thanks a lot. – learner Dec 6 '12 at 10:43
## 1 Answer
Sorry but one cannot choose $y$ and then adjust $f$... instead, one is given $f$ and the question is to determine the properties of the solution(s) $y$. In the present case, one may wish to show first that every solution $y$ is given by the formula $$y(t)=y(0)\,\exp\left(\int_0^t(1+f(s)^2)\,\mathrm ds\right).$$ Once this is done, which property (properties) amongst (a)-(b)-(c)-(d) is (are) true should be clear.
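A quick symbolic check of this formula with one sample bounded $f$; the choice $f(t)=\sin t$ and the use of SymPy are only illustrative assumptions.

```python
import sympy as sp

t, s = sp.symbols('t s', nonnegative=True)
y0 = sp.symbols('y0', positive=True)

f = sp.sin(s)                                      # a sample bounded continuous f
y = y0 * sp.exp(sp.integrate(1 + f**2, (s, 0, t)))

residual = sp.simplify(sp.diff(y, t) - (1 + sp.sin(t)**2) * y)
print(residual)                            # 0: the formula does solve the ODE

print(sp.simplify(y / (y0 * sp.exp(t))))   # exp(t/2 - sin(2t)/4) >= 1, so y >= y0*exp(t)
```

Since $1+f(t)^2\geq 1$ for every bounded $f$, any solution with $y(0)>0$ dominates $y(0)e^{t}$, which is the key growth observation.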
-
Furthermore y(t)=1/(1+t) is never a solution, for any function f. – Did Dec 6 '12 at 11:15
sorry..i have actually taken f(t) to be 1/(1+t)..it is just typo. – learner Dec 6 '12 at 11:19
And you solved `dy/dt=(1+f(t)^2)y(t)` when `f(t)=1/(1+t)`? How so? – Did Dec 6 '12 at 11:24
Right. This is a special case of the formula in my post, hence you should be in the position of proving it in the general case, which is necessary since, as I said, you are supposed to solve the question for every possible f. – Did Dec 6 '12 at 11:44
yes sir. But i could not find any way to prove it in the general case. – learner Dec 6 '12 at 11:50
http://physics.stackexchange.com/questions/38983/galilean-relativity-in-projectile-motion | # Galilean relativity in projectile motion
Consider a reference frame $S'$ moving in the initial direction of motion of a projectile launched at time $t=0$. In the frame $S$ the projectile motion is:
$$x=u(\cos\theta)t$$
$$y=u(\sin\theta)t-\frac{g}{2}t^2$$
I know that at $y_{max}$, $\frac{dy}{dt}=0$, so using this I find that $$t_{y_{max}}=\frac{u\sin\theta}{g}$$
so therefore: $$y_{max}=\frac{u^2\sin^2\theta}{2g}$$
I know that when the particle lands at $y_{bottom}=0$ the distance in the $x$ direction is $$x_{y_{bottom}}=\frac{2u^2\sin\theta\cos\theta}{g}$$
but I am confused about how to describe the motion for the particle in $S'$ frame.
-
## 1 Answer
In the $S'$ frame, your variables are $x' = x - t\cdot u \cos\theta$ and $y' = y - t\cdot u \sin\theta$. If you do the change of variable, you get that the motion now is described by
$$x' = 0$$ $$y' = -\frac{g}{2}t^2$$
So in your new frame of reference you have vertical free fall from rest.
This is not very helpful in finding out when or where the projectile hits the ground, but it is very relevant if you want to know where the projectile will be after releasing it from a plane moving at constant velocity: right below it all the time. Disregarding air resistance, of course.
EDIT The system with a prime is moving with velocity $(u \cos\theta, u\sin\theta)$, so if you have a velocity in the unprimed system, to convert it to the primed system you have to subtract the velocity of the origin:
$$\vec{v'} = \vec{v} - (u \cos\theta, u\sin\theta)$$
Integrating this, you can get the relation for the position vector:
$$\vec{r'} = \vec{r} - (u \cos\theta, u\sin\theta)t + \vec{r}_0$$
where $\vec{r}_0$ is the position of the origin of the primed system for $t=0$. Both systems share origin for $t=0$, so $\vec{r}_0=\vec{0}$.
Now replace $\vec{r'}=(x',y')$ and $\vec{r}=(x,y)$ and you will get the equations above.
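A numerical sketch of the transformation above; the launch speed and angle are arbitrary assumptions. It samples the trajectory in $S$, subtracts the frame's displacement, and recovers $x'=0$, $y'=-\frac{g}{2}t^2$, and it also checks the apex height and range quoted in the question.

```python
import numpy as np

g, u, theta = 9.81, 20.0, np.radians(35.0)
t_flight = 2 * u * np.sin(theta) / g
t = np.linspace(0.0, t_flight, 201)

# motion in S
x = u * np.cos(theta) * t
y = u * np.sin(theta) * t - 0.5 * g * t**2

# Galilean shift into S' (the frame moving with the initial velocity)
xp = x - u * np.cos(theta) * t
yp = y - u * np.sin(theta) * t
assert np.allclose(xp, 0.0) and np.allclose(yp, -0.5 * g * t**2)

# apex height and range in S
assert np.isclose(y.max(), u**2 * np.sin(theta)**2 / (2 * g))
assert np.isclose(x[-1], 2 * u**2 * np.sin(theta) * np.cos(theta) / g)
print("S': free fall from rest; apex and range match the standard formulas.")
```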
-
but why are these my variables? – Magpie Oct 3 '12 at 22:55
Actually I think I get it now; $\vec{v}$ is the velocity of the projectile and you are subtracting the velocity of the frame $S'$. Correctly understood? – Magpie Oct 4 '12 at 11:29
Exactly that, Magpie. – Jaime Oct 4 '12 at 18:06
http://www.physicsforums.com/showthread.php?p=2065990 | Physics Forums
## How does a hemisphere differ from a disk?
Can the surface of a hemisphere be distinguished from a disk without introducing a metric?
What is the minimum amount of info we need to know to identify one space as a disk and the other as the surface of hemisphere?
Both can have the same coordinate chart. For example, the distance $r$ from the center plus the angular position $\theta$ on the circle passing through $r$.
Well, they are homeomorphic (for example, via (x,y,z)<->(x,y)), if that's what you mean.
Quote by Preno Well, they are homeomorphic (for example, via (x,y,z)<->(x,y)), if that's what you mean.
They are homeomorphic, yes, but I'm not sure if you mean the same thing as I do. I just mean the 2-D surface of the hemisphere, not the hemisphere itself. The surface is just a stretched disk.
I know that the difference in curvature (one has constant non-zero curvature, one has zero curvature) can be identified through the Riemann curvature tensor once we assign a metric, but is there any difference between the two prior to the metric?
Quote by pellman They are homeomorphic, yes, but I'm not sure if you mean the same thing as I do. I just mean the 2-D surface of the hemisphere, not the hemisphere itself. The surface is just a stretched disk.
Well, yes. The disk is obviously not homeomorphic to the whole "hemi-ball".
Well, if you look at the surface in question as an abstract Riemannian manifold, they are actually the same as well (look at the pullback metric of the projection that gives the homeomorphism). This is the same object. So to distinguish them, we need to add something extra, like giving the hemisphere the induced metric from R3.
Quote by zhentil Well, if you look at the surface in question as an abstract Riemannian manifold, they are actually the same as well (look at the pullback metric of the projection that gives the homeomorphism). This is the same object. So to distinguish them, we need to add something extra, like giving the hemisphere the induced metric from R3.
Are you saying that the 2D metrics on the two surfaces are not enough to distinguish them and that we have to embed them in a 3D space?
If so, that is not right. I can illustrate if you wish.
Quote by pellman Are you saying that the 2D metrics on the two surfaces are not enough to distinguish them and that we have to embed them in a 3D space? If so, that is not right. I can illustrate if you wish.
A priori, an induced (or pullback) tensor requires a map. Once the metric is constructed, it's not dependent on the ambient space, but I would be hard-pressed to come up with a canonical induced metric on a manifold.
To answer your original question, unless you specify something else, they can't be distinguished. They are diffeomorphic for starters, so any invariant would require additional structure. The invariant you seem to be trying to find is that the curvature of the hemisphere with the induced metric is different than that of the disk with the induced metric. This requires not only a metric, but the metric induced by the specific embedding in Euclidean space.
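To make that concrete, here is a small SymPy sketch: the same coordinate disk can carry either the flat metric or the round metric pulled back from $R^3$, and only after one of these choices is made does the Gaussian curvature (0 versus 1) tell the two apart. The orthogonal-coordinates curvature formula below is standard; treat the code as an illustration only.

```python
import sympy as sp

def gauss_curvature(E, G, u, v):
    """K for an orthogonal metric ds^2 = E du^2 + G dv^2."""
    W = sp.sqrt(E * G)
    return sp.simplify(-(sp.diff(sp.diff(G, u) / W, u)
                         + sp.diff(sp.diff(E, v) / W, v)) / (2 * W))

r, theta, phi = sp.symbols('r theta phi', positive=True)

# flat disk in polar coordinates: ds^2 = dr^2 + r^2 dphi^2
K_flat = gauss_curvature(sp.Integer(1), r**2, r, phi)

# hemisphere with the metric induced from R^3: ds^2 = dtheta^2 + sin(theta)^2 dphi^2
K_round = gauss_curvature(sp.Integer(1), sp.sin(theta)**2, theta, phi)

print(K_flat, K_round)   # 0 1
```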
Thank you, zhentil. I had to look up "induced metric." I think what we are arriving at is that by a Riemannian "surface of a hemisphere" we mean precisely: that 2d manifold whose metric is the induced metric of a "hemisphere" embedded in 3D Euclidean space, where "hemisphere" in the definition means a certain locus of points as one would define it in basic geometry. And so the identification "hemisphere" (as a topological manifold) has no other meaning apart from this. Right? If so, the answer to my question is, "No, there is no difference between them prior to defining a metric." The terms "surface of a hemisphere" and "disk" are meaningless except as applied to metric spaces.
Precisely. Except in the last line, the term "disk" is used invariably to refer to anything that is homeomorphic to the standard Euclidean disk.
Gotcha. Thanks!
Quote by zhentil Precisely. Except in the last line, the term "disk" is used invariably to refer to anything that is homeomorphic to the standard Euclidean disk.
zhentil
What you have said is true but in my opinion overlooks an important idea in the mathematical concept of equivalence.
It is true that any homeomorph of the 2 dimensional disk may be given a metric that makes it isometric to the standard 2d hemisphere.
But before one can have a Riemannian metric on a topological manifold it must first be given a differentiable structure. Two manifolds that are isometric must have equivalent differentiable structures. What if it turned out that the 2 disk had more than one differentiable structure? Then one of these structures could never be made isometric to the standard hemisphere no matter what metric it had.
So there is a theorem here that you are using implicitly. That is that there is only one differentiable structure on the 2 disk. This allows one to ignore the differentiable category and view any manifold that is homeomorphic to the standard hemisphere as the 2 disk.
In higher dimensions this may not work.
When one removes the Riemannian metric from a manifold one is left with a differentiable manifold and its diffeomorphs not a topological manifold and its homeomorphs.
It's more a matter of convention. When we refer to S^7 in the smooth category, we're referring to it with its standard differentiable structure. The same with R^4. Hence the term "exotic," implying that there is a "non-exotic."
Quote by zhentil It's more a matter of convention. When we refer to S^7 in the smooth category, we're referring to it with its standard differentiable structure. The same with R^4. Hence the term "exotic," implying that there is a "non-exotic."
I see what you are saying but don't agree. The question had to do with what happens when you ignore the metric. this does not give you the topological category. It gives you the differentiable category. In the topological category you may not be able to get the metric back again.
http://math.stackexchange.com/questions/150632/on-automorphisms-group-of-some-finite-2-groups | # On automorphisms group of some finite 2-groups
Let $G$ be a finite 2-group of nilpotency class 2 such that $\frac{G}{Z(G)}\simeq C_{2}\times C_{2}$. I want information about its automorphism group. Please guide me. Thank you.
-
– user1729 May 28 '12 at 9:34
http://mathoverflow.net/questions/35217?sort=oldest | ## Ackermann function in the Primitive recursive arithmetic
Hello.
I study primitive recursive arithmetic and have the following questions.
1) Is it possible to express in PRA that the Ackermann function is total?
2) If yes, is such an expression decidable in PRA?
Can you suggest some literature on this topic?
Thank you.
-
## 2 Answers
You can express the totality of any computable function in PRA, using Kleene's T predicate, which is primitive recursive. So if you pick any index $e$ for the Ackermann function, the formula $(\forall n)(\exists t) T(\underline{e}, n, t)$ is already in the language of PRA.
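For concreteness, the Ackermann function meant here is the usual two-argument (Ackermann-Péter) recursion; a minimal Python sketch, usable only for tiny arguments because the values explode.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100000)

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print([[A(m, n) for n in range(4)] for m in range(4)])
print(A(3, 3))   # 61; already A(4, 2) = 2**65536 - 3 is astronomically large
```

The totality of this recursion is exactly a statement of the form $(\forall n)(\exists t) T(\underline{e}, n, t)$ above (with the two arguments coded into one, e.g. via a pairing function).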
However, you cannot prove the totality of the Ackermann function in PRA. One way to see this is to note that PRA is a subtheory of $\text{I-}\Sigma^0_1$, modulo an interpretation of the language of PRA into $\text{I-}\Sigma^0_1$. The provably total functions of $\text{I-}\Sigma^0_1$ are well-known to be exactly the primitive recursive functions.
There is a lot of proof theory literature on provably total functions, which are also called provably recursive functions. But I don't know how much of it focuses specifically on primitive recursive arithmetic. One place to look might be Hájek and Pudlák, Metamathematics of First-Order Arithmetic.
-
Different indices $e$ for the same function might lead to different formulations `$(\forall n)(\exists t)T(e,n,t)$` of totality. Carl's answer is fine because, for the Ackermann function, no choice of $e$ will make totality provable in PRA. But for other functions, you could have two indices of the same function such that totality for one is provable in PRA while totality for the other is not provable even in ZFC. (In fact, the constant zero function has two such indices.) – Andreas Blass Aug 11 2010 at 20:01
Good point - in general you have to take a "natural" index for the computable function you want to prove is total. Of course you also have to take a "natural" primitive recursive index for the $T$ predicate, or it might be that your theory can't prove any computable function is total. It's an endemic problem with formalization. – Carl Mummert Aug 11 2010 at 23:34
Carl, can I ask you one more question concerning the topic?
I thought about the reasons why we cannot prove in PRA the totality of the Ackermann function $A(m,n)$ directly using double mathematical induction, for $n$ and then for $m$, and came to the conclusion that it's because the deduction theorem is not applicable to PRA. Please correct me if the following reasoning is wrong.
First, let us define in the PRA language, using Kleene's T predicate, the predicate $\varphi_A(m,n)$, which means: $\exists k ~ k=A(m,n)$.
Second, using the definition of $A(m,n)$, let us augment PRA with the following three axioms:
1) $\varphi_A(0,n)$
2) $\varphi_A(m,1) \to \varphi_A(m+1,0)$
3) $\varphi_A(m,K(m)) \to [\varphi_A(m+1,n) \to \varphi_A(m+1,n+1)]$
Axiom (3) is the result of Skolemization of the following assertion:
$\forall m ~ [\forall k ~ \varphi_A(m,k)] \to [\forall n ~ \varphi_A(m+1,n) \to \varphi_A(m+1,n+1)]$
where $K$ is the new functional symbol.
Third, notice that $\varphi_A(m,1) \wedge \varphi_A(m,K(m))$ implies both $\varphi_A(m+1,0)$ and $\varphi_A(m+1,n) \to \varphi_A(m+1,n+1)$:
4) $[\varphi_A(m,1) \wedge \varphi_A(m,K(m))] \to \varphi_A(m+1,0)$
5) $[\varphi_A(m,1) \wedge \varphi_A(m,K(m))] \to [\varphi_A(m+1,n) \to \varphi_A(m+1,n+1)]$
Here we used deduction theorem, but only in trivial form: $(a \wedge b \vdash c) \to (a \vdash b \to c)$, which is independent of the PRA axioms. Joining (4) with (5):
6) $[\varphi_A(m,1) \wedge \varphi_A(m,K(m))] \to (\varphi_A(m+1,0) \wedge [\varphi_A(m+1,n) \to \varphi_A(m+1,n+1)])$
Here we can see a premise of mathematical induction for $n$ - in the right part of the implication.
We have not proven assertion $\varphi_A(m+1,0) \wedge [\varphi_A(m+1,n) \to \varphi_A(m+1,n+1)]$ yet. But if we could use deduction theorem in the form: $(PRA \wedge a \vdash b) \to (PRA \vdash a \to b)$, then we could continue the proof.
So, let us suppose, that we can use deduction theorem in the form: $(PRA \wedge a \vdash b) \to (PRA \vdash a \to b)$.
Fourth, let us suppose $\varphi_A(m,1) \wedge \varphi_A(m,K(m))$. It implies $\varphi_A(m+1,0) \wedge [\varphi_A(m+1,n) \to \varphi_A(m+1,n+1)]$, and the last, using mathematical induction for $n$, implies $\varphi_A(m+1,n)$. Thus, by the deduction theorem we have:
7) $\varphi_A(m,1) \wedge \varphi_A(m,K(m)) \to \varphi_A(m+1,n)$
Fifth, using substitution $n$ for $1$ and for $K(m+1)$, we can conclude from $\varphi_A(m+1,n)$ the next: $\varphi_A(m+1,1) \wedge \varphi_A(m+1,K(m+1))$. Thus, using (7), we can conclude the last from $\varphi_A(m,1) \wedge \varphi_A(m,K(m))$. Using deduction theorem again (but only in "trivial" form, which is independent of the PRA axioms), we have:
8) $\varphi_A(m,1) \wedge \varphi_A(m,K(m)) \to \varphi_A(m+1,1) \wedge \varphi_A(m+1,K(m+1))$
Sixth, from (1) we can conclude:
9) $\varphi_A(0,1) \wedge \varphi_A(0,K(0))$
Seventh, from (8) and (9), using mathematical induction for $m$, we can conclude:
10) $\varphi_A(m,1) \wedge \varphi_A(m,K(m))$
Eighth, from (7) and (10) we can conclude:
11) $\varphi_A(m+1,n)$
Jointly with (1) it means $\varphi_A(m,n)$, the assertion about totality of the Ackermann function.
Knowing that it's undecidable in PRA, I can see only one assumption used that could be wrong: that we can use the deduction theorem in the form $(PRA \wedge a \vdash b) \to (PRA \vdash a \to b)$. So we cannot use the deduction theorem in this form?
-
I haven't had a chance to look at this in great detail, but the main reason that PRA does not prove the Ackerman function is total is that PRA does not include enough induction axioms. PRA itself, being just a first-order theory without any additional inference rules, does satisfy the usual deduction theorem. There are indeed formal systems that do not satisfy the deduction theorem, but these must have additional rules of inference in addition to the usual inference rules for first-order logic, because the usual proof of the deduction theorem applies to all first-order theories. – Carl Mummert Apr 17 2012 at 14:45
First, additional questions should be posted as new questions (with links to old ones, if appropriate), not as answers. Second, as far as I can see, the problem with your proof is not the deduction theorem but the use of induction for a formula involving $\varphi_A$ and $K$. I don't think PRA includes enough induction to justify this. – Andreas Blass Apr 17 2012 at 14:51
Regarding the induction, the usual way I would prove that the Ackermann function total would involve the inductive hypothesis "If $\lambda n.A(m,n)$ is total then $\lambda n.A(m+1,n)$ is total". The naive computation makes that hypothesis $\Delta^0_3$, but PRA has the same strength as $\Sigma^0_1$ induction when it comes to proving that computable functions are total. – Carl Mummert Apr 17 2012 at 14:51
I completely agree with Andreas that this would be better as a separate question. – Carl Mummert Apr 17 2012 at 14:53
http://physics.stackexchange.com/questions/34417/experimental-observation-of-elementary-particles | # Experimental observation of elementary particles?
I posted a similar yet totally unrelated question recently, and got really satisfying responses to it. Thus, on the same theme...
How have we come to realize the existence of elementary particles in general? What evidence have we accumulated over the years that proves their existence? Once again, please note that I'm not doubting the existence of elementary particles, but am just curious as to how we have found out about them.
-
## 2 Answers
First, people had to realize that the matter is composed of atoms. They had good reasons to think so for centuries. For example, the mixing ratios in chemistry were rational numbers (in some good enough units), indicating that a single material is made of small pieces of the same kind (atoms or molecules).
In the 19th century, the atomic theory of matter strengthened when it was shown that the statistical properties of the atoms and molecules may explain thermal phenomena. The energy per degree of freedom of a single atom is the temperature (times a numerical factor of order one and times Boltzmann's constant); the entropy is $k$ times the amount of information in "nats" (bits over the natural log of two).
In 1905 and 1906, the Brownian motion was explained as collisions of a pollen particle with the molecules of water, and the size of the molecules could have been estimated in this way, too. At that time, the serious opponents of the atomic theory became non-existent overnight.
The best microscopes today may see individual atoms directly. One just magnifies the view sufficiently (and uses high-frequency particles instead of visible photons so that the long waves don't make the picture fuzzy).
A few years later, Ernest Rutherford realized in his famous gold foil experiment (alpha radiation sometimes recoiled from gold foil in the opposite direction, proving that the gold must be made of very hard "localized" matter) that the atoms had a tiny positively charged nucleus, 10,000 times smaller than the atom, and it was orbited by (a) negatively charged particle(s), the electron(s). The nucleus was hypothesized to be made out of protons and neutrons. They were isolated by the 1932 discovery of the neutron.
In the late 1960s, deep inelastic experiments showed that much like atom has localized subparticles, protons and neutrons have localized much smaller particles inside, too. They were the partons or quarks. The quark-parton theory not only explained the deep inelastic scattering but also the classification of different hadrons (different composite particles similar to protons and neutrons; there are many of them). In some sense, this Gell-Mann's work on the "construction of hadrons out of quarks" was fully analogous to the atomic explanation of the Mendeleev periodic table of elements.
Different particles such as electron, its heavier cousins muon and tau, and the neutrinos, and different flavors of quarks etc. were discovered - and their masses were measured - at various moments of the history. The last known quark, the top quark, was discovered at the Tevatron in 1994. The last particle, the Higgs boson, was officially discovered on July 4th, 2012, by seeing bumps in the processes where a hypothetical new particle decays either to two photons or two Z bosons. In a large enough number of collisions, the LHC simply detects a pair of photons whose total center-of-mass energy is 126 GeV, thus proving that there must exist a new particle of this mass.
Neutrinos were harder to detect because they only rarely interact with nuclei, but those rare interactions show that they're present.
In some sense, your question is a very broad question asking "almost" about all of atomic and particle physics from the whole 20th century as well as big branches of thermodynamics etc. The details of the discovery of individual particles depend on the particle species. But a punch line is that it is not hard to "see" the elementary particles, almost directly. This point – seeing – is particularly explicit in the case of the microscopes seeing atoms; and in the case of charged elementary particles leaving tracks (of bubbles) in a cloud chamber etc. There are many ways how individual elementary particles manifest themselves.
-
In the Rutherford paragraph "and it was orbited by an electrically neutral particle(s), the electron(s)." you of course mean negative. – anna v Aug 17 '12 at 18:20
Thanks, Anna! ;-) – Luboš Motl Aug 17 '12 at 18:24
They can be observed in a cloud chamber. Niels Bohr first proposed the concept of the atom with a nucleus and electron shells.
Neutrons and protons now are thought to be comprised of Quarks.
According to Quantum Mechanics there is really no such thing as a particle. What we call a particle may really just be a moving fluctuation in a complex, multi-dimensional field.
Some physicists interpret the probability amplitude of a particle as having a certain probability of being either here or there. Roger Penrose explains why this cannot be true. In some real sense particles are spread-out in space-time. The wave-function is just not a probability function that tells us where a particle exists. Between measurements particles are really spread-out in space-time.
When particles are observed/measured, they don't exhibit a specific spatial volume.
A very likely possibility is that particles do not exist at all, that the wave-function psi never collapses, that particles are always waves, and always behave like waves. Hugh Everett's relative-state theory explains all the paradoxes that arise from the Copenhagen interpretation of quantum mechanics.
When physicists speak of particles, they are really talking about a certain kind of wave spread out in space-time in a complex field.
-
Dear @Michael, sorry, but it's simply not true that "according to quantum mechanics there are no particles". Particles and waves are two equally good classical limits of the true quantum entity which is different from any classical object. But the states may be classified as particles because the number of particles (in the Fock space) is a well-defined integer; and particles land at particular points of the photographic plate just like particles in classical physics. – Luboš Motl Aug 17 '12 at 18:27
You may be confused by the "wave function": it's not a genuine material (classical) wave similar to an electromagnetic wave; it is just a complex probability amplitude wave. It describes the state of one's knowledge, the probabilistic distribution that the particle is here or there. But whatever the wave function is, the particle such as an electron is point-like; it is never extended; it can never create a big spot on a macroscopic piece of material (just do an experiment). The wave function isn't a real wave and it isn't observable, neither in the technical sense nor in the colloquial sense. – Luboš Motl Aug 17 '12 at 18:30
Take a hydrogen atom. One must distinguish the "spread of the wave function" and the "size of the atom". The size of the atom is the size of the region that the atom may influence at the same moment and it is close to the Bohr radius, 0.1 nm, the typical distance between the proton and the electron. The "spread of the wave function" may be meters or kilometers and it says nothing about the size of an object. Instead, it only tells us about the uncertainty of the information about the position of the atom. – Luboš Motl Aug 17 '12 at 18:36
Once it's seen somewhere, the uncertainty instantly disappears and one may see that we deal with a 0.1-nanometer-large particle at a particular place. It has always been a 0.1-nanometer-large particle at some place, we just didn't know and couldn't know (even in principle) what the place was. – Luboš Motl Aug 17 '12 at 18:37
@LubošMotl: You're right, but one should be kind to a newcomer. He obviously means wavefunction space, and he is taking a field basis so that the wavefunction is over fields. It is not universally known that you can treat this as particle superpositions interacting when they are at the same point. He probably does have some confusion over fields/wavefunctions, but in this case, it is mostly terminology, I think, not principle. – Ron Maimon Aug 18 '12 at 8:50
http://mathoverflow.net/revisions/64333/list | ## Return to Answer
4 added 346 characters in body
[Editing...see the comments below]
As I stated in the comments, the problem is usually that such topologies do not distinguish between cones and universal cones. However, in the special case where the categories are preorders (i.e., categories where each $Hom(A, B)$ has at most one arrow), then it is possible to give a topology on the set of objects in such a way that for complete preorders, a functor is continuous with respect to the topologies considered if and only if it preserves limits (i.e., inf) of totally (pre)ordered diagrams. Let us call a functor between complete preorders weakly continuous if and only if it does exactly that, i.e., if it preserves all limits corresponding to totally (pre)ordered diagrams. We can topologize the preorders as follows:
Consider first the category $\mathbf{2}$ consisting of an arrow $0 \to 1$ between two objects. Define a topology there which has the object $1$ as the non-trivial closed set. For a general preorder $\mathcal{C}$, define now a topology as the one whose closed sets are of the form $F^{-1}(1)$ for weakly continuous functors $F: \mathcal{C} \to \mathbf{2}$. To see that it is indeed a topology, note that if $(A_i)_i$ are closed sets corresponding to the preimages of $1$ of the weakly continuous functors $(F_i)_i$, respectively, then $\prod_i F_i$ is a weakly continuous functor such that the preimage of $1$ is exactly $\cap_i A_i$. Similarly, if $A, B$ are closed sets corresponding to the preimages of $1$ of the weakly continuous functors $F, G$, respectively, then $F \coprod G$ is weakly continuous and the preimage of $1$ is exactly $A \cup B$ (note that in general the coproduct of two limit preserving functors from preorders to $\mathbf{2}$ need not be limit preserving, so we do need to restrict our considerations to weakly continuous functors).
It is clear that if $F: \mathcal{C} \to \mathcal{D}$ is weakly continuous, it is continuous with respect to the topologies defined above, since for a closed set $A$ in $\mathcal{D}$, preimage of $1$ of the functor $G$, we have that $F^{-1}(A)$ is the preimage of $1$ of the weakly continuous composition $GF$. Conversely, let us see that if $F$ is continuous with respect to the topologies then it must be weakly continuous. If $(C \to C_i)_i$ is a limiting cone in $\mathcal{C}$ corresponding to a totally (pre)ordered diagram $(C_i)_i$, then by definition $C$ belongs to the closure of $(C_i)_i$, and hence $F(C)$ must belong to the closure of $(F(C_i))_i$ in $\mathcal{D}$. If $(F(C) \to F(C_i))_i$ were not a universal cone, let $D$ be the vertex of such a cone; we have an induced arrow $F(C) \to D$ (and hence no arrow $D \to F(C)$). But then the representable functor $[D, -]$ (regarded as a functor with values in $\mathbf{2}$) would be weakly continuous (in fact, limit preserving), and the closed set which is the preimage of $1$ contains each $F(C_i)$ but not $F(C)$, which is absurd, since in that case $F(C)$ would not belong to such a closed set containing the $F(C_i)$. Therefore, $(F(C) \to F(C_i))_i$ must be a universal cone and the proof is complete.
3 added 38 characters in body
[Editing...see the comments below]
As I stated in the comments, the problem is usually that such topologies do not distinguish between cones and universal cones. However, in the special case where the categories are preorders (i.e., categories where each $Hom(A, B)$ has at most one arrow) then it is possible to give a topology on the set of objects in such a way that for complete preorders, a functor preserves limits (i.e., inf) if and only if it is continuous with respect to the topologies considered. We can topologize these categories as follows:
Consider first the category $\mathbf{2}$ consisting of an arrow $0 \to 1$ between two objects. Define a topology there which has the object $1$ as the non-trivial closed set. For a general preorder $\mathcal{C}$, define now a topology as the one whose closed sets are of the form $F^{-1}(1)$ for limit-preserving functors $F: \mathcal{C} \to \mathbf{2}$. To see that it is indeed a topology, note that if $(A_i)_i$ are closed sets corresponding to the preimages of $1$ of the limit preserving functors $(F_i)_i$, respectively, then $\prod_i F_i$ is a limit preserving functor such that the preimage of $1$ is exactly $\cap_i A_i$. Similarly, if $A, B$ are closed sets corresponding to the preimages of $1$ of the limit preserving functors $F, G$, respectively, then $F \coprod G$ is limit preserving and the preimage of $1$ is exactly $A \cup B$ (note that in general the coproduct of two limit-preserving functors need not be limit preserving, but it is in the case of functors from preorders to $\mathbf{2}$).
It is clear that if $F: \mathcal{C} \to \mathcal{D}$ preserves limits, it is continuous with respect to the topologies defined above, since for a closed set $A$ in $\mathcal{D}$, preimage of $1$ of the functor $G$, we have that $F^{-1}(A)$ is the preimage of $1$ of the limit preserving functor $GF$. Conversely, let us see that if $F$ is continuous then it must preserve limits. If $(C \to C_i)_i$ is a limiting cone in $\mathcal{C}$, then by definition $C$ belongs to the closure of $(C_i)_i$, and hence $F(C)$ must belong to the closure of $(F(C_i))_i$ in $\mathcal{D}$. If $(F(C) \to F(C_i))_i$ were not a universal cone, let $D$ be the vertex of such a cone; we have an induced arrow $F(C) \to D$ (and hence no arrow $D \to F(C)$). But then the representable functor $[D, -]$ (regarded as a functor with values in $\mathbf{2}$) would be limit preserving, and the closed set which is the preimage of $1$ contains each $F(C_i)$ but not $F(C)$, which is absurd, since in that case $F(C)$ would not belong to such a closed set containing the $F(C_i)$. Therefore, $(F(C) \to F(C_i))_i$ must be a universal cone and the proof is complete.
2 added 134 characters in body
As I stated in the comments, the problem is usually that such topologies do not distinguish between cones and universal cones. However, in the special case where the categories are preorders (i.e., categories where each $Hom(A, B)$ has at most one arrow) then it is possible to give a topology on the set of objects in such a way that for complete preorders, a functor preserves limits (i.e., inf) if and only if it is continuous with respect to the topologies considered. We can topologize these categories as follows:
Consider first the category $\mathbf{2}$ consisting of an arrow $0 \to 1$ between two objects. Define a topology there which has the object $1$ as the non-trivial closed set. For a general preorder $\mathcal{C}$, define now a topology as the one whose closed sets are of the form $F^{-1}(1)$ for limit-preserving functors $F: \mathcal{C} \to \mathbf{2}$. To see that it is indeed a topology, note that if $(A_i)_i$ are closed sets corresponding to the preimages of $1$ of the limit preserving functors $(F_i)_i$, respectively, then $\prod_i F_i$ is a limit preserving functor such that the preimage of $1$ is exactly $\cap_i A_i$. Similarly, if $A, B$ are closed sets corresponding to the preimages of $1$ of the limit preserving functors $F, G$, respectively, then $F \coprod G$ is limit preserving and the preimage of $1$ is exactly $A \cup B$ (note that in general the coproduct of two limit-preserving functors need not be limit preserving, but it is in the case of functors from preorders to $\mathbf{2}$).
It is clear that if $F: \mathcal{C} \to \mathcal{D}$ preserves limits, it is continuous with respect to the topologies defined above, since for a closed set $A$ in $\mathcal{D}$, preimage of $1$ of the functor $G$, we have that $F^{-1}(A)$ is the preimage of $1$ of the limit preserving functor $GF$. Conversely, let us see that if $F$ is continuous then it must preserve limits. If $(C \to C_i)_i$ is a limiting cone in $\mathcal{C}$, then by definition $C$ belongs to the closure of $(C_i)_i$, and hence $F(C)$ must belong to the closure of $(F(C_i))_i$ in $\mathcal{D}$. If $(F(C) \to F(C_i))_i$ were not a universal cone, let $D$ be the vertex of such a cone; we have an induced arrow $F(C) \to D$ (and hence no arrow $D \to F(C)$). But then the representable functor $[D, -]$ (regarded as a functor with values in $\mathbf{2}$) would be limit preserving, and the closed set which is the preimage of $1$ contains each $F(C_i)$ but not $F(C)$, which is absurd, since in that case $F(C)$ would not belong to such a closed set containing the $F(C_i)$. Therefore, $(F(C) \to F(C_i))_i$ must be a universal cone and the proof is complete.
1
As I stated in the comments, the problem is usually that such topologies do not distinguish between cones and universal cones. However, in the special case where the categories are preorders (i.e., categories where each $Hom(A, B)$ has at most one arrow) then it is possible to give a topology on the set of objects in such a way that for complete preorders, a functor preserves limits (i.e., inf) if and only if it is continuous with respect to the topologies considered. We can topologize these categories as follows:
Consider first the category $\mathbf{2}$ consisting of an arrow $0 \to 1$ between two objects. Define a topology there which has the object $1$ as the non-trivial closed set. For a general preorder $\mathcal{C}$, define now a topology as the one whose closed sets are of the form $F^{-1}(1)$ for limit-preserving functors $F: \mathcal{C} \to \mathbf{2}$. To see that it is indeed a topology, note that if $A, B$ are closed sets corresponding to the preimages of $1$ of the limit preserving functors $F, G$, respectively, then $F \prod G$ is a limit preserving functor such that the preimage of $1$ is exactly $A \cap B$. Similarly, $F \coprod G$ is limit preserving and the preimage of $1$ is exactly $A \cup B$ (note that in general the coproduct of two limit-preserving functors need not be limit preserving, but it is in the case of functors from preorders to $\mathbf{2}$).
It is clear that if $F: \mathcal{C} \to \mathcal{D}$ preserves limits, it is continuous with respect to the topologies defined above, since for a closed set $A$ in $\mathcal{D}$, preimage of $1$ of the functor $G$, we have that $F^{-1}(A)$ is the preimage of $1$ of the limit preserving functor $GF$. Conversely, let us see that if $F$ is continuous then it must preserve limits. If $(C \to C_i)_i$ is a limiting cone in $\mathcal{C}$, then by definition $C$ belongs to the closure of $(C_i)_i$, and hence $F(C)$ must belong to the closure of $(F(C_i))_i$ in $\mathcal{D}$. If $(F(C) \to F(C_i))_i$ were not a universal cone, let $D$ be the vertex of such a cone; we have an induced arrow $F(C) \to D$ (and hence no arrow $D \to F(C)$). But then the representable functor $[D, -]$ (regarded as a functor with values in $\mathbf{2}$) would be limit preserving, and the closed set which is the preimage of $1$ contains each $F(C_i)$ but not $F(C)$, which is absurd, since in that case $F(C)$ would not belong to such a closed set containing the $F(C_i)$. Therefore, $(F(C) \to F(C_i))_i$ must be a universal cone and the proof is complete. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 193, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502537846565247, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/75371/exponential-sums-over-finite-fields-with-even-characteristic/75375 | ## Exponential sums over finite fields with even characteristic
I am looking for an elementary evaluation (if one exists) of the exponential sum
$$G_r(a,b) = \sum_{x \in \mathbb{F}_{2^r}} \psi(ax^2 + bx),$$
where $a,b \in \mathbb{F}_{2^r}^*$ are both units, $\psi(x) = e(Tr(x)/2)$ and $Tr : \mathbb{F}_{2^r} \to \mathbb{F}_2$ is the usual field Trace map
$$Tr(x) = \sum_{i=0}^{r-1} x^{2^i}.$$
It should be noted that
$$G_r(a,0) = G_r(0,a) = 0,$$
since the map $x \mapsto x^2$ permutes the elements of $\mathbb{F}_{2^r}$.
I feel that such a sum must have surely been studied before, but I am having trouble both evaluating the sum and finding references for it. Short of an explicit formula for the sum, any information (or any reference to where this sum might be studied) would be appreciated. I found no information on this sum in the usual suspects: Ireland-Rosen, Iwaniec-Kowalski and "Gauss and Jacobi Sums," by Berndt, Evans, and Williams.
-
It should also be noted that the method for evaluating such sums over finite fields of odd characteristic is to complete the square, which is not applicable in the above case. – David Sep 14 2011 at 3:49
Trace is additive, and ${\rm Tr}(u) = {\rm Tr}(u^2)$ for all $u$, so $ax^2+bx$ has the same trace as $(a+b^2)x^2$. Therefore the sum is $2^r$ if $a=b^2$ and zero otherwise. – Noam D. Elkies Sep 14 2011 at 3:57
@Noam. Thank you for your insight. If you add your comment as an answer, I would love to accept it. – David Sep 14 2011 at 4:20
@David: you're welcome & thanks & that was quick :-) – Noam D. Elkies Sep 14 2011 at 4:37
The technique generalizes to linearized polynomials, i.e. polynomials where only terms of degrees that are power of the characteristic occur. This technique is relatively common in finite fields, and coding theorists often use this trick, when studying quadratic forms in char 2. The classic tome Finite Fields by Lidl & Niederreiter (Cambridge Univ. Press) is the reference. – Jyrki Lahtonen Sep 14 2011 at 7:09
## 1 Answer
Trace is additive, and ${\rm Tr}(u)={\rm Tr}(u^2)$ for all $u$, so $ax^2+bx$ has the same trace as $(a+b^2)x^2$. Therefore the sum is $2^r$ if $a=b^2$ and zero otherwise.
In general, for a polynomial $P(x)$ over the field of $2^r$ elements, the sum of $\psi(P(x))$ is $2^r$ less than the number of affine points on the "hyperelliptic" curve $y^2+y=P(x)$. Here $P(x) = ax^2+bx$, so (for much the same reason I gave above: polynomials $\eta^2+\eta$ can be absorbed into $y^2+y$) the curve is rational, with $2^r$ points, unless $a=b^2$ when it is the union of two disjoint lines and has $2^{r+1}$ points.
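For anyone who wants to sanity-check this numerically, here is a small self-contained Python sketch (my own illustration, not from the thread): it builds $\mathbb{F}_8 = \mathbb{F}_2[x]/(x^3+x+1)$ with ad hoc helpers `gf_mul`, `trace`, `G`, and confirms that $G_3(a,b)=2^3$ exactly when $a=b^2$.

```python
# GF(2^3) as bit-polynomials modulo x^3 + x + 1 (bitmask 0b1011); helper names are ad hoc.
R, MOD = 3, 0b1011

def gf_mul(a, b):
    """Carry-less multiplication of bit-polynomials, reduced mod x^3 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & (1 << R):        # degree reached 3: reduce
            a ^= MOD
    return p

def trace(x):
    """Tr(x) = x + x^2 + x^4; the result is 0 or 1 (an element of GF(2))."""
    x2 = gf_mul(x, x)
    return x ^ x2 ^ gf_mul(x2, x2)

def G(a, b):
    """G_3(a, b) = sum over x in GF(8) of (-1)^Tr(a*x^2 + b*x)."""
    return sum((-1) ** trace(gf_mul(a, gf_mul(x, x)) ^ gf_mul(b, x))
               for x in range(2 ** R))

# Noam's evaluation: 2^r when a = b^2, and 0 otherwise (for nonzero a, b).
assert all(G(a, b) == (2 ** R if a == gf_mul(b, b) else 0)
           for a in range(1, 2 ** R) for b in range(1, 2 ** R))
print("verified for all nonzero a, b in GF(8)")
```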
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407784342765808, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/188585/localizations-of-dedekind-domains-are-discrete-valuation-rings | # Localizations of Dedekind Domains are Discrete Valuation Rings
I am trying to prove the following implication, and can't seem to find my way around all the equivalent definitions of Dedekind domains and DVRs:
I have a ring $R$ with the following properties:
1) $R$ is Noetherian.
2) $R$ is integrally closed.
3) Every nonzero prime ideal in $R$ is maximal.
I wish to show that every localization of $R$ at a maximal ideal is a principal ideal domain.
Does anyone know a direct argument proving this (i.e. not passing through the myriad of equivalent definitions of Dedekind domains and DVRs)? Alternatively, I would be thankful if someone could provide me with a "road map" to proving this claim in a way which would convince someone (namely, me) without knowledge of Dedekind domains and DVRs.
Thanks a lot!
Roy
-
I'm not sure whether the proofs I have in mind would satisfy you. It seems to me that you quickly get to the statement “an integrally closed Noetherian domain with a unique non-zero prime ideal is in fact principal” and from there you have to do some real work, and you would be proving one direction of an equivalence between definitions. I think Serre does this in a very low-tech way at the beginning of his Local Fields, although I don't like that proof very much. Are you willing to use some commutative algebra? Basic dimension theory really helps. – Dylan Moreland Aug 29 '12 at 23:59
## 5 Answers
In my previous answer, we used a fact that an invertible ideal is projective and a fact that a finitely generated projective module over a local ring is free. Here is a proof without using these facts.
Lemma 1 Let $A$ be a Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Let $K$ be the field of fractions of $A$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} ⊂ A\}$. Then $\mathfrak{m}^{-1} \neq A$.
Proof: Let $a \neq 0$ be an element of $\mathfrak{m}$. By the assumption, Supp$(A/aA) = \{\mathfrak{m}\}$. Since Ass$(A/aA) \subset$ Supp($A/aA)$, Ass$(A/aA) = \{\mathfrak{m}\}$. Hence there exists $b \in A$ such that $b \in A - aA$ and $\mathfrak{m}b \subset aA$. Since $\mathfrak{m}(b/a) \subset A$, $b/a \in \mathfrak{m}^{-1}$. Since $b \in A - aA$, $b/a \in K - A$. QED
Lemma 1.5 Let $A$ be an integral domain. Let $K$ be the field of fractions of $A$. Let $M \neq 0$ be a finitely generated $A$-submodule of $K$. Let $x \in K$ be such that $xM \subset M$. Then $x$ is integral over $A$.
Proof: Let $\omega_1,\dots,\omega_n$ be generators of $M$ over $A$. Let $x\omega_i = \sum_j a_{i,j} \omega_j$. Then $x$ is a root of the characteristic polynomial of the matrix $(a_{ij})$. QED
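(To spell out the last step of this standard argument, which the proof leaves implicit: with $T=(a_{ij})$ and $\omega=(\omega_1,\dots,\omega_n)^t$ we have $(xI-T)\omega=0$, hence
$$\operatorname{adj}(xI-T)\,(xI-T)\,\omega \;=\; \det(xI-T)\,\omega \;=\; 0,$$
and since some $\omega_i\neq 0$ in the field $K$, $\det(xI-T)=x^n+c_{n-1}x^{n-1}+\cdots+c_0=0$ with all $c_j\in A$; this monic relation is what makes $x$ integral over $A$.)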
Lemma 2 Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $\mathfrak{m}$ is invertible.
Proof: Let $K$ be the field of fractions of $A$. Let $a \neq 0$ be an element of $\mathfrak{m}$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} \subset A\}$. Since $\mathfrak{m} \subset \mathfrak{m}\mathfrak{m}^{-1} \subset A$, $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$ or $\mathfrak{m}\mathfrak{m}^{-1} = A$. Suppose $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$. Since $\mathfrak{m}$ is finitely generated, every element of $\mathfrak{m}^{-1}$ is integral over $A$ by Lemma 1.5. Since $A$ is integrally closed, $\mathfrak{m}^{-1} \subset A$. This is a contradiction by Lemma 1. Hence $\mathfrak{m}\mathfrak{m}^{-1} = A$. QED
Lemma 3 Let $A$ be a Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $\bigcap_n \mathfrak{m}^n = 0$.
Proof: Let $I = \bigcap_n \mathfrak{m}^n$. Suppose $I \neq 0$. Since dim$(A/I) = 0$, $A/I$ is an Artinian ring. Hence there exists $n$ such that $\mathfrak{m}^n \subset I$. Since $I \subset \mathfrak{m}^n$, $I = \mathfrak{m}^n$. Since $I \subset \mathfrak{m}^{n+1}$, $\mathfrak{m}^n = \mathfrak{m}^{n+1}$. By Nakayama's lemma, $\mathfrak{m}^n = 0$. Hence $I = 0$. This is a contradiction. QED
Lemma 4 Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Let $I$ be a non-zero ideal of $A$ such that $I \neq A$. Then $I = \mathfrak{m}^n$ for some integer $n > 0$.

Proof: By Lemma 3, there exists $n > 0$ such that $I \subset \mathfrak{m}^n$ and $I$ is not contained in $\mathfrak{m}^{n+1}$. By Lemma 2, $\mathfrak{m}$ is invertible. Since $I \subset \mathfrak{m}^n$, $I\mathfrak{m}^{-n} \subset A$. Suppose $I\mathfrak{m}^{-n} \neq A$. Then $I\mathfrak{m}^{-n} \subset \mathfrak{m}$. Hence $I \subset \mathfrak{m}^{n+1}$. This is a contradiction. Hence $I\mathfrak{m}^{-n} = A$. Hence $I = \mathfrak{m}^n$. QED
Theorem Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $A$ is a discrete valuation ring.

Proof: By Nakayama's lemma, $\mathfrak{m} \neq \mathfrak{m}^2$. Let $x \in \mathfrak{m} - \mathfrak{m}^2$. By Lemma 4, $xA = \mathfrak{m}^k$ for some $k > 0$; since $x \notin \mathfrak{m}^2$, $k = 1$, i.e. $xA = \mathfrak{m}$. Let $I$ be a non-zero ideal of $A$ such that $I \neq A$. By Lemma 4, $I = \mathfrak{m}^n$. Hence $I$ is principal. Hence $A$ is a discrete valuation ring. QED
-
Lemma 1 Let $A$ be a Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Let $K$ be the field of fractions of $A$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} \subset A\}$. Then $\mathfrak{m}^{-1} \neq A$.
Proof: Let $a \neq 0$ be an element of $\mathfrak{m}$. By the assumption, Supp$(A/aA) = \{\mathfrak{m}\}$. Since Ass$(A/aA) \subset$ Supp($A/aA)$, Ass$(A/aA) = \{\mathfrak{m}\}$. Hence there exists $b \in A$ such that $b \in A - aA$ and $\mathfrak{m}b \subset aA$. Since $\mathfrak{m}(b/a) \subset A$, $b/a \in \mathfrak{m}^{-1}$. Since $b \in A - aA$, $b/a \in K - A$. QED
Lemma 1.5 Let $A$ be an integral domain. Let $K$ be the field of fractions of $A$. Let $M \neq 0$ be a finitely generated $A$-submodule of $K$. Let $x \in K$ be such that $xM \subset M$. Then $x$ is integral over $A$.
Proof: Let $\omega_1,\dots,\omega_n$ be generators of $M$ over $A$. Let $x\omega_i = \sum_j a_{i,j} \omega_j$. Then $x$ is a root of the characteristic polynomial of the matrix $(a_{ij})$. QED
Lemma 2 Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $\mathfrak{m}$ is principal.
Proof: Let $K$ be the field of fractions of $A$. Let $a \neq 0$ be an element of $\mathfrak{m}$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} \subset A\}$. Since $\mathfrak{m} \subset \mathfrak{m}\mathfrak{m}^{-1} \subset A$, $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$ or $\mathfrak{m}\mathfrak{m}^{-1} = A$. Suppose $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$. Since $\mathfrak{m}$ is finitely generated, every element of $\mathfrak{m}^{-1}$ is integral over $A$ by Lemma 1.5. Since $A$ is integrally closed, $\mathfrak{m}^{-1} \subset A$. This is a contradiction by Lemma 1. Hence $\mathfrak{m}\mathfrak{m}^{-1} = A$ and therefore $\mathfrak{m}$ is invertible. Hence $\mathfrak{m}$ is a projective $A$-module. Since $A$ is a local ring, $\mathfrak{m}$ is free $A$-module. Hence $\mathfrak{m}$ is principal. QED
Lemma 3 Let $A$ be a Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $\bigcap_n \mathfrak{m}^n = 0$.
Proof: Let $I = \bigcap_n \mathfrak{m}^n$. Suppose $I \neq 0$. Since dim$(A/I) = 0$, $A/I$ is an Artinian ring. Hence there exists $n$ such that $\mathfrak{m}^n \subset I$. Since $I \subset \mathfrak{m}^n$, $I = \mathfrak{m}^n$. Since $I \subset \mathfrak{m}^{n+1}$, $\mathfrak{m}^n = \mathfrak{m}^{n+1}$. By Nakayama's lemma, $\mathfrak{m}^n = 0$. Hence $I = 0$. This is a contradiction. QED
Theorem Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $A$ is a discrete valuation ring.

Proof: Let $I$ be a non-zero ideal of $A$ such that $I \neq A$. By Lemma 3, there exists $n > 0$ such that $I \subset \mathfrak{m}^n$ and $I$ is not contained in $\mathfrak{m}^{n+1}$. By Lemma 2, $\mathfrak{m}$ is principal. Hence $\mathfrak{m}$ is invertible. Since $I \subset \mathfrak{m}^n$, $I\mathfrak{m}^{-n} \subset A$. Suppose $I\mathfrak{m}^{-n} \neq A$. Then $I\mathfrak{m}^{-n} \subset \mathfrak{m}$. Hence $I \subset \mathfrak{m}^{n+1}$. This is a contradiction. Hence $I\mathfrak{m}^{-n} = A$. Hence $I = \mathfrak{m}^n$. Since $\mathfrak{m}$ is principal, $I = \mathfrak{m}^n$ is also principal. Hence $A$ is a discrete valuation ring. QED
-
Thank you very, very much! – Roy Ben-Abraham Aug 30 '12 at 13:11
Lemma 1 Let $A$ be a Noetherian local domain which is not a field. Suppose its maximal ideal $\mathfrak{m}$ is principal. Then $\bigcap_n \mathfrak{m}^n = 0$.
Proof: Let $\mathfrak{m} = tA$. Let $x \in \bigcap_n \mathfrak{m}^n$. Suppose $x \neq 0$. There exists $y_n \in A$ for every $n$ such that $x = t^ny_n$. Then $t^ny_n = t^{n+1}y_{n+1}$. Hence $y_n = ty_{n+1}$. Hence $y_nA \subset y_{n+1}A$. Since $A$ is Noetherian, there exists $k$ such that $y_kA = y_{k+1}A$. Hence there exists $u \in A$ such that $y_{k+1} = uy_k$. Since $y_k = ty_{k+1}$, $y_k = uty_k$. Hence $(1 - ut)y_k = 0$. Since $t \in \mathfrak{m}$, $1 - ut$ is invertible. Hence $y_k = 0$. Hence $x = t^ky_k = 0$. This is a contradiction. QED
Lemma 2 Let $A$ be a Noetherian local domain which is not a field. Suppose its maximal ideal $\mathfrak{m}$ is principal. Then $A$ is a discrete valuation ring.
Proof: Suppose $\mathfrak{m} = tA$. By Lemma 1, $\bigcap_n \mathfrak{m}^n = 0$. Let $I$ be a non-zero ideal of $A$. There exists $n$ such that $I \subset \mathfrak{m}^n$ but not $I \subset \mathfrak{m}^{n+1}$. Since $\mathfrak{m}^n = t^nA$, $It^{-n} \subset A$. Suppose $It^{-n} \neq A$. Then $It^{-n} \subset \mathfrak{m}$. Hence $I \subset \mathfrak{m}^{n+1}$. This is a contradiction. Hence $It^{-n} = A$, i.e. $I = t^nA$. QED
Lemma 3 Let $A$ be an integral domain. Let $I$ be an ideal of $A$. Suppose $I$ is invertible. Then $I$ is a finitely generated projective $A$-module.
Proof: Since $II^{-1} = A$, there exist $a_1,\dots,a_n \in I$ and $b_1,\dots,b_n \in I^{-1}$ such that $\sum_i a_ib_i = 1$. Let $f_i:I\rightarrow A$ be the $A$-linear map defined by $f_i(x) = b_ix$. Let $L$ be a free $A$-module with a basis $e_1,\dots,e_n$. Let $g:L \rightarrow I$ be the $A$-linear map defined by $g(e_i) = a_i$. Let $f:I \rightarrow L$ be the $A$-linear map defined by $f(x) = \sum_i f_i(x)e_i = \sum_i b_ixe_i$. Since $gf(x) = \sum_i g(b_ixe_i) = \sum_i b_ia_ix = x$ for every $x \in I$, $gf = 1$. Hence $I$ is isomorphic to a direct summand of $L$. Hence $I$ is a finitely generated projective $A$-module. QED
Lemma 4 Let $A$ be a local ring. Let $M$ be a finitely generated projective $A$-module. Then $M$ is a finitely generated free $A$-module.
Proof: Let $\mathfrak{m}$ be the maximal ideal of $A$. Let $k = A/\mathfrak{m}$. Since $M$ is finitely generated, dim$_k M\otimes_A k$ is finite. Let $a_1,\dots,a_n$ be elements of $M$ such that $\{a_1\otimes 1,\dots,a_n\otimes 1\}$ is a basis of $M\otimes_A k$ over $k$. By Nakayama's lemma, $a_1,\dots,a_n$ generates $M$ over $A$. Let $L$ be a free $A$-module with a basis $\{e_1,\dots,e_n\}$. Let $f:L\rightarrow M$ be the $A$-linear map such that $f(e_i) = a_i (i = 1,\dots,n)$. Let $K$ be the kernel of $f$. Then we get the following exact sequence.
$0 \rightarrow K \rightarrow L \rightarrow M \rightarrow 0$
Then the following sequence is exact by the well known theorem of homological algebra.
Tor$_1(M, k) \rightarrow K\otimes_A k \rightarrow L\otimes_A k \rightarrow M\otimes_A k \rightarrow 0$
Since $M$ is projective, Tor$_1(M, k) = 0$. Since $L\otimes_A k \rightarrow M\otimes_A k$ is an isomorphism, $K\otimes_A k = 0$. Since $M$ is projective, $K$ is a direct summand of $L$. Hence $K$ is finitely generated. Hence $K = 0$ by Nakayama's lemma. QED
Lemma 5 Let $A$ be a Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Let $K$ be the field of fractions of $A$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} ⊂ A\}$. Then $\mathfrak{m}^{-1} \neq A$.
Proof: Let $a \neq 0$ be an element of $\mathfrak{m}$. By the assumption, Supp$(A/aA) = \{\mathfrak{m}\}$. Since Ass$(A/aA) \subset$ Supp($A/aA)$, Ass$(A/aA) = \{\mathfrak{m}\}$. Hence there exists $b \in A$ such that $b \in A - aA$ and $\mathfrak{m}b \subset aA$. Since $\mathfrak{m}(b/a) \subset A$, $b/a \in \mathfrak{m}^{-1}$. Since $b \in A - aA$, $b/a \in K - A$. QED
Lemma 6 Let $A$ be an integral domain. Let $K$ be the field of fractions of $A$. Let $M \neq 0$ be a finitely generated $A$-submodule of $K$. Let $x \in K$ be such that $xM \subset M$. Then $x$ is integral over $A$.
Proof: Let $\omega_1,\dots,\omega_n$ be generators of $M$ over $A$. Let $x\omega_i = \sum_j a_{i,j} \omega_j$. Then $x$ is a root of the characteristic polynomial of the matrix $(a_{ij})$. QED
Lemma 7 Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $\mathfrak{m}$ is invertible.
Proof: Let $K$ be the field of fractions of $A$. Let $a \neq 0$ be an element of $\mathfrak{m}$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} \subset A\}$. Since $\mathfrak{m} \subset \mathfrak{m}\mathfrak{m}^{-1} \subset A$, $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$ or $\mathfrak{m}\mathfrak{m}^{-1} = A$. Suppose $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$. Since $\mathfrak{m}$ is finitely generated, every element of $\mathfrak{m}^{-1}$ is integral over $A$ by Lemma 6. Since $A$ is integrally closed, $\mathfrak{m}^{-1} \subset A$. This is a contradiction by Lemma 5. Hence $\mathfrak{m}\mathfrak{m}^{-1} = A$ and therefore $\mathfrak{m}$ is invertible. QED
Lemma 8 Let $A$ be an integral domain. Let $L$ be a finitely generated free $A$-module. Let $M$ be a finitely generated free $A$-submodule of $L$. Then rank$_A M \le$ rank$_A L$.
Proof: Let $K$ be the field of fractions of $A$. Since $K$ is a flat $A$-module, the canonical homomorphism $M\otimes_A K \rightarrow L\otimes_A K$ is injective. Hence we are done. QED
Theorem Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $A$ is a discrete valuation ring.

Proof: By Lemma 7, $\mathfrak{m}$ is invertible. By Lemma 3, $\mathfrak{m}$ is projective over $A$. By Lemma 4, $\mathfrak{m}$ is a finitely generated free $A$-module. By Lemma 8, its rank is at most rank$_A A = 1$, so $\mathfrak{m}$ is principal. Hence $A$ is a discrete valuation ring by Lemma 2. QED
-
The following proof is similar to the previous ones but we only assume very basic knowledge of commutative algebra.
Lemma 0 Let $A$ be a Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Let $I$ be a non-zero ideal of $A$. Then there exists $n > 0$ such that $\mathfrak{m}^n \subset I$.
Proof: Suppose the assertion is false. Let $\mathfrak{I}$ be the set of non-zero ideals of $A$ for which the assertion fails. Since $A$ is Noetherian, $\mathfrak{I}$ has a maximal element $I$. Note that $I$ is not a prime ideal: the only non-zero prime is $\mathfrak{m}$ itself, which trivially contains a power of $\mathfrak{m}$. Hence there exist $a, b \in A$ such that $ab \in I$ and $a \in A - I, b \in A - I$. Let $J_1 = I + aA, J_2 = I + bA$. Since $I$ is a maximal element of $\mathfrak{I}$, there exist $n_1, n_2 > 0$ such that $\mathfrak{m}^{n_1} \subset J_1$, $\mathfrak{m}^{n_2} \subset J_2$. Since $\mathfrak{m}^{n_1+n_2} \subset J_1J_2 \subset I$, this is a contradiction. QED
Lemma 1 Let $A$ be a Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Let $K$ be the field of fractions of $A$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} ⊂ A\}$. Then $\mathfrak{m}^{-1} \neq A$.
Proof: Let $a \neq 0$ be an element of $\mathfrak{m}$. By Lemma 0, there exists $n > 0$ such that $\mathfrak{m}^n \subset aA$. Let $n$ be minimal satisfying this condition. Let $b \in \mathfrak{m}^{n-1} - aA$ (such a $b$ exists: for $n > 1$ by the minimality of $n$, and for $n = 1$ because $1 \notin aA$; here $\mathfrak{m}^0 = A$). Then $\mathfrak{m}b \subset \mathfrak{m}^n \subset aA$. Since $\mathfrak{m}(b/a) \subset A$, $b/a \in \mathfrak{m}^{-1}$. Since $b \in A - aA$, $b/a \in K - A$. QED
Lemma 1.5 Let $A$ be an integral domain. Let $K$ be the field of fractions of $A$. Let $M \neq 0$ be a finitely generated $A$-submodule of $K$. Let $x \in K$ be such that $xM \subset M$. Then $x$ is integral over $A$.
Proof: Let $\omega_1,\dots,\omega_n$ be generators of $M$ over $A$. Let $x\omega_i = \sum_j a_{i,j} \omega_j$. Then $x$ is a root of the characteristic polynomial of the matrix $(a_{ij})$. QED
Lemma 2 Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $\mathfrak{m}$ is invertible.
Proof: Let $K$ be the field of fractions of $A$. Let $a \neq 0$ be an element of $\mathfrak{m}$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} \subset A\}$. Since $\mathfrak{m} \subset \mathfrak{m}\mathfrak{m}^{-1} \subset A$, $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$ or $\mathfrak{m}\mathfrak{m}^{-1} = A$. Suppose $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$. Since $\mathfrak{m}$ is finitely generated, every element of $\mathfrak{m}^{-1}$ is integral over $A$ by Lemma 1.5. Since $A$ is integrally closed, $\mathfrak{m}^{-1} \subset A$. This is a contradiction by Lemma 1. Hence $\mathfrak{m}\mathfrak{m}^{-1} = A$. QED
Lemma 3 Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then every non-zero ideal is invertible.
Proof: Suppose the assertion is false. Since $A$ is Noetherian, there is a maximal non-zero non-invertible ideal $I$. Then $I \subset \mathfrak{m}$. Hence $I \subset I\mathfrak{m}^{-1} \subset A$. Suppose $I = I\mathfrak{m}^{-1}$. Then every element of $\mathfrak{m}^{-1}$ is integral over $A$ by Lemma 1.5. Since $A$ is integrally closed, $\mathfrak{m}^{-1} \subset A$. By Lemma 1, this is a contradiction. Hence $I \neq I\mathfrak{m}^{-1}$. Hence $I\mathfrak{m}^{-1}$ is invertible. Hence $I = (I\mathfrak{m}^{-1})\mathfrak{m}$ is invertible, since $\mathfrak{m}$ is invertible by Lemma 2. This is a contradiction. QED
Lemma 4 Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Let $I$ be a non-zero ideal of $A$ such that $I \neq A$. Then $I = \mathfrak{m}^n$ for some integer $n > 0$.

Proof: Suppose the assertion is false. Let $I \neq A$ be a maximal non-zero ideal which is not a power of $\mathfrak{m}$. Since $I \subset \mathfrak{m}$, $I \subset I\mathfrak{m}^{-1} \subset A$. $I \neq I\mathfrak{m}^{-1}$ by the same argument as in the proof of Lemma 3. Hence $I\mathfrak{m}^{-1}$ is a power of $\mathfrak{m}$ (or equals $A$). Hence $I = (I\mathfrak{m}^{-1})\mathfrak{m}$ is a power of $\mathfrak{m}$. This is a contradiction. QED
Theorem Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $A$ is a discrete valuation ring.

Proof: Suppose $\mathfrak{m} = \mathfrak{m}^2$. Since $\mathfrak{m}$ is invertible, this gives $\mathfrak{m} = A$. This is a contradiction. Hence $\mathfrak{m} \neq \mathfrak{m}^2$. Let $x \in \mathfrak{m} - \mathfrak{m}^2$. By Lemma 4, $xA = \mathfrak{m}^k$ for some $k > 0$; since $x \notin \mathfrak{m}^2$, $k = 1$, i.e. $xA = \mathfrak{m}$. Let $I$ be a non-zero ideal of $A$ such that $I \neq A$. By Lemma 4, $I = \mathfrak{m}^n$. Hence $I$ is principal. Hence $A$ is a discrete valuation ring. QED
-
Lemma 1 Let $A$ be a Noetherian local domain which is not a field. Suppose its maximal ideal $\mathfrak{m}$ is principal. Then $\bigcap_n \mathfrak{m}^n = 0$.
Proof: Let $\mathfrak{m} = tA$. Let $x \in \bigcap_n \mathfrak{m}^n$. Suppose $x \neq 0$. There exists $y_n \in A$ for every $n$ such that $x = t^ny_n$. Then $t^ny_n = t^{n+1}y_{n+1}$. Hence $y_n = ty_{n+1}$. Hence $y_nA \subset y_{n+1}A$. Since $A$ is Noetherian, there exists $k$ such that $y_kA = y_{k+1}A$. Hence there exists $u \in A$ such that $y_{k+1} = uy_k$. Since $y_k = ty_{k+1}$, $y_k = uty_k$. Hence $(1 - ut)y_k = 0$. Since $t \in \mathfrak{m}$, $1 - ut$ is invertible. Hence $y_k = 0$. Hence $x = t^ky_k = 0$. This is a contradiction. QED
Lemma 2 Let $A$ be a Noetherian local domain which is not a field. Suppose its maximal ideal $\mathfrak{m}$ is principal. Then $A$ is a discrete valuation ring.
Proof: Suppose $\mathfrak{m} = tA$. By Lemma 1, $\bigcap_n \mathfrak{m}^n = 0$. Let $I$ be a non-zero ideal of $A$. There exists $n$ such that $I \subset \mathfrak{m}^n$ but not $I \subset \mathfrak{m}^{n+1}$. Since $\mathfrak{m}^n = t^nA$, $It^{-n} \subset A$. Suppose $It^{-n} \neq A$. Then $It^{-n} \subset \mathfrak{m}$. Hence $I \subset \mathfrak{m}^{n+1}$. This is a contradiction. Hence $It^{-n} = A$, i.e. $I = t^nA$. QED
Lemma 3 Let $A$ be a local domain. Let $\mathfrak{m}$ be its maximal ideal. Suppose $\mathfrak{m}$ is invertible. Then $\mathfrak{m}$ is principal.

Proof (Serre's Local Fields): Since $\mathfrak{m}\mathfrak{m}^{-1} = A$, there exist $a_1,\dots,a_n \in \mathfrak{m}$ and $b_1,\dots,b_n \in \mathfrak{m}^{-1}$ such that $\sum_i a_ib_i = 1$. If $a_ib_i \in \mathfrak{m}$ for all $i$, then $1 \in \mathfrak{m}$. This is a contradiction. Hence there exists $k$ such that $a_kb_k \notin \mathfrak{m}$. Since $b_k \in \mathfrak{m}^{-1}$, $a_kb_k \in A$. Hence $a_kb_k = u$ is invertible. Hence $a_ku^{-1}b_k = 1$. Let $a = a_ku^{-1}$. Then $a \in \mathfrak{m}$ and $ab_k = 1$. Let $x \in \mathfrak{m}$. $x = xab_k$. Since $b_k \in \mathfrak{m}^{-1}$, $xb_k \in A$. Hence $x \in aA$. Hence $\mathfrak{m} = aA$. QED
Lemma 4 Let $A$ be a Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Let $K$ be the field of fractions of $A$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} ⊂ A\}$. Then $\mathfrak{m}^{-1} \neq A$.
Proof: Let $a \neq 0$ be an element of $\mathfrak{m}$. By the assumption, Supp$(A/aA) = \{\mathfrak{m}\}$. Since Ass$(A/aA) \subset$ Supp($A/aA)$, Ass$(A/aA) = \{\mathfrak{m}\}$. Hence there exists $b \in A$ such that $b \in A - aA$ and $\mathfrak{m}b \subset aA$. Since $\mathfrak{m}(b/a) \subset A$, $b/a \in \mathfrak{m}^{-1}$. Since $b \in A - aA$, $b/a \in K - A$. QED
Lemma 5 Let $A$ be an integral domain. Let $K$ be the field of fractions of $A$. Let $M \neq 0$ be a finitely generated $A$-submodule of $K$. Let $x \in K$ be such that $xM \subset M$. Then $x$ is integral over $A$.
Proof: Let $\omega_1,\dots,\omega_n$ be generators of $M$ over $A$. Let $x\omega_i = \sum_j a_{i,j} \omega_j$. Then $x$ is a root of the characteristic polynomial of the matrix $(a_{ij})$. QED
Lemma 6 Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $\mathfrak{m}$ is invertible.
Proof: Let $K$ be the field of fractions of $A$. Let $a \neq 0$ be an element of $\mathfrak{m}$. Let $\mathfrak{m}^{-1} = \{x \in K; x\mathfrak{m} \subset A\}$. Since $\mathfrak{m} \subset \mathfrak{m}\mathfrak{m}^{-1} \subset A$, $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$ or $\mathfrak{m}\mathfrak{m}^{-1} = A$. Suppose $\mathfrak{m}\mathfrak{m}^{-1} = \mathfrak{m}$. Since $\mathfrak{m}$ is finitely generated, every element of $\mathfrak{m}^{-1}$ is integral over $A$ by Lemma 5. Since $A$ is integrally closed, $\mathfrak{m}^{-1} \subset A$. This is a contradiction by Lemma 4. Hence $\mathfrak{m}\mathfrak{m}^{-1} = A$ and therefore $\mathfrak{m}$ is invertible. QED
Theorem Let $A$ be an integrally closed Noetherian local domain. Suppose its maximal ideal $\mathfrak{m}$ is the unique non-zero-prime ideal. Then $A$ is a discrete valuation ring.
Proof: By Lemma 6, $\mathfrak{m}$ is invertible. By Lemma 3, $\mathfrak{m}$ is principal. Hence $A$ is a discrete valuation ring by Lemma 2. QED
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 619, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9012098908424377, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/221783/understanding-the-definition-and-notation-of-geometric-realization | # Understanding the definition and notation of geometric realization
I had trouble making sense of the definition of geometric realization of a simplicial set. Let $\Delta^n$ be the standard $n$-simplex, defined as the functor $\hom_\Delta(-,[n]) : \Delta^{\mathrm{op}} \rightarrow \mathbf{Set}$, and $\left|\Delta^{n}\right|$ the topological standard $n$-simplex. Let $X$ be a simplicial set; the realization of $X$ is defined by the colimit:
$$\left| X \right| = \varinjlim_{\Delta^{n} \rightarrow X} \left| \Delta^{n} \right|$$
in $\Delta\downarrow X$ (the simplex category of $X$).
Frankly I don't understand the notation. Look at the diagram of the colimit in $\Delta\downarrow X$:
$$X \cong \varinjlim_{\Delta^{n} \rightarrow X} \Delta^{n}$$
Is the geometric realization of $X$ then the geometric realization of this colimit $L$, i.e. $\left|X\right| = \left|L\right|$? So then must $L$ be a standard $n$-simplex $\Delta^{p}$ for some $p$? I want to make sure I'm understanding this right.
-
Why would the colimit just be an $n$-simplex? In a typical simplicial set you are taking the colimit over an infinite diagram with no terminal object. – Zhen Lin Oct 27 '12 at 6:42
This is how I see it: So far I know what the geometric realization of a standard n-simplex is, I want to know what's the geometric realization of a simplicial set that is NOT an standard n-simplex. Is the geometric realization of a simplicial set $X$ (that is NOT a standard n-simplex) the geometric realization of the colimit $L$? If that is so, and $L$ is not a standard n-simplex, then $\left| L \right|$ is the geometric realization of a simplicial set that is NOT an standard n-simplex, which is the very thing I want to know in the first place – Mario Carrasco Oct 27 '12 at 17:50
Your notation makes no sense. The geometric realisation of a simplicial set $X$ is defined to be a certain colimit over the diagram of shape $(\Delta^\bullet \downarrow X)$. – Zhen Lin Oct 27 '12 at 17:53
I think this isn't the easiest way to see what geometric realization is trying to do. We're trying to build a $\Delta$-complex-looking thing, and a simplicial set $X_\bullet$ is first and foremost a list of sets $\{X_n\}_{n \geq 0}$ whose elements are precisely the sets of maps in from the standard $n$-simplices. So to build $|X_\bullet|$, start with a vertex for every element of $X_0$. Then, give yourself edges for every element of $X_1$... [cont.] – Aaron Mazel-Gee Nov 3 '12 at 5:15
However, be sure to (a) collapse down those that are degenerate (i.e. they arise from maps $\Delta^1 \rightarrow |X_\bullet|$ of $\Delta$-complexes that take $\Delta^1$ to a single point), and (b) attach all edges to their boundary vertices using the face maps. Then, continue on up. The advantage of all this is that there's a lot of power in naturality, so you can do a lot by manipulating a "space" as "maps into the space" (from some particular set of objects). The main disadvantage is that in this framework you have no choice but to carry around all these degenerate simplices. – Aaron Mazel-Gee Nov 3 '12 at 5:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9273386597633362, "perplexity_flag": "head"} |
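To make the construction in these comments explicit (this is just the standard unwinding of the colimit, added here for orientation): the realization can be computed as a quotient of a disjoint union,
$$\left| X \right| \;\cong\; \Big(\coprod_{n \ge 0} X_n \times \left|\Delta^{n}\right|\Big)\Big/\sim, \qquad (\theta^{*}x,\,u)\sim(x,\,\theta_{*}u)$$
for every $\theta : [m] \to [n]$ in $\Delta$, $x \in X_n$, $u \in \left|\Delta^{m}\right|$. So $|X|$ is not a single standard simplex in general; it is a space glued from one copy of $\left|\Delta^{n}\right|$ for each $n$-simplex of $X$, with the degenerate simplices collapsed.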
http://cs.stackexchange.com/questions/9469/unbiasing-of-sequences | # Unbiasing of sequences
There is the well-known method of unbiasing of bit sequences due to von Neumann. Are there similar schemes applicable to other sequences, e.g. the result of throwing a normal die?
-
## 3 Answers
You can generalize the von Neumann procedure canonically. Assume you have a discrete RV $X$ which you want to make fair, i.e., each event in the event space $\Omega$ of $X$ shall occur with the same probability. Then you draw $X$ exactly $n:=|\Omega|$ times, yielding $x_1, \ldots, x_n$. In case the $x_i$ are pairwise distinct, you return $x_1$. Otherwise, you repeat the above procedure.

Since the probability of drawing any particular sequence of pairwise distinct values $x_1, \ldots, x_n$ is just a product of the individual probabilities, it is invariant under permuting the values. Hence, conditioned on all values being distinct, the probability that a given element $x$ appears at position $1$ is just $1/n$.
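A minimal simulation of this scheme (my own illustrative Python, not part of the answer; the weights of the biased three-sided die are arbitrary):

```python
import random

def unbias(draw, n):
    """Generalized von Neumann: redraw n times until all outcomes differ,
    then return the first outcome.  `draw` samples one biased value."""
    while True:
        xs = [draw() for _ in range(n)]
        if len(set(xs)) == n:          # all n outcomes distinct
            return xs[0]

biased = lambda: random.choices([0, 1, 2], weights=[5, 3, 1])[0]
samples = [unbias(biased, 3) for _ in range(30000)]
print([round(samples.count(v) / len(samples), 3) for v in range(3)])
# each frequency comes out near 1/3, at the price of many discarded rounds
```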
-
Hypothetically assuming that one has an almost perfect die, wouldn't that procedure be rather expensive in practice? – Mok-Kong Shen Feb 4 at 14:33
[Addendum] Maybe a dumb question: Wouldn't the result of applying the procedure to repetitions of a segment of length n, in which all elements are different, lead to something evidently unacceptable? – Mok-Kong Shen Feb 4 at 14:56

The above procedure is of course very expensive, even to correct an already fair RV you need an expected number of $n!$ tries. With bias, it only gets worse. The question didn't involve efficiency though.. – HdM Feb 4 at 15:17
Could you also comment on my addendum? – Mok-Kong Shen Feb 4 at 15:33
I don't understand what you are asking in your addendum. What will be inacceptable? What exactly do you want to repeat? The von Neumann method does not repeat sequences, in each step there are $n$ values drawn according to $X$. – HdM Feb 4 at 20:25
The following simpler adaptation of von Neumann's trick is more efficient than the one described in HdM's answer. Throw your die twice. If the two answers $x,y$ are different, write $0$ if $x<y$ and $1$ if $x>y$. This way, you get an unbiased bit source. If you are so inclined, you can use this bit source to simulate fair die throws.
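Illustratively (again my own hypothetical code, not part of the answer), the two-throw comparison is unbiased because $P(x<y)=P(x>y)$ for i.i.d. throws, whatever the bias:

```python
import random

def fair_bit(roll):
    """Throw the (possibly biased) die twice; 0 if x < y, 1 if x > y, retry on ties."""
    while True:
        x, y = roll(), roll()
        if x != y:
            return 0 if x < y else 1

biased_die = lambda: random.choices(range(1, 7), weights=[3, 1, 1, 1, 1, 5])[0]
bits = [fair_bit(biased_die) for _ in range(20000)]
print(sum(bits) / len(bits))   # close to 0.5
```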
-
I have a little problem of understanding: If one were to apply the same principle to a bit sequence, one wouldn't obtain the original scheme of unbiasing of von Neumann, right? – Mok-Kong Shen Feb 4 at 16:30
On the contrary, this is the original von Neumann scheme. – Yuval Filmus Feb 4 at 17:33
You are right in the above. – Mok-Kong Shen Feb 4 at 20:53
I read somewhere that you can't create e.g. an unbiased 3-die with an unbiased bit source (essentially, $1 / 3$ hasn't got a finite binary expansion). – vonbrand Feb 5 at 0:03
That's correct, if you're only allowed to use a bounded number of bits, then it's impossible. But there is an algorithm which uses $\log_2 3$ (or so) bits in expectation. I believe there was a question regarding this on cs. – Yuval Filmus Feb 5 at 0:12
It seems to me that presumably the only viable satisfactory solution would be: obtain a bit sequence from the sequence produced by the die, apply the von Neumann unbiasing, then generate from that a sequence for the die.
-
No, my solution (for example) is superior than this, since the probability that you don't extract information from two throws is larger in your scheme compared to mine. – Yuval Filmus Feb 4 at 17:33
Could you please give some hints of how to show that? – Mok-Kong Shen Feb 4 at 21:50
Suppose you convert die throws to coin throws by parity. Then my bad throws are 11,22,33,44,55,66 while yours also include 13,15,24,26,31,35,42,46,51,53,62,64. – Yuval Filmus Feb 4 at 21:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382246136665344, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/12484?sort=newest | ## Closed form of a nonlinear recurrence sequence.
The master theorem seems to fail on nonlinear recursive functions. Is there a standard tool for finding the closed forms of recursive functions of this form?
The question comes from trying to find the closed form of the following recursive function: $f_i(X) = (f_{i-1}(X)^2 + f_{i-1}(X))/2$
Where:
$f_0(X) = X$
I would be willing to part with recurrence relations for this function, but I would be much more delighted to learn a general method or trick which makes finding closed forms of functions like this simple.
-
I think you may want to retitle this something like "Closed form for a nonlinear recurrence sequence?" – Zev Chonoles Jan 21 2010 at 3:08
Generally speaking, nonlinear recurrences almost never have closed forms. – Qiaochu Yuan Jan 21 2010 at 3:28
I changed the title as suggested. – Jason Knight Jan 21 2010 at 3:29
Qiaochu is right. Whatever general results there are for nonlinear recurrences, they should be in here: books.google.com/… I own this book myself, but have only studied the parts of it relevant to linear recurrences, so I can't direct you to anything specific. – Zev Chonoles Jan 21 2010 at 3:39
Seems like I'm out of luck. I just compared the function in question with logistic maps (the similarity is striking). No wonder I was butting my head against a wall. – Jason Knight Jan 21 2010 at 3:50
## 1 Answer
As has already been explained, there is no hope in general of finding explicit solutions to nonlinear recurrences. However, for your example, it is possible to find $\lim_{n\to\infty}f_n(X)$ for all real $X$.
The function $g(x)=(x^2+x)/2$ has two fixed points: $x=0$ (attractor) and $x=1$ (repeller). Their respective stable sets are $(-2,1)$ and $\{-2,1\}$; $(-\infty,-2)\cup(1,+\infty)$ is the stable set of $+\infty$. Thus,
$$\lim_{n\to\infty}f_n(X)=\left\{\matrix{0, & X\in(-2,1)\cr 1, & X\in\{-2,1\}\cr +\infty, & X\in(-\infty,-2)\cup(1,+\infty)}\right.$$
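A quick numerical illustration of this trichotomy (hypothetical snippet I added, not part of the answer):

```python
def g(x):
    return (x * x + x) / 2          # one step: f_i(X) = g(f_{i-1}(X))

for X in [-2.5, -2.0, -1.5, 0.9, 1.0, 1.1]:
    x = X
    for _ in range(60):
        x = g(x)
        if abs(x) > 1e6:            # clearly escaping to +infinity
            break
    print(f"X = {X}: after at most 60 iterations the orbit is at {x:.3g}")
# -2.0 and 1.0 stay at the fixed point 1, the interior starting points are
# driven to 0, and -2.5, 1.1 blow up.
```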
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9607197642326355, "perplexity_flag": "middle"} |
http://mathoverflow.net/revisions/28885/list | ## Return to Answer
I think the more fundamental question to ask is why set theorists insist that the axioms of set theory be strictly first-order in nature (*). I claim that you can't really explain the motivations of set theorists until you address this phenomenon.
In light of this near-universal assumption, I think it is fair to say that:
• First-order logic is the foundation of set theory (which is in turn the foundation of other things)
• The axioms selected for set theory are those which enable as much model theory as possible, without risking inconsistency.
The fact that one can then turn around and use model theory to study set theory itself is a major bonus, but not really a part of the foundations argument. I've heard some quasi-mystical tales about sets existing in some alternate universe out there, but I feel far more comfortable using "it lets me do model theory and hasn't led to contradictions" as a justification for the axioms.
(*) Excluding, for example, second-order quantification, the logic $L_{\omega_1,\omega}$ of countable conjunctions, or the "exists uncountably many" quantifier.
http://math.stackexchange.com/questions/200497/find-tnb-vectors-for-a-given-point | # Find TNB Vectors for a given point
Can anyone tell me whether or not my work and answer below are correct? This is question 13.3.48 in Stewart Calculus 7th edition.
Here is the problem definition:
"Find the vectors $\vec T, \vec N, \vec B$ at the given point. $\vec r(t) = (\cos t)i +(\sin t)j + (ln \cos t)k$ , (1,0,0)"
Here is my work:
Note: $t=0 \text{ because } \cos 0 = 1, \sin 0 = 0,\text{ and }\ln[ \cos 0] = \ln(1)=0$
$\vec r'(t) = (-\sin t)i + (\cos t)j + (-\tan t)k$
$|\vec r'(t)| = \sec t$
$\vec T(t)= (-\sin t \cos t)i + ((\cos t)^2)j + (-\sin t)k$
$\vec T'(t) = (1-2(\cos t)^2)i -(2 \sin t \cos t)j -(\cos t)k$
$|\vec T'(t)| = \sqrt{1+(\cos t)^2}$
$\vec N(t) = {\vec T'(t)\over|\vec T'(t)|} = {(1-2(\cos t)^2)\over \sqrt{1+(\cos t)^2}}i + ({(-2(\sin t)(\cos t))\over \sqrt{1+(\cos t)^2}})j + {(-\cos t)\over \sqrt{1+(\cos t)^2}}k$
$\vec B(t) = \vec T(t) \times \vec N(t) = \begin{vmatrix} i & j & k \\ -\sin t \cos t & \cos^2(t) & -\sin t \\ {1-2(\cos t)^2\over \sqrt{1+(\cos t)^2}} & {-2(\sin t)(\cos t)\over \sqrt{1+(\cos t)^2}} & {-\cos t\over \sqrt{1+(\cos t)^2}} \end{vmatrix}$
Thus, for t=0 and point (1,0,0) we have:
$\vec T(0) = j$
$\vec N(0) = ({-\sqrt2\over 2})i + ({-\sqrt2\over 2})k$
$\vec B(0) = ({-\sqrt2\over 2})i + ({\sqrt2\over 2})k$
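One way to double-check a computation like this is symbolically, for instance with SymPy (a script I added for illustration — the expected values in the final comment are the ones derived above):

```python
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.Matrix([sp.cos(t), sp.sin(t), sp.log(sp.cos(t))])    # r(t)

def unit(v):
    return v / sp.sqrt(v.dot(v))        # normalize a symbolic 3-vector

T = unit(r.diff(t))                     # unit tangent  T = r'/|r'|
N = unit(T.diff(t))                     # principal normal  N = T'/|T'|
B = T.cross(N)                          # binormal  B = T x N

for name, vec in (("T", T), ("N", N), ("B", B)):
    print(name, "at t = 0:", [sp.simplify(c) for c in vec.subs(t, 0)])
# expected: T -> [0, 1, 0], N -> [-sqrt(2)/2, 0, -sqrt(2)/2], B -> [-sqrt(2)/2, 0, sqrt(2)/2]
```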
-
– joriki Sep 21 '12 at 22:08
Everything up to "abs(r'(t)) = sec t" seems OK (if by that you mean $|r'(t)|=\sec t$). But I don't see how you arrived at the expression for $T(t)$ in the next line. Shouldn't you just divide $r'(t)$ by $|r'(t)|$ do get $T(t)$? And $|T(t)|$ should be $1$, not $\sqrt{1+\cos^2t}$, no? – joriki Sep 21 '12 at 22:15
Strings like "sin" are interpreted as concatenations of variable names and are therefore italicized. To get the right font and spacing for function names like $\sin$, you can use the predefined commands like \sin, or if you need a function name like $\operatorname{erf}$ for which there's no predefined command, you can use \operatorname{erf}. To format square roots, use e.g. `\sqrt{1+x^2}` to get $\sqrt{1+x^2}$. – joriki Sep 21 '12 at 22:25
To include text like "omitting this because I don't know how to format it" in a formula, you can use `\text{the text}`. – joriki Sep 21 '12 at 22:41
@joriki, Thank you. I fixed the omission and cleaned up the formatting enough that it now accurately represents my work in a more easily readable manner. I may make more formatting changes later when I have time. Can you tell yet whether I got this correct? If not, where any math error might be? Thank you. – CodeMed Sep 21 '12 at 22:52
## 1 Answer
I went through your work and I couldn't find any more errors.
-
Thank you very much for all of your insights. Thank you in particular for helping me start to learn the syntax for proper formatting. +1 – CodeMed Sep 22 '12 at 3:12
@CodeMed: You're welcome! – joriki Sep 22 '12 at 3:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9095978140830994, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/28329/diameter-of-a-graph-with-random-edge-weights/28351 | ## diameter of a graph with random edge weights
Given a weighted directed graph $G=(V,E, w)$, suppose we generate a new graph $G'=(V,E,w')$ with the same vertices and edges, but now letting the weight of edge $(i,j)$ be an exponential random variable with mean $w_{ij}$. My question is: what is the expected diameter of $G'$?
Why I'm interested in this: I was intrigued by the observation that the expected diameter of $G'$ can be quite different from the diameter of $G$. Indeed, consider the following example: define $G$ by taking the complete graph $K_{n+1}$, picking an arbitrary vertex $a$, and assigning weight $n$ to any edge incident on $a$, and weight $1$ to every other edge. Then, the diameter of $G$ is $n$. On the other hand, the expected diameter of $G'$ is O(1) since we can expect one of the edges incident on $a$ to have small weight.
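Here is a rough simulation of exactly this example (my own illustrative code; `networkx` and the attribute names `w`, `wp` are just convenient choices, not something from the question):

```python
import random
import networkx as nx

n = 30
G = nx.complete_graph(n + 1)                   # vertex n plays the role of a
for i, j in G.edges():
    w = n if n in (i, j) else 1
    G[i][j]['w'] = w                           # deterministic weights of the example
    G[i][j]['wp'] = random.expovariate(1 / w)  # exponential weight with mean w

def weighted_diameter(G, key):
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=key))
    return max(dist[u][v] for u in G for v in G)

print(weighted_diameter(G, 'w'))    # n: reaching vertex a always costs n
print(weighted_diameter(G, 'wp'))   # typically O(1): some edge into a is very cheap
```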
-
## 2 Answers
For the special case of the complete graph $K_n$ which you mention in your post, Svante Janson answered your question in this paper; the answer is that the weighted diameter grows like $3 \log n$ in probability.
There is also some very nice work by Bhamidi et al. on this question when the underlying graph is the giant component of the Erdős–Rényi random graph $G_{n,c/n}$ with $c>1$ fixed, although they only prove lower bounds. Amini et al. (link is to a PDF) have found the asymptotics of the weighted diameter for random graphs with a given degree sequence, under some conditions, for degree sequences which in particular result in graphs which are with high probability connected. Ding et al. (Theorems 3.7 and 3.8 of the linked paper) prove quite refined estimates, and tail bounds, for the weighted diameter of random $d$-regular graphs, for $d \geq 3$. (Since random regular graphs are a special case of random graphs with a given degree sequence, the results of Amini et al. and Ding et al. have some overlap).

There is also related work on the hopcount of randomly weighted graphs. The hopcount is what you get if you count the number of edges on the smallest-weight path. The primary interest of Bhamidi et al. in fact seems to be hopcounts rather than weighted path lengths.
-
Thanks for the references! – alex Sep 1 2010 at 17:54
The diameter is defined as $$d(G') = \sup_{x,y \in G} \inf_{\gamma} \sum_{e \in \gamma} w_{e},$$ where the infimum is over all paths $\gamma$ connecting $x$ to $y$, and the sum is over the edges $e$ which $\gamma$ crosses. The graph structure is very crucial to this problem. If the graph is very connected (as in your example), then the diameter is essentially the maximum value of the variables $\{w_i\}$. A large fluctuation of a single $w_i$ will cause the diameter to be very large.
On the other hand, if you're dealing with something more like a lattice (e.g., a large, finite subset of $\mathbb Z^2$), then the diameter is a combination of many independent random variables. Large fluctuations of any single variable will be muted and not affect the diameter much, and limit theorems will apply. A variant of Kingman's subadditive ergodic theorem will show that $$d(G') \sim d(G).$$
Questions like this are very much in the realm of first-passage percolation. Check out this survey by Howard, as well as this one by Blair-Stahn on the arXiv.
-
Thanks for the references; will check them out. In the very connected case, my intuition says that the opposite of your statement is true: even if one $w_{ij}$ is large, the existence of many paths between any two vertices will insure that the diameter is not strongly affected. – alex Jun 16 2010 at 5:25
Oops, you're absolutely right. It should be the opposite of what I said: the diameter should cluster on small values. Say you want to connect x to y. There's the one-edge path which connects them, and there are O(|G|) two-edge paths. There is a very high chance that one of them will have a very small weight. – Tom LaGatta Jun 16 2010 at 5:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9542945623397827, "perplexity_flag": "head"} |
http://www.sagemath.org/doc/bordeaux_2008/modular_symbols.html | # Modular Symbols¶
Modular symbols are a beautiful piece of mathematics that has been developed since the 1960s by Birch, Manin, Shokurov, Mazur, Merel, Cremona, and others. Not only are modular symbols a powerful computational tool as we will see, they have also been used to prove rationality results for special values of $$L$$-series, to construct $$p$$-adic $$L$$-series, and they play a key role in Merel's proof of the uniform boundedness theorem for torsion points on elliptic curves over number fields.
We view modular symbols as a remarkably flexible computational tool that provides a single uniform algorithm for computing $$M_k(N,\varepsilon)$$ for any $$N, \varepsilon$$ and $$k\geq 2$$. There are ways to use computation of those spaces to obtain explicit basis for spaces of weight $$1$$ and half-integral weight, so in a sense modular symbols yield everything. There are also generalizations of modular symbols to higher rank groups, though Sage currently has no code for modular symbols on higher rank groups.
## Definition¶
A modular symbol of weight $$k$$, and level $$N$$, with character $$\varepsilon$$ is a sum of terms $$X^i Y^{k-2-i} \{\alpha, \beta\}$$, where $$0\leq i \leq k-2$$ and $$\alpha, \beta \in \mathbb{P}^1(\QQ) = \QQ \cup \{\infty\}$$. Modular symbols satisfy the relations
$X^i Y^{k-2-i} \{\alpha, \beta\} + X^i Y^{k-2-i} \{\beta, \gamma\} + X^i Y^{k-2-i} \{\gamma, \alpha\} = 0$
$X^i Y^{k-2-i} \{\alpha, \beta\} = -X^i Y^{k-2-i} \{\beta, \alpha\},$
and for every $$\gamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\Gamma_0(N)$$, we have
$(dX - bY)^i (-cX + aY)^{k-2-i} \{\gamma(\alpha),\gamma(\beta)\} = \varepsilon(d) X^i Y^{k-2-i} \{\alpha, \beta\}.$
The modular symbols space $$\mathcal{M}_k(N,\varepsilon)$$ is the torsion free $$\QQ[\varepsilon]$$-module generated by all sums of modular symbols, modulo the relations listed above. Here $$\QQ[\varepsilon]$$ is the ring generated by the values of the character $$\varepsilon$$, so it is of the form $$\QQ[\zeta_m]$$ for some integer $$m$$.
The amazing theorem that makes modular symbols useful is that there is an explicit description of an action of a Hecke algebra $$\mathbb{T}$$ on $$\mathcal{M}_k(N,\varepsilon)$$, and there is an isomorphism
$\mathcal{M}_k(N,\varepsilon;\CC) \xrightarrow{\approx} M_k(N,\varepsilon) \oplus S_k(N,\varepsilon).$
This means that if modular symbols are computable (they are!), then they can be used to compute a lot about the $$\mathbb{T}$$-module $$M_k(N,\varepsilon)$$.
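For instance, the dimension count implied by this isomorphism can be checked directly in Sage (a small check added here; the value $6$ matches the six basis elements listed in the next section):

```
sage: M = ModularSymbols(11, 4); M.dimension()
6
sage: ModularForms(11, 4).dimension() + CuspForms(11, 4).dimension()
6
```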
## Manin Symbols¶
### Definition¶
Though $$\mathcal{M}_k(N,\varepsilon)$$ as described above is not explicitly generated by finitely many elements, it is finitely generated. Manin, Shokurov, and Merel give an explicit description of finitely many generators (Manin symbols) for this space, along with all explicit relations that these generators satisfy (see my book). In particular, if we let

$(i,c,d) = [X^i Y^{k-2-i}, (c,d)] = (dX - bY)^i (-cX + aY)^{k-2-i} \{\gamma(0),\gamma(\infty)\},$
where $$\gamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)$$, then the Manin symbols $$(i,c,d)$$ with $$0\leq i \leq k-2$$ and $$(c,d)\in\mathbb{P}^1(N)$$ generate $$\mathcal{M}_k(N,\varepsilon)$$.
### Computing in Sage¶
We compute a basis for the space of weight $$4$$ modular symbols for $$\Gamma_0(11)$$, then coerce in $$(2,0,1)$$ and $$(1,1,3)$$.
```sage: M = ModularSymbols(11,4)
sage: M.basis()
([X^2,(0,1)], [X^2,(1,6)], [X^2,(1,7)], [X^2,(1,8)],
[X^2,(1,9)], [X^2,(1,10)])
sage: M( (2,0,1) )
[X^2,(0,1)]
sage: M( (1,1,3) )
2/7*[X^2,(1,6)] + 1/14*[X^2,(1,7)] - 4/7*[X^2,(1,8)]
+ 3/14*[X^2,(1,10)]
```
We compute a modular symbols representation for the Manin symbol $$(2,1,6)$$, and verify this by converting back.
```sage: a = M.1; a
[X^2,(1,6)]
sage: a.modular_symbol_rep()
36*X^2*{-1/6, 0} + 12*X*Y*{-1/6, 0} + Y^2*{-1/6, 0}
sage: 36*M([2,-1/6,0]) + 12*M([1,-1/6,0]) + M([0,-1/6,0])
[X^2,(1,6)]
```
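The same space also carries the Hecke action mentioned above. The following is a small sketch of mine, not part of the original page; `hecke_matrix`, `charpoly`, `cuspidal_submodule` and `decomposition` are standard methods of modular symbols spaces in Sage, and no outputs are shown.
```
sage: M = ModularSymbols(11, 4)
sage: T2 = M.hecke_matrix(2)                  # matrix of the Hecke operator T_2 on M
sage: T2.charpoly().factor()                  # Hecke eigenvalue data at p = 2
sage: M.cuspidal_submodule().decomposition()  # simple pieces, corresponding to newforms
```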
http://math.stackexchange.com/questions/162451/cardinality-of-the-set-of-countable-partitions-of-mathbbr | # cardinality of the set of countable partitions of $\mathbb{R}$
What is the cardinality of the set $$A=\{ P| P\ \text{is a countable partition of the reals} \}$$ ? I have been searching on this for a while. I think the cardinality is $2^w$ where $w$ is the cardinality of $\mathbb{N}$. Clearly $|A|\geq |\mathbb{R}|=2^w$, but I cannot prove the other direction. I tried (but did not succeed) to construct an injection $A \rightarrow C$ for some set $C$ with cardinality equal to $2^w$.
-
I'd say its more likely to be $2^{\mathfrak c}$. – tomasz Jun 24 '12 at 16:05
I am inclined to agree with @tomasz but the 'countable' bit is making it difficult. Interesting question. – nullUser Jun 24 '12 at 16:06
Related is the 17-18 March 2011 sci.math thread The cardinality of the collection of all partitions of the natural numbers equals that of the real numbers. See my response and Fred Galvin's follow-up comments. – Dave L. Renfro Jun 25 '12 at 17:00
## 3 Answers
I think it's not very hard. A countable partition of reals can be seen as just a function from the reals into natural numbers, up to a permutation of the naturals. So we have $$\lvert \mathbf N^{\mathbf R}\rvert=\aleph_0^{2^{\aleph_0}}=2^{2^{\aleph_0}}$$ (because $\aleph_0\leq 2^{\aleph_0}$ we can do the last substitution)
On the other hand, there are only $\aleph_0^{\aleph_0}=\mathfrak c<2^{\mathfrak c}$ permutations of natural numbers, so there are $2^{2^{\aleph_0}}$ equivalence classes of functions from reals to naturals under the relation of equivalence up to a permutation of naturals (because each class has at most $\mathfrak c$ elements), and these classes correspond precisely to the countable partitions of reals.
-
Different functions to $\omega$ can give rise to the same partition. – Michael Greinecker Jun 24 '12 at 16:11
Yes, but there are at most $\mathfrak c$ different functions that give rise to each partition. – Henning Makholm Jun 24 '12 at 16:13
Right. I just noticed that. Correcting now. – tomasz Jun 24 '12 at 16:13
What is a countable partition? It means that we have some function from $\mathbb R$ into $\mathbb N$, and every part of the partition is a fiber of such a function. However, counting like this we count many partitions several times; for example, take a partition into two parts, then we can see it as a function whose range is $\{0,1\}$ or $\{2,3\}$ or any other pair of numbers.
However note that there are exactly $2^{\aleph_0}$ many ways to re-order the natural numbers therefore at most continuum many functions generate the same partition.
So we have $\aleph_0^{2^{\aleph_0}}=2^{2^{\aleph_0}}$ many functions and each is equivalent to at most $2^{\aleph_0}$ many of them. Therefore there are $2^{2^{\aleph_0}}$ many partitions.
Why can't there be another number? Well, we denote by $\frak p$ the cardinality of all countable partitions, then we have that $2^{2^{\aleph_0}}=2^{\aleph_0}\cdot\frak p$, by the general argument appearing in Does $k+\aleph_0=\mathfrak{c}$ imply $k=\mathfrak{c}$ without the Axiom of Choice? we have that $2^{2^{\aleph_0}}=\frak p$.
-
Hint: how many subsets does $\mathbb R$ have? Can't (almost) all of them be part of a partition?
-
Since $\mathbb{R}$ is separable, for every closed subset $C$ it is possible to construct a sequence whose set of limit points is exactly $C$. How many closed subsets of $\mathbb{R}$ are there? Is this enough? – nullUser Jun 24 '12 at 16:11
@nullUser: are you saying the elements of the partition need to be open sets or something like that? I was just taking $\mathbb R$ to be a collection of size $\mathfrak c$ and partitioning it arbitrarily. – Ross Millikan Jun 24 '12 at 16:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507430195808411, "perplexity_flag": "head"} |
http://en.wikipedia.org/wiki/Hamming_distance | # Hamming distance
Figure: 3-bit binary cube for finding Hamming distance. Two example distances: 100->011 has distance 3 (red path); 010->111 has distance 2 (blue path).
Figure: 4-bit binary tesseract for finding Hamming distance. Two example distances: 0100->1001 has distance 3 (red path); 0110->1110 has distance 1 (blue path).
In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. Put another way, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other.
## Examples
The Hamming distance between:
• "toned" and "roses" is 3.
• 1011101 and 1001001 is 2.
• 2173896 and 2233796 is 3.
## Special properties
For a fixed length n, the Hamming distance is a metric on the vector space of the words of length n, as it obviously fulfills the conditions of non-negativity, identity of indiscernibles and symmetry, and it can be shown easily by complete induction that it satisfies the triangle inequality as well. The Hamming distance between two words a and b can also be seen as the Hamming weight of a−b for an appropriate choice of the − operator.
For binary strings a and b the Hamming distance is equal to the number of ones (population count) in a XOR b. The metric space of length-n binary strings, with the Hamming distance, is known as the Hamming cube; it is equivalent as a metric space to the set of distances between vertices in a hypercube graph. One can also view a binary string of length n as a vector in $R^n$ by treating each symbol in the string as a real coordinate; with this embedding, the strings form the vertices of an n-dimensional hypercube, and the Hamming distance of the strings is equivalent to the Manhattan distance between the vertices.
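The XOR/population-count description can be checked directly; this is an illustrative snippet of mine, not part of the original article:
```
# Hamming distance of two integers = number of set bits in their XOR
a, b = 0b1011101, 0b1001001
assert bin(a ^ b).count("1") == 2   # matches the example 1011101 vs 1001001 above
```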
## History and applications
The Hamming distance is named after Richard Hamming, who introduced it in his fundamental paper on Hamming codes Error detecting and error correcting codes in 1950.[1] It is used in telecommunication to count the number of flipped bits in a fixed-length binary word as an estimate of error, and therefore is sometimes called the signal distance. Hamming weight analysis of bits is used in several disciplines including information theory, coding theory, and cryptography. However, for comparing strings of different lengths, or strings where not just substitutions but also insertions or deletions have to be expected, a more sophisticated metric like the Levenshtein distance is more appropriate. For q-ary strings over an alphabet of size q ≥ 2 the Hamming distance is applied in case of orthogonal modulation, while the Lee distance is used for phase modulation. If q = 2 or q = 3 both distances coincide.
The Hamming distance is also used in systematics as a measure of genetic distance.[2]
On a grid (such as a chessboard), the points at a Lee distance of 1 constitute the von Neumann neighborhood of that point.
## Algorithm example
The Python function `hamming_distance()` computes the Hamming distance between two strings (or other iterable objects) of equal length, by creating a sequence of zero and one values indicating mismatches and matches between corresponding positions in the two inputs, and then summing the sequence.
```def hamming_distance(s1, s2):
assert len(s1) == len(s2)
return sum(ch1 != ch2 for ch1, ch2 in zip(s1, s2))
```
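Applied to the examples above, the function behaves as expected (interactive-session sketch, assuming the definition just given):
```
>>> hamming_distance("toned", "roses")
3
>>> hamming_distance("2173896", "2233796")
3
```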
The following C function will compute the Hamming distance of two integers (considered as binary values, that is, as sequences of bits). The running time of this procedure is proportional to the Hamming distance rather than to the number of bits in the inputs. It computes the bitwise exclusive or of the two inputs, and then finds the Hamming weight of the result (the number of nonzero bits) using an algorithm of Wegner (1960) that repeatedly finds and clears the lowest-order nonzero bit.
```unsigned hamdist(unsigned x, unsigned y)
{
unsigned dist = 0, val = x ^ y;
// Count the number of set bits
while(val)
{
++dist;
val &= val - 1;
}
return dist;
}
```
## References
• This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C".
• Hamming, Richard W. (1950), "Error detecting and error correcting codes", Bell System Technical Journal 29 (2): 147–160, MR 0035935.
• Pilcher, C. D.; Wong, J. K.; Pillai, S. K. (March 2008), "Inferring HIV transmission dynamics from phylogenetic sequence relationships", PLoS Med. 5 (3): e69, doi:10.1371/journal.pmed.0050069, PMC 2267810, PMID 18351799.
• Wegner, Peter (1960), "A technique for counting ones in a binary computer", Communications of the ACM 3 (5): 322, doi:10.1145/367236.367286.
http://mathoverflow.net/questions/51343?sort=oldest | ## generalization of (Rogers) dilogarithm
Let $C$ and $S$ be abbreviations for $\cosh$ and $\sinh$, and consider the following function:
$$f(x,y) = \int_{-y\le r+l \le y} \frac{ (C(x)S(l)C(r) - C(l)S(r))(C(x)C(l)S(r)-S(l)C(r)) } {(C(x)C(l)C(r) - S(l)S(r))^2-1} dl dr$$
If $y=\infty$, this specializes (I think!) to $4\mathcal{L}(1/C^2(x/2))$ where $\mathcal{L}$ is the Rogers dilogarithm (maybe some constants and factors are missing). The question is whether the function $f$ is studied anywhere. References would be appreciated.
Note: This function arises as the volume of a certain region in the unit tangent bundle of a hyperbolic surface; therefore I am not looking for an answer which just translates it back into its geometric origin.
-
## 2 Answers
There are very similar integrals (coming from the same source, shockingly) in arXiv:1002.1905 (Bridgeman/Kahn), where they seem to be evaluated in closed form.
-
I know that paper, and I don't know precisely which integrals you mean. Can you be more specific? – Danny Calegari Jan 7 2011 at 4:02
@DC: The ones on page 4. – Igor Rivin Jan 7 2011 at 15:53
This integral is expressible in terms of elementary functions. At least the corresponding indefinite integral. I won't say anything about convergence.
1. Use hyperbolic double angle formulas to express all hyperbolic functions of $r$ and $l$ in terms of $\cosh(r\pm l)$ and $\sinh(r\pm l)$.
2. Use the new integration variables $Z_\pm$, where $\sinh(r\pm l)=(Z_\pm-Z_\pm^{-1})/2$ and $\cosh(r\pm l)=(Z_\pm+Z_\pm^{-1})/2$.
3. The resulting integral is now rational in $Z_\pm$ with bounds $0\le Z_-\le\infty$ and $Z_+(-y)\le Z_+\le Z_+(y)$.
4. The pole structure becomes especially nice. Decompose into partial fractions wrt $Z_+$ and integrate. Decompose into partial fractions wrt $Z_-$ and integrate again.
Doing this in a Maxima session gives (up to factors of 2): \begin{gather} -\frac{(C^2(x)+2C(x)+1)\,\log Z_+\,\log Z_-}{4} \\ -\frac{(4C^2(x)-8C(x)+4)\,Z_+^2\log Z_+ + (-C^2(x)+2C(x)-1)\,Z_+^4 + C^2(x)-2C(x)+1}{32\,Z_+^2\,(Z_-+1)} \\ +\frac{(4C^2(x)-8C(x)+4)\,Z_+^2\log Z_+ + (-C^2(x)+2C(x)-1)\,Z_+^4 + C^2(x)-2C(x)+1}{32\,Z_+^2\,(Z_--1)} \end{gather}
Hopefully, I hadn't made any typos on the way to the above answer. In any case, the procedure to evaluate this integral correctly should be about the same.
-
Oddly, I don't see how this reduces to a dilogarithm, which is non-elementary. And then there are all these singularities on the boundaries of the integration region. It's possible that I didn't enter some of the expressions correctly. However, the method of getting the integrand to rational form should work. – Igor Khavkine Mar 4 2011 at 9:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.877357542514801, "perplexity_flag": "middle"} |
http://www.sagemath.org/doc/reference/arithgroup/sage/modular/arithgroup/congroup_generic.html | # Congruence arithmetic subgroups of $${\rm SL}_2(\ZZ)$$¶
Sage can compute extensively with the standard congruence subgroups $$\Gamma_0(N)$$, $$\Gamma_1(N)$$, and $$\Gamma_H(N)$$.
AUTHORS:
• William Stein
• David Loeffler (2009, 10) – modifications to work with more general arithmetic subgroups
class sage.modular.arithgroup.congroup_generic.CongruenceSubgroup(*args, **kwds)¶
Bases: sage.modular.arithgroup.congroup_generic.CongruenceSubgroupFromGroup
One of the “standard” congruence subgroups $$\Gamma_0(N)$$, $$\Gamma_1(N)$$, $$\Gamma(N)$$, or $$\Gamma_H(N)$$ (for some $$H$$).
This class is not intended to be instantiated directly. Derived subclasses must override __call__, _repr_, and image_mod_n.
image_mod_n()¶
Raise an error: all derived subclasses should override this function.
EXAMPLE:
```
sage: sage.modular.arithgroup.congroup_generic.CongruenceSubgroup(5).image_mod_n()
Traceback (most recent call last):
...
NotImplementedError
```
modular_abelian_variety()¶
Return the modular abelian variety corresponding to the congruence subgroup self.
EXAMPLES:
```sage: Gamma0(11).modular_abelian_variety()
Abelian variety J0(11) of dimension 1
sage: Gamma1(11).modular_abelian_variety()
Abelian variety J1(11) of dimension 1
sage: GammaH(11,[3]).modular_abelian_variety()
Abelian variety JH(11,[3]) of dimension 1
```
modular_symbols(sign=0, weight=2, base_ring=Rational Field)¶
Return the space of modular symbols of the specified weight and sign on the congruence subgroup self.
EXAMPLES:
```sage: G = Gamma0(23)
sage: G.modular_symbols()
Modular Symbols space of dimension 5 for Gamma_0(23) of weight 2 with sign 0 over Rational Field
sage: G.modular_symbols(weight=4)
Modular Symbols space of dimension 12 for Gamma_0(23) of weight 4 with sign 0 over Rational Field
sage: G.modular_symbols(base_ring=GF(7))
Modular Symbols space of dimension 5 for Gamma_0(23) of weight 2 with sign 0 over Finite Field of size 7
sage: G.modular_symbols(sign=1)
Modular Symbols space of dimension 3 for Gamma_0(23) of weight 2 with sign 1 over Rational Field
```
class sage.modular.arithgroup.congroup_generic.CongruenceSubgroupBase(level)¶
Bases: sage.modular.arithgroup.arithgroup_generic.ArithmeticSubgroup
Create a congruence subgroup with given level.
EXAMPLES:
```sage: Gamma0(500)
Congruence Subgroup Gamma0(500)
```
is_congruence()¶
Return True, since this is a congruence subgroup.
EXAMPLE:
```sage: Gamma0(7).is_congruence()
True
```
level()¶
Return the level of this congruence subgroup.
EXAMPLES:
```sage: SL2Z.level()
1
sage: Gamma0(20).level()
20
sage: Gamma1(11).level()
11
sage: GammaH(14, [5]).level()
14
```
class sage.modular.arithgroup.congroup_generic.CongruenceSubgroupFromGroup(G)¶
Bases: sage.modular.arithgroup.congroup_generic.CongruenceSubgroupBase
A congruence subgroup, defined by the data of an integer $$N$$ and a subgroup $$G$$ of the finite group $$SL(2, \ZZ / N\ZZ)$$; the congruence subgroup consists of all the matrices in $$SL(2, \ZZ)$$ whose reduction modulo $$N$$ lies in $$G$$.
This class should not be instantiated directly, but created using the factory function CongruenceSubgroup_constructor(), which accepts much more flexible input, and checks the input to make sure it is valid.
TESTS:
```sage: G = CongruenceSubgroup(5, [[0,-1,1,0]]); G
Congruence subgroup of SL(2,Z) of level 5, preimage of:
Matrix group over Ring of integers modulo 5 with 1 generators:
[[[0, 4], [1, 0]]]
sage: G == loads(dumps(G))
True
```
image_mod_n()¶
Return the subgroup of $$SL(2, \ZZ / N\ZZ)$$ of which this is the preimage, where $$N$$ is the level of self.
EXAMPLE:
```sage: G = MatrixGroup([matrix(Zmod(2), 2, [1,1,1,0])])
sage: H = sage.modular.arithgroup.congroup_generic.CongruenceSubgroupFromGroup(G); H.image_mod_n()
Matrix group over Ring of integers modulo 2 with 1 generators:
[[[1, 1], [1, 0]]]
sage: H.image_mod_n() == G
True
```
index()¶
Return the index of self in the full modular group. This is equal to the index in $$SL(2, \ZZ / N\ZZ)$$ of the image of this group modulo $$\Gamma(N)$$.
EXAMPLE:
```sage: sage.modular.arithgroup.congroup_generic.CongruenceSubgroupFromGroup(MatrixGroup([matrix(Zmod(2), 2, [1,1,1,0])])).index()
2
```
to_even_subgroup()¶
Return the smallest even subgroup of $$SL(2, \ZZ)$$ containing self.
EXAMPLE:
```sage: G = Gamma(3)
sage: G.to_even_subgroup()
Congruence subgroup of SL(2,Z) of level 3, preimage of:
Matrix group over Ring of integers modulo 3 with 1 generators:
[[[2, 0], [0, 2]]]
```
sage.modular.arithgroup.congroup_generic.CongruenceSubgroup_constructor(*args)¶
Attempt to create a congruence subgroup from the given data.
The allowed inputs are as follows:
• A MatrixGroup object. This must be a group of matrices over $$\ZZ / N\ZZ$$ for some $$N$$, with determinant 1, in which case the function will return the group of matrices in $$SL(2, \ZZ)$$ whose reduction mod $$N$$ is in the given group.
• A list of matrices over $$\ZZ / N\ZZ$$ for some $$N$$. The function will then compute the subgroup of $$SL(2, \ZZ)$$ generated by these matrices, and proceed as above.
• An integer $$N$$ and a list of matrices (over any ring coercible to $$\ZZ / N\ZZ$$, e.g. over $$\ZZ$$). The matrices will then be coerced to $$\ZZ / N\ZZ$$.
The function checks that the input G is valid. It then tests to see if $$G$$ is the preimage mod $$N$$ of some group of matrices modulo a proper divisor $$M$$ of $$N$$, in which case it replaces $$G$$ with this group before continuing.
EXAMPLES:
```sage: from sage.modular.arithgroup.congroup_generic import CongruenceSubgroup_constructor as CS
sage: CS(2, [[1,1,0,1]])
Congruence subgroup of SL(2,Z) of level 2, preimage of:
Matrix group over Ring of integers modulo 2 with 1 generators:
[[[1, 1], [0, 1]]]
sage: CS([matrix(Zmod(2), 2, [1,1,0,1])])
Congruence subgroup of SL(2,Z) of level 2, preimage of:
Matrix group over Ring of integers modulo 2 with 1 generators:
[[[1, 1], [0, 1]]]
sage: CS(MatrixGroup([matrix(Zmod(2), 2, [1,1,0,1])]))
Congruence subgroup of SL(2,Z) of level 2, preimage of:
Matrix group over Ring of integers modulo 2 with 1 generators:
[[[1, 1], [0, 1]]]
sage: CS(SL(2, 2))
Modular Group SL(2,Z)
```
Some invalid inputs:
```sage: CS(SU(2, 7))
Traceback (most recent call last):
...
TypeError: Ring of definition must be Z / NZ for some N
```
sage.modular.arithgroup.congroup_generic.is_CongruenceSubgroup(x)¶
Return True if x is of type CongruenceSubgroup.
Note that this may be False even if $$x$$ really is a congruence subgroup – it tests whether $$x$$ is “obviously” congruence, i.e. whether it has a congruence subgroup datatype. To test whether or not an arithmetic subgroup of $$SL(2, \ZZ)$$ is congruence, use the is_congruence() method instead.
EXAMPLES:
```sage: from sage.modular.arithgroup.congroup_generic import is_CongruenceSubgroup
sage: is_CongruenceSubgroup(SL2Z)
True
sage: is_CongruenceSubgroup(Gamma0(13))
True
sage: is_CongruenceSubgroup(Gamma1(6))
True
sage: is_CongruenceSubgroup(GammaH(11, [3]))
True
sage: G = ArithmeticSubgroup_Permutation(L = "(1, 2)", R = "(1, 2)"); is_CongruenceSubgroup(G)
False
sage: G.is_congruence()
True
sage: is_CongruenceSubgroup(SymmetricGroup(3))
False
```
http://unapologetic.wordpress.com/2010/06/14/the-integral-mean-value-theorem-2/?like=1&source=post_flair&_wpnonce=c589d5bcbb | # The Unapologetic Mathematician
## The Integral Mean Value Theorem
We have an analogue of the integral mean value theorem that holds not just for single integrals, not just for multiple integrals, but for integrals over any measure space.
If $f$ is an essentially bounded measurable function with $\alpha\leq f\leq\beta$ a.e. for some real numbers $\alpha$ and $\beta$, and if $g$ is any integrable function, then there is some real number $\gamma$ with $\alpha\leq\gamma\leq\beta$ so that
$\displaystyle\int f\lvert g\rvert\,d\mu=\gamma\int\lvert g\rvert\,d\mu$
Actually, this is a statement about finite measure spaces; the function $g$ is here so that the indefinite integral of $\lvert g\rvert$ will give us a finite measure on the measurable space $X$ to replace the (possibly non-finite) measure $\mu$. This explains the $g$ in the multivariable case, which wasn’t necessary when we were just integrating over a finite interval in the one-variable case.
Okay, we know that $\alpha\leq f\leq\beta$ a.e., and so $\alpha\lvert g\rvert\leq f\lvert g\rvert\leq\beta\lvert g\rvert$ a.e. as well. This tells us that $f\lvert g\rvert$ is integrable. And thus we conclude
$\displaystyle\alpha\int\lvert g\vert\,d\mu\leq\int f\lvert g\rvert\,d\mu\leq\beta\int\lvert g\rvert\,d\mu$
Now either the integral of $\lvert g\rvert$ is zero or it’s not. If it’s zero, then $g$ is zero a.e., and so is $f\lvert g\rvert$, and our assertion follows for any $\gamma$ we like. On the other hand, if it’s not we can divide through to find
$\displaystyle\alpha\leq\frac{\int f\lvert g\rvert\,d\mu}{\int\lvert g\rvert\,d\mu}\leq\beta$
This term in the middle is our $\gamma$.
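As a quick numerical sanity check (my own illustration, not part of the proof), take $X=[0,1]$ with Lebesgue measure, a bounded $f$, and an integrable $g$, and verify that the ratio of the two integrals lies between the bounds on $f$:
```
import numpy as np

x = np.linspace(0, 1, 200001)
f = np.sin(2 * np.pi * x)            # bounded: -1 <= f <= 1 everywhere
g = x - 0.5                          # some integrable g
# Riemann-sum approximations of the two integrals over [0, 1]
gamma = np.mean(f * np.abs(g)) / np.mean(np.abs(g))
assert f.min() <= gamma <= f.max()   # gamma lies between alpha and beta
```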
Posted by John Armstrong | Analysis, Measure Theory
## 5 Comments »
1. unapologetic.wordpress.com’s done it once again. Amazing article.
Comment by | June 15, 2010 | Reply
2. Hi,
That’s great: I was exactly looking for that result somewhere (googling ‘mean value theorem measure’, you came up second – I am rarely so lucky…). Do you know a reference (text book/article) that I could refer to to mention this result?
Thanks a lot,
Armando
Comment by Armando Sano | September 8, 2010 | Reply
• I believe it shows up in Halmos’ Measure Theory.
Comment by | September 8, 2010 | Reply
3. Perfect, there it is, thanks again
Armando
Comment by Armando Sano | September 8, 2010 | Reply
• No problem. Remember to tell your friends where to look
Comment by | September 8, 2010 | Reply
http://mathoverflow.net/questions/38262?sort=oldest | ## Computable rings similar to Z
(This is related to my question at http://mathoverflow.net/questions/38160/computable-nonstandard-models-for-weak-systems-of-arithemtic )
Is there a nontrivial computable discrete ordered ring with Euclidean division that is not isomorphic to Z?
If so, what other first-order properties could it share with Z?
.
Possible properties include:
all numbers with rational square roots are perfect squares
Lagrange 4-square theorem
prime elements are unbounded
.
Could it satisfy all Pi_1 properties satisfied by Z?
(bounded quantifiers referring to the absolute value being bounded)
-
## 1 Answer
Berarducci and Otero in "A Recursive Nonstandard Model of Normal Open Induction" (Journal of Symbolic Logic v61, 1996) give a discretely ordered ring $R$ with recursive operations having the following properties:
1. $R$ is integrally closed in its quotient field. (So elements with "rational" square roots are perfect squares.)
2. The prime elements of $R$ are cofinal
3. $R$ satisfies the induction axioms for quantifier-free formulas
In an earlier paper (Journal of Symbolic Logic, vol 55, 1990) Otero proves that every model of Open Induction extends to one in which Lagrange's Four-Square Theorem holds. Whether this can be done effectively I don't know.
As for the general question "What properties can a recursively presented discretely ordered ring share with $\mathbb{Z}$": Let's say that a discretely ordered ring $R$ is "diophantine correct" if it satisfies all universal sentences that hold in $\mathbb{Z}$. Assuming that the language of rings has signature `$(+, -, \times , \le,0 ,1)$`, diophantine correctness amounts to the requirement that any system of polynomial equations and inequalities that is solvable in $R$ is solvable in the standard integers. Incidentally, models of open induction satisfy a weaker property: any system of equations solvable in some (at least one) model of open induction has a p-adic solution.
The question whether a nonstandard diophantine correct model of open induction can be effectively constructed was raised by Adamowicz and Morales-Luna in "A Recursive Model for Arithmetic with Weak Induction" (Journal of Symbolic Logic v50, 1985). I believe that this question is still open. I also believe that the models constructed in the Otero-Berarducci paper are in fact diophantine correct, but the proof of this seems to bump up against open problems in number theory. This is all discussed in an article on diophantine correct open induction by Sidney Raffer in "Set theory, Arithmetic, Philosophy: Essays in Memory of Stanley Tennenbaum" (edited by J. Kennedy and R. Kossak), Cambridge University Press. (To appear.)
This is a response to a question from the comments: It is too long to fit there.
Proof of the Euclidean Division Theorem from the axioms of Open Induction: The problem is to show that if $A$ is a model of open induction and if $x,y$ are elements of $A$ with `$y>0$` and `$x \ge 0 $` then there are unique elements $r, q$ of $A$ such that $x=yq+r$ and `$0 \le r < y$`. (The statement actually holds for all $x$ and the proof for `$x < 0 $` is similar.)
1. First show that for every $x\in A$ with `$x\ge 0$`, there is some $q\in A$ such that `$yq\le x < y(q+1)$`. (Suppose, by way of by contradiction, that `$x \ge 0$` and there is no such $q$. Let $S$ be the subset of $A$ defined by the quantifier-free formula `$\sigma(q): q \ge 0 \wedge yq\le x$`. Show that $S$ contains 0 and is closed under successor. By induction $S$ contains every positive element of $A$. But this is impossible because $x+1$ cannot be in $S$.)
2. Next, given `$y > 0$` and `$x \ge 0 $` choose $q$ (as in Part 1) such that `$yq \le x < y(q+1)$`. Put $r=x-yq$. Using the fact that $A$ is discretely ordered, it follows easily that $r$ and $q$ are the unique elements of $A$ satisfying $x=yq+r$ and `$0 \le r < y$`.
-
Do you know if the ring that article gives has Euclidean division? – Ricky Demer Sep 10 2010 at 4:22
Every model of Open Induction has Euclidean Division. – SJR Sep 10 2010 at 4:23
Do you know where I could find a proof of that? – Ricky Demer Sep 10 2010 at 4:37
@Ricky - I don't have access to anything at the moment, but I've added a brief proof-sketch of the Euclidean division theorem to the end of my answer. The proof is exactly what you would think of if you set out to prove the theorem by induction over the ordinary integers. – SJR Sep 10 2010 at 6:24
Nice. Although, if I set out to prove the theorem by induction over the ordinary integers, I would do the induction on x in the statement of Euclidean division, rather than q in the formula you used. – Ricky Demer Sep 10 2010 at 6:52
http://physics.stackexchange.com/questions/3096/what-is-the-relation-between-renormalization-in-physics-and-divergent-series-in | What is the relation between renormalization in physics and divergent series in mathematics?
The theory of Divergent Series was developed by Hardy and other mathematicians in the first half of the past century, giving rigorous methods of summation to get unique and consistent results from divergent series. Or so.
In physics, it is said that the perturbative expansion for the calculation of QFT scattering amplitudes is a divergent series and that its summation is solved via the renormalisation group.
Is there some explicit connection between the mathematical theory and the physics formalism?
Perhaps the question can have different answers for two different layers: the "renormalized series", which can be still divergent, and the structure of counter-terms doing the summation at a given order. If so, please make clear what layer are you addressing in the answer. Thanks!
-
What the renormalization group resums, in most applications, are only those terms in the perturbation series that involve large logarithms. This is not a solution to the problem Dyson pointed out that implies that the series as a whole is only asymptotic, not convergent. Physical attempts to construct means of summing asymptotic series are a different story from the renormalization group, and one I can't tell you much about. (There's an old argument by 't Hooft that perturbation series in QCD are not Borel-summable, for instance, which might be the first thing one would have attempted.) – Matt Reece Jan 16 '11 at 22:00
Yep, the upper layer, the whole series, is said to be asymptotic, I forgot to mention Dyson. Modernly, I do not know what is really proved, and if it is proved for the bare or the renormalized series, so please consider it as part of the question. As for the contributions for a fixed power in the expansion, it is not even argued to be related to an asymptotic series, is it? Still, it has structure (Connes Kreimer etc)... – arivero Jan 16 '11 at 22:32
Lubosh wrote: "No, Vladimir, you can't throw the quantum corrections altogether..." But I do not propose to discard all quantum corrections altogether! My question is: Can we consider renormalizations as discarding unnecessary corrections to the initially physical fundamental constants? In other words, doesn't discarding give the same result? Isn't it the same as adjusting (fitting) the constants? There is no linearization in it. – Vladimir Kalitvianski Jan 17 '11 at 17:30
@Vladimir: Converted your question to a comment; hope you don't mind. – Noldorin Jan 17 '11 at 17:31
this topic has always given me headaches to understand, and i am glad someone could articulate it into a question – lurscher Jul 10 '11 at 18:31
3 Answers
You are conflating three conceptually different categories of "regularizations" of seemingly divergent series (and integrals).
The type of resummations that Hardy would talk about are similar to the zeta-function regularization - the example that is most familiar to the physicists. For example, $$S=\sum_{n=1}^\infty n= -\frac{1}{12}$$ is the most famous sum. Note that this result is unique; it is a well-defined number. In particular, that allows one to calculate the critical dimension of bosonic string theory from $(D-2)S/2+1=0$ and the result is $D=26$. Fundamentally speaking, there is no real divergence in the sum. The "divergent pieces" may be subtracted "completely".
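(A small aside of mine, not from the original answer: the value $-1/12$ is what the analytic continuation of the Riemann zeta function assigns to $\zeta(-1)$, and one can evaluate it directly, e.g. with mpmath.)
```
from mpmath import zeta

print(zeta(-1))   # -0.0833... = -1/12, the regularized value of 1 + 2 + 3 + ...
print(zeta(0))    # -0.5, the regularized value of 1 + 1 + 1 + ...
```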
However, in the usual cases of renormalization - of a loop diagram - in quantum field theory, there are divergences. Renormalization removes the "infinite part" of these terms. A finite term is left but the magnitude of the term is not uniquely determined, like it was in the case of the sum of positive integers. Instead, every type of a divergence in a loop diagram produces one parameter - analogous to the coupling constant - that has to be adjusted. Because the finite results can be "anything", this is clearly something else than the zeta-regularization and, more generally, Hardy's procedures whose very goal was to produce unique, well-defined results for seemingly divergent expressions. Infinitesimally speaking, the Renormalization Group only mixes the lower-order contributions (by the number of loops) into a higher-order contribution.
So these are two different things that one should distinguish.
There is another category of problems that is different from both categories above: the summation of the perturbative expansions to all orders. It can be demonstrated that in almost all fields theories - and perturbative string theories as well - the perturbative expansions diverge. For a small coupling, one can sum them up to the smallest term, before the factorial-like coefficient begins to increase the terms again, despite the $g^{2L}$ suppression. The smallest term is of the same order as the leading non-perturbative contributions.
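(Another aside of mine, not from the original answer: the "sum up to the smallest term" behaviour can be seen in a standard toy model unrelated to any particular QFT. The function $S(g)=\int_0^\infty e^{-t}/(1+gt)\,dt$ has the divergent asymptotic expansion $\sum_n (-1)^n n!\, g^n$; the error of the partial sums shrinks until $n\approx 1/g$ and then grows, and the best achievable error is of order $e^{-1/g}$.)
```
# Toy model of a divergent asymptotic series and its optimal truncation
# (illustration only; assumes scipy is available)
import math
from scipy.integrate import quad

g = 0.1
exact, _ = quad(lambda t: math.exp(-t) / (1 + g * t), 0, math.inf)

partial, errors = 0.0, []
for n in range(25):
    partial += (-1) ** n * math.factorial(n) * g ** n
    errors.append(abs(partial - exact))

n_best = min(range(25), key=lambda n: errors[n])
print(n_best, errors[n_best])   # minimum near n ~ 1/g = 10, error of order e^{-1/g}
```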
At the very end, if the theory can be non-perturbatively well-defined - and both QCD-like theories and string theory can, at least in principle - the full function as a function of the coupling constant $g$ exists. But it just can't be fully obtained from the perturbative expansion. The Renormalization Group won't really help you because it only mixes the perturbative terms of another order to a perturbative diagram you want to calculate. If you don't know the non-perturbative physics, the equations of the Renormalization Group won't fill the gap because they will keep you in the perturbative realm.
So I have sketched three different things: in the Hardy/zeta problems, the answer to the divergent series was unique; in the particular $L$-loop diagrams in QFT, it wasn't unique but the infinite part was subtracted and the finite part was obtained by a comparison with the experiments; and in the perturbative expansion resummed to all orders, the sum actually didn't converge and indeed, it didn't know about all the information about the full result for a finite $g$.
The last statement may have some subtleties; at least for some theories, the non-perturbative physics is fully determined by the perturbative physics. But I think it is not quite general and we have counterexamples - e.g. for AdS/CFT with orthogonal groups and different discrete values of $B$ etc. So it means that the perturbative expansion doesn't uniquely determine the theory non-perturbatively.
Because the three examples differ at the level of "what can be calculated" and "what cannot", they are different.
-
I was only expecting to conflat two concepts, the renormalization of a diagram and the sum to all orders, and in fact I suggested, in the question body, that people should indicate which of the concepts (I told layers) was being addressed in each answer. Regretly my mention of Hardy brought the third concept (the first in your answer) into play too, it seems. – arivero Jan 16 '11 at 22:46
Lubosh wrote: "Renormalization removes the "infinite part" of these terms. A finite term is left but the magnitude of the term is not uniquely determined... Instead, every type of a divergence in a loop diagram produces one parameter - analogous to the coupling constant - that has to be adjusted." My question to Lubosh is: Can we consider renormalizations as discarding unnecessary corrections to the initially physical fundamental constants? In other words, doesn't discarding give the same result? – Vladimir Kalitvianski Jan 17 '11 at 4:22
No, Vladimir, you can't throw the quantum corrections altogether because the resulting theory wouldn't be unitary: it wouldn't conserve probabilities and they would no longer sum to one. For example, if one computes the scattering S-matrix, it just ends up being the time-ordered exponential of the integral of the Hamiltonian (or Lagrangian) - an exponential is what solves similar equations - which is a nonlinear function of the Hamiltonian, allowing any number of vertices. $\exp(iH)$ for a Hermitean $H$ is unitary; a linearization of it wouldn't be unitary. – Luboš Motl Jan 17 '11 at 9:12
Lubosh wrote: "No, Vladimir, you can't throw the quantum corrections altogether..." But I do not propose to discard all quantum corrections altogether! My question is: Can we consider renormalizations as discarding unnecessary corrections to the initially physical fundamental constants? In other words, doesn't discarding give the same result? Isn't it the same as adjusting (fitting) the constants? There is no linearization in it. – Vladimir Kalitvianski Jan 19 '11 at 22:54
There are overlaps between QFT renormalization problems and mathematical approaches to divergent series summations (see for instance the section 'physics' in the Wikipedia article $1 + 2 + 3 + 4 + \dots$ ) However, most of the infinities encountered in modern physics are 'harder' than those that can be attacked by existing divergent series summation methods.
-
Here is a method to deal with divergent integrals (single and multiple) in quantum physics by using the method of divergent series and zeta regularization:
http://vixra.org/pdf/1009.0047v4.pdf
For multiple integrals we can apply the method on each variable, so we can get finite results using zeta regularization for multi-loop integrals. The logarithmic divergence is handled with the functional determinant, i.e. the infinite product over all the integers (1+x)(2+x)..., using the Hurwitz zeta function. So we have managed to get only finite results in QFT using zeta regularization: not only for divergent sums but also for divergent integrals we can obtain finite corrections with this algorithm.
My paper on the subject: http://vixra.org/abs/1009.0047
This can be used to regularize divergent integrals in quantum mechanics, and multiple integrals by doing a term-by-term integration on each variable; you are free to check it out and give your opinion.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9339146018028259, "perplexity_flag": "middle"} |
http://unapologetic.wordpress.com/2011/02/05/intertwinors-from-generalized-tableaux/?like=1&source=post_flair&_wpnonce=a34a077b61 | # The Unapologetic Mathematician
## Intertwinors from Generalized Tableaux
Given any generalized Young tableau $T$ with shape $\lambda$ and content $\mu$, we can construct an intertwinor $\theta_T:M^\lambda\to M^\mu$. Actually, we’ll go from $M^\lambda$ to $\mathbb{C}[T_{\lambda\mu}]$, but since we’ve seen that this is isomorphic to $M^\mu$, it’s good enough. Anyway, first we have to define the row-equivalence class $\{T\}$ and column-equivalence class $[T]$. These are the same as for regular tableaux.
So, let $t$ be our reference tableau and let $\{t\}$ be the associated tabloid. We define
$\displaystyle\theta_T\left(\{t\}\right)=\sum\limits_{S\in\{T\}}S$
Continuing our example, with
$\displaystyle T=\begin{array}{ccc}2&1&1\\3&2&\end{array}$
we define
$\displaystyle\begin{aligned}\theta_T\left(\begin{array}{ccc}\cline{1-3}1&2&3\\\cline{1-3}4&5&\\\cline{1-2}\end{array}\right)&=\begin{array}{ccc}2&1&1\\3&2&\end{array}+\begin{array}{ccc}1&2&1\\3&2&\end{array}+\begin{array}{ccc}1&1&2\\3&2&\end{array}\\&+\begin{array}{ccc}2&1&1\\2&3&\end{array}+\begin{array}{ccc}1&2&1\\2&3&\end{array}+\begin{array}{ccc}1&1&2\\2&3&\end{array}\end{aligned}$
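(A small computational aside of mine, not from the original post: the six tableaux in this sum are exactly the row-equivalence class $\{T\}$, i.e. all rearrangements of the entries within each row, which a few lines of Python can enumerate.)
```
from itertools import permutations, product

def row_class(rows):
    # all distinct rearrangements within each row, rows independent of each other
    choices = [sorted(set(permutations(row))) for row in rows]
    return list(product(*choices))

for S in row_class([(2, 1, 1), (3, 2)]):
    print(S)   # 3 * 2 = 6 tableaux, matching the six terms in the sum above
```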
Now, we extend in the only way possible. The module $M^\lambda$ is cyclic, meaning that it can be generated by a single element and the action of $\mathbb{C}[S_n]$. In fact, any single tabloid will do as a generator, and in particular $\{t\}$ generates $M^\lambda$.
So, any other module element in $M^\lambda$ is of the form $\pi\{t\}$ for some $\pi\in\mathbb{C}[S_n]$. And so if $\theta_T$ is to be an intertwinor we must define
$\displaystyle\theta_T\left(\pi\{t\}\right)=\pi\theta_T(\{t\})=\sum\limits_{S\in\{T\}}\pi S$
Remember here that $\pi$ acts on generalized tableaux by shuffling the entries by place, not by value. Thus in our example we find
$\displaystyle\begin{aligned}\theta_T\left(\begin{array}{ccc}\cline{1-3}2&4&3\\\cline{1-3}1&5&\\\cline{1-2}\end{array}\right)&=\theta_T\left((1\,2\,4)\begin{array}{ccc}\cline{1-3}1&2&3\\\cline{1-3}4&5&\\\cline{1-2}\end{array}\right)\\&=(1\,2\,4)\theta_T\left(\begin{array}{ccc}\cline{1-3}1&2&3\\\cline{1-3}4&5&\\\cline{1-2}\end{array}\right)\\&=(1\,2\,4)\begin{array}{ccc}2&1&1\\3&2&\end{array}+(1\,2\,4)\begin{array}{ccc}1&2&1\\3&2&\end{array}+(1\,2\,4)\begin{array}{ccc}1&1&2\\3&2&\end{array}\\&+(1\,2\,4)\begin{array}{ccc}2&1&1\\2&3&\end{array}+(1\,2\,4)\begin{array}{ccc}1&2&1\\2&3&\end{array}+(1\,2\,4)\begin{array}{ccc}1&1&2\\2&3&\end{array}\\&=\begin{array}{ccc}3&2&1\\1&2&\end{array}+\begin{array}{ccc}3&1&1\\2&2&\end{array}+\begin{array}{ccc}3&1&2\\1&2&\end{array}\\&+\begin{array}{ccc}2&2&1\\1&3&\end{array}+\begin{array}{ccc}2&1&1\\2&3&\end{array}+\begin{array}{ccc}2&1&2\\1&3&\end{array}\end{aligned}$
Now it shouldn’t be a surprise that, since so much of our construction to this point has depended on an arbitrary choice of a reference tableau $t$, the linear combination of generalized tableaux on the right doesn’t quite seem like it comes from the tabloid on the left. But this is okay. Just relax and go with it.
## 5 Comments »
1. Hope you don’t mind me posting this Q here John. But I was reading a piece from Keith Devlin on MAA on how we learn math: http://www.maa.org/devlin/devlin_03_06.html. His basic conclusion is that when learning mathematics, understanding of the concepts can come only AFTER mastery of the procedural elements. In other words, much of the mathematics will be carried out with little (if any) understanding of the concepts and initially it will really only be a symbolic manipulation game. Once the rules are fully internalized, only some time after can any understanding eventually arise. As a mathematician, I’d love to hear your take on this. Do you think it’s possible to understand (even if vaguely) concepts first; or must procedural skill always come before understanding?
Comment by Jubayer K | February 5, 2011 | Reply
2. I think there’s a place for both.
Comment by | February 5, 2011 | Reply
3. [...] Generalized Tableaux We want to take our intertwinors and restrict them to the Specht modules. If the generalized tableau has shape and content , we [...]
Pingback by | February 8, 2011 | Reply
4. [...] Let’s start with the semistandard generalized tableaux and use them to construct intertwinors . I say that this collection is linearly [...]
Pingback by | February 9, 2011 | Reply
5. [...] that we’ve shown the intertwinors that come from semistandard tableaux are independent, we want to show that they span the space . [...]
Pingback by | February 11, 2011 | Reply
http://shreevatsa.wordpress.com/tag/primes/ | # The Lumber Room
"Consign them to dust and damp by way of preserving them"
## Testing irreducibility using prime numbers
Here’s a simple and nice test for irreducibility in ${\mathbb{Z}[x]}$ that N told me about a year ago. (I just noticed this lying around while cleaning; I don’t have a year’s buffer like Raymond Chen.) Apologies for the ugly formatting; you’ll have to trust that the result (Theorem 1, or Corollary 8) is more beautiful than it looks. :-)
Actually I’m not sure why I wrote this originally, given that it’s all already well-explained in the originals and even partially on Wikipedia. Perhaps my proofs are different or simpler or I was bored or something.
1. Irreducibility test
In its simplest form, the test can be stated as follows.
Theorem 1 Given a polynomial ${f(x) = a_nx^n + a_{n-1}x^{n-1} + \dots + a_o}$ with integer coefficients, let ${G = \max_{i}|\frac{a_i}{a_n}|}$. If there exists an integer ${m \ge G+2}$ such that ${f(m)}$ is prime, then ${f}$ is irreducible.
For example, with the polynomial $x^2 + 3x + 1$, we have $G = 3$, and $f(5)=f(G+2)=41$ is prime, which proves that it is irreducible. (We could also evaluate f at e.g. 7, 8, 9, or 10 to get the same conclusion.)
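Here is a quick computational sketch of the test (my own code, not from the post; it uses sympy’s `isprime`, and the function name and the search window are my choices):
```
import math
from sympy import isprime

def prime_value_witness(coeffs, tries=50):
    """coeffs = [a_n, ..., a_0]; return m >= G+2 with f(m) prime, else None."""
    a_n = coeffs[0]
    G = max(abs(a / a_n) for a in coeffs)
    start = math.ceil(G) + 2
    for m in range(start, start + tries):
        value = sum(c * m ** i for i, c in enumerate(reversed(coeffs)))
        if isprime(value):
            return m        # f(m) is prime with m >= G + 2, so f is irreducible
    return None             # inconclusive: no witness found in the window

print(prime_value_witness([1, 3, 1]))   # x^2 + 3x + 1: returns 5, since f(5) = 41 is prime
```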
Written by S
Sat, 2009-09-05 at 22:01:44 +05:30
Posted in mathematics
Tagged with algebra, irreducibility, irreducible polynomials, polynomials, primes | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9191060662269592, "perplexity_flag": "middle"} |
http://www.citizendia.org/Affine_transformation | In geometry, an affine transformation or affine map or an affinity (from the Latin, affinis, "connected with") between two vector spaces (strictly speaking, two affine spaces) consists of a linear transformation followed by a translation:
$x \mapsto A x+ b$
In the finite-dimensional case each affine transformation is given by a matrix A and a vector b, which can be written as the matrix A with an extra column b.
Physically, an affine transform is one that preserves
1. Collinearity between points, i.e., three points which lie on a line continue to be collinear after the transformation
2. Ratios of distances along a line, i.e., for distinct collinear points p1, p2, p3, the ratio | p2 − p1 | / | p3 − p2 | is preserved
In general, an affine transform is composed of zero or more linear transformations (rotation, scaling or shear) and translation (shift). Several linear transformations can be combined into a single matrix, thus the general formula given above is still applicable.
## Representation of affine transformations
Ordinary vector algebra uses matrix multiplication to represent linear transformations, and vector addition to represent translations. Using an augmented matrix, it is possible to represent both using matrix multiplication. The technique requires that all vectors are augmented with a "1" at the end, and all matrices are augmented with an extra row of zeros at the bottom, an extra column — the translation vector — to the right, and a "1" in the lower right corner. If A is a matrix,
$\begin{bmatrix} \vec{y} \\ 1 \end{bmatrix} = \begin{bmatrix} A & \vec{b} \ \\ 0, \ldots, 0 & 1 \end{bmatrix} \begin{bmatrix} \vec{x} \\ 1 \end{bmatrix}$
is equivalent to the following
$\vec{y} = A \vec{x} + \vec{b}.$
Ordinary matrix-vector multiplication always maps the origin to the origin. Since the set of vectors with 1 in the last entry does not contain the origin, translations within this subset using linear transformations are possible. This is the homogeneous coordinates system.
The advantage of using homogeneous coordinates is that one can combine any number of affine transformations into one by multiplying the matrices. This is used extensively by graphics software.
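For instance, a rotation followed by a translation of the plane can be packed into one 3x3 augmented matrix (an illustrative numpy sketch, not part of the original article):
```
import numpy as np

theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # linear (rotation) part
b = np.array([2.0, 1.0])                          # translation part

M = np.eye(3)
M[:2, :2] = A
M[:2, 2] = b                    # augmented matrix [[A, b], [0, 0, 1]]

x = np.array([1.0, 0.0])
y = M @ np.append(x, 1.0)       # same as A @ x + b, in homogeneous coordinates
assert np.allclose(y[:2], A @ x + b)

M_twice = M @ M                 # composing two affine maps = one matrix product
```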
## Properties of affine transformations
An affine transformation is invertible if and only if A is invertible. In the matrix representation, the inverse is:
$\begin{bmatrix} A^{-1} & -A^{-1}\vec{b} \ \\ 0,\ldots,0 & 1 \end{bmatrix}.$
The invertible affine transformations form the affine group, which has the general linear group of degree n as subgroup and is itself a subgroup of the general linear group of degree n + 1.
The similarity transformations form the subgroup where A is a scalar times an orthogonal matrix. If and only if the determinant of A is 1 or –1 then the transformation preserves area; these also form a subgroup. Combining both conditions we have the isometries, the subgroup of both where A is an orthogonal matrix.
Each of these groups has a subgroup of transformations which preserve orientation: those where the determinant of A is positive (see also orientation (geometry)). In the last case this is in 3D the group of rigid body motions (proper rotations and pure translations).
For any matrix A the following propositions are equivalent:
• A – I is invertible
• A does not have an eigenvalue equal to 1
• for all b the transformation has exactly one fixed point
• there is a b for which the transformation has exactly one fixed point
• affine transformations with matrix A can be written as a linear transformation with some point as origin
If there is a fixed point we can take that as the origin, and the affine transformation reduces to a linear transformation. This may make it easier to classify and understand the transformation. For example, describing a transformation as a rotation by a certain angle about a certain axis gives a better idea of its overall behavior than describing it as a combination of a translation and a rotation. However, this depends on application and context. Describing such a transformation for an object tends to make more sense in terms of rotation about an axis through the center of that object, combined with a translation, rather than by just a rotation with respect to some distant point. For example "move 200 m north and rotate 90° anti-clockwise", rather than the equivalent "with respect to the point 141 m to the northwest, rotate 90° anti-clockwise".
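When A − I is invertible, the unique fixed point can be written down explicitly (a detail added here for convenience): solving $A\vec{p} + \vec{b} = \vec{p}$ gives
$\vec{p} = (I - A)^{-1}\vec{b},$
and translating the origin to this point p turns the affine transformation into the linear transformation A alone.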
Affine transformations in 2D without fixed point (so where A has eigenvalue 1) are:
• pure translations
• scaling in a given direction, with respect to a line in another direction (not necessarily perpendicular), combined with translation that is not purely in the direction of scaling; the scale factor is the other eigenvalue; taking "scaling" in a generalized sense it includes the cases that the scale factor is zero (projection) and negative; the latter includes reflection, and combined with translation it includes glide reflection.
• shear combined with translation that is not purely in the direction of the shear (there is no other eigenvalue than 1; it has algebraic multiplicity 2, but geometric multiplicity 1)
## Affine transformations and linear transformations
In a geometric setting, affine transformations are precisely the functions that map straight lines to straight lines.
A linear transformation is a function that preserves all linear combinations; an affine transformation is a function that preserves all affine combinations. An affine combination is a linear combination in which the sum of the coefficients is 1.
An affine subspace of a vector space (sometimes called a linear manifold) is a coset of a linear subspace; i.e., it is the result of adding a constant vector to every element of the linear subspace. A linear subspace of a vector space is a subset that is closed under linear combinations; an affine subspace is one that is closed under affine combinations.
For example, in $\mathbb{R}^3$, the origin, lines and planes through the origin and the whole space are linear subspaces, while points, lines and planes in general as well as the whole space are the affine subspaces.
Just as members of a set of vectors are linearly independent if none is a linear combination of the others, so also they are affinely independent if none is an affine combination of the others. The set of linear combinations of a set of vectors is their "linear span" and is always a linear subspace; the set of all affine combinations is their "affine span" and is always an affine subspace. For example, the affine span of a set of two points is the line that contains both; the affine span of a set of three non-collinear points is the plane that contains all three. Vectors
$v_1, v_2, \ldots, v_n$
are linearly dependent if there exists a vector
$a = [a_1, a_2, \ldots, a_n]$
such that both:
$\exists\, i \in [1 \ldots n]:\ a_i \neq 0$
and
$[v_1^T, v_2^T, \ldots, v_n^T] \times a^T = 0$
are true.
Similarly they are affinely dependent if the same is true and also
$\sum_{i=1}^{n} a_i = 0$
Vector $a$ is an affine dependence among the vectors $v_1, v_2, \ldots, v_n$.
The set of all invertible affine transformations forms a group under the operation of composition of functions. That group is called the affine group, and is the semidirect product of $K^n$ and $GL(n, K)$.
## Affine transformation of the plane
To visualise the general affine transformation of the Euclidean plane, take labelled parallelograms ABCD and A′B′C′D′. Whatever the choices of points, there is an affine transformation T of the plane taking A to A′, and each vertex similarly. Supposing we exclude the degenerate case where ABCD has zero area, there is a unique such affine transformation T. Drawing out a whole grid of parallelograms based on ABCD, the image T(P) of any point P is determined by noting that T(A) = A′, T applied to the line segment AB is A′B′, T applied to the line segment AC is A′C′, and T respects scalar multiples of vectors based at A. [If A, E, F are collinear then the ratio length(AF)/length(AE) is equal to length(A′F′)/length(A′E′).] Geometrically T transforms the grid based on ABCD to that based on A′B′C′D′.
Affine transformations don't respect lengths or angles; they multiply area by a constant factor
area of A′ B′ C′ D′ / area of ABCD.
A given T may either be direct (respect orientation), or indirect (reverse orientation), and this may be determined by its effect on signed areas (as defined, for example, by the cross product of vectors).
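In terms of the matrix A this factor can be written explicitly (added here; it is implicit in the discussion above):
$\frac{\text{area of } A'B'C'D'}{\text{area of } ABCD} = |\det A|,$
and the sign of det A decides between the two cases: positive for direct, negative for indirect transformations.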
## Example of an affine transformation
The following equation expresses an affine transformation in GF(2) (with "+" representing XOR):
$\{\,a'\,\} = M\{\,a\,\} + \{\,v\,\}.$
where [M] is the matrix
$\begin{bmatrix}1&0&0&0&1&1&1&1 \\1&1&0&0&0&1&1&1 \\1&1&1&0&0&0&1&1 \\1&1&1&1&0&0&0&1 \\1&1&1&1&1&0&0&0 \\0&1&1&1&1&1&0&0 \\0&0&1&1&1&1&1&0 \\0&0&0&1&1&1&1&1\end{bmatrix}$
and {v} is the vector
$\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{bmatrix}.$
For instance, the affine transformation of the element {a} = $x^7 + x^6 + x^3 + x$ = {11001010} in big-endian binary notation = {CA} in big-endian hexadecimal notation, is calculated as follows:
$a_0' = a_0 \oplus a_4 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 1 = 0 \oplus 0 \oplus 0 \oplus 1 \oplus 1 \oplus 1 = 1$
$a_1' = a_0 \oplus a_1 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 1 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 1 = 0$
$a_2' = a_0 \oplus a_1 \oplus a_2 \oplus a_6 \oplus a_7 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1$
$a_3' = a_0 \oplus a_1 \oplus a_2 \oplus a_3 \oplus a_7 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1$
$a_4' = a_0 \oplus a_1 \oplus a_2 \oplus a_3 \oplus a_4 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 0 \oplus 0 = 0$
$a_5' = a_1 \oplus a_2 \oplus a_3 \oplus a_4 \oplus a_5 \oplus 1 = 1 \oplus 0 \oplus 1 \oplus 0 \oplus 0 \oplus 1 = 1$
$a_6' = a_2 \oplus a_3 \oplus a_4 \oplus a_5 \oplus a_6 \oplus 1 = 0 \oplus 1 \oplus 0 \oplus 0 \oplus 1 \oplus 1 = 1$
$a_7' = a_3 \oplus a_4 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 0 = 1 \oplus 0 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1.$
Thus, {a′} = $x^7 + x^6 + x^5 + x^3 + x^2 + 1$ = {11101101} = {ED}.
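To check the arithmetic, here is a small Haskell sketch (again an addition, not part of the original article) of the map a′ = Ma + v over GF(2); the row pattern is read directly off the circulant matrix above, and the helper names are ad hoc.

```haskell
import Data.Bits (testBit, setBit, xor, zeroBits)
import Data.Word (Word8)

-- Bit i of the input byte, using the same indexing a0..a7 as above.
bit' :: Word8 -> Int -> Bool
bit' a i = testBit a i

-- Row i of M, given as the set of input indices XORed into output bit i;
-- each row of the circulant matrix above is the previous one rotated.
row :: Int -> [Int]
row i = [ (i + k) `mod` 8 | k <- [0, 4, 5, 6, 7] ]

-- The constant vector {v} = (1,1,0,0,0,1,1,0), indices 0..7.
v :: Int -> Bool
v i = i `elem` [0, 1, 5, 6]

affine :: Word8 -> Word8
affine a = foldl set zeroBits [0 .. 7]
  where
    outBit i  = foldl xor (v i) [ bit' a j | j <- row i ]
    set acc i = if outBit i then setBit acc i else acc

-- affine 0xCA should give 0xED, matching the worked example above.
```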
## See also
• the transformation matrix for an affine transformation
• matrix representation of a translation
• affine geometry
• homothetic transformation
• similarity transformation
• linear (the second meaning is affine transformation in 1D)
• 3D projection
• flat (geometry)
http://mathoverflow.net/questions/109901?sort=newest | ## Do mixed Hodge modules form a stack?
The question should be reasonably self-contained: do MHM's form a stack? This is well-known for perverse sheaves, proved already in [BBD(G)]; does it hold for mixed Hodge modules? In other words, if I have a variety $X$ over the complex numbers, an open cover $(U_i)$ and mixed Hodge modules on all the $U_i$ with gluing isomorphisms on overlaps with the obvious cocycle condition, then do I get a unique MHM on $X$? I would like this in the analytic category but a reference for algebraic varieties over the complex numbers is likely to satisfy me.
-
## 1 Answer
This is an extremely partial (and sketchy) answer that I haven't fully thought through. But, I am essentially just running the proof for perverse sheaves. Let me first construct the required MHM if the cover just consists of two elements. In this case the required MHM can be defined as the cone in the Mayer-Vietoris distinguished triangle. Now use induction to prove the statement for a countable cover. For arbitrary covers I am not quite sure how to proceed. On the other hand, all I am saying is to define the MHM as the cohomology of the Čech complex given by your data. Unless I am missing something, this (modulo some technicalities with finiteness assumptions) gives the construction. Now for uniqueness, probably one can just observe that the usual isomorphism that shows uniqueness for perverse sheaves is a morphism of MHM. Since it is an isomorphism on underlying perverse sheaves, it is an isomorphism on the MHMs. In the case of a 2-element cover this also follows from the fact that there are no negative Exts between MHMs (which leads to the cone in the Mayer-Vietoris triangle being unique).
-
http://www.haskell.org/haskellwiki/index.php?title=Free_structure&oldid=33920 | # Free structure
### From HaskellWiki
### 1 Introduction
This article attempts to give a relatively informal understanding of "free" structures from algebra/category theory, with pointers to some of the formal material for those who desire it.
### 2 Algebra
#### 2.1 What sort of structures are we talking about?
The distinction between free structures and other, non-free structures, originates in abstract algebra, so that provides a good place to start. Some common structures considered in algebra are:
• Monoids
• consisting of
• A set M
• An identity $e \in M$
• A binary operation $* : M \times M \to M$
• And satisfying the equations
• x * (y * z) = (x * y) * z
• e * x = x = x * e
• Groups
• consisting of
• A monoid (M,e, * )
• An additional unary operation $\,^{-1} : M \to M$
• satisfying
• $x * x^{-1} = e = x^{-1} * x$
• Rings
• consisting of
• A set R
• A unary operation $- : R \to R$
• Two binary operations $+, * : R \times R \to R$
• Distinguished elements $0, 1 \in R$
• such that
• (R,0, + , − ) is a group
• (R,1, * ) is a monoid
• x + y = y + x
• (x + y) * z = x * z + y * z
• x * (y + z) = x * y + x * z
So, for algebraic structures, we have sets equipped with operations that are expected to satisfy equational laws.
#### 2.2 Free algebraic structures
Now, given such a description, we can talk about the free structure over a particular set S (or, possibly over some other underlying structure; but we'll stick with sets now). What this means is that given S, we want to find some set M, together with appropriate operations to make M the structure in question, along with the following two criteria:
• There is an injection $i : S \to M$
• The structure generated is as 'simple' as possible.
• M should contain only elements that are required to exist by i and the operations of the structure.
• The only equational laws that should hold for the generated structure are those that are required to hold by the equational laws for the structure.
So, in the case of a free monoid (from here on out, we'll assume that the structure in question is a monoid, since it's simplest), the equation x * y = y * x should not hold as a general law: it is not forced by the monoid axioms, so in the free monoid it holds only for particular elements (for instance when x = y, or when either element is the identity e), never for all of them.
For monoids, the free structure over a set is given by the monoid of lists of elements of that set, with concatenation as multiplication. It should be easy to convince yourself of the following (in pseudo-Haskell):
```M = [S]
e = []
* = (++)
i : S -> [S]
i x = [x] -- i x = x : []
[] ++ xs = xs = xs ++ []
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs
xs ++ ys == ys ++ xs -- not an identity; holds only for special xs and ys (e.g. equal, or one of them empty)
-- etc.```
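One way to make the freeness precise in code (an addition to the original page): any function from the generator set into a monoid extends uniquely to a monoid homomorphism out of the free monoid; in Haskell this extension is essentially `foldMap` specialised to lists.

```
-- Extend a function on generators to a monoid homomorphism on lists.
extend :: Monoid m => (s -> m) -> [s] -> m
extend f = foldr (\x acc -> f x `mappend` acc) mempty

-- Properties one can check:
--   extend f []          == mempty
--   extend f (xs ++ ys)  == extend f xs `mappend` extend f ys
--   extend f (i x)       == f x           -- where i x = [x]
```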
### 3 The category connection
#### 3.1 Free structure functors
One possible objection to the above description (even a more formal version thereof) is that the characterization of "simple" is somewhat vague. Category theory gives a somewhat better solution. Generally, structures-over-a-set will form a category, with arrows being structure-preserving homomorphisms. "Simplest" (in the sense we want) structures in that category will then either be initial or terminal, [1] and thus, freeness can be defined in terms of such universal constructions.
In its full categorical generality, freeness isn't necessarily categorized by underlying set structure, either. Instead, one looks at "forgetful" functors [2] from the category of structures to some other category. For our free monoids above, it'd be:
• $U : Mon \to Set$
The functor taking monoids to their underlying set. Then, the relevant universal property is given by finding an adjoint functor:
• $F : Set \to Mon$, F ⊣ U
F being the functor taking sets to the free monoids over those sets. So, free structure functors are left adjoint to forgetful functors. It turns out this categorical presentation also has a dual: cofree structure functors are right adjoint to forgetful functors.
#### 3.2 Algebraic constructions in a category
Category theory also provides a way to extend specifications of algebraic structures to more general categories, which can allow us to extend the above informal understanding to new contexts. For instance, one can talk about monoid objects in an arbitrary monoidal category. Such categories have a tensor product $\otimes$ of objects, with a unit object I (both of which satisfy various laws).
A monoid object in a monoidal category is then:
• An object M
• A unit 'element' $e : I \to M$
• A multiplication $m : M \otimes M \to M$
such that:
• $m \circ (id_{M} \otimes e) = u_l$
• $m \circ (e \otimes id_M) = u_r$
• $m \circ (id_M \otimes m) = m \circ (m \otimes id_M) \circ \alpha$
Where:
• $u_l : M \otimes I \to M$ and $u_r : I \otimes M \to M$ are the identity isomorphisms for the monoidal category, and
• $\alpha : M \otimes (M \otimes M) \to (M \otimes M) \otimes M$ is part of the associativity isomorphism of the category.
So, hopefully the connection is clear: we've generalized the carrier set to a carrier object, generalized the operations to morphisms in a category, and equational laws are promoted to being equations about composition of morphisms.
#### 3.3 Monads
One example of a class of monoid objects happens to be monads. Given a base category C, we have the monoidal category $C^C$:
• Objects are endofunctors $F : C \to C$
• Morphisms are natural transformations [3] between the functors
• The tensor product is composition: $F \otimes G = F \circ G$
• The identity object is the identity functor, I, taking objects and morphisms to themselves
If we then specialize the definition of a monoid object to this situation, we get:
• An endofunctor $M : C \to C$
• A natural transformation $\eta : I \to M$
• A natural transformation $\mu : M \circ M \to M$
which satisfy laws that turn out to be the standard monad laws. So, monads turn out to be monoid objects in the category of endofunctors.
#### 3.4 Free Monads
But, what about our intuitive understanding of free monoids above? We wanted to promote an underlying set, but we have switched from sets to functors. So, presumably, a free monad is generated by an underlying (endo)functor, $F : C \to C$. We then expect there to be a natural transformation $i : F \to M$, 'injecting' the functor into the monad.
```
data Free f a = Return a | Roll (f (Free f a))

instance Functor f => Monad (Free f) where
  return a       = Return a
  Return a >>= f = f a
  Roll ffa >>= f = Roll $ fmap (>>= f) ffa

-- join (Return fa) = fa
-- join (Roll ffa)  = Roll (fmap join ffa)

inj :: Functor f => f a -> Free f a
inj fa = Roll $ fmap Return fa
```
This should bear some resemblance to free monoids over lists. `Return` is analogous to `[]`, and `Roll` is analogous to `(:)`. Lists let us create arbitrary length strings of elements from some set, while `Free f` lets us create structures involving `f` composed with itself an arbitrary number of times (recall, functor composition was the tensor product of our category). `Return` gives our type a way to handle the 0-ary composition of `f` (as `[]` is the 0-length string), while `Roll` is the way to extend the nesting level by one (just as `(:)` lets us create (n+1)-length strings out of n-length ones). Finally, both injections are built in a similar way:
```inj_list x = (:) x []
inj_free fx = Roll (fmap Return fx)```
This, of course, is not completely rigorous, but it is a nice extension of the informal reasoning we started with.
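Continuing the analogy (an added sketch, assuming the `Free` type above and the `RankNTypes` extension): just as a function on generators extends to a monoid homomorphism out of the free monoid, a natural transformation from `f` into a monad `m` extends to a monad morphism out of `Free f`; this is essentially `foldFree` from the later `free` package.

```
{-# LANGUAGE RankNTypes #-}

interp :: (Functor f, Monad m) => (forall x. f x -> m x) -> Free f a -> m a
interp _   (Return a) = return a
interp phi (Roll ffa) = phi ffa >>= interp phi

-- Analogous properties, using naturality of phi and the monad laws:
--   interp phi . inj    == phi
--   interp phi . Return == return
```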
### 4 Further reading
For those looking for an introduction to the necessary category theory used above, Steve Awodey's Category Theory is a popular, freely available reference.
### 5 Notes
#### 5.1 Universal constructions
Initial (final) objects are those that have a single unique arrow from (to) the object to (from) every other object in the category. For instance, the empty set is initial in the category of sets, and any one-element set is final. Initial objects play an important role in the semantics of algebraic datatypes. For a datatype like:
`data T = C1 A B C | C2 D E T`
we consider the following:
• A functor $F : Hask \to Hask$, $F X = A \times B \times C + D \times E \times X$
• F-algebras which are:
• An object $A \in Hask$
• An action $a : FA \to A$
• Algebra homomorphisms $(A, a) \to (B, b)$
• These are given by $h : A \to B$ such that $b \circ Fh = h \circ a$
The datatype `T` is then given by an initial F-algebra. This works out nicely because the unique algebra homomorphism whose existence is guaranteed by initiality is the fold or 'catamorphism' for the datatype.
Intuitively, though, the fact that `T` is an F-algebra means that it is in some sense closed under forming terms of shape F---suppose we took the simpler signature `FX = 1 + X` of the natural numbers; then both Z = inl () and Sx = inr x can be incorporated into Nat. However, there are potentially many algebras; for instance, the naturals modulo some finite number, and successor modulo that number are an algebra for the natural signature.
However, initiality constrains what Nat can be. Consider, for instance, the above modular sets 2 and 3. There can be no homomorphism $h : 2 \to 3$:
• $h0=0 \,\, ;\, h1=0$
• $S(h1) = S0 = 1\,$ but $h(S1) = h0 = 0 \neq 1$
• $h0=0 \,\,;\, h1=1$
• $S(h1) = S1 = 2\,$ but $h(S1) = h0 = 0 \neq 2$
• $h0=0 \,\,;\, h1=2$
• $S(h0) = S0 = 1\,$ but $h(S0) = h1 = 2 \neq 1$
• $h0 \neq 0$
• $0 = Z \neq hZ = h0$
This is caused by these algebras identifying elements in incompatible ways (2 makes SSZ = Z, but 3 doesn't, and 3 makes SSSZ = Z, but 2 doesn't). So, the values of an initial algebra must be compatible with any such identification scheme, and this is accomplished by identifying none of the terms in the initial algebra (so that h is free to send each term to an appropriate value in the target, according to the identifications there). A similar phenomenon occurs in the main section of this article, except that the structures in question have additional equational laws that terms must satisfy, so the initial structure is allowed to identify those, but no more than those.
By the same argument, we can determine that 3 is not a final algebra. Nor are the naturals: a homomorphism h from the modular set M into the naturals would have to satisfy both h(S^M Z) = h(Z) = 0 (since S^M Z = Z in the modular set) and h(S^M Z) = S^M(h Z) = M ≠ 0. The final algebra is the set {0}, with S0 = 0 and Z = 0, with unique homomorphism hx = 0. This can be seen as identifying as many elements as possible, rather than as few. Naturally, final algebras don't receive that much interest. However, finality is an important property of coalgebras.
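To tie this back to code (an added sketch, matching the Haskell used earlier on this page): for the signature FX = 1 + X the unique homomorphism out of the initial algebra is the familiar fold, and the modular sets above are just particular algebras one can fold into.

```
data Nat = Z | S Nat

-- The unique F-algebra homomorphism from the initial algebra (Nat, Z, S)
-- to an algebra carried by a, with operations z and s, is the fold:
foldNat :: a -> (a -> a) -> Nat -> a
foldNat z _ Z     = z
foldNat z s (S n) = s (foldNat z s n)

-- The "naturals modulo 3" algebra from the text, as a fold target:
mod3 :: Nat -> Int
mod3 = foldNat 0 (\k -> (k + 1) `mod` 3)
```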
#### 5.2 Forgetful functors
The term "forgetful functor" has no formal specification; only an intuitive one. The idea is that one starts in some category of structures, and then defines a functor by forgetting part or all of what defines those structures. For instance:
• $U : Str \to Set$, where Str is any category of algebraic structures, and U simply forgets about all of the n-ary operations and equational laws, and takes structures to their underlying sets, and homomorphisms to functions over those sets.
• $U : Grp \to Mon$, which takes a group and forgets about the inverse operation to give a monoid. This functor would then be related to "free groups over a monoid".
#### 5.3 Natural transformations
The wikipedia article gives a formal definition of natural transformations, but a Haskell programmer can think of a natural transformation between functors F and G as:
`trans :: forall a. F a -> G a`
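A tiny concrete instance (added here): the function below is such a natural transformation between the list functor and `Maybe`.

```
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality says the obvious diagram commutes:
--   fmap g . safeHead == safeHead . fmap g
```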
http://terrytao.wordpress.com/tag/poincare-inequality/ | What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
## 285G, Lecture 8: Ricci flow as a gradient flow, log-Sobolev inequalities, and Perelman entropy
24 April, 2008 in math.CA, math.AP, math.DG, 285G - poincare conjecture | Tags: gradient flow, least eigenvalue, log-Sobolev inequality, Nash entropy, non-collapsing, Perelman entropy, Poincare inequality, semigroup method | by Terence Tao | 9 comments
It is well known that the heat equation
$\dot f = \Delta f$ (1)
on a compact Riemannian manifold (M,g) (with metric g static, i.e. independent of time), where $f: [0,T] \times M \to {\Bbb R}$ is a scalar field, can be interpreted as the gradient flow for the Dirichlet energy functional
$\displaystyle E(f) := \frac{1}{2} \int_M |\nabla f|_g^2\ d\mu$ (2)
using the inner product $\langle f_1, f_2 \rangle_\mu := \int_M f_1 f_2\ d\mu$ associated to the volume measure $d\mu$. Indeed, if we evolve f in time at some arbitrary rate $\dot f$, a simple application of integration by parts (equation (29) from Lecture 1) gives
$\displaystyle \frac{d}{dt} E(f) = - \int_M (\Delta f) \dot f\ d\mu = \langle -\Delta f, \dot f \rangle_\mu$ (3)
from which we see that (1) is indeed the gradient flow for (2) with respect to the inner product. In particular, if f solves the heat equation (1), we see that the Dirichlet energy is decreasing in time:
$\displaystyle \frac{d}{dt} E(f) = - \int_M |\Delta f|^2\ d\mu$. (4)
Thus we see that by representing the PDE (1) as a gradient flow, we automatically gain a controlled quantity of the evolution, namely the energy functional that is generating the gradient flow. This representation also strongly suggests (though does not quite prove) that solutions of (1) should eventually converge to stationary points of the Dirichlet energy (2), which by (3) are just the harmonic functions (i.e. the functions f with $\Delta f = 0$).
As one very quick application of the gradient flow interpretation, we can assert that the only periodic (or “breather”) solutions to the heat equation (1) are the harmonic functions (which, in fact, must be constant if M is compact, thanks to the maximum principle). Indeed, if a solution f was periodic, then the monotone functional E must be constant, which by (4) implies that f is harmonic as claimed.
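(To spell this out, as an added remark: if $f(t_0+T) = f(t_0)$ then integrating (4) over $[t_0, t_0+T]$ gives
$\displaystyle 0 = E(f(t_0)) - E(f(t_0+T)) = \int_{t_0}^{t_0+T} \int_M |\Delta f|^2\ d\mu\ dt,$
so $\Delta f$ vanishes identically on that interval.)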
It would therefore be desirable to represent Ricci flow as a gradient flow also, in order to gain a new controlled quantity, and also to gain some hints as to what the asymptotic behaviour of Ricci flows should be. It turns out that one cannot quite do this directly (there is an obstruction caused by gradient steady solitons, of which we shall say more later); but Perelman nevertheless observed that one can interpret Ricci flow as gradient flow if one first quotients out the diffeomorphism invariance of the flow. In fact, there are infinitely many such gradient flow interpretations available. This fact already allows one to rule out “breather” solutions to Ricci flow, and also reveals some information about how Poincaré’s inequality deforms under this flow.
The energy functionals associated to the above interpretations are subcritical (in fact, they are much like $R_{\min}$) but they are not coercive; Poincaré’s inequality holds both in collapsed and non-collapsed geometries, and so these functionals are not excluding the former. However, Perelman discovered a perturbation of these functionals associated to a deeper inequality, the log-Sobolev inequality (first introduced by Gross in Euclidean space). This inequality is sensitive to volume collapsing at a given scale. Furthermore, by optimising over the scale parameter, the controlled quantity (now known as the Perelman entropy) becomes scale-invariant and prevents collapsing at any scale – precisely what is needed to carry out the first phase of the strategy outlined in the previous lecture to establish global existence of Ricci flow with surgery.
The material here is loosely based on Perelman’s paper, Kleiner-Lott’s notes, and Müller’s book.
http://math.stackexchange.com/questions/tagged/probability-distributions+transformation | # Tagged Questions
1answer
65 views
### How to deal with non random data in statistical analysis?
I have a set of monthly water quality data, and I want to use them in a few statistical analysis (such as finding distribution or using in copula models) which require random variables as input. I ...
2answers
195 views
### Kernel density estimation for heavy-tailed distributions using the champernowne transformation
I am trying to follow this paper to estimate the density for a heavy-tailed distributions using the champernowne transformation. Alternative link to the paper Another alternative link to the paper ...
1answer
50 views
### Transforming a Continuous Function
My math is quite limited so please bear with me. I will get to the point: Is there a way to transform a continuous function into a bounded one? In essence I have a normalized Gaussian distribution ...
0answers
193 views
### Joint distribution of transformed variables
I have a problem in deriving the transformed joint distribution for continuous random variables. The textbook says use jacobian which makes sense but I wanted to go from first principles like below... ...
0answers
59 views
### Transfer of random variables, uniqueness
If $X$ is a continuous random variable with known distribution, and $Y_1= f_1(X)$, $Y_2= f_2(X)$ where $f_1$ and $f_2$ are strictly increasing functions and distribution of $Y_1$ and $Y_2$ is the ...
2answers
204 views
### Transformations that leave a binomial distribution invariant
The binomial distribution is written as $$p(r|n,\theta )=\binom{n}{r}\theta ^r(1-\theta )^{n-r}$$ where $n$ is a positive integer, $0\leq\theta\leq1$, and $r$ is an integer taking values from $0$ to ...
http://unapologetic.wordpress.com/2011/10/01/inner-products-on-1-forms/ | # The Unapologetic Mathematician
## Inner Products on 1-Forms
Our next step after using a metric to define an inner product on the module $\mathfrak{X}(U)$ of vector fields over the ring $\mathcal{O}(U)$ of smooth functions is to flip it around to the module $\Omega^1(U)$ of $1$-forms. The nice thing is that the hard part is already done. All we really need to do is define an inner product on the cotangent space $\mathcal{T}^*_p(M)$; then the extension to $1$-forms is exactly like extending from inner products on each tangent space to an inner product on vector fields.
And really this construction is just a special case of a more general one. Let’s say that $\langle\underbar{\hphantom{X}},\underbar{\hphantom{X}}\rangle$ is an inner product on a vector space $V$. As we mentioned when discussing adjoint transformations, this gives us an isomorphism from $V$ to its dual space $V^*$. That is, when we have a metric floating around we have a canonical way of identifying tangent vectors in $\mathcal{T}_pM$ with cotangent vectors in $\mathcal{T}^*_pM$.
Everything is perfectly well-defined at this point, but let’s consider this a bit more explicitly. Say that $\{e_i\}$ is a basis of $\mathcal{T}_pM$. We automatically have a dual basis $\{\eta^i\}$ defined by $\eta^i(e_j)=\delta^i_j$, even before defining the metric. So if the inner product $g_p$ defines a mapping $\mathcal{T}_pM\to\mathcal{T}^*_pM$, what does it look like with respect to these bases? It takes the vector $e_i$ and sends it to a linear functional whose value at $e_j$ is $g_p(e_i,e_j)$. Since we get a number at each point $p$, we will also write this as a function $g_{ij}(p)$. That is, we can break the image of $e_i$ out as the linear combination
$\displaystyle e_i\mapsto\sum\limits_{j=1}^ng_p(e_i,e_j)\eta^j=\sum\limits_{j=1}^ng_{ij}(p)\eta^j$
What about a vector with components $v^i$? We easily calculate
$\displaystyle\begin{aligned}\sum\limits_{i=1}^nv^ie_i&\mapsto\sum\limits_{i=1}^nv^i\sum\limits_{j=1}^ng_{ij}(p)\eta^j\\&=\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)\eta^j\end{aligned}$
So $g_{ij}$ is the matrix of this transformation. The fact that both indices are on the bottom tells us that we are moving from vectors to covectors.
The same sort of reasoning can be applied to the inner product on the dual space. If we write it again by $g_p$, then we get another matrix:
$\displaystyle g^{ij}(p)=g_p(\eta^i,\eta^j)$
which tells us how to send a basis covector $\eta^i$ to a vector:
$\displaystyle \eta^i\mapsto\sum\limits_{j=1}^ng^{ij}(p)e_j$
and thus we can calculate the image of any covector with components $\lambda_i$:
$\displaystyle\begin{aligned}\sum\limits_{i=1}^n\lambda_i\eta^i&\mapsto\sum\limits_{i=1}^n\lambda_i\sum\limits_{j=1}^ng^{ij}(p)e_j\\&=\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^n\lambda_ig^{ij}(p)\right)e_j\end{aligned}$
But these are supposed to be inverses to each other! Thus we can send a vector $v=v^ie_i$ to a covector and back:
$\displaystyle\begin{aligned}\sum\limits_{i=1}^nv^ie_i&\mapsto\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)\eta^j\\&\mapsto\sum\limits_{k=1}^n\left(\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)g^{jk}(p)\right)e_k\\&=\sum\limits_{k=1}^n\left(\sum\limits_{i=1}^nv^i\left(\sum\limits_{j=1}^ng_{ij}(p)g^{jk}(p)\right)\right)e_k\end{aligned}$
If this is to be the original vector back, the coefficient of $e_k$ must be $v^k$, which means the inner sum — the matrix product of $g_{ij}$ and $g^{jk}$ — must be the Kronecker delta. That is, $g^{jk}$ must be the right matrix inverse of $g_{ij}$.
Similarly, if we start with a covector we will find that $g^{ij}$ must be the left matrix inverse of $g_{jk}$. Since it’s a left and a right inverse, it must be the inverse; in particular, $g_{ij}$ must be invertible, which is equivalent to the assumption that $g_p$ is nondegenerate! It also means that we can always find the matrix of the inner product on the dual space in terms of the dual basis, assuming we have the matrix of the inner product on the original space.
And to return to differential geometry, let’s say we have a coordinate patch $(U,x)$. We get a basis of coordinate vector fields, which let us define the matrix-valued function
$\displaystyle g_{ij}(p)=g_p\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)$
This much we calculate from the metric we are given by assumption. But then we can invert the matrix at each point to get another one:
$\displaystyle g^{ij}(p)=g_p\left(dx^i,dx^j\right)$
where this is how we define the inner product on covectors. Of course, the situation is entirely symmetric, and if we’d started with a symmetric tensor field of type $(2,0)$ that defined an inner product at each point, we could flip it over to get a metric.
Posted by John Armstrong | Differential Geometry, Geometry
http://crypto.stackexchange.com/questions/5832/what-exactly-is-a-negligible-and-non-negligible-function/5840 | # What exactly is a negligible (and non-negligible) function?
The mathematical definition of negligible and non-negligible functions is fairly clear-cut, but why are they important and how are they used in cryptography?
-
## 2 Answers
In perfectly secret schemes like the one-time pad, the probability of success does not improve with greater computational power. However, in modern cryptographic schemes, we generally do not try to achieve perfect secrecy (yes, governments may use the one-time pad, but this is generally not practical for the average user). In fact, given unbounded computational power, all of our non-perfectly-secret schemes are insecure (also note that for public-key cryptography, perfect secrecy is unachievable using classical cryptography, so all schemes are insecure against unbounded computational power). Instead, we define security against a specific set of adversaries whose computational power is bounded. Generally, we assume an adversary that is bounded to run in time polynomial in $n$, where $n$ is the security parameter given to the key generation algorithm (more precisely, the key generation algorithm is given input $1^n$ so that $n$ will be its input size and its output--the key--will be polynomial in the size of its input).
So consider a scheme $\Pi$ where the only attack against it is brute force attack. We consider $\Pi$ to be secure if it cannot be broken by a brute force attack in polynomial time.
The idea of negligible probability encompasses this exact notion. In $\Pi$, let's say that we have a polynomial-bounded adversary. Brute force attack is not an option. But instead of brute force, the adversary can guess (a polynomial number of) random values and hope to chance upon the right one. In this case, we define security using negligible functions: The probability of success has to be smaller than any inverse polynomial function.
And this makes a lot of sense: If the success probability for an individual guess is an inverse polynomial function, then the adversary can try a polynomial amount of guesses and succeed with high probability. In sum then, if the overall success rate is $1/poly(n)$ then we consider this a feasible attack and the scheme is insecure.
So, we require that the success probability must be less than every inverse polynomial function. This way, even if the adversary tries $poly(n)$ guesses, its overall success probability will not be significant, since it is at most:
$$poly(n)/superpoly(n)$$
As $n$ grows, the denominator grows far faster than the numerator and the success probability will not be significant.
Edited to add: Here is an informal argument that may make this clearer. To see that the notions of superpolynomial brute force attack and negligible probability guessing are equivalent, consider a scheme with $K$ possible keys.
Brute force attack on the key set runs in $K$ time. Moreover, the probability of choosing a key at random and it being the correct key is $1/K$. Now, if $K$ is polynomial in $n$ (the security parameter), then this scheme can be brute-forced in time $K = poly(n)$. Moreover, a random guess succeeds with probability $1/K = 1/poly(n)$, which is non-negligible, and the scheme is by both definitions insecure.
To secure the scheme then, we want to make brute force run in superpolynomial time. In other words, $K$ must be superpolynomial in $n$. Well then, the probability of guessing correctly on a single guess is $$1/K=1/superpoly(n)$$ and this is by definition negligible probability.
Although informal, I think this last part motivates the use of negligible functions in security proofs.
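A concrete example in the same spirit (my addition): $f(n) = 2^{-n}$ is negligible, since for every constant $c$,
$$\frac{n^c}{2^n} \rightarrow 0 \quad \text{as } n \rightarrow \infty,$$
so even an adversary that makes $n^c$ guesses, each succeeding with probability $2^{-n}$, succeeds overall with probability at most $n^c/2^n$, which still tends to $0$. By contrast, $f(n) = 1/n^2$ is not negligible: it is itself an inverse polynomial, and roughly $n^2$ guesses already succeed with probability bounded away from zero.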
-
2
That was a nice explanation, I would +1 you if I could. – Nico Bellic Dec 26 '12 at 6:42
Asymptotic security claims aren't that popular nowadays. Concrete security claims are preferred, because they make a statement about the concrete key sizes we use. – CodesInChaos Dec 26 '12 at 10:06
You should add what $n$ is. Otherwise "polynomial" doesn't make much sense. – Paŭlo Ebermann♦ Dec 26 '12 at 12:47
@PaŭloEbermann I did mention that $n$ is the security parameter but I'll edit it to make it clearer – AFS Dec 26 '12 at 18:25
Thanks, it is now better. – Paŭlo Ebermann♦ Dec 26 '12 at 19:31
Very good explanation. I would like to add that you will see negligible functions also in other proofs. One example is pseudorandom strings. If an attacker looks at a string, he should only be able to decide whether this string is pseudorandom or "really" random with probability 1/2 + negl(n). He can always toss a coin (that gives him probability 1/2), but maybe he can extract some piece of information that "improves" his guess.
-
True. This also is related to a brute force attack. An equivalent definition for PRF security is that no adversary with oracle access to some function $f:\{0,1\}^n \rightarrow \{0,1\}^n$ can determine if it's a PRF or a random function in time $n$ (the adversary is given $1^n$ as input). With unbounded time, one way to distinguish is to brute force and see if the oracle is a PRF (this works since there are at most $2^n$ functions in a PRF family with an $n$-length key while there are $2^{n*2^n}$ random functions). Without brute force though, the distinguisher can only hope to guess the right – AFS Dec 28 '12 at 16:12
key (and if it guesses this it can easily verify it with high probability using the oracle). So we require the probability of a guess to be negligible. +1 btw – AFS Dec 28 '12 at 16:15
http://mathoverflow.net/questions/114081/explicit-computations-of-the-etale-homotopy-type/118334 | ## Explicit computations of the étale homotopy type?
Hi,
I'm currently trying to learn about étale homotopy for schemes as introduced by Artin-Mazur. I know that by the Artin-Mazur comparison theorem, it is possible to compute the étale homotopy type of a certain class of varieties as the profinite completion of the complex points. However, in most other cases for schemes, it seems quite cumbersome to calculate the étale homotopy type of a locally noetherian scheme, say. Are there any explicit computations of the étale homotopy type that are particularly helpful for understanding the general theory? Or am I missing something here?
Sorry if my question is a bit vague.
-
This answer to one of my questions: mathoverflow.net/questions/112007/… has an interesting property, described in the comments, that might be helpful to work out the computations of. Or it might not be - I don't know much about etale homotopy. – Will Sawin Nov 22 at 2:02
## 2 Answers
Here's an example which is, in my opinion, illuminating. It is also quite easy, which I view as a plus.
Namely, consider the étale homotopy type of $\text{Spec}~\mathbb{R}$. By your comparison theorem, this is (pro)-equivalent to $B(\mathbb{Z}/2\mathbb{Z})$. But in fact the (pro)-simplicial set one gets is precisely the bar construction for $G=\text{Gal}(\mathbb{C}/\mathbb{R})=\mathbb{Z}/2\mathbb{Z}$ (!!!), showing that computing the étale cohomology of $\text{Spec}~\mathbb{R}$ is "the same" as computing the group cohomology of $G$. This is a good, and not hard, exercise.
In general, if $k$ is a field with finite Galois group $G$, its étale homotopy type will equal(!) the bar construction of $BG$ for $G=\text{Gal}(k^s/k)$; with an appropriate version of $BG$ for $G$ profinite, this will be true for any field.
-
Maybe my answer will not fit exactly your question. What I like very much is D. Sullivan's use of Artin-Mazur's theory in his proof of the Adams conjecture. What D. Sullivan does is the computation of the étale homotopy type of the classifying space $BU_n$ of the complex unitary group, and he does this computation by considering this classifying space as a direct limit of complex Grassmannians:
$$G_{n,k}\cong GL(n+k,\mathbb{C})/(GL(n,\mathbb{C})\times GL(k,\mathbb{C}))$$
Then he analyses the étale homotopy type of $BU_n$ by looking at its associated arithmetic square. What is important in Sullivan's proof of the Adams conjecture is the understanding of the action of the absolute Galois group on the étale homotopy type of $BU_n$, which has a deep impact on Adams operations in $K$-theory. In his MIT notes "Geometric Topology: Localization, Periodicity, and Galois Symmetry" he also states a conjecture, now a theorem, "the Sullivan conjecture", that has some important implications for the study of the étale homotopy type of real algebraic varieties. Of course all this material can be found in section 5 "Algebraic geometry (étale homotopy type)" of the notes cited above with many examples.
-
http://en.wikipedia.org/wiki/Volume_of_an_n-ball | # Volume of an n-ball
In geometry, a ball is a region in space consisting of all points within a fixed distance from a fixed point, and an n-ball is a ball in $\mathbb{R}^n$. The volume of an n-ball is an important constant that occurs in formulas throughout mathematics.
## Formulas
### The volume
The n-dimensional volume of a Euclidean ball of radius R in n-dimensional Euclidean space is:
$V_n(R) = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2} + 1)}R^n,$
where Γ is Leonhard Euler's gamma function. Using explicit formulas for particular values of the gamma function at the integers and half integers gives formulas for the volume of a unit ball that do not require an evaluation of the gamma function. These are:
$V_{2k}(R) = \frac{\pi^k}{k!}R^{2k},$
$V_{2k+1}(R) = \frac{2^{k+1}\pi^k}{(2k+1)!!}R^{2k+1},$
where the double factorial is defined for odd integers 2k + 1 as (2k + 1)!! = 1 · 3 · 5 ··· (2k − 1) · (2k + 1).
A consequence of the volume formula is a formula for the radius of a ball of volume V:
$R_n(V) = \frac{\Gamma(\frac{n}{2} + 1)^{1/n}}{\sqrt{\pi}}V^{1/n}.$
This formula, too, can be separated into even and odd dimensional cases:
$R_{2k}(V) = \frac{(k!V)^{1/2k}}{\sqrt{\pi}},$
$R_{2k+1}(V) = \left(\frac{(2k+1)!!V}{2^{k+1}\pi^k}\right)^{1/(2k+1)}.$
### Recursions
The volume satisfies several recursive formulas. These formulas can either be proved directly or proved as consequences of the general volume formula above. The simplest to state is a formula for the volume of an n-ball in terms of the volume of an (n − 2)-ball of the same radius:
$V_n(R) = \frac{2\pi R^2}{n} V_{n-2}(R).$
There is also a formula for the volume of an n-ball in terms of the volume of an (n − 1)-ball of the same radius:
$V_n(R) = R\sqrt{\pi}\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2} + 1)} V_{n-1}(R).$
Using explicit formulas for the gamma function again shows that the one-dimension recursion formula can also be written as:
$\begin{align} V_{2k}(R) &= R\pi \frac{(2k - 1)!!}{2^k k!} V_{2k-1}(R) = R\pi \frac{(2k-1)(2k-3) \cdots 5 \cdot 3 \cdot 1}{(2k)(2k - 2) \cdots 6 \cdot 4 \cdot 2} V_{2k-1}(R), \\ V_{2k+1}(R) &= 2R\frac{2^k k!}{(2k+1)!!} V_{2k}(R) = 2R\frac{(2k)(2k - 2) \cdots 6 \cdot 4 \cdot 2}{(2k-1)(2k-3) \cdots 5 \cdot 3 \cdot 1} V_{2k}(R). \end{align}$
### Low dimensions
In low dimensions, these volume and radius formulas simplify to the following:
| Dimension | Volume of a ball of radius R | Radius of a ball of volume V |
|---|---|---|
| 0 | $1$ | All balls have volume 1 |
| 1 | $2R$ | $V/2$ |
| 2 | $\pi R^2$ | $\frac{V^{1/2}}{\sqrt{\pi}}$ |
| 3 | $\frac{4}{3}\pi R^3$ | $\left(\frac{3V}{4\pi}\right)^{1/3}$ |
| 4 | $\frac{\pi^2}{2} R^4$ | $\frac{(2V)^{1/4}}{\sqrt{\pi}}$ |
| 5 | $\frac{8\pi^2}{15} R^5$ | $\left(\frac{15V}{8\pi^2}\right)^{1/5}$ |
| 6 | $\frac{\pi^3}{6} R^6$ | $\frac{(6V)^{1/6}}{\sqrt{\pi}}$ |
| 7 | $\frac{16\pi^3}{105} R^7$ | $\left(\frac{105V}{16\pi^3}\right)^{1/7}$ |
| 8 | $\frac{\pi^4}{24} R^8$ | $\frac{(24V)^{1/8}}{\sqrt{\pi}}$ |
| 9 | $\frac{32\pi^4}{945} R^9$ | $\left(\frac{945V}{32\pi^4}\right)^{1/9}$ |
| 10 | $\frac{\pi^5}{120} R^{10}$ | $\frac{(120V)^{1/10}}{\sqrt{\pi}}$ |
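As a quick numerical check of the table (an addition; the function name is arbitrary), the two-dimension recursion from the previous subsection can be run directly:

```haskell
-- V_0(R) = 1,  V_1(R) = 2R,  V_n(R) = (2*pi*R^2 / n) * V_{n-2}(R)
volume :: Int -> Double -> Double
volume 0 _ = 1
volume 1 r = 2 * r
volume n r = (2 * pi * r * r / fromIntegral n) * volume (n - 2) r

-- volume 2 1  ~ 3.14159   (pi)
-- volume 3 1  ~ 4.18879   (4*pi/3)
-- volume 10 1 ~ 2.55016   (pi^5 / 120)
```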
### High dimensions
Suppose that R is fixed. Then the volume of an n-ball of radius R approaches zero as n tends to infinity. This can be shown using the two-dimension recursion formula. At each step, the new factor being multiplied into the volume is proportional to 1/n, where the constant of proportionality $2\pi R^2$ is independent of n. Eventually, n is so large that the new factor is less than 1. From then on, the volume of an n-ball must decrease at least geometrically, and therefore it tends to zero.
A variant on this proof uses the one-dimension recursion formula. Here, the new factor is proportional to a quotient of gamma functions. Gautschi's inequality bounds this quotient above by $n^{-1/2}$. The argument concludes as before by showing that the volumes decrease at least geometrically.
### Relation with surface areas
Let $A_n(R)$ denote the surface area of the n-sphere of radius R. The n-sphere is the boundary of the (n + 1)-ball of radius R. The (n + 1)-ball is a union of concentric spheres, and consequently the surface area and the volume are related by:
$A_n(R) = \frac{d}{dR}V_{n+1}(R).$
Since the volume is proportional to a power of the radius, the above relation leads to a simple recurrence equation relating the surface area of an n-ball and the volume of an (n + 1)-ball. By applying the two-dimension recursion formula, it also gives a recurrence equation relating the surface area of an n-ball and the volume of an (n − 1)-ball:
$V_0(R) = 1,$
$A_0(R) = 2,$
$V_{n+1}(R) = \frac{R}{n+1}A_n(R),$
$A_{n+1}(R) = (2\pi R)V_n(R).$
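Starting from $V_0(R) = 1$ and $A_0(R) = 2$, these two recurrences generate every volume and surface area; a short numerical check against the closed form (a sketch):

```python
import math

V, A = [1.0], [2.0]                 # V_0(1) = 1, A_0(1) = 2
for n in range(10):
    V.append(A[n] / (n + 1))        # V_{n+1}(R) = R/(n+1) * A_n(R), with R = 1
    A.append(2 * math.pi * V[n])    # A_{n+1}(R) = 2*pi*R * V_n(R), with R = 1

for n in range(11):
    assert math.isclose(V[n], math.pi ** (n / 2) / math.gamma(n / 2 + 1))
```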
## Proofs
There are many proofs of the above formulas.
### The volume is proportional to the nth power of the radius
An important step in several proofs about volumes of n-balls, and a generally useful fact besides, is that the volume of the n-ball of radius R is proportional to Rn:
$V_n(R) \propto R^n.$
The proportionality constant is the volume of the unit ball.
The above relation has a simple inductive proof. The base case is n = 0, where the proportionality is obvious. For the inductive case, assume that proportionality is true in dimension n − 1. Note that the intersection of an n-ball with a hyperplane is an (n − 1)-ball. When the volume of the n-ball is written as an integral of volumes of (n − 1)-balls:
$V_n(R) = \int_{-R}^R V_{n-1}(\sqrt{R^2 - x^2}) \,dx,$
it is possible by the inductive assumption to remove a factor of R from the radius of the n − 1 ball to get:
$V_n(R) = R^{n-1} \int_{-R}^R V_{n-1}\left(\sqrt{1 - (x/R)^2}\right) \,dx.$
Making the change of variables t = x/R leads to:
$V_n(R) = R^n \int_{-1}^1 V_{n-1}(\sqrt{1 - t^2}) \,dt = R^n V_n(1),$
which demonstrates the proportionality relation in dimension n. By induction, the proportionality relation is true in all dimensions.
### The two-dimension recursion formula
A proof of the recursion formula relating the volume of the n-ball and an (n − 2)-ball can be given using the proportionality formula above and integration in cylindrical coordinates. Fix a plane through the center of the ball. Let r denote the distance between a point in the plane and the center of the sphere, and let θ denote the azimuth. Intersecting the n-ball with the (n − 2)-dimensional plane defined by fixing a radius and an azimuth gives an (n − 2)-ball of radius $\sqrt{R^2 - r^2}$. The volume of the ball can therefore be written as an iterated integral of the volumes of the (n − 2)-balls over the possible radii and azimuths:
$V_n(R) = \int_0^{2\pi} \int_0^R V_{n-2}(\sqrt{R^2 - r^2}) \,r\,dr\,d\theta,$
The azimuthal coordinate can be immediately integrated out. Applying the proportionality relation shows that the volume equals:
$V_n(R) = (2\pi) V_{n-2}(R) \int_0^R (1 - (r/R)^2)^{(n-2)/2}\,r\,dr.$
The integral can be evaluated by making the substitution u = 1 − (r/R)2 to get:
$\begin{align} V_n(R) &= (2\pi) V_{n-2}(R) \cdot \left(-\frac{R^2}{n}(1 - (r/R)^2)^{n/2}\right)\bigg|_{r=0}^{r=R} \\ &= \frac{2\pi R^2}{n} V_{n-2}(R), \end{align}$
which is the two-dimension recursion formula.
The same technique can be used to give an inductive proof of the volume formula. The base cases of the induction are the 0-ball and the 1-ball, which can be checked directly using the facts $\Gamma(1) = 1$ and $\Gamma(3/2) = (1/2) \cdot \Gamma(1/2) = \sqrt{\pi}/2$. The inductive step is similar to the above, but instead of applying proportionality to the volumes of the (n − 2)-balls, the inductive assumption is applied instead.
### The one-dimension recursion formula
The proportionality relation can also be used to prove the recursion formula relating the volumes of an n-ball and an (n − 1)-ball. As in the proof of the proportionality formula, the volume of an n-ball can be written as an integral over the volumes of (n − 1)-balls. Instead of making a substitution, however, the proportionality relation can be applied to the volumes of the (n − 1)-balls in the integrand:
$V_n(R) = V_{n-1}(R) \int_{-R}^R (1 - (x/R)^2)^{(n-1)/2} \,dx.$
The integrand is an even function, so by symmetry the interval of integration can be restricted to [0, R]. On the interval [0, R], it is possible to apply the substitution u = 1 - (x/R)2. This transforms the expression into:
$V_{n-1}(R) \cdot R \cdot \int_0^1 u^{(n-1)/2}(1-u)^{-1/2}\,du$
The integral is a value of a well-known special function called the beta function, and the volume in terms of the beta function is:
$V_n(R) = V_{n-1}(R) \cdot R \cdot B(\textstyle\frac{n + 1}{2}, \textstyle\frac{1}{2}).$
The beta function can be expressed in terms of the gamma function in much the same way that factorials are related to binomial coefficients. Applying this relationship gives:
$V_n(R) = V_{n-1}(R) \cdot R \cdot \frac{\Gamma(\frac{n + 1}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{n}{2} + 1)}.$
Using the value $\Gamma(1/2) = \sqrt{\pi}$ gives the one-dimension recursion formula:
$V_n(R) = R\sqrt{\pi}\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2} + 1)} V_{n-1}(R).$
As with the two-dimension recursive formula, the same technique can be used to give an inductive proof of the volume formula.
### Direct integration in spherical coordinates
The volume can be computed by integrating the volume element in spherical coordinates. The spherical coordinate system has a radial coordinate r and angular coordinates φ1, ..., φn − 1, where the domain of each φ except φn − 1 is [0, π), and the domain of φn − 1 is [0, 2π). The spherical volume element is:
$dV = r^{n-1}\sin^{n-2}(\phi_1)\sin^{n-3}(\phi_2) \cdots \sin(\phi_{n-2})\, dr\,d\phi_1\,d\phi_2 \cdots d\phi_{n-1},$
and the volume is the integral of this quantity over r between 0 and R and all possible angles:
$V_n(R) = \int_0^R \int_0^\pi \cdots \int_0^{2\pi} r^{n-1}\sin^{n-2}(\phi_1) \cdots \sin(\phi_{n-2})\, d\phi_{n-1} \cdots d\phi_1\,dr.$
Each of the factors in the integrand depends on only a single variable, and therefore the iterated integral can be written as a product of integrals:
$V_n(R) = \bigg(\int_0^R r^{n-1}\,dr\bigg)\bigg(\int_0^\pi \sin^{n-2}(\phi_1)\,d\phi_1\bigg)\cdots\bigg(\int_0^{2\pi} d\phi_{n-1}\bigg).$
The integral over the radius is Rn/n. The intervals of integration on the angular coordinates can, by symmetry, be changed to [0, π/2]:
$V_n(R) = \frac{R^n}{n} \bigg(2\int_0^{\pi/2} \sin^{n-2}(\phi_1)\,d\phi_1\bigg) \cdots \bigg(4\int_0^{\pi/2} d\phi_{n-1}\bigg).$
Each of the remaining integrals is now a particular value of the beta function:
$V_n(R) = \frac{R^n}{n} \textstyle B(\frac{n-1}{2}, \frac{1}{2}) B(\frac{n-2}{2}, \frac{1}{2}) \cdots B(\frac{2}{2}, \frac{1}{2}) \cdot 2B(\frac{1}{2}, \frac{1}{2}).$
The beta functions can be rewritten in terms of gamma functions:
$V_n(R) = \frac{R^n}{n} \frac{\Gamma(\frac{n-1}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{n}{2})} \frac{\Gamma(\frac{n-2}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{n - 1}{2})} \cdots \frac{\Gamma(\frac{2}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{3}{2})} \cdot 2 \frac{\Gamma(\frac{1}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{2}{2})}.$
This product telescopes. Combining this with the values $\Gamma(1/2) = \sqrt{\pi}$ and $\Gamma(1) = 1$ and the functional equation zΓ(z) = Γ(z + 1) leads to:
$V_n(R) = \frac{2\pi^{n/2}R^n}{n\Gamma(\frac{n}{2})} = \frac{\pi^{n/2}R^n}{\Gamma(\frac{n}{2} + 1)}.$
### Gaussian integrals
The volume formula can be proved directly using Gaussian integrals. Consider the function:
$f(x_1, \ldots, x_n) = \exp\Big(\mathord{-}\textstyle\frac{1}{2} \displaystyle\sum_{i=1}^n x_i^2\Big).$
This function is both rotationally invariant and a product of functions of one variable each. Using the fact that it is a product and the formula for the Gaussian integral gives:
$\int_{\mathbf{R}^n} f \,dV = \prod_{i=1}^n \Big(\int_{-\infty}^\infty \exp\left(-x_i^2/2\right)\,dx_i\Big) = (2\pi)^{n/2},$
where dV is the n-dimensional volume element. Using rotational invariance, the same integral can be computed in spherical coordinates:
$\int_{\mathbf{R}^n} f \,dV = \int_0^\infty \int_{S^{n-1}(r)} \exp\left(-r^2/2\right) \,dA\,dr,$
where Sn − 1(r) is an (n − 1)-sphere of radius r and dA is the area element (equivalently, the (n − 1)-dimensional volume element). The surface area of the sphere satisfies a proportionality equation similar to the one for the volume of a ball: If An − 1(r) is the surface area of an (n − 1)-sphere of radius r, then:
$A_{n-1}(r) = r^{n-1} A_{n-1}(1).$
Applying this to the above integral gives the expression:
$A_{n-1}(1) \int_0^\infty \exp\left(-r^2/2\right)\,r^{n-1}\,dr.$
By substituting t = r2/2, the expression is transformed into:
$A_{n-1}(1) 2^{n/2 - 1} \int_0^\infty e^{-t} t^{n/2 - 1}\,dt.$
This is the gamma function evaluated at n/2.
Combining the two integrations shows that:
$A_{n-1}(1) = \frac{2\pi^{n/2}}{\Gamma(\frac{n}{2})}.$
To derive the volume of an n-ball of radius R from this formula, integrate the surface area of a sphere of radius r for r between 0 and R and apply the functional equation zΓ(z) = Γ(z + 1):
$V_n(R) = \int_0^R \frac{2\pi^{n/2}}{\Gamma(\frac{n}{2})} \,r^{n-1}\,dr = \frac{2\pi^{n/2}}{n\Gamma(\frac{n}{2})}R^n = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2} + 1)}R^n.$
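A crude Monte Carlo estimate gives an independent check of this formula (a sketch; the sample count and seed are arbitrary):

```python
import math, random

def mc_unit_ball_volume(n, samples=200_000, seed=0):
    """Estimate V_n(1) by sampling the cube [-1, 1]^n and counting points inside the ball."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(n)) <= 1.0)
    return 2 ** n * hits / samples

for n in (2, 3, 4, 5):
    exact = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    print(n, round(mc_unit_ball_volume(n), 3), round(exact, 3))
```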
## Balls in Lp norms
There are also explicit expressions for the volumes of balls in Lp norms. The Lp norm of the vector x = (x1, ..., xn) in Rn is $\textstyle (\sum |x_i|^p)^{1/p}$, and an Lp ball is the set of all vectors whose Lp norm is less than or equal to a fixed number called the radius of the ball. The case p = 2 is the standard Euclidean distance function, but other values of p occur in diverse contexts such as information theory, coding theory, and dimensional regularization.
The volume of an Lp ball of radius R is:
$V^p_n(R) = \frac{(2\Gamma(\frac{1}{p} + 1)R)^n}{\Gamma(\frac{n}{p} + 1)}.$
These volumes satisfy a recurrence relation similar to the one dimension recurrence for p = 2:
$V^p_n(R) = 2\Gamma(\textstyle\frac{1}{p} + 1) \displaystyle\frac{\Gamma(\frac{n-1}{p} + 1)}{\Gamma(\frac{n}{p} + 1)}R\, V^p_{n-1}(R).$
Notice that for p = 2, we recover the recurrence for the volume of a Euclidean ball because $2\Gamma(3/2) = \sqrt{\pi}$.
For example, in the cases p = 1 and p = ∞, the volumes are:
$V^1_n(R) = \frac{2^n}{n!}R^n,$
$V^\infty_n(R) = (2R)^n.$
These agree with elementary calculations of the volumes of cross-polytopes and hypercubes.
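For a quick numerical comparison (a sketch; the large value of p stands in for p = ∞):

```python
import math

def lp_ball_volume(n, p, R=1.0):
    """(2*Gamma(1/p + 1)*R)^n / Gamma(n/p + 1)"""
    return (2 * math.gamma(1 / p + 1) * R) ** n / math.gamma(n / p + 1)

n = 4
print(lp_ball_volume(n, 1), 2 ** n / math.factorial(n))  # cross-polytope: both give 2/3
print(lp_ball_volume(n, 2), math.pi ** 2 / 2)            # Euclidean ball: both give pi^2/2
print(lp_ball_volume(n, 1e6), 2.0 ** n)                  # p -> infinity: approaches the cube volume 16
```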
For most values of p, the surface area of an Lp sphere (the boundary of an Lp ball) cannot be calculated by differentiating the volume of an Lp ball with respect to its radius. While the volume can be expressed as an integral over the surface areas using the coarea formula, the coarea formula contains a correction factor that accounts for how the p-norm varies from point to point. For p = 2 and p = ∞, this factor is one. However, if p = 1, then the correction factor is $\sqrt{n}$: The surface area of an L1-(n − 1)-sphere of radius R is $\sqrt{n}$ times the derivative at R of the volume of an L1-n-ball. For most values of p, the constant is a complicated integral.
The volume formula can be generalized even further. For positive real numbers p1, ..., pn, define the unit (p1, ..., pn) ball to be:
$B_{p_1, \ldots, p_n} = \{ x = (x_1, \ldots, x_n) \in \mathbf{R}^n : \vert x_1 \vert^{p_1} + \cdots + \vert x_n \vert^{p_n} \le 1 \}.$
The volume of this ball is:[1]
$\operatorname{Vol}(B_{p_1, \ldots, p_n}) = 2^n \frac{\Gamma(1 + p_1^{-1}) \cdots \Gamma(1 + p_n^{-1})}{\Gamma(1 + p_1^{-1} + \cdots + p_n^{-1})}.$
## References
1. Wang, Xianfu, Volumes of Generalized Unit Balls, Mathematics Magazine, Vol. 78, No. 5 (Dec 2005), 390–395.
http://math.stackexchange.com/questions/136820/how-to-generalise-the-fourier-transform/142191 | # How to generalise the Fourier transform
The Fourier transform approximates a signal using a bunch of sine and cosine waves. The inverse Fourier transform then reconstructs the original signal from this information.
I am told that it's possible to decompose a signal using some other set of functions, rather than the usual sine and cosine. My question is, how do you do this?
For a start, I'm assuming that for your set of functions to be able to approximate every possible signal, you need to have "enough" of these functions, and ideally you want them to be "different" such that each one measures an unrelated aspect of the signal.
To be completely clear, I'm mostly interested in the case of digital sampled data. But the continuous case might be interesting too...
Edit:
I'm not sure why my question isn't producing any answers. Maybe it's because nobody actually knows the answer, or maybe it's because the answer is "too obvious" to somebody who actually possesses formal mathematical training. I'm not sure. But I've wanted to know the answer to this question for years, so let's try one more time...
The discrete Fourier transform works by computing the correlation of the input signal with several different sine waves. The inverse transform then adds together the specified amplitudes of those waves, recovering the original signal. That much seems clear.
It looks like I could just invent a family of functions to use instead of the sine and cosine functions, and do exactly the same process... except that when I do this, it doesn't work in any way, shape or form. If I transform and then inverse-transform, I get gibberish. And I don't know why... but it seems like the key phrase is "complete set of orthonormal functions", whatever that means.
Update:
I had assumed that if I could just find a system of basis functions such that none of them are correlated, and their number equals the number of points in the input, the transform would work. Apparently, it does not.
Consider the following set of functions:
$$f_1 = [1,1,1,1]$$ $$f_2 = [0,1,0,-1]$$ $$f_3 = [1,0,-1,0]$$ $$f_4 = [1,-1,1,-1]$$
Clearly, there are 4 functions. As far as I can tell, none of them are correlated. For example,
$$f_1 * f_4 = (1 * 1) + (1 * -1) + (1 * 1) + (1 * -1) = 1 - 1 + 1 - 1 = 0$$
If we take, say, $x = [1,2,3,4]$ and compute the correlations, we get
$$f_1 * x = 1 + 2 + 3 + 4 = 10$$ $$f_2 * x = 0 + 2 + 0 - 4 = -2$$ $$f_3 * x = 1 + 0 - 3 + 0 = -2$$ $$f_4 * x = 1 - 2 + 3 - 4 = -2$$
Now, computing $10 f_1 - 2 f_2 - 2 f_3 - 2 f_4$, we get
$$10 + 0 - 2 - 2 = 6$$ $$10 - 2 + 0 + 2 = 10$$ $$10 + 0 + 2 - 2 = 10$$ $$10 + 2 + 0 + 2 = 14$$
Clearly $[6, 10, 10, 14]$ is nothing like $[1,2,3,4]$, even with scaling. So... what am I missing?
You just need a basis set whose members satisfy an orthogonality condition. – J. M. Apr 25 '12 at 15:22
Also, look up wavelets. – J. M. Apr 25 '12 at 15:25
Rahul and Tom are in essence telling you to look into the Gram-Schmidt orthogonalization procedure. It works for any entities with an associated inner product, be they vectors or functions... – J. M. May 6 '12 at 14:44
## 4 Answers
As the previous answers have stated, your functions need to be an orthonormal basis for your procedure to work. Your basis is orthogonal, but not normalized. Try the same thing using
$\frac{1}{\sqrt{4}} [1, 1, 1, 1]$
$\frac{1}{\sqrt{2}} [0, 1, 0, -1]$
$\frac{1}{\sqrt{2}} [1, 0, -1, 0]$
$\frac{1}{\sqrt{4}} [1, -1, 1, -1]$
You will have much more luck finding useful information on this by looking up dot products rather than correlation.
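A quick numerical check of this, using the vectors from the question (dividing each dot product by $\|f_k\|^2$ is equivalent to normalising the basis first):

```python
import numpy as np

f = np.array([[1,  1,  1,  1],
              [0,  1,  0, -1],
              [1,  0, -1,  0],
              [1, -1,  1, -1]], dtype=float)
x = np.array([1, 2, 3, 4], dtype=float)

coeffs = f @ x / np.sum(f * f, axis=1)   # divide each correlation by ||f_k||^2
print(coeffs @ f)                        # [1. 2. 3. 4.] -- the original signal comes back
```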
Since I saw your last answer to yourself, and I do not entirely agree with it, I will add my little stone to the edifice, hoping it might bring some clarity. It relies heavily on Mallat's book (a Wavelet Tour of Signal Processing, see e.g. chap 5) but I think most of it was already present in others' comments.
Let $f\in \mathcal H$ with a Hilbert space $\mathcal H$. Let $\{\phi_n\}_{n\in \Gamma}$ a family of vectors in $\mathcal H$ (anything really, this is where I somehow disagree with your point 1. and 2.) then you can define an operator based on the family of vectors as follows: $$\forall n\in \Gamma, \quad Uf[n] = \langle f,\phi_n\rangle$$ So basically this operator does just one thing, it associates to a function $f$ a set of numbers, the $n$th number corresponding to the inner product (or correlation as you put it) of $f$ with $\phi_n$.
Now the big thing is under which general conditions can one recover $f$ from its $Uf[n]$? Which is nothing but a question about the invertibility of $U$. In a practical setting, a typical example of a sequence is a family of sines and cosines up to a certain frequency. Already here, and rather intuitively, you can guess that it is not possible to exactly recover a general $f$ from its $Uf[n]$ but there are some $f$ for which it will be possible etc..
So from this, it should be clear that the characterization of the operator $U$ given the family of $\phi_n$ is essential to see whether or not we'll be able to recover the function (or signal) from its decomposition in the sequence of $\phi_n$.
Some more stuff from Mallat's book: the sequence is a frame of $\mathcal H$ if there exist two positive constants $A,B$ s.t for any $f\in\mathcal H$ one has: $$A\|f\|^2 \le \sum_{n\in\Gamma} |\langle f,\phi_n\rangle|^2 \le B\|f\|^2$$ and when $A=B$, the frame is said to be tight.
If you have that, then $U$ is called (very originally) a frame operator, and one can prove that $U$ is invertible on its image with a bounded inverse if and only if it is a frame operator.
Some comments:
1. you can have redundancy in the family of $\phi_n$ (sometimes interesting, see e.g. curvelets)
2. you can normalize your vectors to have $\|\phi_n\|=1$
3. if the $\phi_n$ are linearly independent and form a frame then $A\le 1\le B$.
4. the frame is an orthonormal basis iff $A=B=1$. (e.g. Fourier basis wrt L2)
So basically, you can decompose a function/signal with respect to any kind of family but you might not be able to recover it (fully) from its decomposition. To give some insight about possible applications here are two (and there are many many more):
• Compression: express a signal in a family with a limited number of $\phi_n$ and remove those $Uf[n]$ which are ''too small''. The signal can then be stored with some loss but with (hopefully) very few ''important'' coefficients (hence the compression). For images this can be done very efficiently in a wavelet basis for example (e.g. JPEG2000). A good basis will be a basis in which the decomposition of $f$ has very few ''important coefficients'' $Uf[n]$ and the rest can be ignored. Fourier basis is usually pretty bad in that sense (tends to smear out the data).
• Denoising: given a noisy signal expressed in a family, can try to recover a less noisy signal by inverting the frame operator on coefficients which are sufficiently big and hence should have a high signal to noise ratio.
I don't completely understand everything you said, but it's interesting none the less. And yes, compression is what I'm eventually interested in investigating, once I have a better intuitive understanding of this stuff. (Although it looks like there may be other, more appropriate ways to do that.) – MathematicalOrchid May 8 '12 at 9:21
The intuition I have about compression is you try to express a signal in a dictionnary of signals $\phi_i$: $f=\sum_{i=1}^n \alpha_i \phi_i$ and you try to remove those $|\alpha_i|<\epsilon$. In a discrete setting, the idea is, eventually, to express a signal of length $n$ with $m\ll n$ coefficients in a given dictionary (so you just need to store $m$ things + an indicator of the dictionnary) and to be able to recover the signal from these $m$ coefficients efficiently. (the notion of "small", "efficient", etc., can be specified mathematically with norms etc.) – tibL May 9 '12 at 8:44
One aspect of this type of decomposition (i.e. Taylor, Fourier) is that it is an approximation that is made better iteratively. I have always thought of it as $n$-to-$\infty$-dimensional coordinate transformation that is reduced to $n$ dimensions with the folding operation (which is addition in the case of Taylor/Fourier series).
You can rewrite a vector space $(a,b)\in\mathbb{R}^2$ (such as $a+b i\in\mathbb{C}$) in terms of another vector space $(u,v)\in\mathbb{R}^2$ according to the Cauchy-Riemann equations. Maybe this will help you find a way to express the proper conditions for your work (which for CR boils down to "everything must be perpendicular everywhere").
Based on the hundreds of comments and the experimental work I've done, it appears the answer to my original question should have looked like this:
A [discrete] function can be decomposed over a set of basis functions provided that the set of basis functions has the following properties:
1. The correlation of every function with itself equals exactly one.
2. The correlation of every function with every other function equals exactly zero.
3. The number of basis functions is at least as large as the number of data points.
Once you have this set, you can "transform" your function simply by computing the correlation between your function and each of the basis functions. To "untransform", simply multiply each basis function by the coefficient you found, and add the results back together.
Somebody said something about a "Gram-Schmidt process" - and looking this up on Wikipedia gives me a nice way of taking a system of functions and making an orthonormal system from it.
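For reference, a minimal Gram-Schmidt sketch along those lines (the function name and tolerance are my own choices):

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Return an orthonormal set spanning the rows of `vectors`."""
    basis = []
    for v in np.asarray(vectors, dtype=float):
        w = v - sum(np.dot(v, b) * b for b in basis)  # strip components along earlier vectors
        norm = np.linalg.norm(w)
        if norm > tol:                                # drop (near-)linearly-dependent vectors
            basis.append(w / norm)
    return np.array(basis)
```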
The bounty on this question is still open for a few more hours. If anybody can summarise this better than I have, or add any interesting additional information (e.g., the exact conditions under which the set of functions is "complete"), you can still earn yourself a tidy little rep bonus...
http://mathhelpforum.com/differential-equations/170294-application-growth.html | # Thread:
1. ## An application in growth
The population of a bacterial culture grows at a rate proportional to the number of bacteria present at any time. After 3 hours it is observed that there are 400 bacteria present. After 10 hours there are 2000 present. What was the initial number of bacteria?
My attempt:
It is implied by context that a model for the rate of growth is
$\frac{dP}{dt}=kP$ which implies that $P(t)=ce^{kt}$.
But from the fact that $P(3)=400$ and $P(10)=2000$ we see that
$400=ce^{3k}$ and $2000=ce^{10k}$
Solving for $c$ in the first equation gives
$c=400e^{-3k}$. Substituting into the second...
$2000=400e^{-3k}e^{10k}$ and we see that $k=\frac{\ln5}{7}$
Now, again using the fact that $P(3)=400$ we can conclude that $c=400e^{(-3\ln5)/7}$
So that $P(t)=400e^{\frac{\ln5}{7}(t-3)}$
and $P(0)=400e^{\frac{\ln5}{7}(0-3)}$
Have I done this right? If so, was there a faster way?
2. Everything looks good to me. The only thing that might have speeded things up a hair, would have been if you had divided one equation by the other. That would have gotten rid of the c a little quicker. There's not a lot of fluff in your solution, though.
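Working that shortcut out: dividing the second equation by the first gives $\frac{2000}{400}=e^{7k}$, so $k=\frac{\ln 5}{7}$ in one step, and then $P(0)=400e^{-3\ln 5/7}=400\cdot 5^{-3/7}\approx 201$ bacteria.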
http://math.stackexchange.com/questions/tagged/hilbert-spaces?page=10&sort=votes&pagesize=50 | # Tagged Questions
Complete normed spaces whose norm comes from an inner product.
### Orthonormal Family in a Hilbert Space
If we have an orthonormal family, $\{u_n\}_{i=1}^\infty$ in a Hilbert Space $H$, I need to show that for $x\in H$ we have the following inequality: \left|\left\{n|\langle x, u_n \rangle > ...
### Finding Riesz basis
Let H be a Hilbert space .Is there always a non orthogonal Riesz basis $D$ on it such that following holds? $$\sup_{g\in D }\sum_{g'\in D,g'\not=g}|\langle g,g'\rangle|<1/3$$ And is there Riesz ...
### Banach space geometry without bounded operators?
I understand that $B(X)$ can be think of as the collection of symmetries of a Banach space $X$, and that they provide important information concerning the geometric structure of the space. But I am ...
### Show the existence and uniqueness of a closed ball containing a bounded subset of a Hilbert space
The problem: Assume $A$ is a bounded subset of a Hilbert space $H$. Let $r$ be the infimum of the radii of closed balls containing $A$, so $r = \inf \{s \geq 0$ $\vert$ there exists $x \in H$ such ...
### weak convergence condition
Let $l^{2}=\left\{x=(x^{(1)},x^{(2)},...):\sum_{i=1}^{\infty }\left\vert x ^{(i)}\right\vert ^{2}<\infty \right\}$. Would you help me to prove that $({\vert|x_n |\vert})$ is bounded sequence and ...
### Unique extension to a bounded operator
Suppose $\left\{ e_{1},e_{2},\ldots\right\}$ is an orthonormal basis for a Hilbert space $\mathcal{H}$ and for each $n$ there is a vector $Ae_{n}$ in $\mathcal{H}$ such that \$\sum\left\Vert ...
### Contractibility of the sphere and Stiefel manifolds of a separable Hilbert space
Why are the sphere $$S=\lbrace |x|=1\rbrace$$ and the Stieffel manifolds of orthonormal $n$-frames V_n=\lbrace (x_1,\dots,x_n)\in S^n\mathrm{~s.t.~}i\neq j\Rightarrow\langle ...
### Proving Fréchet differentiability
Am learning about Fréchet differentials and was wondering if for a real matrix $X$ and positive semidefinite real matrices $A,B$ the function $f(X)=TrX^TAX-X^TBX$ is twice Fréchet differentiable or ...
### Can this type of series retain the same value?
Let $H$ be a Hilbert space and $\sum_k x_k$ a countable infinite sum in it. Lets say we partition the sequence $(x_k)_k$ in a sequence of blocks of finite length and change the order of summation ...
### Reproducing Kernel Hilbert Space- notation and basics
Am reading about Reproducing Kernel Hilbert Space(RKHS) while reading through Functional Analysis and Hilbert Space material and am unable to get the notation : $k(·,xi)$ correctly. What does the dot ...
### Representing with Hilbert Schmidt Norm
Am trying to see, if the following Trace function can be expressed using a Hilbert Schmidt Norm: $\operatorname{Tr}(X^TAX)$. Here, $X$ is a matrix whose entries take values that are finite and reals ...
### Equivalents norms in Sobolev Spaces
I know that this is classical but I have never do the calculations to show that the norms in the sobolev space $W^{k,p}(\Omega)$ \begin{equation} \|u\|_{k,p,\Omega}= \Bigl(\int_{\Omega} ...
### Cauchy+pointwise convergence $\Rightarrow$ uniform converges (for an operator in a Hilbert space)
Suppose that the sequence of operators in a Hilbert space $H$, $\left(T_{n}\right)_{n}$, is Cauchy (with respect to the operator norm) and that there is an operator $L$, such that ...
### Analysis operator $T_\Phi$ is injective and has a closed range
Definition of the problem Let $\mathcal{H}$ be a separable Hilbert space on $J\subset\mathbb{N}$ an index set. Let $\Phi:=\left(\varphi_{j}\right)_{j\in J}\subset\mathcal{H}$ be a frame for ...
### An explicit example of an invariant halfspace of the unilateral shift?
In a recent talk, A. Popov stated the following fact The unilateral shift on $\ell^2$ has invariant halfspaces. Halfspaces are closed subspaces whose dimension and codimension are both infinite. ...
### closedness of image of closed, unbounded operator
I want to prove the following: Suppose $\mathcal{H}_1$ and $\mathcal{H_2}$ are Hilbert spaces and let $T: \mathcal{D} \rightarrow \mathcal{H}_2$ be a closed operator, where \$\mathcal{D} \subset ...
### Changing of integration and operator
I have a question which maybe looks very simple: Let $T$ be an orthogonal projection on a Hilbert space $H$. If $g(x,u)\in H$, for all $u\in \mathbb R$, and the inner product is defined by \langle ...
### Checking axioms for inner product
I'm going through a question checking that an inner product satisfies the inner product axioms. I have a Hilbert space $H=C[-1,1]$ and for $f,g\in H$ the inner product is defined as \langle ...
### Hilbert Spaces and Closed Subspaces
Let $H$ be a Hilbert Space, and $M$ a closed subspace. Is it true that $H = M \bigoplus M^{\perp}$ Does this hold if $M$ is not closed? Or only if $H$ is finite/infinite dimensional?
### Hilbert space $H$ is strictly smooth
I am trying to show that every Hilbert space $H$ is strictly smooth with modulus of smoothness $\phi_H(t)=\sqrt{1+t^2} -1$. To show this I think I should show $H$ is uniformly smooth first. ...
### Null Space and Range of Particular kind of Operator on Hilbert Space
Let $H$ be the real separable Hilbert space with orthonormal basis $\{e_n\}$ and consider the operator $T:H \times H \to H \times H$ given by T(\sum a_ne_n, \sum b_ne_n) = \sum A_n(a_ne_n, ...
### Question about limits of weakly convergent sequence in $H^1_0(\Omega)$
Let $H = H_{0}^{1}(\Omega)$ where $\Omega$ is a bounded domain in $R^N$ whose boundary $\partial\Omega$ is a smooth manifold. We know that the embedding $$H\hookrightarrow L^s(\Omega)$$ is compact for ...
### Riesz sequences in Hilbert spaces
Is it true that if $\{x_{n}\}_{n=1}^{\infty}$ is a finite union of Riesz sequences in a Hilbert space H, then $\{x_{n}\}$ itself will be a Riesz sequence? What about Frames and Bessel seuences, do we ...
### Show $\{u_n\}$ orthonormal, A compact implies $\|Au_n\| \to 0$
I'm having a bit trouble with this homework exercise. Let $\mathcal{H}$ be a Hilbert space and $\{u_n\}_{n=1}^\infty$ an orthonormal sequence in $\mathcal{H}$. Let $A$ be a compact operator on ...
### Is $H^1(M) \subset L^2(M) \subset H^{-1}(M)$ a Hilbert triple for $M$ a manifold with boundary?
Is $H^1(M) \subset L^2(M) \subset H^{-1}(M)$ a Hilbert triple for $M$ a manifold with boundary? What smoothness is required of the boundary? I would be grateful for some references to this.
### closest point property of subset of Hilbert space - what are the conditions for existence of inf?
I'm proving the closest point property of a subset of a Hilbert space, ie: $$H$$ is a Hilbert space with a norm generated by the inner product and so on. $$h\in H$$ is a point in H $$M\subset H$$ M ...
### Weak convergence in Hilbert space L2 implies convergence in distribution?
Does weak convergence in $L^2$ (for $X_n, X \in L^2$ we say that $X_n$ converges weakly to $X$ ($X_n \rightarrow^w X$) if for every $Y\in L^2$ we have $\mathbb{E}X_nY \rightarrow \mathbb{E}XY$) ...
### Computing an explicit solution to an integral equation via the Neumann Series.
I am hoping that someone can provide guidance for solving the integral equation $$u=f+\lambda Au$$ where $1/\lambda\notin\sigma(A)$, $f\in L^2[0,2\pi]$, and $A:L^2[0,2\pi]\to L^2[0,2\pi]$ is defined ...
### Prove that for every $f$ in $H$, the sequence $u_n$ which is the projection of $f$ on $K_n$ converges to a limit
$K_n$ is non-increasing sequence of closed convex sets in Hilbert space $H$ such that the intersection of $K_n$ is different from emptiness. Prove that for every $f \in H$, the sequence $u_n$ which is ...
### Unbounded self- adjoint and von Neumann algebra
I am reading Conway's Functional Analysis. Here is one exercise problem.I don't know how to show the following fact. For unbounded self-adjoint $T$ in Hilbert space $H$ 1) $T$ commutes with its Borel ...
### Orthogonal family in Hilbert Space
Let $(x_k)_1^\infty$ be an orthogonal family of points in X a Hilbert space. Then $\sum_{i=1}^\infty x_i$ converges if and only if $\sum_{i=1}^\infty ||x_k||^2$ converges. Also need to show that ...
### Weighted inner product space and representation of dual space
Let $H$ be a Hilbert space and define $H_c$ to be the weighted Hilbert space with inner product $$(u,v)_{H_c} = c(u,v)_H$$ where $c$ is a positive constant. Then is it true that c\langle f, u ...
### Countable orthonormal basis of product of separable Hilbert spaces
If I have 2 separable Hilbert spaces $X$ and $Y$ which have (different) orthonormal bases $x_i$ and $y_i$, then clearly $x_i \times y_j$ is a basis for $X \times Y$ (which is also a separable space). ...
### Riesz Representation theorem-pde
Consider \$\sum_{i,j=1}^n \displaystyle\int_{\mathbb{R}^n} \dfrac{\partial^2 u}{\partial^2 x_i} \overline{\dfrac{\partial^2 v}{\partial^2 x_j} } dx + \lambda \displaystyle\int_{\mathbb{R}^n} u ...
### Is the Strong Limit of a Linear Operator in a Hilbert Space the Same as the Norm Limit?
If $H$ is a Hilbert Space, and I have an operator $F:H \rightarrow H$ which is the limit of a sequence of operators $F_n$ with respect to the operator norm; and this same sequence of operators ...
### Weak convergence-exercice
Let $\Omega$ be an open set in $\mathbb{R}^n$ and let $(u_n)$ be a bounded sequence in $H^1_0(\Omega).$ Who's the theorem say that we can extract a subsequence denoted $u_{n}$ as $u_n$ weakly ...
### How can projection operators be limits of powers of unitary operators?
Consider a (fixed) unitary operator $U$ acting on the Hilbert space $\mathcal{H}$. Because the unit ball is compact in the weak topology, it is not hard to see that there exists a (smallest) compact ...
### Calculating the Norm of an operator in $L^2(0,1)$
If I have the following operator for $H=L^2(0,1)$: $$Tf(s)=\int_0^1 (5s^2t^2+2)(f(t))dt$$ and I wish to calculate $||T||$, how do I go about doing this: I know that in $L^2(0,1)$ we have that ...
### Closed unit ball in infinite dimensional normed linear space
I have to prove that in any infinite dimension normed linear space we have that the closed unit ball is not compact. I know that I have to construct a sequence such that $||x_n||=1$ and ...
### Orthogonal Projection on hilbert spaces
I found this exercise on a book, I guess it's not hard but don't know what to do. Let $H$ be a Hilbert space and let $P:H \rightarrow H$ be linear. If $P$ is a projection, i.e $P^2 =P$, and ...
### Prove that $S$is a closed subspace of $H^2$ invariant under multiplication by $z$. Find the inner function $F$ such that $S=FH^2$
Let ${\alpha_n}$ be a sequence of points in the open unit disc such that $\sum(1-|\alpha_n|)<\infty$. Let $S$ be the set of all functions $f$ in $H^2$ spaces such that $f(\alpha_n)=f'(\alpha_n)=0$ ...
### Confused about Bessel's inequality
I know that if $H$ is a Hilbert space and $(e_{j})_{j\in\mathbb{N}}$ is an orthonormal system in $H$ and $f\in H$. Then one has Bessel's inequality \sum_{j=1}^{\infty}|\langle f,e_{j}\rangle ...
### Particular series on Hilbert Space
Let $(H, \langle\cdot,\cdot\rangle)$ a Hilbert space and consider a sequence $\{x_n\}_{n\in\mathbb{N}}$ of $H$ such that: \langle x_n,x_m\rangle\ =\ \delta_{mn}\ =\ \left\{\begin{array}{ll}1, & ...
### Norm of oblique projector and angle between subspaces
Take $V$ and $W$ closed subspaces of $H$ a Hilbert space with $V\oplus W=H$ (we'll assume this holds in the sequel, it may not be required everywhere but in the context of interest, it is always ...
### Hilbert spaces and orthogonality sets
I need to prove if $X$ is a Hilbert space and $M$ and $N$ it's closed: $$(M+N)^\perp=M^\perp\cap N^\perp$$ thanks
### Orthonormal basis in Hilbert spaces
I have a general question but I'm going got ask it in a very restrictive setup. It is known that an equivalent condition for a system $\left\{e^{i\lambda t}\right\} _{\lambda\in\Lambda}$ being an ONB ...
### Infimum of a Hilbert space inner product
This is exercise 5.11 in Brezis's Functional Analysis, Sobolev Spaces, and PDEs. Let $H$ be a Hilbert space, and let $M \subset H$ be a nonzero closed linear subspace. Let $f \in H$, \$f \notin ...
### Hilbert basis of $L^2([-1,1])$?
Could you please specify hilbert basis of $L^2([-1,1])$? How will be the representation of a function f $\in L^2([-1,1])$ by means of its Fourier series? My solution: \$E_k=1/\sqrt2 e^{kit\pi}, k\in ...
### Kernel inclusion implies factorization
I have a question whether a certain fact is true for arbitrary operators on a Hilbert space. Namely, consider Hilbert spaces $H,K$, an operator $A\in B(H)$ and another $B\in B(H,K)$. Moreover, assume ...
### Functional to inner product in Hilbert triple
If $V \subset H \subset V^*$ is a Hilbert triple, and $f \in V^*$, I cannot represent $f(v) = (e,v)_V$ because we don't identify $V$ with $V^*$. But is it true that $f(v) = (e,v)_H$ for some $e$?
http://nrich.maths.org/5987/index?nomenu=1 | ## 'Cannon Balls' printed from http://nrich.maths.org/
A cannon ball is fired vertically upwards into the air. How fast would it have to be fired to take 1 second to land?
How fast would it have to be fired to take 10, 100, 1,000 or 1,000,000 seconds to land?
What would be the highest point of the ball in each case?
(Assume that gravity is a constant $10ms^{-2}$ in your calculations)
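One way to organise the constant-gravity cases (a sketch, with my own variable names): a ball launched at speed $v$ stays up for $2v/g$ seconds and peaks at $v^2/2g$, so the required speeds and peak heights are:

```python
g = 10.0  # m/s^2, as assumed above

for t_flight in (1, 10, 100, 1_000, 1_000_000):
    v = g * t_flight / 2     # launch speed giving this flight time
    h = v ** 2 / (2 * g)     # highest point reached
    print(f"{t_flight:>9} s -> v = {v:,.0f} m/s, peak = {h:,.0f} m")
```

Comparing these peak heights with the 6000 km radius quoted below gives a feel for when treating gravity as constant stops being reasonable.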
Given that the radius of the earth is about 6000km, which of your calculations would give a good approximation to reality? At what speed would the approximation break down, in your opinion?
Extension activity: Suppose that the balls are fired upwards from a trampoline with coefficient of restitution 0.5. In each case, after how many bounces would the balls bounce less than 1m high? Try to make an estimate before performing a full calculation.
Extension problem: why not try the extension question Escape From Planet Earth?
http://gauravtiwari.org/tag/infrared-spectroscopy/ | # MY DIGITAL NOTEBOOK
A Personal Blog On Mathematical Sciences and Technology
# Tag Archives: Infrared spectroscopy
## Classical Theory of Raman Scattering
The classical theory of Raman effect, also called the polarizability theory, was developed by G. Placzek in 1934. I shall discuss it briefly here. It is known from electrostatics that the electric field $E$ associated with the electromagnetic radiation induces a dipole moment $\mu$ in the molecule, given by
$\mu = \alpha E$ …….(1)
where $\alpha$ is the polarizability of the molecule. The electric field vector $E$ itself is given by
$E = E_0 \sin \omega t = E_0 \sin 2\pi \nu t$ ……(2)
where $E_0$ is the amplitude of the vibrating electric field vector and $\nu$ is the frequency of the incident light radiation.
Thus, from Eqs. (1) & (2),
$\mu= \alpha E_0 \sin 2\pi \nu t$ …..(3)
Such an oscillating dipole emits radiation of its own oscillation with a frequency $\nu$, giving the Rayleigh scattered beam. If, however, the polarizability varies slightly with molecular vibration, we can write
$\alpha =\alpha_0 + \frac {d\alpha} {dq} q$ …..(4)
where the coordinate q describes the molecular vibration. We can also write q as:
$q=q_0 \sin 2\pi \nu_m t$ …..(5)
Where $q_0$ is the amplitude of the molecular vibration and $\nu_m$ is its (molecular) frequency. From Eqs. 4 & 5, we have
$\alpha =\alpha_0 + \frac {d\alpha} {dq} q_0 \sin 2\pi \nu_m t$ …..(6)
Substituting for $\alpha$ in (3), we have
$\mu= \alpha_0 E_0 \sin 2\pi \nu t + \frac {d\alpha}{dq} q_0 E_0 \sin 2\pi \nu t \sin 2\pi \nu_m t$ …….(7)
Making use of the trigonometric relation $\sin x \sin y = \frac{1}{2} [\cos (x-y) -\cos (x+y) ]$ this equation reduces to:
$\mu= \alpha_0 E_0 \sin 2\pi \nu t + \frac {1}{2} \frac {d\alpha}{dq} q_0 E_0 [\cos 2\pi (\nu - \nu_m) t - \cos 2\pi (\nu+\nu_m) t]$ ……(8)
Thus, we find that the oscillating dipole has three distinct frequency components:
1. The exciting frequency $\nu$ with amplitude $\alpha_0 E_0$,
2. $\nu - \nu_m$ and
3. $\nu + \nu_m$, the latter two with the very small amplitude $\frac {1}{2} \frac {d\alpha}{dq} q_0 E_0$.

Hence, the Raman spectrum of a vibrating molecule consists of a relatively intense band at the incident frequency and two very weak bands at frequencies slightly above and below that of the intense band.
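As a quick symbolic check of the step from Eq. (7) to Eq. (8) (a sketch in sympy; the symbol names are my own):

```python
import sympy as sp

t, nu, nu_m, E0, q0, a0, da = sp.symbols('t nu nu_m E_0 q_0 alpha_0 dalpha', positive=True)
A, B = 2 * sp.pi * nu * t, 2 * sp.pi * nu_m * t

eq7 = (a0 + da * q0 * sp.sin(B)) * E0 * sp.sin(A)
eq8 = a0 * E0 * sp.sin(A) + sp.Rational(1, 2) * da * q0 * E0 * (sp.cos(A - B) - sp.cos(A + B))
print(sp.simplify(sp.expand_trig(eq7 - eq8)))   # prints 0, so the two forms agree
```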
If, however, the molecular vibration does not change the polarizability of the molecule then $(d\alpha / dq )=0$ so that the dipole oscillates only at the frequency of the incident (exciting) radiation. The same is true for the molecular rotation. We conclude that for a molecular vibration or rotation to be active in the Raman Spectrum, it must cause a change in the molecular polarizability, i.e., $d\alpha/dq \ne 0$ …….(9)
Homonuclear diatomic molecules such as $\mathbf {H_2 \, N_2 \, O_2}$ which do not show IR Spectra since they don’t possess a permanent dipole moment, do show Raman spectra since their vibration is accompanied by a change in polarizability of the molecule. As a consequence of the change in polarizability, there occurs a change in the induced dipole moment at the vibrational frequency.
###### Related Articles
• Raman Effect- Raman Spectroscopy- Raman Scattering (wpgaurav.wordpress.com)
## Raman Effect- Raman Spectroscopy- Raman Scattering
Tuesday, February 1st, 2011 06:01 / 4 Comments
In contrast to other conventional branches of spectroscopy, Raman spectroscopy deals with the scattering of light and not with its absorption.
# Raman Effect
Raman Effect: An Overview
Chandrasekhar Venkat Raman discovered in 1928 that if light of a definite frequency is passed through any substance in gaseous, liquid or solid state, the light scattered at right angles contains radiations not only of the original frequency (Rayleigh Scattering) but also of some other frequencies which are generally lower but occasionally higher than the frequency of the incident light.
The phenomenon of scattering of light by a substance when the frequencies of radiations scattered at right angles are different (generally lower and only occasionally higher) from the frequency of the incident light, is known as Raman Scattering or Raman effect.
The lines of lower frequencies are known as Stokes lines while those of higher frequencies are called anti-Stokes lines.
If f is the frequency of the incident light & f’ that of a particular line in the scattered spectrum, then the difference f-f’ is known as the Raman Frequency. This frequency is independent of the frequency of the incident light. It is constant and is characteristic of the substance exposed to the incident light.
A striking feature of Raman Scattering is that Raman Frequencies are identical, within the limits of experimental error, with those obtained from rotation-vibration (infrared) spectra of the substance.
Here is a home made video explaining the Raman Scattering of Yellow light:
And here is another video guide for Raman Scattering:
## Advantage of Raman Effects
• Raman Spectroscopy can be used not only for gases but also for liquids & solids for which the infrared spectra are so diffuse as to be of little quantitative value.
• Raman Effect is exhibited not only by polar molecules but also by non-polar molecules such as O2, N2, Cl2 etc.
• The rotation-vibration changes in non-polar molecules can be observed only by Raman Spectroscopy.
• The most important advantage of Raman Spectra is that it involves measurement of frequencies of scattered radiations, which are only slightly different from the frequencies of incident radiations. Thus, by appropriate choice of the incident radiations, the scattered spectral lines are brought into a convenient region of the spectrum, generally in the visible region where they are easily observed. The measurement of the corresponding infrared spectra is much more difficult.
• It uses visible or ultraviolet radiation rather than infrared radiation.
### Uses
• Investigation of biological systems such as the polypeptides and the proteins in aqueous solution.
• Determination of structures of molecules.
RAMAN was awarded the 1930 Physics Nobel Prize for this.
# Classical Theory of Raman Effect
The derivation is the same polarizability argument given in full in the post "Classical Theory of Raman Scattering" above: the induced dipole $\mu = \alpha E$ of a vibrating molecule oscillates at the three frequencies $\nu$, $\nu - \nu_m$ and $\nu + \nu_m$, and a molecular vibration or rotation is Raman active only if it changes the polarizability, i.e., $d\alpha/dq \ne 0$.
REFERENCE: Puri, Sharma & Pathania, Principles in Physical Chemistry, 7th edition.
http://stats.stackexchange.com/questions/7357/manually-calculated-r2-doesnt-match-up-with-randomforest-r2-for-testing | # Manually calculated $R^2$ doesn't match up with randomForest() $R^2$ for testing new data
I know this is a fairly specific `R` question, but I may be thinking about proportion variance explained, $R^2$, incorrectly. Here goes.
I'm trying to use the `R` package `randomForest`. I have some training data and testing data. When I fit a random forest model, the `randomForest` function allows you to input new testing data to test. It then tells you the percentage of variance explained in this new data. When I look at this, I get one number.
When I use the `predict()` function to predict the outcome value of the testing data based on the model fit from the training data, and I take the squared correlation coefficient between these values and the actual outcome values for the testing data, I get a different number. These values don't match up.
Here's some `R` code to demonstrate the problem.
````# use the built in iris data
data(iris)
#load the randomForest library
library(randomForest)
# split the data into training and testing sets
index <- 1:nrow(iris)
trainindex <- sample(index, trunc(length(index)/2))
trainset <- iris[trainindex, ]
testset <- iris[-trainindex, ]
# fit a model to the training set (column 1, Sepal.Length, will be the outcome)
set.seed(42)
model <- randomForest(x=trainset[ ,-1],y=trainset[ ,1])
# predict values for the testing set (the first column is the outcome, leave it out)
predicted <- predict(model, testset[ ,-1])
# what's the squared correlation coefficient between predicted and actual values?
cor(predicted, testset[, 1])^2
# now, refit the model using built-in x.test and y.test
set.seed(42)
randomForest(x=trainset[ ,-1], y=trainset[ ,1], xtest=testset[ ,-1], ytest=testset[ ,1])
````
## 1 Answer
The reason that the $R^2$ values are not matching is because `randomForest` is reporting variation explained as opposed to variance explained. I think this is a common misunderstanding about $R^2$ that is perpetuated in textbooks. I even mentioned this on another thread the other day. If you want an example, see the (otherwise quite good) textbook Seber and Lee, Linear Regression Analysis, 2nd. ed.
A general definition for $R^2$ is $$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} .$$
That is, we compute the mean-squared error, divide it by the variance of the original observations and then subtract this from one. (Note that if your predictions are really bad, this value can go negative.)
Now, what happens with linear regression (with an intercept term!) is that the average value of the $\hat{y}_i$'s matches $\bar{y}$. Furthermore, the residual vector $y - \hat{y}$ is orthogonal to the vector of fitted values $\hat{y}$. When you put these two things together, then the definition reduces to the one that is more commonly encountered, i.e., $$R^2_{\mathrm{LR}} = \mathrm{Corr}(y,\hat{y})^2 .$$ (I've used the subscripts $\mathrm{LR}$ in $R^2_{\mathrm{LR}}$ to indicate linear regression.)
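As a quick sanity check of that claim (using the `trainset` built in the question's code; any data frame with a numeric response would do), a linear model with an intercept gives the same number under both definitions:

````
fit  <- lm(Sepal.Length ~ ., data = trainset)
y    <- trainset$Sepal.Length
yhat <- fitted(fit)
1 - sum((y - yhat)^2) / sum((y - mean(y))^2)  # "variation explained"
cor(y, yhat)^2                                # squared correlation -- identical here
````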
The `randomForest` call is using the first definition, so if you do
```` > y <- testset[,1]
> 1 - sum((y-predicted)^2)/sum((y-mean(y))^2)
````
you'll see that the answers match.
+1, great answer. I always wondered why the original formula is used for $R^2$ instead of square of correlation. For linear regression it is the same, but when applied to other contexts it is always confusing. – mpiktas Feb 18 '11 at 8:22
(+1) Very elegant response, indeed. – chl♦ Feb 18 '11 at 9:14
@mpiktas, @chl, I'll try to expand on this a little more later today. Basically, there's a close (but, perhaps, slightly hidden) connection to hypothesis testing in the background. Even in a linear regression setting, if the constant vector is not in the column space of the design matrix, then the "correlation" definition will fail. – cardinal Feb 18 '11 at 13:26
thanks! this really helps! – Stephen Turner Feb 18 '11 at 20:10
If you have a reference other than the Seber/Lee textbook (not accessible to me) I would love to see a good explanation of how variation explained (i.e. 1-SSerr/SStot) differs from the squared correlation coefficient, or variance explained. Thanks again for the tip. – Stephen Turner Feb 18 '11 at 21:25
http://math.stackexchange.com/questions/283235/about-henselization | # About Henselization
I have some questions about the Henselization of a valued field. If $(K_{1}, \nu_{1})$ is a Henselization of a valued field $(K, \nu)$, which of the following is true?
1. $K_{1}/K$ is an algebraic extension.
2. $K_{1}/K$ is a transcendental extension.
If $[K(a) : K] = n$, then is $[K_{1}(a): K] = n$ or $< n$? Thanks
Do you already know for some reason that only one or the other can occur? If so, then think of the special case where $K$ is already Henselian. – Matt Jan 21 at 7:14
@ Matt, Thanks. I am just reading about the Henselization and I have no idea about those. – Rajesh Jan 21 at 16:28
## 1 Answer
A henselization is a subextension of a separable closure $K^s$ of $K$, so it is always algebraic (extend the valuation $\nu$ to some valuation $\nu_s$ on $K^s$ and take the invariants by the decomposition group at $\nu_s$; you should find this in Engler & Prestel).
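For a concrete picture (a standard example, stated here just as an illustration): take $K=\mathbb{Q}$ with the $p$-adic valuation $v_p$. Extending $v_p$ to $\overline{\mathbb{Q}}$ via an embedding $\overline{\mathbb{Q}}\hookrightarrow\overline{\mathbb{Q}}_p$, the henselization can be identified with the field of algebraic $p$-adic numbers,
$$\mathbb{Q}^{h} \;\cong\; \overline{\mathbb{Q}}\cap\mathbb{Q}_p,$$
an infinite but algebraic extension of $\mathbb{Q}$; in particular it is not transcendental over $\mathbb{Q}$.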
Your second question doesn't make sense. You probably meant $[K_1(a): K_1]$. Its degree is always at most $n$ (for any field extension $K_1/K$); it can be $1$ if $a\in K_1$, but it can also be $n$.
-
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_2&diff=30407&oldid=30406

# User:Michiexile/MATH198/Lecture 2
### 1 Morphisms and objects
Some morphisms and some objects are special enough to garner special names that we will use regularly.
In morphisms, the important properties are
• cancellability - the categorical notion corresponding to properties we use when solving, e.g., equations over $\mathbb N$:
$3x = 3y \Rightarrow x = y$
• existence of inverses - which is stronger than cancellability. If there are inverses around, this implies cancellability, by applying the inverse to remove the common factor. Cancellability, however, does not imply that inverses exist: we can cancel the 3 above, but this does not imply the existence of $1/3\in\mathbb N$.
Thus, we'll talk about isomorphisms - which have two-sided inverses, monomorphisms and epimorphisms - which have cancellability properties, and split morphisms - which are mono's and epi's with corresponding one-sided inverses. We'll talk about how these concepts - defined in terms of equation solving with arrows - apply to more familiar situations. And we'll talk about how the semantics of some of the more well-known ideas in mathematics are captured by these notions.
For objects, the properties are interesting in what happens to homsets with the special object as source or target. An empty homset is pretty boring, and a large homset is pretty boring. The real power, we find, is when all homsets with the specific source or target are singleton sets. This allows us to formulate the idea of a 0 in categorical terms, as well as capturing the roles of the empty set and of elements of sets - all using only arrows.
### 2 Isomorphisms
An arrow $f:A\to B$ in a category C is an isomorphism if it has a two-sided inverse $g$. In other words, we require the existence of a $g:B\to A$ such that $fg = 1_B$ and $gf = 1_A$.
#### 2.1 In Set
In a category of sets with structure with morphisms given by functions that respect the set structure, isomorphisms are bijections respecting the structure. In the category of sets, the isomorphisms are bijections.
#### 2.2 Representative subcategories
Very many mathematical properties and invariants are interesting because they hold for objects regardless of how, exactly, the object is built. As an example, most set theoretical properties are concerned with how large the set is, but not what the elements really are.
If all we care about are our objects up to isomorphisms, and how they relate to each other - we might as well restrict ourselves to one object for each isomorphism class of objects.
Doing this, we get a representative subcategory: a subcategory such that every object of the supercategory is isomorphic to some object in the subcategory.
The representative subcategory ends up being a more categorically interesting concept than the idea of a wide subcategory: it doesn't hit every object in the category, but it hits every object worth hitting in order to capture all the structure.
Example The category of finite sets has a representative subcategory given by all sets $[n]=\{1,\ldots,n\}$.
#### 2.3 Groupoids
A groupoid is a category where all morphisms are isomorphisms. The name originates in that a groupoid with one object is a bona fide group; so that groupoids are the closest equivalent, in one sense, of groups as categories.
### 3 Monomorphisms
We say that an arrow f is left cancellable if for any arrows g1,g2 we can show $fg_1 = fg_2 \Rightarrow g_1=g_2$. In other words, it is left cancellable, if we can remove it from the far left of any equation involving arrows.
We call a left cancellable arrow in a category a monomorphism.
#### 3.1 In Set
Left cancellability means that if, when we do first g1 and then f we get the same as when we do first g2 and then f, then we had equality already before we followed with f.
In other words, when we work with functions on sets, f doesn't introduce relations that weren't already there. Anything non-equal before we apply f remains non-equal in the image. This, translated to formulae gives us the well-known form for injectivity:
$x\neq y\Rightarrow f(x)\neq f(y)$ or moving out the negations,
$f(x)=f(y) \Rightarrow x=y$.
#### 3.2 Subobjects
Consider the subset $\{1,2\}\subset\{1,2,3\}$. This is the image of an accordingly chosen injective map from any 2-element set into {1,2,3}. Thus, if we want to translate the idea of a subset into categorical language, it is not enough to talk about monomorphisms, though the fact that the inclusion is an injection indicates that we are on the right track.
The trouble that remains is that we do not want to view {1,2} as different subsets when it occurs as an image of the 2-element set {1,2} or when it occurs as an image of the 2-element set {5,6}. So we need some way of figuring out how to catch these situations and parry for them.
We'll say that a morphism f factors through a morphism g if there is some morphism h such that f = gh.
We can also talk about a morphism $f:A\to C$ factoring through an object B by requiring the existence of morphisms $g:A\to B, h:B\to C$ that compose to f.
Now, we can form an equivalence relation on monomorphisms into an object A, by saying $f\sim g$ if f factors through g and g factors through f. The arrows implied by the factoring are inverse to each other, and the source objects of equivalent arrows are isomorphic.
Equipped with this equivalence relation, we define a subobject of an object A to be an equivalence class of monomorphisms.
### 4 Epimorphisms
Right cancellability, by duality, is the implication
$g_1f = g_2f \Rightarrow g_1 = g_2$
The name, here comes from that we can remove the right cancellable f from the right of any equation it is involved in.
A right cancellable arrow in a category is an epimorphism.
#### 4.1 In Set
For epimorphisms the interpretation in set functions is that whatever f does, it doesn't hide any part of the things g1 and g2 do. So applying f first doesn't influence the total available scope g1 and g2 have.
### 5 More on factoring
In Set, and in many other categories, any morphism can be expressed by a factorization of the form f = ip where i is a monomorphism and p is an epimorphism. For instance, in Set, we know that a function is surjective onto its image, which in turn is a subset of the codomain, giving a factorization into an epimorphism - the projection onto the image - followed by a monomorphism - the inclusion of the image into the codomain.
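For finite sets this factorization is easy to compute explicitly. Here is a small Python sketch (Python rather than Haskell, purely as an illustration; the helper name is made up):

```python
# Factor a finite-set function f : A -> C as f = i . p, with p an epimorphism
# (the projection of A onto the image) and i a monomorphism (the inclusion of
# the image into the codomain C).
def epi_mono_factor(f, A):
    image = sorted({f(a) for a in A})        # the intermediate object B
    p = {a: f(a) for a in A}                 # A -> B, surjective by construction
    i = {b: b for b in image}                # B -> C, injective by construction
    return p, i, image

A = range(6)
f = lambda n: n % 3
p, i, B = epi_mono_factor(f, A)
assert all(i[p[a]] == f(a) for a in A)       # f = i . p
```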
Note that in Set, every morphism that is both a mono and an epi is immediately an isomorphism. We shall see in the homework that this implication does not hold in every category.
### 6 Initial and Terminal objects
An object 0 is initial if for every other object C, there is a unique morphism $0\to C$. Dually, an object 1 is terminal if there is a unique morphism $C\to 1$.
First off, we note that the uniqueness above makes initial and terminal objects unique up to isomorphism whenever they exist: we shall perform the proof for one of the cases, the other is almost identical.
Proposition Initial (terminal) objects are unique up to isomorphism.
Proof: Suppose C and C' are both initial (terminal). Then there is a unique arrow $C\to C'$ and a unique arrow $C'\to C$. The compositions of these arrows are all endoarrows of one or the other. Since all arrows from (to) an initial (terminal) object are unique, these compositions have to be the identity arrows. Hence the arrows we found between the two objects are isomorphisms. QED.
• In Sets, the empty set is initial, and any singleton set is terminal.
• In the category of Vector spaces, the single element vector space 0 is both initial and terminal.
#### 6.1 Zero objects
This last example is worth taking up in higher detail. We call an object in a category a zero object if it is simultaneously initial and terminal.
Some categories exhibit a richness of structure similar to the category of vector spaces: all kernels exist (null spaces), homsets are themselves abelian groups (or even vector spaces), etc. With the correct amount of richness, the category is called an Abelian category, and forms the basis for homological algebra, where techniques from topology are introduced to study algebraic objects.
One of the core requirements for an Abelian category is the existence of zero objects in it: if a category does have a zero object 0, then for any Hom(A,B), the composite $A\to 0\to B$ is a uniquely determined member of the homset, and the addition on the homsets of an Abelian category has this particular morphism as its identity element.
#### 6.2 Pointless sets and generalized elements
Arrows to initial objects and from terminal objects are interesting too - and as opposed to the arrows from initial and to the terminals, there is no guarantee for these arrows to be uniquely determined. Let us start with arrows $A\to 0$ into initial objects.
In the category of sets, such an arrow only exists if A is already the empty set.
In the category of all monoids, with monoid homomorphisms, we have a zero object, so such an arrow is uniquely determined.
For arrows $1\to A$, however, the situation is significantly more interesting. Let us start with the situation in Set. 1 is some singleton set, hence a function from 1 picks out one element as its image. Thus, at least in Set, we get an isomorphism of sets A = Hom(1,A).
As with so much else here, we build up a general definition by analogy to what we see happening in the category of sets. Thus, we shall say that a global element, or a point, or a constant of an object A in a category with terminal objects is a morphism $x:1\to A$.
This allows us to talk about elements without requiring our objects to even be sets to begin with, and thus reduces everything to a matter of just morphisms. This approach is fruitful both in topology and in Haskell, and is sometimes called pointless.
The important point here is that we can replace function application f(x) by the already existing and studied function composition. If a constant x is just a morphism $x:1\to A$, then the value f(x) is just the composition $f\circ x:1\to A\to B$. Note, also, that since 1 is terminal, it has exactly one point.
In the idealized Haskell category, we have the same phenomenon for constants, but slightly disguised: a global constant is a 0-ary function. Thus the type declaration
`x :: a`
can be understood as syntactic sugar for the type declaration
`x :: () -> a`
thus reducing everything to function types.
Similarly to the global elements, it may be useful to talk about variable elements, by which we mean non-specified arrows $f:T\to A$. Allowing T to range over all objects, and f to range over all morphisms into A, we are able to recover some of the element-centered styles of arguments we are used to. We say that f is parametrized over T.
Using this, it turns out that f is a monomorphism if for any variable elements $x,y:T\to A$, if $x\neq y$ then $f\circ x\neq f\circ y$.
### 7 Internal and external hom
If $f:B\to C$, then f induces a set function $Hom(A,f):Hom(A,B)\to Hom(A,C)$ through $Hom(A,f)(g) = f\circ g$. Similarly, it induces a set function $Hom(f,A):Hom(C,A)\to Hom(B,A)$ through $Hom(f,A)(g) = g\circ f$.
Using this, we have an occasionally enlightening
Proposition An arrow $f:B\to C$ is
1. a monomorphism if and only if Hom(A,f) is injective for every object A.
2. an epimorphism if and only if Hom(f,A) is injective for every object A.
3. a split monomorphism if and only if Hom(f,A) is surjective for every object A.
4. a split epimorphism if and only if Hom(A,f) is surjective for every object A.
5. an isomorphism if and only if any one of the following equivalent conditions hold:
1. it is both a split epi and a mono.
2. it is both an epi and a split mono.
3. Hom(A,f) is bijective for every A.
4. Hom(f,A) is bijective for every A.
For any A,B in a category, the homset is a set of morphisms between the objects. For many categories, though, homsets may end up being objects of that category as well.
As an example, the set of all linear maps between two fixed vector spaces is itself a vector space.
Alternatively, the function type
`a -> b`
is an actual Haskell type, and captures the morphisms of the idealized Haskell category.
We shall return to this situation later, when we are better equipped to give a formal scaffolding to the idea of having elements in objects in a category act as morphisms. For now, we shall introduce the notations $[A\to B]$ or $B^A$ to denote the internal hom - where the morphisms between two objects live as an object of the category. This distinguishes $B^A$ from Hom(A,B).
To gain a better understanding of the choice of notation, it is worth noting that $|\mathrm{Hom}_{\mathbf{Set}}(A,B)| = |B|^{|A|}$.
### 8 Homework
Passing mark requires at least 4 of 11.
1. Suppose g,h are two-sided inverses to f. Prove that g = h.
2. (requires some familiarity with analysis) There is a category with object $\mathbb R$ (or even all smooth manifolds) and with morphisms smooth (infinitely differentiable) functions. Prove that being a bijection does not imply being an isomorphism. Hint: What about $x\mapsto x^3?$.
3. (try to do this if you don't do 2) In the category of posets, with order-preserving maps as morphisms, show that not all bijective homomorphisms are isomorphisms.
4. Consider the partially ordered set P as a category. Prove: every arrow is both monic and epic. Is every arrow thus an isomorphism?
5. What are the terminal and initial objects in a poset? Give an example of a poset that has both, either and none. Give an example of a poset that has a zero object.
6. What are the terminal and initial objects in the category with objects graphs and morphisms graph homomorphisms?
7. Prove that if a category has one zero object, then all initial and all terminal objects are all isomorphic and they are all zero objects.
8. Prove that the composition of two monomorphisms is a monomorphism and that the composition of two epimorphisms is an epimorphism. If $g\circ f$ is monic, do any of g,f have to be monic? If the composition is epic, do any of the factors have to be epic?
9. Verify that the equivalence relation used in defining subobjects really is an equivalence relation. Further verify that this fixes the motivating problem.
10. Describe a representative subcategory each of:
• The category of vector spaces over the reals.
• The category formed by the preordered set of the integers $\mathbb Z$ and the order relation $a\leq b$ if a | b. Recall that a preordered set is a set P equipped with a relation $\leq$ that fulfills transitivity and reflexivity, but not necessarily anti-symmetry.
11. * An arrow $f:A\to A$ in a category C is an idempotent if $f\circ f = f$. We say that f is a split idempotent if there is some $g:A\to B, h:B\to A$ such that $h\circ g=f$ and $g\circ h=1_B$. Show that in Set, f is idempotent if and only if its image equals its set of fixed points. Show that every idempotent in Set is split. Give an example of a category with a non-split idempotent.
http://math.stackexchange.com/questions/299391/rotating-and-making-two-lines-parallel

# Rotating and making two lines parallel
I have two line segments with points:
Line 1 is a line through $(x_1,y_1)$ and $(x_2,y_2)$ (smaller line)
Line 2 is a line through $(x_3,y_3)$ and $(x_4,y_4)$ (bigger line)
How can I rotate Line 1 (the smaller one) to make it parallel to Line 2 (the bigger one), using either:
1. $(x_1,y_1)$ as fixed point of rotation or
2. $(x_2,y_2)$ as fixed point of rotation or
3. center point as fixed point of rotation
Crossposted from StackOverflow.
-
– Sigur Feb 10 at 14:16
Code details and implementations are off-topic here, so if you want that, you should try stackoverflow or programmers.se. If you want the math of the rotations, then I think the question should be on-topic once you remove the reference to the programming language. – Paresh Feb 10 at 14:53
Removed programming reference – Ankit Sharma Feb 10 at 17:15
Smaller and bigger are not appropriate terms. – julien Feb 10 at 17:17
The length of line 2 is greater than the length of line 1 – Ankit Sharma Feb 11 at 14:04
## 1 Answer
You can get the angle of each line using $\text{Atan2}(x_2-x_1,y_2-y_1)$ and similar. Note that the result is radians, but you probably don't need to worry about that. Taking the difference of the angles will show how much you have to rotate, call it $\theta$. To rotate $\theta$ around $(x_1,y_1)$ you have $$x'=(x-x_1) \cos \theta+(y-y_1) \sin \theta+x_1\\ y'=(y-y_1) \cos \theta - (x-x_1) \sin \theta + y_1$$
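A small Python sketch of the same recipe (illustrative only; note that Python's `math.atan2` takes its arguments as `(dy, dx)`, and the rotation below uses the standard counter-clockwise convention, so the sign bookkeeping differs slightly from the formulas above):

```python
import math

def rotate_to_parallel(p1, p2, q1, q2, pivot):
    # Rotate segment p1-p2 about `pivot` so that it becomes parallel to q1-q2.
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])   # angle of line 1
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])   # angle of line 2
    theta = a2 - a1                                  # how much to rotate
    c, s = math.cos(theta), math.sin(theta)

    def rot(pt):
        x, y = pt[0] - pivot[0], pt[1] - pivot[1]
        return (x * c - y * s + pivot[0],            # counter-clockwise rotation
                x * s + y * c + pivot[1])

    return rot(p1), rot(p2)

# pivot may be p1, p2, or the midpoint ((x1 + x2) / 2, (y1 + y2) / 2)
```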
-
What are $x$ and $y$ in the above equations? – Ankit Sharma Feb 11 at 4:58
$x$ and $y$ are any point you want to transform. You might not have a line segment but some more complicated shape. These equations work for any point, given that $(x_1,y_1)$ is the center of rotation. – Ross Millikan Feb 11 at 5:14
– Ankit Sharma Feb 11 at 13:59
@AnkitSharma: to my eye it looks like it worked if you were trying to rotate around the center of the short line. You can check that the slopes of the two lines agree: if they aren't vertical, see if $\frac{y_4-y_3}{x_4-x_3}=\frac{y_2-y_1}{x_2-x_1}$ to within numeric precision. – Ross Millikan Feb 11 at 14:08
I'm checking the slopes of the lines only, but getting -0.7777778 and 0.999999762 :( – Ankit Sharma Feb 11 at 14:12
http://math.stackexchange.com/questions/253843/inequality-for-random-walk-with-bounded-random-variables

# Inequality for random walk with bounded random variables
Let's consider the random variables $\xi_i$, independent and identically distributed. These variables take values in the finite set of integers $S=\{-s, -s+1, \ldots, 0, \ldots, s-1, s\}$ and are distributed according to a certain distribution $F$. Let's call $E \xi$ the expected value of each random variable.
Let's define the random variable $i(t) = \sum_{i=1}^t \xi_i$. I want to prove that, for every constant $\delta$ and for every $m$ s.t. $0<m<s/2$, there exists a constant $u<1$ such that the following inequality holds FOR EVERY $t \in [0, \infty]$,
$$i(t) - t \, \, E\xi \, \, \leq \, \, t \, \, \delta + m$$ with probability larger than $1 - u^m$.
General idea: for the law of large numbers (in the strong sense) the inequality will hold with probability 1 for all $t$ larger than a sufficiently big constant $T$, dependent on $\delta$. But how to prove that it will hold with probability larger than $1 - u^m$ for ALL times $0 \leq t \leq T$?
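This is not a proof, but a quick Monte Carlo sketch (Python, with an arbitrary bounded step distribution and a finite time horizon standing in for the supremum over all $t$) does suggest the geometric decay in $m$ that a large-deviation or maximal-inequality argument would give:

```python
# Estimate P( sup_t [ i(t) - t*E[xi] - t*delta ] >= m ) by simulation.
import numpy as np

rng = np.random.default_rng(1)
s, delta, T, trials = 3, 0.1, 2000, 5000
support = np.arange(-s, s + 1)
probs = np.full(len(support), 1 / len(support))   # uniform on {-s,...,s}
mean = support @ probs

for m in (1, 2, 4, 8):
    steps = rng.choice(support, size=(trials, T), p=probs)
    walk = np.cumsum(steps, axis=1)
    t = np.arange(1, T + 1)
    excess = walk - t * mean - t * delta          # i(t) - t*E[xi] - t*delta
    print(m, np.mean(excess.max(axis=1) >= m))    # decays roughly like u^m
```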
-
This is a (basic) large deviations result. // How come your accept rate is 0%? – Did Jan 21 at 11:42
http://crypto.stackexchange.com/questions/6385/file-encryption-with-one-keypair/6386

# File encryption with one keypair?
I'm working on a program that uses an ECC keypair in a (password protected) PKCS12 file (.pfx) to encrypt files. I like this method because I think it will be higher security (using ECDH to negotiate the key) and more easy to use (one PIN for all files). How I'm going about this is by using the public and private keys of the same certificate to negotiate a secret using ECDH. A SHA-256 of this secret is used every time to encrypt a random generated key just for that file. The program outputs file.enc, kek.iv, kek.enc, file.iv all in a zip for the encryption portion of the program. file.enc is the ciphertext of the original file, kek.iv is the IV used to encrypt the file-specific key (session key if you will). kek.enc is the ciphertext of the file-specific key that has been encrypted with the shared secret. And by "I'm working on this program", I mean I already wrote the program but it's not working and I'm starting to question the fundamental logic.
My question is, is it even possible to get the same ECDH secret every time between the same public/private key pair so that it can be used to encrypt files? If not, how can I achieve the goal of encrypting files with one keypair?
Btw I'm trying to implement Suite B so I'm using ECC(P-521) keys for ECDH and AES-256/GCM with BouncyCastle
-
## 3 Answers
ECIES (the Elliptic Curve Integrated Encryption Scheme) should be used for encrypting files and data. ECC(P-521) is overkill; P-256 is secure enough and more efficient.
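For what it's worth, here is a rough sketch of the ephemeral-static ECDH, then KDF, then AES-GCM pattern that ECIES-style schemes follow, written with the pyca/cryptography package (recent versions) rather than BouncyCastle; the KDF parameters, info string and packaging are illustrative assumptions, not a vetted design:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def ecies_encrypt(recipient_public_key, plaintext: bytes):
    eph = ec.generate_private_key(ec.SECP256R1())            # fresh ephemeral key per message
    shared = eph.exchange(ec.ECDH(), recipient_public_key)   # raw ECDH secret
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"file-encryption-demo").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # ship the ephemeral public key, nonce and ciphertext to the recipient
    return eph.public_key(), nonce, ciphertext
```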
-
Looking through the specs on the algorithm, there is a message to be encrypted and decrypted so that wouldn't be the entire file but rather the regular symmetric AES key for encrypting the file, correct? In other words, the ECIES scheme only protects the key for regular AES file encryption? – Андрей Feb 19 at 21:16
It is not possible to get the same secret every time. You get a new random shared secret each time. The protocol (for a known generator $g \in G$, where $G$ and $g$ are chosen correctly) is: Alice computes $A=g^a$ and sends $A$ to Bob.
$$Alice \ g^a\xrightarrow{A} Bob$$ Then Bob computes $B=g^b$ and sends it to Alice. $$Alice \xleftarrow{B} B=g^b\ Bob$$ where $a$ and $b$ are chosen at random; Alice computes $B^a$ and Bob computes $A^b$, so the shared secret that comes out is $g^{ab}$. Because $a$ and $b$ are chosen randomly each time, you get a fresh secret.
Don't use your own file encryption. There are systems out there: GPG, TrueCrypt, or BitLocker if you are on Windows and are OK with closed-source crypto (which is still probably better than homegrown).
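To see the protocol mechanics concretely, here is a toy Python version over a tiny prime field (the numbers are far too small to be secure; this only illustrates the exponent bookkeeping). The freshness of the secret comes from picking new ephemeral exponents; with fixed, static keys on both sides the derived value is deterministic, which is the situation in the question:

```python
# Toy Diffie-Hellman over a tiny prime field -- illustration only, not secure.
import secrets

p, g = 23, 5                          # toy group parameters
a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent

A = pow(g, a, p)                      # Alice publishes A = g^a
B = pow(g, b, p)                      # Bob publishes B = g^b

assert pow(B, a, p) == pow(A, b, p)   # both sides compute the same g^(ab)
```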
-
– Андрей Mar 23 at 3:42
I don't recommend implementing your own file encryption program (unless you are already a crypto expert, and maybe not even then). There are many tricky details that are easy to get wrong. Enumerating all of them is beyond the scope of what can be answered in a single question here.
Instead, I recommend using something that has already been well-vetted, like GPG or Truecrypt or some other highly regarded scheme for file encryption.
-
http://jdh.hamkins.org/the-differential-operator-ddx-binds-variables/

# The differential operator $\frac{d}{dx}$ binds variables
Posted on December 5, 2012 by Joel David Hamkins
Recently the question "If $\frac{d}{dx}$ is an operator, on what does it operate?" was asked on mathoverflow. It seems that some users there objected to the question, apparently interpreting it as an elementary inquiry about what kind of thing is a differential operator, and on this interpretation, I would agree that the question would not be right for mathoverflow. And so the question was closed down (and then reopened, and then closed again….sigh). (Update 12/6/12: it was opened again, and so I've now posted my answer over there.)
Meanwhile, I find the question to be more interesting than that, and I believe that the OP intends the question in the way I am interpreting it, namely, as a logic question, a question about the nature of mathematical reference, about the connection between our mathematical symbols and the abstract mathematical objects to which we take them to refer. And specifically, about the curious form of variable binding that expressions involving $dx$ seem to involve. So let me write here the answer that I had intended to post on mathoverflow:
————————-
To my way of thinking, this is a serious question, and I am not really satisfied by the other answers and comments, which seem to answer a different question than the one that I find interesting here.
The problem is this. We want to regard $\frac{d}{dx}$ as an operator in the abstract senses mentioned by several of the other comments and answers. In the most elementary situation, it operates on a functions of a single real variable, returning another such function, the derivative. And the same for $\frac{d}{dt}$.
The problem is that, described this way, the operators $\frac{d}{dx}$ and $\frac{d}{dt}$ seem to be the same operator, namely, the operator that takes a function to its derivative, but nevertheless we cannot seem freely to substitute these symbols for one another in formal expressions. For example, if an instructor were to write $\frac{d}{dt}x^3=3x^2$, a student might object, “don't you mean $\frac{d}{dx}$?” and the instructor would likely reply, “Oh, yes, excuse me, I meant $\frac{d}{dx}x^3=3x^2$. The other expression would have a different meaning.”
But if they are the same operator, why don’t the two expressions have the same meaning? Why can’t we freely substitute different names for this operator and get the same result? What is going on with the logic of reference here?
The situation is that the operator $\frac{d}{dx}$ seems to make sense only when applied to functions whose independent variable is described by the symbol “x”. But this collides with the idea that what the function is at bottom has nothing to do with the way we represent it, with the particular symbols that we might use to express which function is meant. That is, the function is the abstract object (whether interpreted in set theory or category theory or whatever foundational theory), and is not connected in any intimate way with the symbol “$x$”. Surely the functions $x\mapsto x^3$ and $t\mapsto t^3$, with the same domain and codomain, are simply different ways of describing exactly the same function. So why can’t we seem to substitute them for one another in the formal expressions?
The answer is that the syntactic use of $\frac{d}{dx}$ in a formal expression involves a kind of binding of the variable $x$.
Consider the issue of collision of bound variables in first order logic: if $\varphi(x)$ is the assertion that $x$ is not maximal with respect to $\lt$, expressed by $\exists y\ x\lt y$, then $\varphi(y)$, the assertion that $y$ is not maximal, is not correctly described as the assertion $\exists y\ y\lt y$, which is what would be obtained by simply replacing the occurrence of $x$ in $\varphi(x)$ with the symbol $y$. For the intended meaning, we cannot simply syntactically replace the occurrence of $x$ with the symbol $y$, if that occurrence of $x$ falls under the scope of a quantifier.
Similarly, although the functions $x\mapsto x^3$ and $t\mapsto t^3$ are equal as functions of a real variable, we cannot simply syntactically substitute the expression $x^3$ for $t^3$ in $\frac{d}{dt}t^3$ to get $\frac{d}{dt}x^3$. One might even take the latter as a kind of ill-formed expression, without further explanation of how $x^3$ is to be taken as a function of $t$.
So the expression $\frac{d}{dx}$ causes a binding of the variable $x$, much like a quantifier might, and this prevents free substitution in just the way that collision does. But the case here is not quite the same as the way $x$ is a bound variable in $\int_0^1 x^3\ dx$, since $x$ remains free in $\frac{d}{dx}x^3$, but we would say that $\int_0^1 x^3\ dx$ has the same meaning as $\int_0^1 y^3\ dy$.
Of course, the issue evaporates if one uses a notation, such as the $\lambda$-calculus, which insists that one be completely explicit about which syntactic variables are to be regarded as the independent variables of a functional term, as in $\lambda x.x^3$, which means the function of the variable $x$ with value $x^3$. And this is how I take several of the other answers to the question, namely, that the use of the operator $\frac{d}{dx}$ indicates that one has previously indicated which of the arguments of the given function is to be regarded as $x$, and it is with respect to this argument that one is differentiating. In practice, this is almost always clear without much remark. For example, our use of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ seems to manage very well in complex situations, sometimes with dozens of variables running around, without adopting the onerous formalism of the $\lambda$-calculus, even if that formalism is what these solutions are essentially really about.
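Computer algebra systems make exactly this bookkeeping explicit: one must always name the symbol being bound. A small SymPy illustration (Python):

```python
import sympy as sp

x, t = sp.symbols('x t')

sp.diff(x**3, x)                 # 3*x**2   -- "d/dx x^3"
sp.diff(x**3, t)                 # 0        -- "d/dt x^3": x^3 is constant as a function of t

# Renaming a bound variable changes nothing:
sp.Integral(x**3, (x, 0, 1)).doit() == sp.Integral(t**3, (t, 0, 1)).doit()   # True

# And a bound occurrence can coexist with a free one, as in the comments below:
sp.integrate(x, (x, 0, x))       # x**2/2
```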
Meanwhile, it is easy to make examples where one must be very specific about which variables are the independent variable and which are not, as Todd mentions in his comment to David’s answer. For example, cases like
$$\frac{d}{dx}\int_0^x(t^2+x^3)dt\qquad \frac{d}{dt}\int_t^x(t^2+x^3)dt$$
are surely clarified for students by a discussion of the usage of variables in formal expressions and more specifically the issue of bound and free variables.
This entry was posted in Exposition and tagged mathoverflow by Joel David Hamkins.
## 8 thoughts on “The differential operator $\frac{d}{dx}$ binds variables”
1. on December 5, 2012 at 6:29 pm said:
The second integral you have written the lower limit as t – shouldn’t it be 0?
• Joel David Hamkins on December 5, 2012 at 6:48 pm said:
Well, I had wanted just to mix things up a bit, since that instance of $t$ is the only one relevant for this instance of $\frac{d}{dt}$.
2. on December 5, 2012 at 10:44 pm said:
I realized at some point that freshman calculus is full of these problems. One of the things that hammered this home for me was when I saw a thread somewhere (I have lost the link) where several people thought that $\int_0^x x\,dx$ is somehow misformed for using $x$ as the limit of integration and in the integrand.
• David Speyer on December 7, 2012 at 10:06 am said:
I do think that that is misformed. $\int_0^x x dt$ would be well formed (although likely to confuse undergraduates). But I don’t see how $x$ can both be the upper limit of integration and the variable of integration. Can you explain what this notation is meant to mean?
• Joel David Hamkins on December 7, 2012 at 12:43 pm said:
David, it is the same kind of thing as $n\cdot\Sigma_{n=0}^\infty \frac{1}{2^n}$, which is just $n\cdot 2$, or $2n$. The $n$ appearing in the sum is a bound variable, and has nothing to do with the $n$ in front. Such an expression might arise when you have a variable $n$ to be multiplied by a certain sum, which you had previously calculated using (a different copy of) the symbol $n$, resulting in just this expression. The expression is not really ill-formed, but the understanding of it is best understood by means of the concepts of free and bound variables, since the $n$ appearing in the summand is a bound variable. This kind of issue arises pervasively in the collision of local and global variables in programming—imagine that you used the variable $x$ in a subroutine, and also used $x$ with a different meaning in a deeper subroutine called by that routine. There are really degrees of globalness, for the scope is bounded usually by a quantifier or, in calculus, by a sum or integral sign.
In particular, I would write:
$$\int_0^x x\ dx\quad =\quad (x^2/2)\mid_0^x\quad = \quad x^2/2-0=x^2/2.$$
One just has to keep track of which $x$ is bound and which is free, and it is perfectly sensible (if ill-advised).
3. S. Carnahan on December 5, 2012 at 11:16 pm said:
The use of partial derivatives does in fact lead to problems when there are unannounced constraints (e.g., if we are actually working on some submanifold of the ambient space using an implicit function rule). It is common for students to first encounter this in a thermodynamics class, where one derives various relationships between energy, pressure, temperature, volume, entropy, and chemical potential, while holding some subset of the variables constant. I remember it being rather confusing, at least for my classmates and me. Your mention of lambda-calculus as a way to clear up ambiguities makes me curious about whether it can be fruitfully applied to physics education.
4. Samuel Alexander on December 8, 2012 at 4:12 pm said:
You might be interested in my paper, “The First-Order Syntax of Variadic Functions”, to appear in NDJFL, http://arxiv.org/pdf/1105.4135.pdf
It does not talk about differential operators specifically, but it deals with similar issues of variable-binding-in-terms (as opposed to in formulas). In particular (Re: David Speyer), a rigorous semantics justifies saying things like
$\sum_{n=0}^n n = n(n+1)/2$.
In addition to what DJH already wrote to David Speyer I'd add the following amusing example. Is the first-order sentence “forall x there exists x such that x=0” true, say, in the reals? The answer is Yes. One way to see this is to carefully apply the Tarskian definition of truth-in-a-model. You'll see it's entirely similar to the $\int_0^x x\,dx$ question.
By the way, I don’t know how standard this is, but Stewart’s “Calculus” has the following to say about differentials (additions in brackets are mine): “If y=f(x), where f is a differentiable function, then the differential dx is an independent variable; that is, dx can be given the value of any real number. The differential dy is then [a dependent variable] defined in terms of dx [and x] by the equation dy=f’(x)dx.” This actually has the potential to be made quite logically rigorous (defining not just “variables” as in FOL but “independent variables” and “dependent variables” which depend on independent variables in various ways. One could very compellingly argue that elementary calculus isn’t just about real numbers and functions, but also about formal terms in a certain sophisticated language.
• Joel David Hamkins on December 8, 2012 at 5:39 pm said:
Interesting! Thanks a lot…
http://physics.stackexchange.com/questions/11396/physics-math-without-sqrt-1

# Physics math without $\sqrt{-1}$
The use of imaginary and complex values comes up in many physics/engineering derivations.
My question is
Is it just making the process of derivation easier, or is it essential, in the sense that it would be impossible to derive some results without it?
1. It doesn't look like it is mandatory for Newtonian through general relativistic results, or for electrodynamics.
2. Can we say the same thing about quantum mechanics either way for sure?
Could this be a difference in quantum mechanics over the classical picture?
-
– Qmechanic♦ Jun 21 '11 at 16:06
Complex numbers are essential to quantum theory. You can go far with a pair of real numbers, but there's an abstract structure to complex numbers that just seems to be built in to reality. There's a paper or two by David Finkelstein in the 1960s that go into this (my notes are lost or hiding deeply). – DarenW Jun 22 '11 at 5:54
It bothers me to see sqrt(-1) as a definition of i; it's devoid of operational meaning. I'd rather put it as i^2 = -1. Then i looks like any operator that when applied twice makes some vector point the opposite way, for example, a 90 degree rotation. – DarenW Jun 22 '11 at 5:56
From a purely mathematical point of view there are many very difficult integrals that can be solved exactly by analytic extension into the complex plane and application of the method of residues. Such integrals appear in many fields of physics. – dmckee♦ Jun 22 '11 at 15:39
Very often, the shortest route between two real facts is a complex path. – Emilio Pisanty Nov 3 '12 at 2:10
## 3 Answers
The use of complex numbers is never really essential, but if applicable it is almost always more convenient than the equivalent representation in a 2d real vector space (in fact, one typically learns the formal properties of complex number manipulations by their effect on $(a,b) = a+ib.$)
You mention that complex numbers don't seem necessary for classical electrodynamics, and I agree -- however I can't imagine any clear-minded person forgoing their use. In fact it is in classical E&M that I think complex numbers really exhibit their gracefulness in the description of physical phenomena.
Likewise, as lurscher has mentioned, there are formulations of QM that avoid explicit reference to complex numbers -- they are equivalent mathematical representations, but the manipulations have an added degree of bookkeeping that we had already built into complex numbers.
And that's the rub. Complex numbers are a tool for describing a theory, not a property of the theory itself. Which is to say that they can not be the fundamental difference between classical and quantum mechanics. The real origin of the difference is the non-commutative nature of measurement in QM. Now this is a property that can be captured by all kinds of beasts -- even real-valued matrices.
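As a concrete version of that bookkeeping remark, here is a small NumPy sketch (illustration only) of the standard encoding $a+ib \leftrightarrow \begin{pmatrix} a & -b\\ b & a\end{pmatrix}$: complex arithmetic is reproduced exactly by real $2\times 2$ matrices, it is just carried by hand:

```python
# Complex numbers as 2x2 real matrices: a + i*b  <->  [[a, -b], [b, a]].
import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 1 + 2j, 3 - 1j
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))   # multiplication agrees
assert np.allclose(as_matrix(1j) @ as_matrix(1j), -np.eye(2))       # "i^2 = -1" as a 90-degree rotation
```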
-
also on the same note, you have all the other groups with a higher-than-scalar representation. complex numbers is just the first layer of abstraction you run into.. the groups express symmetries that seem to exist in nature but like you write, if you chose to call them complex numbers, U(1) or simply a system of linked scalar equations it's really up to you. its just a math engine trying to represent experiments. – Bjorn Wesen Jun 21 '11 at 20:03
I don't think it's fair to say that one can always excise complex numbers by implementing a higher degree of book-keeping. Whether one's formulation of QM is the ordinary one or has some two-component wavefunction with an operator with a square of negative identity, one is using the structure of complex numbers, which is the truly mathematically essential part. The rest is just notation. – Stan Liou Nov 3 '12 at 0:07
About 2: check this question about an alternative formalism for quantum mechanics, with equations in which only real probability densities and currents appear. The relevant Wikipedia article is the one about the Madelung equations.
I don't know of any attempts to extend the same to QFT. Since complex residues are the bread and butter of most Feynman loop diagrams, I doubt it would be easy, or rewarding.
-
I agree with @lurscher that it wouldn't be easy or rewarding, and also with what I think is his implied suggestion that it would still be theoretically possible. – Ben Hocking Jun 21 '11 at 17:57
Meromorphic functions of complex theory are just vector fields of point sources in vector calculus. The residue theorem is just Stokes' theorem. I don't imagine translating from complex numbers to a real, 2d vector space would be that big a deal. – Muphrid Nov 3 '12 at 4:02
Quantum mechanics necessarily needs complex numbers. Replacing complex numbers with real numbers is possible, but that would hide a lot of structure and is purely a mathematical trick.
The Feynman amplitude being $e^{i S}$, or the commutation relations, indicates that something deep is going on that can't be understood by treating complex numbers as mere pairs of real numbers.
Feynman used to talk about quantum mechanics as a complex extension to classical probability theory.
see
spacetime approach to non-relativistic quantum mechanics and
The Concept of Probability in Quantum Mechanics ( http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.bsmsp/1200500252&page=record)
-
http://mathoverflow.net/revisions/58569/list
6 tag
5 corrected the statement of the question
The following question came up in my arithmetic geometry course yesterday. Suppose $\alpha$ is an irrational real algebraic integer, and suppose $\epsilon >0$ is given. Then by Roth's theorem there are at most finitely many rational numbers $\frac{h}{q}$ with $\gcd(h,q)=1$, $q>1$, such that $$\left| \alpha - \frac{h}{q}\right| < \frac{1}{q^{2+\epsilon}}.$$ Are there any results on how large such $q$ can be? Thanks.
4 Added $q>1$ to the statement of the question.
The following question came up in my arithmetic geometry course yesterday. Suppose $\alpha$ is an irrational real algebraic integer, and suppose $\epsilon >0$ is given. Then by Roth's theorem there are at most finitely many rational numbers $\frac{h}{q}$ with $\gcd(h,q)=1$, $q>1$, such that $$\left| \alpha - \frac{h}{q}\right| < \frac{1}{q^{2+\epsilon}}.$$ Are there any results on how small the smallest such $q$ can be? Thanks.
3 edited tags
2 added 1 characters in body
The following question came up in my arithmetic geometry course yesterday. Suppose $\alpha$ is a real algebraic integer, and suppose $\epsilon >0$ is given. Then by Roth's theorem there are at most finitely many rational numbers $\frac{h}{q}$ with $\gcd(h,q)=1$, $q>0$, such that $$\left| \alpha - \frac{h}{q}\right| < \frac{1}{q^{2+\epsilon}}.$$ Are there any results on how small the smallest such $q$ can be? Thanks.
1
Question related to Diophantine approximations and Roth's theorem
The following question came up in my arithmetic geometry course yesterday. Suppose $\alpha$ is a real algebraic number, and suppose $\epsilon >0$ is given. Then by Roth's theorem there are at most finitely many rational numbers $\frac{h}{q}$ with $\gcd(h,q)=1$, $q>0$, such that $$\left| \alpha - \frac{h}{q}\right| < \frac{1}{q^{2+\epsilon}}.$$ Are there any results on how small the smallest such $q$ can be? Thanks.