http://mathhelpforum.com/calculus/15662-find-equation-curve.html
Thread: 1. Find an equation of the curve. Find an equation of the curve that satisfies $dy/dx=8x^7y$ and whose y-intercept is 6. 2. Originally Posted by asnxbbyx113: Find an equation of the curve that satisfies $dy/dx=8x^7y$ and whose y-intercept is 6. The y-intercept condition means the function evaluated at $x=0$ gives $y=6$; thus $y(0)=6$. We have $y' = 8x^7 y$. Note $y\not = 0$, so $\frac{y'}{y} = 8x^7$, and $\int \frac{y'}{y}\, dx = \int 8x^7\, dx$ gives $\ln |y| = x^8 + C$. Hence $y = Ce^{x^8}$, where now $C \neq 0$ (the sign of $y$ has been absorbed into the constant). To satisfy the initial condition we need $C=6$, so $y = 6e^{x^8}$.
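A quick numerical sanity check of the final answer (my own sketch, not part of the thread): $y(x) = 6e^{x^8}$ should satisfy both the initial condition and the differential equation, which a central finite difference confirms.

```python
import math

def y(x):
    # Solution claimed in the thread: y = 6 * e^(x^8)
    return 6.0 * math.exp(x**8)

# Check y(0) = 6 (the y-intercept condition).
assert abs(y(0.0) - 6.0) < 1e-12

# Check dy/dx = 8 x^7 y at a few points via a central difference.
h = 1e-6
for x in [0.3, 0.5, 0.9]:
    numeric = (y(x + h) - y(x - h)) / (2 * h)
    exact = 8 * x**7 * y(x)
    assert abs(numeric - exact) < 1e-4 * max(1.0, abs(exact))
```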
http://mathhelpforum.com/discrete-math/50467-finite-function-question.html
# Thread: 1. ## Finite function question Suppose that $f : X \to Y$ is given. If X is finite, show that f(X) is also finite. Similarly, if X is countable, show that f(X) is also countable. 2. Hello, Originally Posted by flaming: Suppose that $f : X \to Y$ is given. If X is finite, show that f(X) is also finite. Similarly, if X is countable, show that f(X) is also countable. (The mention of $Y$ is extraneous information.) The solution lies in the definition of $f(X)$: $f(X)=\{f(x) ~:~ x \in X\}$. For any $x \in X$, there is an element $y \in f(X)$ such that $y=f(x)$. It also means that for any $y \in f(X)$, there is at least one $x \in X$ such that $y=f(x)$. You can see that this sentence is exactly the definition of a surjection. Hence $f ~:~ X \to f(X)$ is a surjection. By the Cantor-Bernstein theorem (I think it's this one), we have $\text{Card}(X) \ge \text{Card}(f(X))$. If X is finite, then $\text{Card}(X) < \text{Card}(\mathbb{N})$. So we have ${\color{red}\text{Card}(f(X))} \le \text{Card}(X) < {\color{red}\text{Card}(\mathbb{N})}$. Thus f(X) is finite. If X is countable, then $\text{Card}(X) \le \text{Card}(\mathbb{N})$, etc. If you see any mistake, tell me; I'm learning this stuff too. 3. Here are some remarks to clear up some questions raised by the above posts. DEFINITION: $card(A) \leqslant card(B)$ if and only if there exists an injection $g:A \to B$. Here is a most important theorem in many texts that deal with this material: there is an injection $F:A \to B$ if and only if there exists a surjection $G:B \to A$. As noted above, there is a surjection $f:X \to f(X)$, so by that theorem we must have an injection $g:f(X) \to X$. Now apply the definition. BTW: I do not see how the Cantor-Schroder-Bernstein theorem applies. Also recall that a function $f:A \to B$ is simply a subset of $A \times B$, where $A$ is the set of all first terms of the pairs and $f(A)$ is the set of second terms in the pairs. 
So there is a natural expectation that $card(f(A)) \leqslant card(A)$.
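The first part of the argument can be illustrated concretely; here is a small sketch of my own (the function and set are arbitrary examples, not from the thread), showing that $f : X \to f(X)$ is a surjection and hence the image of a finite set is finite:

```python
# A concrete illustration: the image f(X) of a finite set X has at most
# as many elements as X, since f: X -> f(X) is a surjection by construction.
X = {1, 2, 3, 4, 5}
f = lambda x: x % 3          # an arbitrary example function

image = {f(x) for x in X}    # f(X) = { f(x) : x in X }

# Every y in f(X) has at least one preimage in X (surjectivity onto the image).
assert all(any(f(x) == y for x in X) for y in image)
# Hence card(f(X)) <= card(X): a finite X forces a finite image.
assert len(image) <= len(X)
```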
http://lucatrevisan.wordpress.com/2012/02/13/things-i-learned-last-week/
in theory "Marge, I agree with you - in theory. In theory, communism works. In theory." -- Homer Simpson # things I learned last week February 13, 2012 in math, theory | Tags: analysis of boolean functions • How the graph construction of Barak, Gopalan, Hastad, Meka, Raghavendra and Steurer (which shows the near-optimality of the “Cheeger-type” bound in Arora-Barak-Steurer) works. • That Kuperberg, Lovett and Peled finally showed that, for every constant $t$, there is a sample space of size $\mathrm{poly}(n)$ of permutations $\{ 1,\ldots,n \} \rightarrow \{ 1,\ldots, n \}$ such that a uniformly sampled permutation from the sample space is $t$-wise independent. This was open even for $t=4$. • That proving the following “quadratic uncertainty principle” is an open question, and probably a very difficult one: suppose that $q_1,\ldots,q_m$ are $n$-variate polynomials of total degree at most 2 and $c_1,\ldots,c_m$ are real coefficients such that for every $(x_1,\ldots,x_n) \in \{ 0,1\}^n$ we have $x_1 \cdot x_2 \cdots x_n = \sum_i c_i \cdot (-1)^{q_i(x_1,\ldots,x_n)}$; prove that $m$ must be exponentially large in $n$. (If the $q_i$ are all linear, then the standard uncertainty principle gives us $m \geq 2^n$.) • That women can be real men, and that they should so aspire. • That the rich really are different from you and me. More here. ## 2 comments Here you probably mean $x_i \in \{-1, 1\}$ and not in $\{0,1\}$. No, the $x_i$ are in $\{0,1\}$; otherwise you can get their product with one degree-one polynomial.
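For the linear case mentioned in the parenthetical, the bound $m \geq 2^n$ can be checked by brute force for small $n$: expanding $x_1 \cdots x_n$ over $\{0,1\}^n$ in the basis of characters $(-1)^{\langle s, x\rangle}$ (i.e., linear $q_i$), every one of the $2^n$ coefficients is nonzero. A small sketch of my own, not from the post:

```python
from itertools import product

n = 4

def f(x):
    # f(x) = x_1 * x_2 * ... * x_n over {0,1}^n: 1 only at the all-ones input.
    prod = 1
    for xi in x:
        prod *= xi
    return prod

# Coefficient of the character (-1)^<s,x> in the expansion of f:
#   c_s = 2^{-n} * sum_x f(x) * (-1)^<s,x>
def coeff(s):
    total = 0
    for x in product([0, 1], repeat=n):
        dot = sum(si * xi for si, xi in zip(s, x))
        total += f(x) * (-1) ** dot
    return total / 2 ** n

nonzero = sum(1 for s in product([0, 1], repeat=n) if coeff(s) != 0)
assert nonzero == 2 ** n   # all 16 characters appear with nonzero weight
```

Since $f$ is supported on a single point, each $c_s = \pm 2^{-n}$, so no character can be dropped.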
http://mathoverflow.net/questions/104213/triviality-of-determinant-sheaf/104227
## triviality of determinant sheaf On a smooth algebraic variety X, every coherent sheaf F has a finite resolution by locally free sheaves. Using such a resolution, we can define the determinant of F, det F, which is a line bundle on X. My question is: why, if the support of F is of codimension greater than or equal to 2, is the determinant of F trivial? It is mentioned without proof in the book "The Geometry of Moduli Spaces of Sheaves" by D. Huybrechts and M. Lehn. I have verified this result on some explicit examples for which I know explicit locally free resolutions, but I don't see how to do the general case. - 5 The best reference for these kinds of questions is the following article. MR0437541 (55 #10465) Reviewed Knudsen, Finn Faye; Mumford, David The projectivity of the moduli space of stable curves. I. Preliminaries on "det'' and "Div''. Math. Scand. 39 (1976), no. 1, 19–55. 14H10 (14F05 14C05) – Jason Starr Aug 7 at 19:22 ## 2 Answers Outside the support of $F$, the resolution is an exact sequence, so the alternating tensor product of the determinants is trivial. On a smooth scheme, a line bundle trivial outside a codimension $2$ subset is trivial. - Just an idea, using the first Chern class, which should live in the cohomology with support in Supp($F$): you should then get that $c_1(\det F) = 0$, which makes $\det F$ trivial since it's a line bundle (perhaps modulo linear equivalence). - I want to show that the determinant bundle is algebraically trivial, which cannot be seen in cohomology. Even if one works over $\mathbb{C}$, I don't know how to prove that it is topologically trivial. I don't understand why the first Chern class should live in the cohomology with support in Supp(F). 
– unknown (google) Aug 7 at 18:43 1 $c_1(F) = 0$ does not imply that $F$ is trivial: Consider an elliptic curve $E$. Set $X = E \times E$. Let $g : (x,y) \mapsto (x+1/2,-y)$ be an involution w/o fixed points on $X$. Then the canonical bundle of the quotient $Z = X / \langle g \rangle$ has zero first Chern class, but is not trivial since it has no non-zero sections. – Gunnar Magnusson Aug 7 at 18:46
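As a sanity check of the accepted answer, here is a worked example of my own (not from the thread): take $F = \mathcal{O}_p$, the structure sheaf of the point $p = V(x,y)$ in $X = \mathbb{A}^2$, whose support has codimension 2. The Koszul complex gives a locally free resolution, and the alternating product of determinants is visibly trivial:

```latex
% Koszul resolution of the skyscraper sheaf O_p on A^2, p = V(x, y):
\[
0 \longrightarrow \mathcal{O}_X
\xrightarrow{\begin{pmatrix} y \\ -x \end{pmatrix}} \mathcal{O}_X^{\oplus 2}
\xrightarrow{(x,\; y)} \mathcal{O}_X
\longrightarrow \mathcal{O}_p \longrightarrow 0
\]
% Alternating product of determinants of the locally free terms:
\[
\det \mathcal{O}_p
\;=\; \det(\mathcal{O}_X) \otimes \det(\mathcal{O}_X^{\oplus 2})^{-1} \otimes \det(\mathcal{O}_X)
\;\cong\; \mathcal{O}_X .
\]
```

Away from $p$ the complex of locally free terms is exact, matching the first sentence of the accepted answer.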
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Adjoint_representation
# Adjoint representation The adjoint representation of a Lie group G is the linearized version of the action of G on itself by conjugation. For each g in G, the inner automorphism $x \mapsto gxg^{-1}$ gives a linear transformation Ad(g) from the Lie algebra of G, i.e., the tangent space of G at the identity element, to itself. The map Ad(g) is called the adjoint endomorphism; the map $g \mapsto \mathrm{Ad}(g)$ is the adjoint representation. Any Lie group acts on itself by conjugation (via $h\mapsto ghg^{-1}$), and the tangent space at the identity is mapped to itself by this action. This gives the linear adjoint representation. ## Examples • If G is commutative of dimension n, the adjoint representation of G is the trivial n-dimensional representation. • The kernel of the adjoint representation of G is the center of G. • If G is SL2(R) (real 2×2 matrices with determinant 1), the Lie algebra of G consists of real 2×2 matrices with trace 0. The representation is equivalent to that given by the action of G by linear substitution on the space of binary (i.e., 2-variable) quadratic forms. ## Variants and analogues The adjoint representation of a Lie algebra L sends x in L to ad(x), where ad(x)(y) = [x, y]. If L arises as the Lie algebra of a Lie group G, the usual method of passing from Lie group representations to Lie algebra representations sends the adjoint representation of G to the adjoint representation of L. The adjoint representation can also be defined for algebraic groups over any field. The co-adjoint representation is the contragredient representation of the adjoint representation. A. 
Kirillov observed that the orbit of any vector in a co-adjoint representation is a symplectic manifold. According to the philosophy in representation theory known as the orbit method, the irreducible representations of a Lie group G should be indexed in some way by its co-adjoint orbits. This relationship is closest in the case of nilpotent Lie groups. ## Roots of a semisimple Lie group If G is semisimple, the non-zero weights of the adjoint representation form a root system. To see how this works, consider the case $G=SL_n(\mathbb{R})$. We can take the group of diagonal matrices $\mathrm{diag}(t_1,\ldots,t_n)$ as our maximal torus T. Conjugation by an element of T sends $\begin{bmatrix} a_{11}&a_{12}&\cdots&a_{1n}\\ a_{21}&a_{22}&\cdots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n1}&a_{n2}&\cdots&a_{nn}\\ \end{bmatrix} \mapsto \begin{bmatrix} a_{11}&t_1t_2^{-1}a_{12}&\cdots&t_1t_n^{-1}a_{1n}\\ t_2t_1^{-1}a_{21}&a_{22}&\cdots&t_2t_n^{-1}a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ t_nt_1^{-1}a_{n1}&t_nt_2^{-1}a_{n2}&\cdots&a_{nn}\\ \end{bmatrix}.$ Thus, T acts trivially on the diagonal part of the Lie algebra of G and acts with eigenvalues $t_it_j^{-1}$ on the various off-diagonal entries. The roots of G are the weights $\mathrm{diag}(t_1,\ldots,t_n)\mapsto t_it_j^{-1}$. This accounts for the standard description of the root system of $G=SL_n(\mathbb{R})$ as the set of vectors of the form $e_i-e_j$.
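The conjugation formula above is easy to verify for small matrices; here is a minimal pure-Python sketch of my own (not from the encyclopedia entry), checking that conjugating by a diagonal matrix scales the $(i,j)$ entry by $t_i t_j^{-1}$ and fixes the diagonal:

```python
from fractions import Fraction

def matmul(X, Y):
    # Naive matrix multiplication for square matrices of Fractions.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 3
t = [Fraction(2), Fraction(3), Fraction(5)]   # diagonal torus element
D = [[t[i] if i == j else Fraction(0) for j in range(n)] for i in range(n)]
Dinv = [[1 / t[i] if i == j else Fraction(0) for j in range(n)] for i in range(n)]
A = [[Fraction(i * n + j + 1) for j in range(n)] for i in range(n)]  # test matrix

conj = matmul(matmul(D, A), Dinv)   # D A D^{-1}

# Each entry picks up the weight t_i * t_j^{-1}; diagonal entries are fixed.
for i in range(n):
    for j in range(n):
        assert conj[i][j] == (t[i] / t[j]) * A[i][j]
```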
http://mathoverflow.net/questions/88718?sort=newest
## Tverberg partitions with fewer than (r-1)(d+1)+1 points The Tverberg Theorem states the following: Let $x_1,x_2,\dots, x_m$ be points in $R^d$ with $m \ge (r-1)(d+1)+1$. Then there is a partition $S_1,S_2,\dots, S_r$ of $\{1,2,\dots,m\}$ such that $\cap _{j=1}^r \mathrm{conv} (x_i: i \in S_j) \ne \emptyset$. The bound of $(r-1)(d+1)+1$ in the theorem is sharp, because there are point configurations with $(r-1)(d+1)$ points that do not have a Tverberg partition of length $r$. My question is about lowering this bound by imposing some structure on the points. That is: if we have a full-dimensional point configuration $S$ with $m$ points in $R^d$ such that $m\leq(r-1)(d+1)$, can we put conditions on $S$ which still guarantee the existence of a Tverberg partition of length $r$? Gil Kalai has some very nice posts on the Tverberg Theorem in his blog: http://gilkalai.wordpress.com/2008/11/24/sarkarias-proof-of-tverbergs-theorem-1/ , http://gilkalai.wordpress.com/2008/11/26/sarkarias-proof-of-tverbergs-theorem-2/ and http://gilkalai.wordpress.com/2008/12/23/seven-problems-around-tverbergs-theorem/ . - 2 Not only are there examples of sets of $(r-1)(d+1)$ points with no Tverberg partition, but every set in sufficiently general position is such an example (by dimension counting). So, every condition that lowers the bound in Tverberg's theorem must necessarily be of the form "there is a particular kind of algebraic relation between the points", which does not sound all that natural geometrically. With that said, I do not know any results of this kind. – Boris Bukh Feb 18 2012 at 13:20 This is an excellent problem. Not much is known, but there are a few conjectures and results; I will try to answer later. – Gil Kalai Jan 31 at 12:33 ## 1 Answer This is an excellent question but we know very little about such conditions. 
As Boris Bukh remarked, the issue is about points in special position, because for points in sufficiently general position, even the affine hulls of the parts, for every partition into r parts, will have an empty intersection. However, configurations of points in special position are of great interest in combinatorial geometry. Let me start with an example. Suppose you have $2d+2$ points in $R^d$. This is one less than the number of points required for a Tverberg partition with three parts. Of course, one condition that guarantees a Tverberg 3-partition is that all the points (or all the points except one) belong to a $(d-1)$-dimensional affine space. It is conjectured, more generally, that: If the dimension of Radon points for a set of $2d+2$ points in $R^d$ is not $d$, then there is a Tverberg partition into three parts. The general conjecture in this direction (that I made in 1974) is: For a set $A$, denote by $T_r(A)$ those points in $R^d$ which belong simultaneously to the convex hulls of $r$ pairwise disjoint subsets of $A$. We call these points Tverberg points of order $r$. Conjecture: For every $A \subset R^d$, $$\sum_{r=1}^{|A|} {\rm dim}\, T_r(A) \ge 0.$$ (Note that $\dim \emptyset = -1$.) Thus, if you have a set of points such that the dimension of the $(r-1)$-Tverberg points is below what can be expected in the generic case, then there is a nonempty Tverberg partition into r parts. There are various strengthenings and weakenings of this conjecture. It was proved by Kadari (unpublished, except for his M.Sc. thesis in Hebrew from the early 90s) for the plane. There are a few more things that can be said: 1) While the computational complexity of finding a Tverberg 3-partition for $2d+3$ points is unknown, the computational complexity of finding a Tverberg 3-partition for fewer points is NP-hard. As observed by Shmuel Onn, 3-colorability of cubic graphs reduces to finding such a Tverberg partition. 2) It will be interesting to come up with a topological strengthening of the above conjecture. 
3) The conjecture about $2d+2$ points in $R^d$ motivated, and is related to, the graph-theoretic conjecture (which turned out to be false) in this Overflow question. 4) The first section of my paper Combinatorics with a Geometric Flavor gives more information and connections. - Dear Gil (and Boris), thanks for your answers. I imagined that more results of this kind would have been proved. Do you know where I can find Kadari's proof? – Arnau Feb 3 at 22:18 Dear Arnau, I am not aware of (but will be happy to learn about) other results directly on your question. Kadari's proof was not published except for his M.Sc. thesis (in Hebrew). – Gil Kalai Feb 5 at 17:06
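The $r=2$ case of the theorem (Radon's theorem: any $d+2$ points in $R^d$ admit a Radon partition) can be made concrete. Here is a small worked sketch of my own, not from the thread, using 4 points in the plane whose affine dependence yields the partition:

```python
from fractions import Fraction as F

# Four points in R^2 (d = 2, so d + 2 = 4 points guarantee a Radon partition).
pts = [(F(0), F(0)), (F(4), F(0)), (F(0), F(4)), (F(1), F(1))]

# An affine dependence: sum(l_i) = 0 and sum(l_i * p_i) = (0, 0).
lam = [F(2), F(1), F(1), F(-4)]
assert sum(lam) == 0
assert sum(l * p[0] for l, p in zip(lam, pts)) == 0
assert sum(l * p[1] for l, p in zip(lam, pts)) == 0

# Splitting indices by the sign of l_i gives the two parts of the partition;
# normalizing the positive weights yields a point lying in both convex hulls.
pos = [i for i, l in enumerate(lam) if l > 0]
s = sum(lam[i] for i in pos)
radon_point = tuple(sum(lam[i] * pts[i][k] for i in pos) / s for k in range(2))
assert radon_point == (F(1), F(1))   # the common point is p4 itself
```

The same sign-splitting of an affine dependence is the engine behind Sarkaria's proof discussed in the linked blog posts.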
http://www.physicsforums.com/showthread.php?t=537198
Physics Forums ## Adding Spheres: How to find the new radius? Okay, this seems like a really simple question. Basically I'm adding together 8 spheres (like raindrops coalescing into one bigger drop) and I'm getting two different answers for the new radius. Each individual drop is identical. I start by expressing the new volume in terms of the individual drops' radii, and then the new radius. The individual drops' radii are R. Volume = $(4/3)\pi R^3 \cdot 8 = (4/3)\pi R_f^3$. I work it all out and find that the new radius is 2 times the radius of an individual drop. This is true according to a given solution. But when I try this with surface area: S.A. = $4\pi R^2 \cdot 8 = 4\pi R_f^2$. The $4\pi$ cancels, and then the new radius comes out as $2\sqrt{2}\,R$. What am I missing here? Is surface area not additive, or am I making some calculation error? Recognitions: Homework Help Surface area is not additive. Imagine if you took two rectangular prisms and joined them together at a common side: each rectangular prism has lost a side to the interior of the combined object, so the surface area is NOT double the surface area of an individual block. The case with the spheres is similar. The volume should be additive, however. The reason is that, thinking physically, the mass of all the spheres is conserved, so unless the density were to change during the process of joining the droplets together, the volume increases by the same factor as the increase in mass. Oh, okay, that makes sense. How did I make it this far? lol Thanks for the help! Recognitions: Gold Member Science Advisor Staff Emeritus Two raindrops, each of radius r, have volume $(4/3)\pi r^3$ each. If they "coalesce", because mass is conserved and the density of water is a constant, the volume will be $(8/3)\pi r^3$. Solve $(4/3)\pi R^3= (8/3)\pi r^3$ for the new radius, R. Surface area doesn't grow proportionately to volume, hence Bergmann's rule. Volume is additive. 
Add the volumes and derive the radius.
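A quick numerical restatement of the thread's point (my own sketch): volume is additive, surface area is not.

```python
import math

r = 1.0          # radius of each small drop
n = 8            # number of drops that coalesce

# Volume is additive: n * (4/3) pi r^3 = (4/3) pi R^3  =>  R = n^(1/3) * r.
R = (n * r**3) ** (1 / 3)
assert abs(R - 2 * r) < 1e-12            # 8^(1/3) = 2, as in the thread

# Surface area is NOT additive: the merged drop has less area than 8 separate ones.
area_separate = n * 4 * math.pi * r**2   # 32 pi r^2
area_merged = 4 * math.pi * R**2         # 16 pi r^2
assert area_merged < area_separate
```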
http://math.stackexchange.com/questions/201746/computing-the-modularity-function-of-upper-triangular-matrices
# Computing the modular function of upper triangular matrices Put $B_p := \left\{ \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} \in GL_2(Q_p) : a, b, c \in Q_p \right\}$, the subgroup of upper triangular matrices in $GL_2(Q_p)$, $Q_p$ denoting the $p$-adic rationals. I have already figured out that the modular function is $$\Delta \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} = (|a|/|c|)^\lambda,$$ i.e. if $\mu$ is the (left) Haar measure on $B_p$ and $M$ is a measurable set, then for any $x \in B_p$, $$\mu(Mx)=\Delta(x)\mu(M).$$ Does anybody know how to figure out that $\lambda=1$? Cheers, Fabian Werner - I may be saying something stupid, but is not the Haar measure of $B_p$ simply the restriction of the Haar measure of $GL_2({\mathbb Q}_p)$? Also, is not the Haar measure on $GL_2({\mathbb Q}_p)$ defined by the $\frac{1}{\det}$ formula, the way it is on $GL_2({\mathbb R})$? – Ewan Delanoy Nov 29 '12 at 20:02 No! It is not true in general that if G is an LCH group and H a subgroup, then the Haar measure on H is simply the one on G restricted to H. The problem becomes visible in the above example: every set of the form $* \times * \times \{0\} \times *$ is a null set in $Q_p^4$ (with respect to the unique product measure $\mu$ of the usual Haar measure; note that the cases $\infty \cdot 0$ and $0 \cdot \infty$ are defined as $0$!). So, as you say, the measure on $GL_2$ is roughly $(1/\det) \cdot \mu$, so $B_p$ is a null set with respect to that restriction (see next comment). – Fabian Werner Dec 18 '12 at 10:50 So, let us set $B_N := \{ x \in B_p: |\det(x)| \geq p^{-N}\}$, and let $\nu$ be the restriction of the measure on $GL_2$; then we see that $B_p = \cup_{N \in \mathbb{N}} B_N$ and consequently $\nu(B_N) = \int_{GL_2(Q_p)} 1_{B_N}\, dx/|\det(x)| \leq \mathrm{const} \int_{GL_2(Q_p)} 1_{B_N}\, dx = 0$ by the last comment. $|\cdot|$ denotes the $p$-adic norm. – Fabian Werner Dec 18 '12 at 11:15
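One standard route (a sketch of my own, and convention-dependent): compute the Jacobians (p-adic modules) of left and right translation in the coordinates $(a,b,c)$ on $B_p$, read off the left and right Haar measures, and compare them. Their ratio is the modular function, which pins down $|\lambda| = 1$; which sign one gets depends on whether $\Delta$ is attached to $\mu(Mx)$ or to $\mu(xM)$.

```latex
% Left translation by g = (alpha, beta; 0, gamma):
%   (a, b, c) -> (alpha a, alpha b + beta c, gamma c),  module |alpha|^2 |gamma|.
% Right translation by g:
%   (a, b, c) -> (a alpha, a beta + b gamma, c gamma),  module |alpha| |gamma|^2.
% Hence
\[
d\mu_{\mathrm{left}} = \frac{da\, db\, dc}{|a|^2\, |c|},
\qquad
d\mu_{\mathrm{right}} = \frac{da\, db\, dc}{|a|\, |c|^2},
\qquad
\frac{d\mu_{\mathrm{right}}}{d\mu_{\mathrm{left}}} = \frac{|a|}{|c|},
\]
% so the modular function is (|a|/|c|)^{\pm 1}, the sign of the exponent
% depending on the chosen convention; in particular |lambda| = 1.
```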
http://physics.stackexchange.com/questions/tagged/hamiltonian+potential
Tagged Questions: The notion of bound states in quantum mechanics and their characterization with operators (3 answers, 120 views). Is there any case of a potential $V$ such that the continuity of the operator $H=c\,\Delta+V$ is not spoiled? And I don't know any non-differential operator examples for continuous spectra. I ...
http://gauravtiwari.org/category/math/study-notes/
# MY DIGITAL NOTEBOOK A Personal Blog On Mathematical Sciences and Technology # Category Archives: Study Notes ## Proofs of Irrationality Tuesday, February 14th, 2012 19:18 / 3 Comments “Irrational numbers are those real numbers which are not rational numbers!” Def. 1: Rational Number A rational number is a real number which can be expressed in the form $\frac{a}{b}$, where $a$ and $b$ are both integers relatively prime to each other and $b$ is non-zero. The following two statements are equivalent to Definition 1: 1. $x=\frac{a}{b}$ is rational if and only if $a$ and $b$ are integers relatively prime to each other and $b$ does not equal zero. 2. $x=\frac{a}{b} \in \mathbb{Q} \iff \mathrm{g.c.d.} (a,b) =1, \ a \in \mathbb{Z}, \ b \in \mathbb{Z} \setminus \{0\}$. (more…) ## Gamma Function Tuesday, February 7th, 2012 15:35 / Leave a Comment If we consider the integral $I =\displaystyle{\int_0^{\infty}} e^{-t} t^{a-1} \mathrm dt$, it is at once seen to be an infinite and improper integral: infinite because the upper limit of integration is infinite, and improper because $t=0$ is a point of infinite discontinuity of the integrand if $a<1$, where $a$ is either a real number or the real part of a complex number. This integral is known as Euler’s Integral. It is of great importance in mathematical analysis and calculus. The result, i.e., the integral, is defined as a new function of the real number $a$: $\Gamma (a) =\displaystyle{\int_0^{\infty}} e^{-t} t^{a-1} \mathrm dt$. (more…) ## The Area of a Disk Friday, January 27th, 2012 16:22 / 1 Comment [This post is under review.] If you are aware of elementary facts of geometry, then you might know that the area of a disk with radius $R$ is $\pi R^2$. The radius is actually the measure (length) of a line joining the center of the disk and any point on the circumference of the disk or any other circular lamina. 
The radius of a disk is always the same, irrespective of which point on the circumference you join to the center. The area of the disk is defined as the ‘measure of surface’ surrounded by the round edge (circumference) of the disk. ## Triangle Inequality Friday, January 20th, 2012 10:57 / 5 Comments The triangle inequality takes its name from the geometrical fact that the length of one side of a triangle can never be greater than the sum of the lengths of the other two sides of the triangle. If $a$, $b$ and $c$ are the three sides of a triangle, then neither can $a$ be greater than $b+c$, nor $b$ greater than $c+a$, nor $c$ greater than $a+b$. Triangle Consider the triangle in the image: side $a$ can equal the sum of the other two sides $b$ and $c$ only if the triangle degenerates into a straight line. Thinking practically, one can say that one side is formed by joining the end points of the two other sides. In modulus form, $|x+y|$ represents the side $a$ if $|x|$ represents side $b$ and $|y|$ represents side $c$. A modulus is nothing but the distance of a point on the number line from the point zero. Visual representation of the triangle inequality: for example, the distance of both $5$ and $-5$ from $0$ on the number line is $5$, so we may write $|5|=|-5|=5$. Triangle inequalities are valid not only for real numbers but also for complex numbers, vectors and Euclidean spaces. In this article, I shall discuss them separately. (more…) ## On Ramanujan’s Nested Radicals Friday, December 30th, 2011 08:50 / 8 Comments Ramanujan (1887-1920) discovered some formulas on algebraic nested radicals. This article is based on one of those formulas. The main aim of this article is to discuss and derive them intuitively. Nested radicals have many applications in Number Theory as well as in Numerical Methods. 
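The triangle-inequality post above lends itself to a quick computational spot check; here is a small sketch of my own (not from the blog), testing $|x+y| \le |x| + |y|$ for random reals and for a pair of complex numbers:

```python
import random

# Spot-check the triangle inequality |x + y| <= |x| + |y| for real numbers.
# (Equality holds exactly when x and y have the same sign or one is zero.)
random.seed(0)
for _ in range(1000):
    x = random.uniform(-100, 100)
    y = random.uniform(-100, 100)
    assert abs(x + y) <= abs(x) + abs(y) + 1e-12   # tolerance for rounding

# Complex numbers satisfy the same inequality, with |.| the modulus.
z, w = 3 + 4j, -1 + 2j
assert abs(z + w) <= abs(z) + abs(w)
```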
## A Trip to Mathematics: Part V Equations-I

Saturday, December 3rd, 2011

Applied mathematics is the mathematics used in day-to-day life, in solving practical problems or for business purposes. Let me give an example:

George had some money. He gave 14 dollars to Matthew. Now he has 27 dollars. How much money did he have?

If you are familiar with day-to-day calculations, you would say that George had 41 dollars: since he had 41 and gave 14 to Matthew, he was left with 27. That's right, of course! This is the general (layman's) approach. How do we achieve it mathematically? We restate the problem as another statement meaning the same:

George had some money, $x$ dollars. He gave 14 dollars to Matthew. Now he has 27 dollars. How much money did he have? Find the value of $x$.

This is equivalent to the problem asked above; I have just replaced 'some money' by '$x$ dollars'. As 'some' denotes an unknown quantity, so does $x$. Now all we need is the value of $x$. When solving for $x$, the plan looks like this:

George had $x$ dollars. He gave Matthew 14 dollars, so now he must have $x-14$ dollars. But the problem says that he has 27 dollars left. This implies that $x-14$ dollars equal 27 dollars, i.e., $x-14=27$.

$x-14=27$ contains a letter $x$ which we assumed to be unknown; it can take any certain value. Statements (like $x-14=27$) containing unknown quantities and an equality are called equations. The unknown quantities used in equations are called variables, usually represented by letters near the end of the English alphabet (e.g., $x, y, z$). Letters from the beginning of the alphabet ($a, b, c, d, \ldots$) are usually used to represent constants (quantities whose values are known but not shown).

Now let us concentrate on the problem again. We have the equation $x-14=27$. Adding 14 to both sides of the equals sign:

$x-14+14 = 27+14$
or, $x-0 = 41$        (since $-14+14=0$)
or, $x = 41$.

So $x$ is 41. This means George had 41 dollars.
And this answer equals the answer we found practically. Solving problems practically is not always possible, especially when complicated problems are encountered; then we use the theory of equations. To solve equations, you need to know only the four basic operations, viz. addition, subtraction, multiplication and division, and also the properties of the equality sign.

We could also handle the above problem this way:
$x-14 = 27$
or, $x = 27+14 = 41$.

The $-14$ transfers to the other side, which changes its sign to $+14$. When we transpose a number from the left side to the right of the equals sign, the sign of the number changes, and vice versa. Just as $-14$ becomes $+14$ here, $+18$ becomes $-18$ in the example below:
$x+18 = 32$
or, $x = 32-18 = 14$.

Please note that any number without a sign before its value is deemed positive; e.g., 179 and $+179$ are the same in the theory of equations.

Before we proceed, why not take another example?

Marry had seven sheep. Marry's uncle gifted her some more sheep. She has eighteen sheep now. How many sheep did her uncle gift?

First of all, how would you state it as an equation?
$7 + x = 18$
or, $+7 + x = 18$ (just to illustrate that $7 = +7$)
or, $x = 18-7 = 9$.
So Marry's uncle gifted her 9 sheep. ///

Now tackle this problem: Monty had some cricket balls. Graham had double the number of balls compared to Monty. Adam had 6 cricket balls. They collected all their balls together and found that the total number of cricket balls was 27. How many balls did Monty and Graham have?

As usual, our first step is to restate the problem as an equation. We do it like this: let Monty have $x$ balls. Then Graham must have had $x \times 2 = 2x$ balls, and Adam had 6 balls. The total sum is $x+2x+6 = 3x+6$. But that is 27 according to our question. Hence,
$3x+6 = 27$
or, $3x = 27-6 = 21$
or, $x = 21/3 = 7$.
Here the multiplication sign converts into a division sign when transferred. Since $x=7$, Monty had 7 balls (instead of $x$ balls) and Graham had 14 (instead of $2x$).
///

# Types of Equations

There are many types of algebraic equations (we attach the adjective 'algebraic' because they involve variables, which are part of algebra), depending on their properties. In general we classify them into two main parts:

1. Equations with one variable (univariable algebraic equations, or just univariables)
2. Equations with more than one variable (multivariable algebraic equations, or just multivariables)

Univariable Equations

Equations consisting of only one variable are called univariable equations. All of the equations we solved above are univariables, since they contain only one variable ($x$). Other examples are $3x+2=5x-3$; $x^2+5x+3=0$; $e^x = x^e$ ($e$ is a constant). Univariables are further divided into many categories depending upon the degree of the variable. The most common are:

1. Linear univariables: equations in which the maximum power (degree) of the variable is 1. $ax+b=c$ is the general form of a linear equation in one variable, where $a$, $b$ and $c$ are arbitrary constants.
2. Quadratic equations: also known as square equations, these are ones in which the maximum power of the variable is 2. $ax^2+bx+c=0$ is the general form of a quadratic equation, where $a, b, c$ are constants.
3. Cubic equations: equations of third degree (maximum power 3) are called cubic. A cubic equation is of the type $ax^3+bx^2+cx+d=0$, where $a, b, c, d$ are constants.
4. Quartic equations: equations of fourth degree are quartic. A quartic equation is of the type $ax^4+bx^3+cx^2+dx+e=0$.

Similarly, an equation of $n$-th degree can be defined whenever the variable of the equation has maximum power $n$.

Multivariable Equations

Some equations have more than one variable, e.g. $ax^2+2hxy+by^2=0$. Such equations are termed multivariable equations. Depending on the number of variables present, multivariable equations can be classified as:

1. Bivariable equations: equations having exactly two variables are called bivariables.
$x+y=5$; $x^2+y^2=4$; $r^2+\theta^2=k^2$ (where $k$ is constant) are equations with two variables. Bivariable equations can be divided into further categories, just as univariables were:

A. Linear bivariable equations: the power of a variable, or the sum of powers in a product of the two variables, does not exceed 1. For example, $ax+by=c$ is linear but $axy=b$ is not.

B. Second-order bivariable equations: the power of a variable, or the sum of powers in a product of the two variables, does not exceed 2. For example, $axy=b$ and $ax^2+by^2+cxy+dx+ey+f=0$ are of second order.

Similarly you can easily define $n$-th order bivariable equations.

2. Trivariable equations: equations having exactly three variables are called trivariable equations. $x+y+z=5$; $x^2+y^2-z^2=4$; $r^3+\theta^3+\phi^3=k^3$ (where $k$ is constant) are trivariables. (Further classification of trivariables is not given here, but I hope you can divide them into more categories as we did above.)

Similarly, you can easily define any $n$-variable equation as an equation with exactly $n$ variables. Of these equations, we shall discuss only linear univariable equations here. ////

We have already discussed particular examples above; here we discuss the general case. As noted earlier, the general form of a linear univariable equation is $ax+b=c$. We can solve it by transferring the constants to one side and keeping the variable on the other:

$ax+b = c$
or, $ax = c-b$
or, $x = \frac{c-b}{a}$, which is the required solution.

Example: Solve $3x+5=0$.
We have $3x+5=0$
or, $3x = 0-5 = -5$
or, $x = \frac{-5}{3}$. ////

## A Trip to Mathematics: Part IV Numbers

Monday, November 21st, 2011

If logic is the language of mathematics, numbers are the alphabet. There are many kinds of numbers we use in mathematics, but at the broadest level we may place them in two categories:

1. Countable numbers
2.
Uncountable numbers

The names themselves explain the properties of these numbers: numbers which can be counted in nature are called countable numbers, and numbers which cannot be counted are called uncountable numbers. Well, this is not a rigorous way to classify the many types of numbers. We have formal names for special types of numbers, like real numbers, complex numbers, rational numbers, irrational numbers, etc. We shall discuss these standard classes first and then some genuinely interesting numbers. Although in this post I describe the classification only concisely, I will discuss it rigorously later.

Let me start this discussion with a memorable quote by Leopold Kronecker: "God created the natural numbers, and all the rest is the work of man."

What does it mean? What did Kronecker have in mind when he said this? Why is this quote true? The first part of this article is based on this discussion. He meant that all numbers, like real numbers, complex numbers, fractions, integers, non-integers, etc., are made up of the numbers given by God to humanity. These God-gifted numbers are called natural numbers.

Natural numbers are the numbers which are used to count things in nature. Eight pens, eighteen trees, three thousand people, etc. are measures of natural things, and thus 'eight', 'eighteen', 'three thousand' are called natural numbers; we represent them numerically as 8, 18, 3000 respectively. So, since 8, 18 and 3000 are used in counting natural things, they are natural numbers. Similarly, 1, 2, 3, 4 and the other counting numbers are also natural numbers.

Let us try to form the set of natural numbers. What will we include in this set? 1? (Yes.) 2? (Yes.) 3? (Yes.) ... 1785? (Yes.) ... and so on.
In this way, after including all such elements, we get the set of natural numbers $\{1, 2, 3, 4, 5, \ldots, 1785, \ldots, 2011, \ldots\}$. This set includes infinitely many elements. We represent it by Bourbaki's blackboard-bold capital letter $\mathbb{N}$ or the bold capital letter $\mathbf{N}$, where N stands for NATURAL. We define the set of all natural numbers as

$\mathbb{N} := \{ 1, 2, 3, 4, \ldots, n, \ldots \}$.

It is clear from this set-theoretic notation that the $n$-th element of the set of natural numbers is $n$. In general, if a number $n$ is a natural number, we write $n \in \mathbb{N}$. Please note that some mathematicians (and Wolfram Research) treat 0 as a natural number and state the set as $\mathbb{N} :=\{0, 1, 2, \ldots, n-1, \ldots \}$, where $n-1$ is the $n$-th element of the set; but we will use the first convention, since it is broadly accepted.

Now we shall try to define the integers in terms of natural numbers, as Kronecker's quote demands. Integers (or whole numbers) are the numbers which are either positives or negatives of natural numbers, together with 0. A few examples are 1, -1, 8, 0, -37, 5943. The set of integers is denoted by $\mathbb{Z}$ or $\mathbf{Z}$ (here Z stands for 'Zahlen', the German word for integers). It is defined by $\mathbb{Z} := \{ \pm n: n \in \mathbb{N} \} \cup \{0\}$, i.e., $\mathbb{Z} := \{\ldots, -3, -2, -1, 0, 1, 2, 3, \ldots \}$.

Now, if we consider Kronecker's statement again, we might ask how the integer set $\mathbb{Z}$ can be built from the set $\mathbb{N}$ of natural numbers. The construction of $\mathbb{Z}$ from $\mathbb{N}$ is motivated by the requirement that every integer can be expressed as a difference of two positive integers (i.e., natural numbers). Let $a,b,c,d \in \mathbb{N}$ and define a relation $\rho$ on $\mathbb{N} \times \mathbb{N}$ by $(a,b) \, \rho \, (c,d)$ if and only if $a+d = b+c$.
The relation $\rho$ is an equivalence relation, and the equivalence classes under $\rho$ are called integers; formally, $\mathbb{Z} := \mathbb{N} \times \mathbb{N} /\rho$. We can then describe the set of integers in an easier way, as $\mathbb{Z}:= \{a-b: \ a,b \in \mathbb{N}\}$. Thus an integer is a number which can be produced as a difference of natural numbers; conversely, the positive integers are precisely the natural numbers.

After the integers, we head to the rational numbers. Say it again: 'ratio-nal numbers', numbers of ratio. A rational number $\frac{p}{q}$ is defined as a ratio of an integer $p$ and a non-zero integer $q$. (That is not a perfect definition, but as an introduction it is fine for understanding.) The set of rational numbers is denoted by $\mathbb{Q}$.

Once the integers are formed, we can form the rational numbers (and the irrational numbers: numbers which are not rational) using the integers. We consider ordered pairs $(p,q)$ and $(r,s)$ in $\mathbb{Z} \times (\mathbb{Z} \setminus \{0 \})$ and define a relation $\rho$ by $(p,q) \, \rho \, (r,s) \iff ps=qr$ for $p,q,r,s \in \mathbb{Z}, \ q, s \ne 0$. Then $\rho$ is an equivalence relation of rationality. The set $\mathbb{Z} \times (\mathbb{Z} \setminus \{0\})/\rho$ is denoted by $\mathbb{Q}$, and the elements of this set are called rational numbers. In practical understanding, 'a ratio of integers' is the phrase which will always help you recognize the rational numbers. Examples are $\frac{6}{19}, \ \frac{-1}{2}=\frac{-7}{14}, \ 3\frac{2}{3}, \ 5=\frac{5}{1}, \ldots$. The set of rational numbers includes the natural numbers and the integers as subsets.

Consequently, irrational numbers are those numbers which cannot be represented as a ratio of two integers. For example, $\pi, \sqrt{3}, e, \sqrt{11}$ are irrational. The set of real numbers is a relatively larger set, including the sets of rational and irrational numbers as subsets.
Real numbers are the numbers which can be represented on a number line. As we formed the integers from the natural numbers and the rational numbers from the integers, we will form the real numbers from the rational numbers. The construction of the set $\mathbb{R}$ of real numbers from $\mathbb{Q}$ is motivated by the requirement that every real number is uniquely determined by the set of rational numbers less than it. A subset $L$ of $\mathbb{Q}$ is a real number if $L$ is non-empty, bounded above, has no maximum element, and has the property that for all $x, y \in \mathbb{Q}$, $x < y$ and $y \in L$ imply $x \in L$. Real numbers are the basis of real analysis, and their detailed study belongs to that subject. Examples of real numbers include both the rationals (which also contain the integers) and the irrationals.

The square root of a negative number is undefined on the one-dimensional number line (which includes real numbers only) and is treated as imaginary. Numbers containing (or not containing) an imaginary part are called complex numbers. Some very familiar examples are $3+\sqrt{-1}$, $\sqrt{-1} = i$, $i^i$. We may regard every number (in the lay approach) as an element of a complex number. The set of complex numbers is denoted by $\mathbb{C}$. In the constructive approach, a complex number is defined as an ordered pair of real numbers, i.e., an element of $\mathbb{R} \times \mathbb{R}$ [i.e., $\mathbb{R}^2$], and the set is $\mathbb{C} :=\{a+ib: \ a,b \in \mathbb{R}\}$. Complex numbers will be discussed in more detail in complex analysis.

We have verified Kronecker's quote and shown that every number is a product of the positive integers (natural numbers), since we formed the complex numbers from the real numbers; the real numbers from the rational numbers; the rational numbers from the integers; and the integers from the natural numbers. //

Now we move on to explore some interesting kinds of numbers.
There are countless named kinds, but a few are the following:

Even numbers: even numbers are those integers which are integral multiples of 2. $0, \pm 2, \pm 4, \pm 6, \ldots, \pm 2n, \ldots$ are even numbers.

Odd numbers: odd numbers are those integers which are not integrally divisible by 2. $\pm 1, \pm 3, \pm 5, \ldots, \pm (2n+1), \ldots$ are all odd numbers.

Prime numbers: a number $p$ greater than 1 is called a prime number if and only if its positive factors are 1 and the number $p$ itself. In other words, numbers which are divisible only by 1 or themselves are called prime numbers. $2, 3, 5, 7, 11, 13, 17, 19, 23, 29, \ldots$ are prime numbers, or primes. Numbers greater than 1 which are not prime are called composite numbers.

Twin primes: consecutive prime numbers differing by 2 are called twin primes. For example 5, 7; 11, 13; 17, 19; 29, 31; ... are twin primes.

Pseudoprimes: Chinese mathematicians claimed thousands of years ago that a number $n$ is prime if and only if it divides $2^n - 2$. In fact this conjecture is true for $n \le 340$ and false beyond, because the first successor to 340, namely 341, is not a prime ($341 = 31 \times 11$) yet it divides $2^{341}-2$. Such numbers are now called pseudoprimes. Thus, if $n$ is not a prime (i.e., is composite), then $n$ is a pseudoprime $\iff n \mid 2^n-2$ (read as '$n$ divides 2 to the power $n$, minus 2'). There are infinitely many pseudoprimes, including 341, 561, 645 and 1105.

Carmichael numbers, or absolute pseudoprimes: there exist some pseudoprimes that are pseudoprime to every base $a$, i.e., $n \mid a^n - a$ for all integers $a$. The first Carmichael number is 561. Others are 1105, 2821, 15841, 16046641, etc.

e-primes: an even positive integer is called an e-prime if it is not the product of two other even integers. Thus 2, 6, 10, 14, ... are e-primes.

Germain primes: an odd prime $p$ such that $2p+1$ is also a prime is called a Germain prime. For example, 3 is a Germain prime, since $2\times 3 +1=7$ is also a prime.
Relatively prime: two numbers are called relatively prime if and only if their greatest common divisor is 1. In other words, two numbers are relatively prime if no integer except 1 is common to their factorizations. For example, 7 and 9 are relatively prime, and so are 15 and 49.

Perfect numbers: a positive integer $n$ is said to be perfect if $n$ equals the sum of all its positive divisors, excluding $n$ itself. For example, 6 is a perfect number because its divisors are 1, 2, 3 (and 6), and it is obvious that $1+2+3=6$. Similarly 28 is a perfect number, having 1, 2, 4, 7, 14 (and 28) as its divisors, with $1+2+4+7+14=28$. Consecutive perfect numbers are 6, 28, 496, 8128, 33550336, 8589869056, etc.

Mersenne numbers and Mersenne primes: numbers of the form $M_n=2^n-1, \ n \ge 1$ are called Mersenne numbers, and those Mersenne numbers which happen to be prime are called Mersenne primes. Consecutive Mersenne numbers are 1, 3 (prime), 7 (prime), 15, 31 (prime), 63, 127 (prime), ... etc.

Catalan numbers: the Catalan numbers, defined by $C_n = \dfrac{1}{n+1} \binom{2n}{n} = \dfrac{(2n)!}{n! (n+1)!}, \ n =0, 1, 2, 3, \ldots$, form the sequence of numbers 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, ...

Triangular numbers: a number of the form $\dfrac{n(n+1)}{2}, \ n \in \mathbb{N}$ is the sum of the first $n$ consecutive integers, beginning with 1. This kind of number is called a triangular number. Examples of triangular numbers are 1 (1), 3 (1+2), 6 (1+2+3), 10 (1+2+3+4), 15 (1+2+3+4+5), etc.

Square numbers: a number of the form $n^2, \ n \in \mathbb{N}$ is called a square number. For example 1 ($1^2$), 4 ($2^2$), 9 ($3^2$), 16 ($4^2$), etc. are square numbers.

Palindromes: a palindrome, or palindromic number, is a number that reads the same backwards as forwards. For example, 121 is read the same from left to right as from right to left, so 121 is a palindrome. Other examples of palindromes are 343, 521125 and 999999.
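Several of the definitions above are mechanical enough to check with a few lines of code. The helper names in this sketch are mine, not from the post; the pseudoprime check uses exactly the classical condition $n \mid 2^n - 2$:

```python
from math import comb

def divides_2n_minus_2(n):
    """The ancient 'primality' test: does n divide 2**n - 2?"""
    return pow(2, n, n) == 2 % n  # modular pow avoids huge integers

def is_perfect(n):
    """n equals the sum of its positive divisors excluding n itself."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def catalan(n):
    """C_n = (1 / (n + 1)) * binomial(2n, n)."""
    return comb(2 * n, n) // (n + 1)

def triangular(n):
    """n-th triangular number: 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# 341 = 11 * 31 is composite, yet it passes the 2**n - 2 test:
# the smallest pseudoprime, exactly as described above.
```

Note that `pow(2, n, n)` computes $2^n \bmod n$ without ever forming the enormous integer $2^{341}$.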
//
http://mathoverflow.net/questions/112717/are-banach-space-ultraproducts-stable-under-infinite-sums
## Are Banach space ultraproducts stable under infinite sums?

For a pair of Banach spaces $X,Y$ and an ultrafilter $U$ it is easy to find an isomorphism between $(X\oplus Y)_U$ and $X_U\oplus Y_U$. Is this preserved under infinite sums? That is:

Let $X$ be an infinite-dimensional Banach space and let $U$ be an ultrafilter over $\mathbb N$. Do we have $$\ell_\infty(X_U) \approx [\ell_\infty(X)]_U?$$

Here $\ell_\infty(X)$ denotes the $\ell_\infty$ sum of countably many copies of $X$.

Comments:
- Infinite products, not infinite sums – Yemon Choi Nov 17 at 23:50
- What is an infinite product? – Slavoj Žižek Nov 18 at 0:54
- Consider a one dimensional $X$. – Bill Johnson Nov 19 at 1:47
http://math.stackexchange.com/questions/141816/express-the-laplacian-of-u-ldots
# Express the Laplacian of $u$ in polar coordinates

Let $u=u(x,y)$. Now make the change of variables $x=x(r,\theta)=r\cos \theta$, $y=y(r,\theta)=r\sin \theta$. Express the Laplacian of $u$,
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2},$$
in terms of derivatives of $u$ with respect to $r$ and $\theta$. Everything should be in terms of $r$ and $\theta$, under the assumption that all partials are continuous.

## 1 Answer

It's a boring but standard application of the chain rule. For example, $$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial r} \frac{\partial r}{\partial x}+\frac{\partial u}{\partial \theta} \frac{\partial \theta}{\partial x}.$$ A general, but complicated, approach to the Laplace operator in different coordinate systems is outlined here. Actually, $\mathbb{R}^n$ can be endowed with several Riemannian metrics, and we get different expressions for the Laplace-Beltrami operator. When we use the flat metric, we get the standard Laplace operator; if we introduce curvilinear coordinates, the expression changes.
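Carrying the chain rule through (a standard but tedious computation) yields the well-known polar form $\Delta u = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta}$. The sketch below is my own numerical sanity check of that identity using central finite differences; it is not part of the original answer:

```python
import math

def laplacian_cartesian(u, x, y, h=1e-4):
    """Five-point finite-difference estimate of u_xx + u_yy."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h**2

def laplacian_polar(u, r, th, h=1e-4):
    """Estimate u_rr + u_r / r + u_tt / r**2 for u given in Cartesian form."""
    def v(rr, tt):  # u expressed in polar coordinates
        return u(rr * math.cos(tt), rr * math.sin(tt))
    u_rr = (v(r + h, th) - 2 * v(r, th) + v(r - h, th)) / h**2
    u_r = (v(r + h, th) - v(r - h, th)) / (2 * h)
    u_tt = (v(r, th + h) - 2 * v(r, th) + v(r, th - h)) / h**2
    return u_rr + u_r / r + u_tt / r**2

u = lambda x, y: x**3 * y + math.sin(x) * y**2  # any smooth test function
x0, y0 = 1.2, 0.7
a = laplacian_cartesian(u, x0, y0)
b = laplacian_polar(u, math.hypot(x0, y0), math.atan2(y0, x0))
# a and b agree to within finite-difference error.
```

Any smooth test function and any point away from the origin (where the $1/r$ terms blow up) works here.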
http://www.conservapedia.com/Black_holes
# Black hole

From Conservapedia

Artist's conception of a binary system consisting of a black hole and a main sequence star. The black hole is drawing matter from the main sequence star via an accretion disk around it, and some of this matter forms a gas jet.

Black holes are theoretical entities popularized by pseudoscience despite their implausibility and the fact that one has never been directly observed. Suggested by the controversial theory of relativity (see Counterexamples to Relativity), black holes are postulated to be collapsed objects, usually stars, which have become so dense that within a certain radius their escape velocity exceeds the speed of light. Thus, they absorb all matter and energy within that radius. Light and matter can enter, but nothing can ever escape.

Black holes are increasingly favored by liberal publications, such as the science page of the New York Times and glossy magazines, as well as by science fiction writers. As with the related theoretical concept of a "wormhole",[1] it is impossible to prove that no black hole exists anywhere, and thus they fail the falsifiability requirement of science.

## History of the idea

The theoretical model of what we now call a black hole has evolved considerably over the centuries. The corpuscular theory of light held that light was made up of invisibly small particles, and that these particles moved along ballistic trajectories, like tiny bullets. In this framework, it was believed possible that a distant star could be so massive that light emitted from its surface would be dragged back down again.
This theory was first advanced by John Michell, who wrote in 1783: "If the semi-diameter of a sphere of the same density as the Sun [were to exceed that of the Sun] in the proportion of five hundred to one, and by supposing light to be attracted by the same force in proportion to its [mass] with other bodies, all light emitted from such a body would be made to return towards it, by its own proper gravity."[2] Suggesting the same possibility independently, Pierre-Simon Laplace wrote in 1796: "It is therefore possible that the greatest luminous bodies in the universe are on this account invisible."[2]

As the corpuscular theory gave way to the wave theory of light in the early 1800s, the idea of "dark" or "invisible" stars fell from favor. At that time, it was believed that light was a wave which had no mass and therefore was unaffected by gravity. Research into the photoelectric effect, however, reignited interest in the light-as-particles view, ultimately resulting in the modern notion of wave-particle duality. Under this theory, light could be affected by gravity, so the question of whether light could escape from extraordinarily massive bodies was once again open.

### General Relativity

As it happened, the question became unavoidable shortly after the publication in 1915 of Einstein's general theory of relativity. Schwarzschild solved the Einstein field equations in a way that describes the geometry of spacetime outside a spherically symmetric, uncharged, non-rotating distribution of mass. Well away from the center of this distribution of mass, the Schwarzschild solution closely matches the Newtonian model of a gravitational field; only close to the mass, where the curvature of spacetime is large, do significant differences between the two models appear.
But if the diameter of the mass distribution is taken to be arbitrarily small, then the region of spacetime immediately surrounding the mass appears to take on extremely curious properties, properties so curious that many questioned whether they had any physical interpretation at all. Schwarzschild thereby showed that black holes were possible under the theory of general relativity.

Jet-powered nebula formed from the accretion disk of the binary star Cygnus X-1

However, neither Schwarzschild himself nor Albert Einstein, who developed the theory of relativity, believed that black holes actually existed.[3] Einstein even tried to re-work general relativity to render these singularities impossible. However, Roger Penrose and Stephen Hawking proved the first of many singularity theorems, which states that singularities must form if certain conditions are present. This demonstrated that, rather than being mathematical oddities, singularities are a fairly generic consequence of realistic solutions to relativity: any mass with radius less than its Schwarzschild radius is a black hole. Since then, support for black holes among the scientific community has grown.

## Nature of a Black Hole

### Mathematics

There are several solutions of the Einstein field equations that are used to model black holes. The simplest of these is the Schwarzschild metric. From this metric, one can calculate the Schwarzschild radius, which defines the boundary (event horizon) of the black hole, to be $r_s=\frac{2GM}{c^2}$, where

• $r_s$ is the Schwarzschild radius;
• $G$ is the gravitational constant;
• $M$ is the mass of the gravitating object;
• $c$ is the speed of light in vacuum.

If the black hole has a nonzero electric charge, it is modeled by the Reissner-Nordström metric. Astronomical objects are generally electrically neutral, since otherwise they would attract charged particles of the opposite sign and quickly become neutral. Therefore the Reissner-Nordström metric is largely of theoretical interest.
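To get a feel for the scale set by $r_s = 2GM/c^2$, here is a quick numerical sketch (constants rounded; the helper name is mine, not from the article):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light in vacuum, m/s
M_SUN = 1.989e30   # one solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius r_s = 2 G M / c**2, in meters."""
    return 2 * G * mass_kg / c**2

r_sun = schwarzschild_radius(M_SUN)         # roughly 2.95 km
r_10sun = schwarzschild_radius(10 * M_SUN)  # r_s scales linearly with mass
```

On this model, compressing the Sun inside roughly 3 km would make it a black hole, and a ten-solar-mass core has a horizon ten times as large.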
A far more realistic situation occurs when the black hole is spinning (i.e. has nonzero angular momentum). Then the Schwarzschild solution is insufficient (as it was for the charged case), and instead one turns to the Kerr metric. Since astrophysical black holes are generically believed to have angular momentum, this is the solution that best describes them. Among the more interesting consequences of the Kerr metric is the phenomenon of frame dragging, in which the rotating black hole literally pulls nearby spacetime along with it. If one wishes to consider a black hole that is both charged and spinning, one employs the Kerr-Newman metric, which combines the Reissner-Nordström and Kerr metrics. Since the no-hair theorem states that (non-quantum) black holes are unique up to mass, charge, and angular momentum, the Kerr-Newman metric is sufficient to describe any classical black hole. ### Time and Distance Within a certain distance from an arbitrarily small distribution of mass — a distance now known as the Schwarzschild radius — the curvature of spacetime becomes so great that no paths leading away from the mass exist. That is to say, a test particle released inside the Schwarzschild radius will inevitably move in toward the mass, not because the force of gravity is great as in the Newtonian approximation, but because spacetime is curved to such an extent that no other directions exist. A particle within the Schwarzschild radius can no more move further from the central mass than it can go backwards in time. In fact, from the frame of reference of an infalling observer beyond the Schwarzschild radius, all directions that once pointed away from the central mass now point backwards in time. Once inside the Schwarzschild radius, further motion toward the central mass is as inevitable as further motion through time is for any other observer. For some time after the publication of the Schwarzschild solution, the validity of these results was hotly debated. 
In the solution's original coordinate frame, some terms in the equations diverged, or became infinite, at the Schwarzschild radius, leading physicists to wonder whether the results of the equations in that region had any valid physical interpretation. One proposed interpretation was that at the Schwarzschild radius, all time for the infalling observer would stop. This led to the use of the term "frozen stars"; it wasn't believed that frozen stars were cold, but rather that they were literally frozen in time.

Later refinement of the Schwarzschild solution demonstrated that the apparent infinities were merely an artifact of the coordinate frame chosen, and that an infalling observer can in fact pass beyond the Schwarzschild radius. Indeed, the equations predicted he would notice no effects when doing so. But any attempt on the part of that infalling observer to communicate with the outside universe, say by sending a radio message, would be doomed to failure, as the radio waves would traverse geodesics through the severely curved spacetime and end up bent toward the central mass. From this, we can say that nothing that occurs within the Schwarzschild radius can ever affect events outside the Schwarzschild radius. This gives the Schwarzschild radius of a non-rotating black hole its other name: the event horizon.

### Inside the Event Horizon

What actually exists inside the event horizon of a black hole is a question physics is unable to answer. Some postulate that within the event horizon exists a point of zero (or nearly zero) volume but infinite energy density, a point sometimes referred to as a gravitational singularity, after the notion of a mathematical singularity (a point at which evaluating an equation would require division by zero, which is undefined) in a field equation.
Others suspect that infinite energy density is a physical impossibility, and that a black hole contains actual finitely-dense matter compressed into a degenerate form, such as quark-degenerate matter. Since all black holes are surrounded by an event horizon which prevents any information or messages from leaving, all these theories are non-falsifiable; we can never be sure what the interior structure of a black hole is like.

### Properties of Black Holes

Black holes described by classical mechanics have only three intrinsic properties by which one differs from another: mass, electric charge, and angular momentum. Mass describes the amount of matter inside the event horizon. Angular momentum refers to whether the black hole is stationary or rotating around an axis. While the singularity of a non-rotating black hole may be an infinitely small point, the singularity of a rotating black hole would be in the shape of an infinitely thin ring. Quantum mechanics, which postulates that information loss cannot occur in a black hole, suggests properties beyond the three suggested by classical physics. Using this description, information must also be emitted by black holes. This radiation, referred to as Hawking radiation, is thermal radiation that has currently only been described mathematically. Although radiation from inside the event horizon has not yet been observed, many observable phenomena occur outside the horizon because of the black hole. Matter entering a spinning black hole is first swirled around by the black hole's gravity, causing it to heat up and emit x-rays, which can be used to detect the black hole. In the supermassive black holes at the centers of galaxies, some of the matter does not fall into the black hole. Instead it is blasted into space in twin jets of hot gas perpendicular to the accretion disc, in a phenomenon known as an Active Galactic Nucleus.
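As a numerical aside to the Time and Distance section above: the article never states the formula, but the Schwarzschild radius of a mass M is the standard r_s = 2GM/c². A quick sanity check in SI units (constants rounded by me) gives roughly 3 km for one solar mass:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole: r_s = 2GM/c^2."""
    return 2 * G * mass_kg / c**2

r = schwarzschild_radius(M_sun)  # close to 2.95e3 m, i.e. about 3 km
```

So compressing the Sun inside a sphere of radius under 3 km would turn it into a black hole, which gives a sense of the densities involved.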
## Origins of Black Holes

The red supergiant star V838 Monocerotis, an example of one type of star which may become a black hole.

Stellar-mass black holes are said to form when stars more than ten times the mass of the Sun run out of fuel and die. Death approaches once a star has fused the products of its own fusion into larger and larger elements, up to iron. The star then tries to fuse the iron core that forms as a result, but this does not produce enough energy to hold the outer layers of the star apart against the pull of gravity. When this happens, the iron core at the center of the star implodes in a supernova, and the outer layers of the star are blasted into space in one of the most energetic events in the universe—one star going out in a supernova can give off as much light as an entire galaxy. Not all supernovae result in black holes, but if the mass of the core is large enough, about 1.5-3.0 times the mass of the Sun (this value is termed the Tolman-Oppenheimer-Volkoff limit, and its value is not yet known to great precision), the leftover gravity of the shrinking core stalls the outward rush of the initial blast, and crushes the core into a point of infinite density: a black hole. Extremely small black holes, with masses of around 10^15 grams, have been theorized to have formed in the early universe. Any sufficiently small primordial black hole would be expected to evaporate within the lifetime of the universe, but the rate of evaporation is not currently known with any certainty. At the opposite end of the spectrum, objects with the characteristics of supermassive black holes, millions or billions of times more massive than the sun, have been detected at the centers of many galaxies, including our own Milky Way. In our galaxy, the hypothesized supermassive black hole is in the constellation Sagittarius, and is known as Sagittarius A* (pronounced "A star").
Based on the extraordinary angular velocity of stars near the galactic center, Sagittarius A* is believed to be on the order of two to three million solar masses. It is unknown how supermassive black holes form, though several models have been proposed. One hypothesis simply begins with a black hole of stellar mass which grows over the lifetime of the galaxy that surrounds it. Another proposal describes supermassive black holes as a natural, in fact nearly unavoidable, consequence of galactic formation. To date, no one theory of supermassive black hole formation is favored over all others.

## Methods of Observation

Since black holes cannot be observed directly by traditional means, their nature must be inferred from phenomena outside the Schwarzschild radius. Scientists have located and observed objects theorized to be black holes through indirect means, such as the effect of their gravitational pull on nearby stars. Stars that are near black holes, e.g. by being part of a binary star system that contains one, show wobbles in their orbits similar to the tidal effects of the moon on Earth's oceans. Wobble effects, however, cannot be used to conclusively prove the existence of a black hole.[4][5] Scientists have also observed stellar objects whose densities are consistent with black holes.[6] While matter and energy, even light, may not escape a black hole, Stephen Hawking has shown that when described by quantum mechanics, they should emit Hawking radiation, which, absent an influx of mass-energy, would lead to the evaporation of the black hole in a burst of gamma rays. Scientists are currently working to pick up one of these bursts, or the radiation itself, with any of several land- and space-based telescopes. However, the matter falling into black holes as well as the cosmic microwave background obscures the radiation and makes detection extremely difficult.
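The detection difficulty mentioned above can be made quantitative. The standard Hawking temperature formula (not given in the article; constants below are rounded by me) is T = ħc³/(8πGMk_B). For a stellar-mass black hole this works out to tens of nanokelvin, far colder than the 2.7 K cosmic microwave background, so such a hole absorbs more radiation from the CMB than it emits:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg):
    """Black-body temperature of Hawking radiation: T = hbar*c^3 / (8*pi*G*M*k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T = hawking_temperature(M_sun)  # about 6e-8 K, versus 2.7 K for the CMB
```

The temperature scales as 1/M, which is why only very small (e.g. primordial) black holes could radiate strongly enough to be observed this way.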
## Speculative Future Exploration

Scientists have speculated that if a rotating black hole is large enough, a person could pass through the center of the ring-shaped singularity and possibly enter a wormhole. However, it would have to be a very large hole, for if it were not, the hypothetical astronaut would never survive to reach the event horizon due to tidal forces. Matter coming close to the event horizon of a small black hole undergoes a process called spaghettification, a term coined by Stephen Hawking in his book A Brief History of Time to describe extraordinarily strong tidal forces. Because the mass at the center of the black hole is so dense, the gravitational pull on the near end of an object is much greater than the pull on the object's far end. This causes the object to be stretched out in a way resembling a piece of spaghetti, and generally torn in two.

## White Holes

Because the general relativity equations are symmetric with respect to time, one can take a negative square root instead of a positive one to yield an equation describing a hypothetical object which expels, rather than attracts, matter and energy.[7] As this is the exact opposite of a black hole, it is called a white hole. However, there is no evidence white holes actually exist. If they generate new matter and energy, this would violate the law of conservation of mass and energy. Therefore, some scientists have proposed that matter falling into black holes goes through a wormhole to emerge at a white hole. While such wormholes are allowed by general relativity, they would be extremely unstable[8], and there is no evidence that they exist. Scientists, who have been extremely eager to promote the idea of black hole existence, have shown a strong aversion to the white hole theory, even though, like a white hole, a black hole has never been observed directly.
This may reflect an anti-creation bias on the part of scientists who are uncomfortable with the idea that matter and energy can be created outside of what scientific theories dictate should happen. Dr. Russell Humphreys used a white hole in his model of the universe during Creation week to allow millions of years to pass in outer space while only three days passed on Earth.

## In Popular Culture

Black holes have been a device in science fiction ever since their discovery. Many sci-fi books, movies, and television shows use black holes as a method of travel (see the Speculative Future Exploration section above) or as a threat to a space-going vessel. In at least one season of the show Star Trek: The Next Generation by Gene Roddenberry, artificially created miniature black holes are used as power sources for spaceships and natural ones as incubators for the young of an alien race. Neither of these uses has much of a basis in reality, of course. Contrary to popular myth, a black hole is not a cosmic vacuum cleaner. In other words, a one-solar-mass black hole is no better than any other one-solar-mass object (such as, for example, the Sun) at "sucking in" distant objects. However, a one-solar-mass black hole has the same amount of matter as any other one-solar-mass object but compressed into a much smaller space, making it impossible to move fast enough to escape once inside the event horizon. If a spaceship could land on a black hole, it would never be able to take off again. The term "black hole" is also used as a metaphor for a place that is hard to get out of, generally containing a high concentration of something unpleasant. For example: "The inner city is a black hole of crime and drug use." Note that the Black Hole of Calcutta is not a reference to the celestial object; the name of the place predates the discovery of black holes in space, and the Black Hole of Calcutta was a horrible underground prison in Calcutta, India.

## References

1. The prediction of the existence of wormholes, and its naming in 1957, predates the prediction and naming (1967) of a black hole.
2. http://www.aps.org/publications/apsnews/200911/physicshistory.cfm
3. http://amazing-space.stsci.edu/resources/explorations/blackholes/lesson/whatisit/history.html
4. http://library.thinkquest.org/C007571/english/advance/english.htm
5. Black Holes by Heather Cooper and Nigel Henbest (book)
6. http://amazing-space.stsci.edu/resources/explorations/blackholes/lesson/whatisit/history.html
7. http://cosmology.berkeley.edu/Education/BHfaq.html#q10
8. http://casa.colorado.edu/~ajsh/schww.html
http://math.stackexchange.com/questions/43327/evaluate-the-last-digit-of-77777?answertab=active
# evaluate the last digit of $7^{7^{7^{7^{7}}}}$ I found this puzzle online. Since I'm not good at number theoretic kind of problems I'm going to propose it in this form. If you have a number $x$, in this case $x=7$, how do you evaluate the last digit of $$x^{x^{x^{x^{x}}}}$$ where the number of powers may be $4$ (in this case) or any other, possibly small, number. - As a bonus, applying the same reasoning as in the answers, you can show that the last two digits will be 43. – Raskolnikov Jun 5 '11 at 7:54 ## 3 Answers Note that the last digit of $7^x$ depends on the remainder $x$ leaves when divided by $4$ i.e. $$7^{4k} \equiv 1 \bmod 10$$ $$7^{4k+1} \equiv 7 \bmod 10$$ $$7^{4k+2} \equiv 9 \bmod 10$$ $$7^{4k+3} \equiv 3 \bmod 10$$ Hence all we are interested is the remainder when $x$ is divided by $4$ i.e in this case the remainder when $7^{7^{7^7}}$ is divided by $4$. $$7 \equiv -1 \bmod 4$$ Hence $7^{\text{odd number}} \equiv -1 \bmod 4$ and $7^{7^7}$ is odd and hence $7^{7^{7^7}}$ is of the form $4k+3$ and hence $$7^{7^{7^{7^7}}} \text{ has the last digit as }3$$ - For any $x$ in general (as you asked), the method is similar. First, let's fix some notation: write $7^{7^{7^{7^7}}}$ as $7 {\uparrow\uparrow} 5$, and similarly a tower of $k$ $x$'s as $x {\uparrow\uparrow} k$. (This operation is called tetration.) Finding the last digit of $x{\uparrow\uparrow}k$ is the same as finding its value $\bmod 10$. Start looking at the powers of $x$ mod 10: this sequence will always be periodic (with some period at most $4$, it turns out). Say it is periodic with period $m$. In other words, the value of $x^n$ is completely determined by the value of $n \bmod m$. So now your problem is that of finding the value of $x{\uparrow\uparrow}(k-1) \bmod m$. Again, do the same thing. Look at powers of $x$ mod $m$, they will be periodic with some period, and you want to determine the value of $x{\uparrow\uparrow}(k-2)$ modulo that period. 
Eventually your problem becomes easy enough, and you can stop. For your $7 {\uparrow\uparrow} 5$ example, the powers of $7$ are periodic $\bmod 10$ with period $4$. So you want to determine $7 {\uparrow\uparrow} 4 \bmod 4$. The powers of $7$ are periodic $\bmod 4$ with period $2$. So you want to determine $7 {\uparrow\uparrow} 3 \bmod 2$. The powers of $7 \bmod 2$ are periodic with period $1$: a power of $7$ is always $1$. Now you're done: filling in the values backwards, $7 {\uparrow\uparrow} 3 \equiv 1 \mod 2$, so $7 {\uparrow\uparrow} 4 \equiv 3 \mod 4$, so $7 {\uparrow\uparrow} 5 \equiv 3 \mod 10$. Both the modulus value and the second argument of the tetration decrease at each step, so this method is always guaranteed to terminate soon. The same method works for the last $r$ digits (you want $x{\uparrow\uparrow}k \bmod 10^r$), other bases, etc. - Could you please explain filling in the backward part? – Quixotic Apr 11 '12 at 8:51 @Foool:(Step A) As powers of $7$ are always 1 mod 2, $7 {\uparrow\uparrow} 3 \equiv 1 \mod 2$. (Step B) We want $7 {\uparrow\uparrow} 4 \bmod 4$: Modulo $4$, powers of $7$ are $3, 1, 3, 1$, etc. So 7^(something that is 1 mod 2) is 3, and 7^(something that is 0 mod 2) is 1. Because $7 {\uparrow\uparrow} 4 = 7 ^ {7 {\uparrow\uparrow} 3}$ and the exponent is 1 mod 2, we can conclude that $7 {\uparrow\uparrow} 4 \equiv 3$ (modulo 4).(Step C) We want $7 {\uparrow\uparrow} 5 \bmod 10$: Modulo 10, the powers of 7 are $7, 9, 3, 1, 7, 9, 3, 1$... So 7^(anything that is 3 mod 4) = 3 mod 10. – ShreevatsaR Apr 11 '12 at 9:46 @Foool: If that's not clear, I can edit the answer to explain in more detail. – ShreevatsaR Apr 11 '12 at 9:47 A big hint: the powers of $7$, $7^x$, are periodic mod $10$; since $7^4 \equiv 1 \pmod{10}$, every $4$th power of $7$ will end in the same digit; this means that you only need to find out what $7^{7^{7^7}}$ is mod $4$. 
And since $7^2 \equiv 1 \pmod{4}$, every second power of 7 will be the same mod $4$; so you only need to know what $7^{7^7}$ is mod $2$. And you can probably guess that... :-) -
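The recursive procedure described in the answers can be turned into a few lines of Python (the function name and structure are mine, not from the thread). It assumes gcd(x, m) = 1 at every level of the tower, which holds here for x = 7 with m = 10, 4, 2:

```python
def tower_mod(x, height, m):
    """Value of the power tower x^x^...^x (height copies of x) modulo m.

    Assumes gcd(x, m) = 1 at every recursion level, so the powers of x
    mod m are purely periodic with period equal to the multiplicative order.
    """
    if m == 1:
        return 0
    if height == 1:
        return x % m
    # multiplicative order of x modulo m (the period of the last digits)
    order, t = 1, x % m
    while t != 1:
        t = t * x % m
        order += 1
    e = tower_mod(x, height - 1, order)
    # the true exponent is a huge positive number congruent to e mod order;
    # if e == 0, use order itself, since the exponent is positive
    return pow(x, e if e else order, m)

tower_mod(7, 5, 10)   # -> 3, the last digit of 7^7^7^7^7
tower_mod(7, 5, 100)  # -> 43, the last two digits
```

The second call reproduces Raskolnikov's comment that the last two digits are 43.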
http://mathoverflow.net/questions/47860/noncommutative-heat-equation-a-strange-generalization-of-killing-vectors-for
## “Noncommutative heat equation” — a strange generalization of Killing vectors for a flat metric

Let $(M,g)$ be a smooth (pseudo)Riemannian manifold with a flat metric $g$, and $X$, $Y$ be vector fields on $M$ such that $$L_X^2 (g)=L_Y(g). \hspace{70mm} \mbox{(1)}$$ where $L_Z$ is the Lie derivative along $Z$ and $L_X^2(g)\equiv L_X(L_X(g))$. If $X=0$ then $Y$ is just a Killing vector for $g$ but is there any (geometric?) interpretation for $X$ and $Y$ in the general case? In fact I wonder whether Eq.(1), be it for fixed $g$ and unknown $X,Y$ or conversely for fixed $X,Y$ and unknown $g$, was studied systematically at all: I failed to find any relevant references. Motivation: Equation (1) follows from the last formula in Proposition 5 from a paper on classification of compatible Hamiltonian structures that I have recently come across. It looks a bit like the heat equation for the metric, hence the title. -

## 1 Answer

Here is an observation which does not answer the question, but does at least tell where not to look for examples. It is shown that for compact Riemannian manifolds with non-positive Ricci curvature there are no interesting solutions to the stated equations, so to find interesting solutions one must look: a. in positive curvature, b. at indefinite metrics, c. on noncompact manifolds, or d. in infinite dimensions. Suppose $M$ is compact and $g$ is Riemannian. It is claimed that if $g$ has non-positive Ricci curvature, then either the Ricci curvature of $g$ is somewhere negative and both $X$ and $Y$ are $0$, or $g$ is Ricci flat and both $X$ and $Y$ are parallel. To begin with, do not suppose anything about the Ricci curvature of $g$. Let $D$ be its Levi-Civita connection and raise and lower indices using `$g_{ij}$` and the inverse symmetric bivector `$g^{ij}$`. In what follows I use the abstract index notation, so square brackets (resp.
parentheses) denote anti-symmetrization (resp. symmetrization) over the enclosed indices. For any metric `$(L_{X}g)_{ij} = 2D_{(i}X_{j)}$` and ```\begin{equation}\label{e1} (L_{X}^{2}g)_{ij} = X^{p}D_{p}(L_{X}g)_{ij} + (D_{i}X^{p})(L_{X}g)_{pj} + (D_{j}X^{p})(L_{X}g)_{ip}. \end{equation}``` Trace this to obtain ```\begin{equation}\label{e2} g^{ij}(L_{X}^{2}g)_{ij} = 2X^{p}D_{p}D^{q}X_{q} + 4D^{(p}X^{q)}D_{(p}X_{q)}. \end{equation}``` By assumption this equals `$g^{ij}(L_{Y}g)_{ij} = 2D^{p}Y_{p}$`. Since by assumption $M$ is compact and without boundary, integration by parts yields ```\begin{align}\label{e3} 0 = 4\int_{M}D^{(p}X^{q)}D_{(p}X_{q)} - 2\int_{M}(D_{p}X^{p})^{2}. \end{align}``` In general, this yields no obvious conclusions. Go back to the equation preceding the integration and commute derivatives to obtain ```\begin{align}\label{e4} g^{ij}(L_{X}^{2}g)_{ij} = 2X^{p}D^{q}D_{p}X_{q} + 4D^{(p}X^{q)}D_{(p}X_{q)} -2R_{pq}X^{p}X^{q}, \end{align}``` in which `$R_{ij}$` is the Ricci curvature of `$g_{ij}$`. Now integrating by parts yields ```\begin{align}\label{e5} \begin{split}0 &= \int_{M}\left(4D^{(p}X^{q)}D_{(p}X_{q)} - 2D^{q}X^{p}D_{p}X_{q} - 2R_{pq}X^{p}X^{q}\right)\\ & = 2\int_{M}\left(D^{(p}X^{q)}D_{(p}X_{q)} + D^{[p}X^{q]}D_{[p}X_{q]} -R_{pq}X^{p}X^{q}\right). \end{split} \end{align}``` Because $g$ is Riemannian, if the Ricci curvature is non-positive this implies that `$D_{(i}X_{j)} = 0$` and `$D_{[i}X_{j]} = 0$` so that `$D_{i}X_{j} = 0$` and $X$ is parallel. In the original equation this implies $Y$ is Killing. By the original Bochner argument, if the Ricci curvature is non-positive and somewhere negative then there is no non-zero Killing field, so the only possibility for non-trivial solutions is that $g$ be Ricci flat, in which case the Bochner argument forces $Y$ to be parallel. - +1 : very nice observation. – Willie Wong Dec 1 2010 at 10:44 Interesting! Thank you, Dan! – anonymous Dec 2 2010 at 8:55
http://mathoverflow.net/questions/67030?sort=oldest
## If compact connected Lie groups are homeomorphic as topological spaces, are they isomorphic as Lie groups?

Let $G_{1}$ and $G_{2}$ be compact connected Lie groups. If $G_{1}$ and $G_{2}$ are homeomorphic as topological spaces, are they isomorphic as Lie groups? -

## 2 Answers

No. For instance, topologically $U(2) = SU(2) \times U(1)$, since both are homeomorphic to $S^3 \times S^1$, but the group structures are different. Another example is given by $SO(3) \times SU(2)$ which is diffeomorphic to $SO(4)$. On the other hand, any commutative connected compact real Lie group of dimension $n$ is isomorphic (as a real Lie group) to the real torus $\mathbb{T}^n:=(\mathbb{S}^1)^n$. Analogously, any connected compact complex Lie group of dimension $n$ is isomorphic (as a complex Lie group) to a complex torus, i.e. a quotient of the form $\mathbb{C}^n/\Gamma$, where $\Gamma \subset \mathbb{C}^n$ is a lattice. Notice that in the compact complex case commutativity comes for free. Two complex tori $\mathbb{C}^n /\Gamma_1$ and $\mathbb{C}^n / \Gamma_2$ are isomorphic as complex Lie groups if and only if there exists $g \in \textrm{GL}_n (\mathbb{C})$ such that $\Gamma_2=g (\Gamma_1)$, but of course they are always both isomorphic to $\mathbb{T}^{2n}$ as real Lie groups. -

No. The simplest example I can think of is that $SO(4)$ is homeomorphic to $SO(3)\times Sp(1)$ as topological spaces, but they are not isomorphic Lie groups. In fact, there is a double covering $Sp(1)\times Sp(1)\to SO(4)$, $(q_1,q_2)\cdot x = q_1xq_2^{-1}$, where $Sp(1)$ is viewed as the unit quaternions and $x\in\mathbf H\cong\mathbf R^4$, which gives a Lie group isomorphism $Sp(1)\times Sp(1)/{\pm1}\cong SO(4)$.
On the other hand, every continuous homomorphism between Lie groups is automatically smooth, so if the homeomorphism is a homomorphism, then it must be a Lie group isomorphism. Edit: I was puzzled by the fact that all known examples are given by pairs of locally isomorphic groups. Then I found the following papers: Toda, H., A note on compact semi-simple Lie groups, Japan J. Math. 2 (1976), 355-358. in which he proves that "two simply connected, compact (and hence semi-simple) Lie groups are isomorphic to each other if and only if they have isomorphic homotopy groups for each dimension". Later this was generalized by S. Boekholt (Journal of Lie Theory Volume 8 (1998) 183-185) as follows: "Let $G_1$ and $G_2$ be two compact, connected Lie groups with isomorphic homotopy groups in each dimension. Then $G_1$ and $G_2$ are locally isomorphic." So, going back to the question, we cannot avoid local isomorphism. -
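The double covering $Sp(1)\times Sp(1)\to SO(4)$ in the second answer can be checked numerically. Below is a pure-Python sketch (all helper names are mine): for unit quaternions $q_1, q_2$, the map $x \mapsto q_1 x q_2^{-1}$ on $\mathbf H \cong \mathbf R^4$ is written as a 4×4 matrix, which should be orthogonal with determinant $+1$, and $(q_1, q_2)$ and $(-q_1, -q_2)$ should give the same matrix, exhibiting the $\pm 1$ kernel:

```python
import math, random

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def random_unit_quaternion(rng):
    v = [rng.gauss(0, 1) for _ in range(4)]
    n = math.sqrt(sum(t * t for t in v))
    return tuple(t / n for t in v)

def matrix_of(q1, q2):
    """4x4 matrix of x -> q1 * x * conj(q2) in the basis 1, i, j, k.
    For a unit quaternion q2, conj(q2) = q2^{-1}."""
    basis = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
    cols = [qmul(qmul(q1, e), qconj(q2)) for e in basis]
    return [[cols[c][r] for c in range(4)] for r in range(4)]

def det(m):
    # determinant by Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

rng = random.Random(0)
q1 = random_unit_quaternion(rng)
q2 = random_unit_quaternion(rng)
M = matrix_of(q1, q2)
# M is orthogonal with det(M) = +1, and (-q1, -q2) yields the same M
```

Since the map preserves the quaternion norm, orthogonality is automatic; the determinant being $+1$ (rather than $-1$) reflects that $Sp(1)\times Sp(1)$ is connected and maps the identity to the identity.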
http://math.stackexchange.com/questions/180283/why-is-the-probability-that-a-continuous-random-variable-takes-a-specific-value
# Why is the probability that a continuous random variable takes a specific value zero? My understanding is that a random variable is actually a function $X: \Omega \to T$, where $\Omega$ is the sample space of some random experiment and $T$ is the set from which the possible values of the random variable are taken. Regarding the set of values that the random variable can actually take, it is the image of the function $X$. If the image is finite, then $X$ must be a discrete random variable. However, if it is an infinite set, then $X$ may or may not be a continuous random variable. Whether it is depends on whether the image is countable or not. If it is countable, then $X$ is a discrete random variable; whereas if it is not, then $X$ is continuous. Assuming that my understanding is correct, why does the fact that the image is uncountable imply that $Pr(X = x) = 0$. I would have thought that the fact that the image is infinite, regardless of whether it is countable or not, would already imply that $Pr(X = x) = 0$ since if it is infinite, then the domain $\Omega$ must also be infinite, and therefore $$Pr(X = x) = \frac{\text{# favorable outcomes}}{\text{# possible outcomes}} = \frac{\text{# outcomes of the experiment where X = x}}{|\Omega|} = \frac{\text{# outcomes of the experiment where X = x}}{\infty} = 0$$ What is wrong with my argument? Why does the probability that a continuous random variable takes on a specific value actually equal zero? - First, you cannot divide by infinity. However, what you are doing, is integrating over a point, which (in your case) gives value 0 – M Turgeon Aug 8 '12 at 13:49 "Why does the probability that a continuous random variable takes on a specific value actually equal zero?": it does not. For instance, a random variable can take the value $0$ with probability $1/2$, and take any value in an interval otherwise. – D. 
Thomine Aug 8 '12 at 13:50 @D.Thomine If you want to have total probability 1 (or anything finite, for that matter), you need at most countably many points with nonzero mass. So the word uncountable does matter. – M Turgeon Aug 8 '12 at 14:00 OK, I think we should first agree on the definition of "continuous random variable". Even Wikipedia seems to have mistakes on this point... – D. Thomine Aug 8 '12 at 14:01 1 If we use the definition that the cumulative distribution is continuous, then countability is irrelevant, and a uniform distribution on the rationals between 0 and 1 is a continuous random variable. WP says the CDF should be not just continuous but "absolutely continuous with respect to the Lebesgue measure;" this seems to require that T be considered as a subset of the reals (whereas continuity could be applied to the rationals without even bringing the reals into the picture). Another example to consider is when the c.d.f. is the Cantor function. – Ben Crowell Aug 8 '12 at 14:45 ## 3 Answers The problem begins with your use of the formula $$Pr(X = x) = \frac{\text{# favorable outcomes}}{\text{# possible outcomes}}\;.$$ This is the principle of indifference. It is often a good way to obtain probabilities in concrete situations, but it is not an axiom of probability, and probability distributions can take many other forms. A probability distribution that satisfies the principle of indifference is a uniform distribution; any outcome is equally likely. You are right that there is no uniform distribution over a countably infinite set. There are, however, non-uniform distributions over countably infinite sets, for instance the distribution $p(n)=6/(n\pi)^2$ over $\mathbb N$. For uncountable sets, on the other hand, there cannot be any distribution, uniform or not, that assigns non-zero probability to uncountably many elements. This can be shown as follows: Consider all elements whose probability lies in $(1/(n+1),1/n]$ for $n\in\mathbb N$.
The union of all these intervals is $(0,1]$. If there were finitely many such elements for each $n\in\mathbb N$, then we could enumerate all the elements by first enumerating the ones for $n=1$, then for $n=2$ and so on. Thus, since we can't enumerate the uncountably many elements, there must be an infinite (in fact uncountably infinite) number of elements in at least one of these classes. But then by countable additivity their probabilities would sum up to more than $1$, which is impossible. Thus there cannot be such a probability distribution. - I'll elaborate on my comment. I claim that the statement "The probability that a continuous random variable takes on a specific value actually equal zero?" is false. I'll stick with the definition that a continuous random variable takes values in an uncountable set, or, to be more precise, that no countable subset has full measure. It is the one used by Davitenio, and in the intro of this Wikipedia article. Take your favorite real-valued continuous random variable; call it $X$. Flip a well-balanced coin. Define a random variable $Y$ by: • If the coin shows heads, then $Y=X$; • If the coin shows tails, then $Y=0$. The random variable $Y$ has the same range as $X$: any value taken by $X$ can be achieved by $Y$ provided that the coin shows heads. Hence, it is continuous. However, with probability at least $1/2$, we have $Y=0$, so that one specific value has non-zero probability. The good notion here would be the notion of non-atomic measure. An atom is a point with positive measure, so a random variable which doesn't take any specific value with positive probability is exactly a random variable whose image measure is non-atomic. This is a tautology. ===== Another definition of "continuous random variable" is a real-valued (or finite-dimensional-vector-space-valued) random variable whose image measure has a density with respect to the Lebesgue measure. Yes, even Wikipedia gives different definitions to the same object. 
If $X$ is a continuous random variable with this definition, then there is a function $f$, non-negative and with integral equal to $1$, such that for any Borel set $I$, we have $\mathbb{P} (X \in I) = \int_I f(x) dx$. Since a singleton has zero Lebesgue measure, we get $\mathbb{P} (X = x) = 0$ for all $x$. ===== My take on the subject (warning: rant): I really, really don't like the use of "continuous random variable", and more generally the use of "continuous" in opposition to "discrete". These are the kind of terms which are over-defined, so that you can't always decide what definition the user has in mind. Even if it is quite bothersome, I prefer the use of "measure absolutely continuous with respect to the Lebesgue measure", or with some abuse, "absolutely continuous measure", or "measure with a density". With even more abuse, "absolutely continuous random variable". It is neither pretty nor rigorous, but at least you know what you are talking about. ===== PS: As for why your proof does not work, Joriki's answer is perfect. I would just add that the formula $$\mathbb{P} (X = x) = \frac{\# \{ \text{favorable outcomes} \} }{\# \{\text{possible outcomes}\}}$$ only works for finite probability spaces, and when all the outcomes have the same probability. This is what happens when you have well-balanced coins, unloaded dice, well-mixed card decks, etc. Then, you can reduce a probability problem to a combinatorial problem. This does not hold with full generality. - As I mentioned in the comments, a continuous random variable is one where its cumulative distribution function is continuous. This would imply that the set of possible values is uncountable, but that set being uncountable does not imply that it is a continuous random variable. I am using the definition given in Statistical Inference by Casella and Berger, which is not a PhD level text, but maybe a Masters level text, i.e., no measure theory is involved.
Therefore, the counterexample given by @D.Thomine is a good counterexample to your thoughts. You can have a random variable with an uncountable domain that has nonzero probability for some values. But it is not a continuous random variable, because the CDF would have a jump at such points and therefore would not be continuous.

Casella and Berger show, for a continuous random variable, $$0 \leq P(X = x) \leq P(x - \epsilon < X \leq x) = F(x) - F(x - \epsilon)$$ for all $\epsilon > 0$. Taking the limit of both sides as $\epsilon$ decreases to 0 gives $$0 \leq P(X = x) = \lim_{\epsilon \downarrow 0} [F(x) - F(x - \epsilon)] = 0$$ by the continuity of $F(x)$.

-
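Both of the arguments above can be checked numerically using only the standard library: the normal CDF illustrates the squeeze in the continuous case, while the CDF of the coin-flip mixture $Y$ from the earlier answer keeps a jump of $1/2$ at $0$. A minimal sketch, with a standard normal for $X$ as an illustrative assumption:

```python
import math

def Phi(x):
    """Standard normal CDF, written via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F_mix(y):
    """CDF of Y from the coin-flip example: Y = X (normal) on heads, Y = 0 on tails."""
    return 0.5 * Phi(y) + (0.5 if y >= 0 else 0.0)

for eps in (1e-1, 1e-3, 1e-6, 1e-9):
    cont_gap = Phi(1.0) - Phi(1.0 - eps)      # shrinks to 0: P(X = 1) = 0
    mix_gap = F_mix(0.0) - F_mix(0.0 - eps)   # stays near 1/2: the atom at 0 survives
    print(f"eps={eps:g}  continuous gap={cont_gap:.3e}  mixture gap={mix_gap:.6f}")
```

The continuous gap is proportional to $\epsilon$ and vanishes in the limit, exactly as in the squeeze argument; the mixture gap converges to $1/2$, which is the jump of the CDF and hence $P(Y = 0)$.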
http://math.stackexchange.com/questions/tagged/binomial-coefficients?page=7&sort=newest&pagesize=15
# Tagged Questions

Coefficients involved in the Binomial Theorem. $\binom{n}{k}$ counts the subsets of size $k$ of a set of size $n$.

### How to understand this combinatorially: $\sum^{2k}_{i=0} \binom{4k}{2i} (-1)^{i}=2^{2k}(-1)^{k}$

The TAs in my department are stuck in assisting an undergraduate with the following problem: $$\sum^{2k}_{i=0} \binom{4k}{2i}(-1)^{i}=2^{2k}(-1)^{k}.$$ We tried to solve this via induction (obviously ...

### How to find the numbers that sum up to a given number?

I have a list of numbers, finite, about 50, and I want to know which permutations with subsets of that set sum up to a given number. I found a formula for the number of ways but I don't know how to ...

### Proving Binomial Identities Using Bijections To Lattice Paths

How can I derive a bijection to show that the following equality holds? $2\sum_{j=0}^{n-1} \binom{n-1+j}{j} = \binom{2n}{n}$ In class, we've been deriving bijections using ...

### What is the coefficient of $z^k$ in ${z+n-1 \choose n}$ for $1 \leq k \leq n$?

I'm currently looking into Stirling numbers of the first kind, as it seems there is a connection.

### Wellner Inequality

Working on an exercise from Shorack's Probability for Statisticians, Ex 4.6 (Wellner): Suppose $T \simeq$ Binomial$(n,p)$. Then use the inequality $\mu(|X| \ge \lambda) \le$ ...

### Trying to prove that $p$ prime divides $\binom{p-1}{k} + \binom{p-2}{k-1} + \cdots + \binom{p-k}{1} + 1$

So I'm trying to prove that for any natural number $1\leq k<p$, $p$ prime divides: $$\binom{p-1}{k} + \binom{p-2}{k-1} + \cdots + \binom{p-k}{1} + 1$$ Writing these choice functions in ...

### Some rare binomial identities

Long ago, I once saw a nontrivial appealing binomial type of identity that I never saw again. It was something along the line of $\sum \binom{a(x)}{b(y)} =$ ... where $a$ and $b$ were polynomials, not ...

### Calculating $\sum_{y=0}^x \Pr[Y= y] \Pr[Z\leq k-y]^2$ when $Y,Z$ are binomially distributed?

Remark: I recently rewrote this post, hoping to get answers! I am analyzing the following experiment: pick an $x \in \{0,\ldots,2k\}$ uniformly at random, pick a $(2k+1)$-bit bitstring $b_1=(u,v_1)$ ...

### Prove that $n! \equiv \sum_{k=0}^{n}(-1)^{k}\binom{n}{k}(n-k+r)^{n}$

Basically I had some fun doing this: 0 1 1 6 7 6 8 12 19 6 27 18 37 6 64 24 61 125 etc., starting with ...

### Find the coefficient of $x^3y^2z^3$ in the expansion $(2x+3y-4z+w)^9$

The exercise says: In the expansion $(2x+3y-4z+w)^9$, find the coefficient of $x^3y^2z^3$. The formula to find the coefficient of $x_1^{r_1}x_2^{r_2}\dots x_k^{r_k}$ in $(x_1+x_2+\dots+x_k)^n$ ...

### Combinatorial proof of an identity [duplicate]

Possible Duplicate: Combinatorially prove something. I have to give a combinatorial proof of the identity: $$\sum_{i=0}^{n}\binom{n}{i}2^i=3^n$$ I can prove it using the binomial ...

### Find the coefficient of $\sqrt{3}$ in $(1+\sqrt{3})^7$?

I just want to ask you if my solution is correct. Here's the problem: Using the Binomial Theorem, find the coefficient of $\sqrt{3}$ in $(1+\sqrt{3})^7$. Solution: The binomial theorem is, ...

### What is $\lim\limits_{n\to\infty} \frac{n^d}{ {n+d \choose d} }$?

What is $\lim\limits_{n\to\infty} \frac{n^d}{ {n+d \choose d} }$ in terms of $d$? Does the limit exist? Is there a simple upper bound in terms of $d$?

### Calculating the Shapley value in a weighted voting game.

Given a special case of WVG (Weighted Voting Game) of $a$ 1s and $b$ 2s and a quota $q$, $[q:1,1,\ldots,1,2,2,\ldots,2]$. I need help with calculating the Shapley value of a player with a weight of $2$ and a ...

### Discriminating between model error and random error. [closed]

I am trying to discriminate between a type of systematic error and random error in fitting data to a model. I have various functions (let's call them 'models') to which I would like to compare a ...

### binomial theorem: find coef. xy

Given: $$\left(x-\dfrac{1}{2y}\right)^8\left(x+\dfrac{1}{2y}\right)^4$$ Using the binomial theorem, what is the coefficient of $xy$ in the expansion? I've tried to do it but I couldn't. Could you ...

### Using Binomial Theorem to prove identity

I need to prove the following using the binomial theorem: $${n \choose k} = {n-2 \choose k} + 2{n-2 \choose k-1} + {n-2 \choose k-2}$$ The binomial theorem states $(1+x)^n = \sum_{k=0}^n \binom{n}{k} \ldots$

### Binomial coefficient intervals (inequality)

For given $N$, $x$ and $k$ such that $0\leq x<N$ and $2\leq k\leq \left\lfloor \frac{N+1-2x}{2}\right\rfloor$, does there exist $p$, $2\leq p\leq \left\lfloor \frac{N+1}{2}\right\rfloor$, such that ...

### Counting subsets containing three consecutive elements (previously Summation over large values of nCr)

Problem: In how many ways can you select at least $3$ items consecutively out of a set of $n$ ($3\leqslant n \leqslant 10^{15}$) items? Since the answer could be very large, output it modulo $10^{9}+7$. ...

### Distribute distinct objects in identical boxes

Number of ways to distribute 6 distinct objects to 3 identical boxes such that each should have at least one? Is there any standard formula for these sums, as we have for the identical–distinct pair ...

### Number of triangles inside given n-gon?

How many triangles can be drawn all of whose vertices are vertices of a given n-gon and all of whose sides are diagonals (not sides) of the n-gon? How many k-gons can be drawn in such a way?

### Given $y$ and $x \choose y$, how to find $x$? [duplicate]

Possible Duplicate: How to reverse the $n$ choose $k$ formula? Given integers $y\geq 0$ and $z>0$, is there a good way to find an integer $x\geq y$ such that $z=\binom x y$? I could ...

### Techniques for summing ratio of binomial coefficients

There are several identities that involve the sum of the product of binomial coefficients. However, what I am searching for is an identity that involves the ratio of binomial coefficients. ...

### What is the exponent of the last term of $(2x^2+3y^3)^{10}$?

Hi! I'm sorry if this question seems a bit amateurish. I'm quite confused with this question that was asked in a quiz about binomial ...

### Spivak's Calculus - Exercise 4.a of 2nd chapter

4. (a) Prove that $$\sum_{k=0}^l \binom{n}{k} \binom{m}{l-k} = \binom{n+m}{l}.$$ Hint: Apply the binomial theorem to $(1+x)^n(1+x)^m$. I'm having a hard time trying to solve the problem ...

### generating function of multinomial coefficient

How to express this series in closed form? $$\sum_{i=1}^{\infty}\frac{(3i)!}{(i!)^3}x^{i}$$ The motive of the generating function is to evaluate the number of paths from $(0,0,0)$ to $(n,n,n)$ ...

### Inequality involving sums of fractions of products of binomial coefficients

Let $n\in\mathbb{N}$. For $0\le l\le n$ consider $$b_l:=4^{-l} \sum_{j=0}^l \frac{\binom{2l}{2j} \binom{n}{j}^2}{\binom{2n}{2j}}.$$ Do you know a technique how ...

### Simplifying the sum with binomial coefficients [duplicate]

Possible Duplicate: Identity involving binomial coefficients. Simplify the sum: $$\sum_{k=0}^n {2k\choose k}{2n-2k\choose n-k}$$ So we can denote $a_n=\sum_{k=0}^n {2k\choose k}{2n-2k\choose n-k}$ ...

### Criterion for Wolstenholme Primes

Wolstenholme's theorem is a nice theorem that states that every prime $p>3$ satisfies: $$\binom{2p}{p} \equiv 2 \pmod {p^3}$$ A Wolstenholme prime is a prime $p$ such that $\binom{2p}{p} \equiv 2$ ...

### Primes Not Dividing $\binom{2n}{n}$

Let $n \geq 3$; show ${2n \choose n}$ is not divisible by $p$ for all primes $\frac{2n}{3} <p\leq n$. Note: This fact, along with other facts about ${2n \choose n}$, is used in a proof of Bertrand's ...

### No closed form for the partial sum of ${n\choose k}$ for $k \le K$?

In Concrete Mathematics, the authors state that there is no closed form for $$\sum_{k\le K}{n\choose k}.$$ This is stated shortly after the statement of (5.17) in section 5.1 (2nd edition of the ...

### what is the easiest way to represent $\sqrt{1 + x}$ in series

How to expand $\sqrt{1 + x}$: $$\sum_{n = 0}^\infty \frac{(\tfrac12)!\,x^n}{n!\,(\tfrac12 - n)!} = 1 + \sum_{n = 1}^\infty \frac{(\tfrac12)!\,x^n}{n!} \ldots$$

### Alternating sum of squares of binomial coefficients

I know that the sum of squares of binomial coefficients is just ${2n\choose n}$, but what is the closed expression for the sum $\binom{n}{0}^2 - \binom{n}{1}^2 + \binom{n}{2}^2 - \cdots$ ...

### The Lucas Theorem and facts

I have studied the Lucas theorem and I encountered the following facts. How to deduce the following facts from the Lucas theorem? (1) If $d, q > 1$ are integers such that $$\binom{nd}{md}$$ ...

### proof of a finite sum involving a binomial coefficient and a variable.

I found that the following equation holds for integers $l$, $k$, and any $x \neq 0,1$: $$\sum\limits_{l = 0}^k (-1)^l \binom{k}{l} \ldots \tag{1}$$

### Two inequalities with binomial coefficients

I have two inequalities that I can't prove: $\displaystyle{n\choose i+k}\le {n\choose i}{n-i\choose k}$ and $\displaystyle{n\choose k} \le \frac{n^n}{k^k(n-k)^{n-k}}$. What is the best way to prove ...

### Lucas' theorem Consequence

Lucas' theorem consequence: $$\binom {m}{n} \equiv \prod_{i=0}^k \binom {m_i}{n_i} \pmod{p},$$ $$m=m_k\,p^k+m_{k-1}\,p^{k-1}+\cdots +m_1\, p+m_0,$$ $$n=n_k\,p^k+n_{k-1}\,p^{k-1}+\cdots +n_1\,p+n_0$$ ...

### Prove $\sum_{k=0}^{n}\binom{n}{k} = 2^{n}$ combinatorially [duplicate]

Possible Duplicate: Proving a special case of the binomial theorem. Prove the identity using a combinatorial argument: $$\sum_{k=0}^{n}\binom{n}{k} = 2^{n}$$ I'm not sure how to do a ...

### Prove by induction: $2^n = C(n,0) + C(n,1) + \cdots + C(n,n)$

This is a question I came across in an old midterm and I'm not sure how to do it. Any help is appreciated. $$2^n = C(n,0) + C(n,1) + \cdots + C(n,n).$$ Prove this statement is true for all $n$ ...

### modified $\sum k{n \choose k}$ closed form expression

There is probably something stupidly simple I'm missing, but I'm trying to find a closed form for: $$2\sum_{k=1}^{(n-1)/2} k \, {n \choose k} \qquad (n\text{ is odd})$$ Anyone know how to ...

### A three variable binomial coefficient identity

I found the following problem while working through Richard Stanley's Bijective Proof Problems (Page 5, Problem 16). It asks for a combinatorial proof of the following: $\sum_{i+j+k=n} \ldots$

### Limit of binomial coefficient

I would like to find the limit $$\lim_{n \to \infty} \binom{s}{n+1} = \lim_{n \to \infty} \frac{s (s-1) \cdots (s-n)}{(n+1)!},$$ where $s \in \mathbb C$. Actually, it would be enough to show that ...

### Binomial expansion with only odd coefficients?

In William Feller's 1st book, p. 272, it is said that the generating function $\Phi$ satisfies $$qs\Phi^2(s) - \Phi(s) + ps = 0,$$ so it has two roots. The first root is unbounded ...

### Generating function with binomial coefficients

I want to derive a formula for the generating function $$\sum_{n=0}^{+\infty}{m+n\choose m}z^n$$ because it is very often very useful for me. Unfortunately I'm stuck: $f(z)=\sum_{n\ge 0}\binom{m+n}{m}\ldots$

### Three problems with binomial coefficients

I found three difficult problems for me, involving binomial coefficients. They are extremely interesting I think, but I don't know if I have enough knowledge to manage. They seem really hard; can you help ...

### Alternating sum of binomial coefficients

Calculate the sum: $$\sum_{k=0}^n (-1)^k {n+1\choose k+1}$$ I don't know if I'm so tired or what, but I can't calculate this sum. The result is supposed to be $1$ but I always get ...

### Prove: $\frac{(2px)!}{((px)!)^2}\equiv\frac{(2x)!}{(x!)^2}\pmod{p^2}$

How can I prove the following, where $p$ is a prime and $x$ a positive integer? $$\dfrac{(2px)!}{((px)!)^2}\equiv\dfrac{(2x)!}{(x!)^2}\pmod{p^2}$$ I'm not sure if it is actually true, but I tested ...

### Does this qualify as a proof? (Spivak's 'Calculus')

I'm working through Spivak's 'Calculus' at the moment, and a question about series confused me a bit. I think I have the solution, but I'm not sure if my "proof" holds. The question is: Prove ...

### Going from binomial distribution to Poisson distribution

Why does the Poisson distribution $$f(k; \lambda)= \Pr(X=k)= \frac{\lambda^k e^{-\lambda}}{k!}$$ contain the exponential function $\exp$, while its relation to the binomial distribution would ...

### Multi binomial theorem application

If I have the polynomial expression $(a_1x+b_1y+c_1)^p (a_2x+b_2y+c_2)^d$, with the assumptions $a_1+b_1 \ll c_1$ and $a_2+b_2 \ll c_2$, can I expand this as a product of binomials using the ...
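Several of the identities asked about above are easy to sanity-check numerically with `math.comb`. A quick sketch; the specific values ($n=7$, $m=5$, $l=6$, and $n=9$ for the alternating sum) are arbitrary test choices:

```python
from math import comb

# Vandermonde: sum_k C(n,k) C(m,l-k) = C(n+m,l)  (the Spivak exercise above)
n, m, l = 7, 5, 6
assert sum(comb(n, k) * comb(m, l - k) for k in range(l + 1)) == comb(n + m, l)

# Row sum: sum_k C(n,k) = 2^n  (the combinatorial-proof question above)
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n

# Alternating sum of squares: sum_k (-1)^k C(n,k)^2 vanishes for odd n
assert sum((-1) ** k * comb(9, k) ** 2 for k in range(10)) == 0

print("all identities check out for the sampled values")
```

Note that `math.comb(n, k)` returns 0 when `k > n`, which is exactly the convention the Vandermonde sum needs at the out-of-range terms.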
http://en.m.wikipedia.org/wiki/Prime_decomposition_(3-manifold)
# Prime decomposition (3-manifold)

In mathematics, the prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) collection of prime 3-manifolds.

A manifold is prime if it cannot be presented as a connected sum of more than one manifold, none of which is the sphere of the same dimension. This condition is necessary since for any manifold $M$ of dimension $n$ it is true that $M = M \# S^n$ (where $M \# S^n$ means the connected sum of $M$ and $S^n$).

If $P$ is a prime 3-manifold then either it is $S^2 \times S^1$ or the non-orientable $S^2$ bundle over $S^1$, or it is irreducible, which means that any embedded 2-sphere bounds a ball. So the theorem can be restated to say that there is a unique connected sum decomposition into irreducible 3-manifolds and fiber bundles of $S^2$ over $S^1$.

The prime decomposition holds also for non-orientable 3-manifolds, but the uniqueness statement must be modified slightly: every compact, non-orientable 3-manifold is a connected sum of irreducible 3-manifolds and non-orientable $S^2$ bundles over $S^1$. This sum is unique as long as we specify that each summand is either irreducible or a non-orientable $S^2$ bundle over $S^1$.

The proof is based on normal surface techniques originated by Hellmuth Kneser. Existence was proven by Kneser, but the exact formulation and proof of the uniqueness was done more than 30 years later by John Milnor.

## References

• J. Milnor, A unique decomposition theorem for 3-manifolds, American Journal of Mathematics 84 (1962), 1–7.
http://www.reference.com/browse/Sprague%E2%80%93Grundy_theorem
# Sprague–Grundy theorem

In combinatorial game theory, the Sprague–Grundy theorem states that every impartial game is equivalent to a nimber. The nim-value of an impartial game is then defined as the unique nimber that the game is equivalent to. In the case of a game whose positions (or summands of positions) are indexed by the natural numbers (for example the possible heap sizes in nim-like games), the sequence of nimbers for successive heap sizes is called the nim-sequence of the game. The theorem was discovered independently by R. P. Sprague (1935) and P. M. Grundy (1939).

## Definitions

For the purposes of the Sprague–Grundy theorem, a game is a two-player game of perfect information satisfying the ending condition (all games come to an end: there are no infinite lines of play) and the normal play condition (a player who cannot move loses). An impartial game is one, such as nim, in which each player has the same available moves in every position.

Impartial games fall into two outcome classes: either the next player wins (an N-position) or the previous player wins (a P-position).

An impartial game can be identified with the set of positions that can be reached in one move (these are called the options of the game). Thus the game with options A, B, or C is the set {A, B, C}.

A nimber is a special game denoted *n for some ordinal n. We define *0 = {} (the empty set), then *1 = {*0}, *2 = {*0, *1}, and *(n+1) = *n ∪ {*n}. When n is an integer, the nimber *n = {*0, *1, ..., *(n−1)}. This corresponds to a heap of n counters in the game of nim, hence the name.

Two games G and H can be added to make a new game G+H in which a player can choose either to move in G or in H. In set notation, G+H means {G+h for h in H} ∪ {g+H for g in G}, and thus game addition is commutative and associative.

Two games G and G' are equivalent if for every game H, the game G+H is in the same outcome class as G'+H. We write G ≈ G'.

## Lemma

For impartial games, G ≈ G' if and only if G+G' is a P-position.
First, we note that ≈ is an equivalence relation, since equality of outcome classes is. We now show that for every game G and every P-position game A, A+G ≈ G. By the definition of ≈, we need to show that G+H is in the same outcome class as A+G+H for all games H. If G+H is a P-position, then the previous player has a winning strategy in A+G+H: to every move in G+H he responds according to his winning strategy in G+H, and to every move in A he responds with his winning strategy there. If G+H is an N-position, then the next player in A+G+H makes a winning move in G+H, and then reverts to responding to his opponent in the manner described above.

Also, G+G is a P-position for any game G. For every move made in one copy of G, the previous player can respond with the same move in the other copy, which means he always makes the last move.

Now we can prove the lemma. If G ≈ G', then G+G' is in the same outcome class as G+G, which is a P-position. On the other hand, if G+G' is a P-position, then since G+G is also a P-position, G ≈ G+(G+G') ≈ (G+G)+G' ≈ G', thus G ≈ G'.

## Proof

We prove the theorem by structural induction on the set representing the game. Consider a game $G = \{G_1, G_2, \ldots, G_k\}$. By the induction hypothesis, all of the options are equivalent to nimbers, say $G_i \approx *n_i$. We will show that $G \approx *m$, where $m$ is the mex of the numbers $n_1, n_2, \ldots, n_k$, that is, the smallest non-negative integer not equal to some $n_i$.

Let $G' = \{*n_1, *n_2, \ldots, *n_k\}$. The first thing we need to note is that $G \approx G'$. Consider $G+G'$. If the first player makes a move in $G$, then the second player can move to the equivalent $*n_i$ in $G'$ (and symmetrically if the first player moves in $G'$). After this the game is a P-position (by the lemma), since it is the sum of some option of $G$ and a nimber equivalent to that option. Therefore, $G+G'$ is a P-position, and by another application of our lemma, $G \approx G'$.
So now, by our lemma, we need to show that $G+*m$ is a P-position. We do so by giving an explicit strategy for the second player in the equivalent $G'+*m$. Suppose that the first player moves in the component $*m$ to the option $*m'$ with $m' < m$. Since $m$ is the minimal excluded number, the second player can move in $G'$ to $*m'$. Suppose instead that the first player moves in the component $G'$ to the option $*n_i$. If $n_i < m$, then the second player moves from $*m$ to $*n_i$. If $n_i > m$, then the second player moves in $*n_i$ to $*m$; such a move must exist, since $m < n_i$ means $*m$ is an option of $*n_i$. It is not possible that $n_i = m$, because $m$ was defined to be different from all the $n_i$. Therefore, $G'+*m$ is a P-position, and hence so is $G+*m$. By our lemma, $G \approx *m$, as desired.

## Development

The Sprague–Grundy theorem has been developed into the field of combinatorial game theory, notably by E. R. Berlekamp, John Horton Conway and others. The field is presented in the books *On Numbers and Games* and *Winning Ways for your Mathematical Plays*.

## References

• Sprague, R. P. (1935–36). "Über mathematische Kampfspiele". Tohoku Mathematical Journal 41: 438–444.
• Grundy, P. M. (1939). "Mathematics and games". Eureka 2: 6–8. Reprinted, 1964, 27: 9–11.
• Schleicher, Dierk; Stoll, Michael (2004). "An introduction to Conway's games and numbers". arXiv:math/0410026.
• Milvang-Jensen, Brit C. A. (2000). "Combinatorial Games, Theory and Applications".
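The mex recursion at the heart of the proof translates directly into code. Below is a minimal Python sketch computing the nim-sequence of a subtraction game; the move set {1, 2, 3} is an illustrative assumption, not taken from the sources above:

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # subtraction game: remove 1, 2, or 3 counters per turn

def mex(s):
    """Minimal excludant: the smallest non-negative integer not in s."""
    m = 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(n):
    """Nim-value of a heap of n counters; a value of 0 marks a P-position."""
    return mex({grundy(n - k) for k in MOVES if k <= n})

print([grundy(n) for n in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

A heap is a P-position exactly when its Grundy value is 0, and by the theorem a sum of independent heaps is a P-position exactly when the XOR (nim-sum) of their Grundy values is 0.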
http://www.advancedlab.org/mediawiki/index.php/Quantum_Interference_%26_Entanglement
# Quantum Interference & Entanglement

## Quantum Interference & Entanglement Description

• Pre-requisites: Physics 137A
• Days allotted for the experiment: 5
• Signing up for consecutive days is preferred; at most one break is allowed, after day 2 or 3.

Attention: There is NO eating or drinking in the 111-Lab anywhere, except in room 286 LeConte on the bench with the BLUE tape around it. Thank you, the Staff.

This lab will be graded 30% on theory, 40% on technique, and 30% on analysis. For more information, see the Advanced Lab Syllabus.

Comments: E-mail Don Orlando

## Before the Lab

Complete the following before your experiment's scheduled start date:

1. There is no video as of yet for this experiment. Read the references [1] [2] [3] below.
2. Read the Optics Tutorial, in particular the pdf files from CVI Melles Griot. The sections that are especially relevant to this experiment are Polarization and Waveplates (near the end of Fundamental Optics), Optical Coatings (all of our waveplates have antireflection coatings), and Intro to Laser Technology.
3. Complete the training for the safe use of lasers detailed on the Laser Safety Training page. This includes readings, watching a video, taking a quiz, and filling out a form.
4. Print and fill out the QIE Pre Lab and Evaluation. The pre-lab must be printed separately. Discuss the experiment and pre-lab questions with any faculty member or GSI and get it signed off by that faculty member or GSI. Turn in the signed pre-lab sheet with your lab report.

You should keep a laboratory notebook. The notebook should contain a detailed record of everything that was done and how/why it was done, as well as all of the data and analysis, also with plenty of how/why entries. This will aid you when you write your report.

## Introduction

This experiment tests the validity of quantum mechanics against local hidden variable theories in describing entanglement phenomena.
It takes the form of a quantum optics experiment using polarization-entangled photon pairs.

### Bell's Theorem and the CHSH Inequality

John Bell showed that any theory in which properties are local and are well-defined prior to measurement must obey certain limitations, the so-called Bell inequalities. Interestingly, quantum mechanics exceeds the limits they set. Therefore, observing that nature does exceed these limits makes a good case for quantum mechanics and proves that no local realistic theory can describe nature accurately.

One particular version of Bell's theorem is the so-called CHSH inequality (named after Clauser, Horne, Shimony, and Holt). To make the discussion more concrete, we will assume a photon source that sends a pair of photons, one to each of two distant locations. The degree of freedom we will study is the polarization of these photons. We will be interested only in events where the polarization of both photons has been detected successfully.

First let us define the parity of the polarization correlations:

$E=\frac{N_{vv}-N_{vh}-N_{hv}+N_{hh}}{N_{total}}=\frac{N_{vv}-N_{vh}-N_{hv}+N_{hh}}{N_{vv}+N_{vh}+N_{hv}+N_{hh}}$,

where $N_{vv}$ is the number (or rate) of coincidences where both photons are vertically polarized, etc. E can range from −1, meaning that all photon coincidences have opposite polarizations, to 1, meaning they all have the same polarization.

Let us then define the quantity S, which is a function of four distinct E measurements:

$S=E(\alpha,\beta)-E(\alpha,\beta')+E(\alpha',\beta)+E(\alpha',\beta') \,$,

where α and β are the angles at which we analyze the polarization of each photon. The point of defining the quantity S is that local realistic theories are always bound to yield |S| ≤ 2, while quantum mechanics allows values of up to 2√2 ≈ 2.8. At first glance, it would seem that S could range from −4 to 4.
However, a careful inspection of the physics reveals that no two pairs of angles (α,α') and (β,β') give a value this large. If the system obeys a local hidden variable theory, then S is restricted by the CHSH inequality: |S| ≤ 2. However, quantum mechanics predicts that |S| can be as high as 2√2 for particular quantum states of the two photons and carefully chosen angles. (For a full derivation, see CHSH's paper[3].) The goal of this experiment is to violate the CHSH inequality, thereby rejecting local hidden variable theories and affirming the validity of quantum mechanics.

## Experimental Setup

### Overview

The heart of the experiment is the generation of a particular entangled quantum state of the polarization of two individual photons, a Bell state. In the simplest case, this is a superposition state of both photons being either horizontally or vertically polarized. In the experiment, we achieve this by sending photons near 405 nm from a diode laser to a pair of non-linear crystals made of beta barium borate (BBO crystals). Within the non-linear crystals the violet photon can decay into a pair of red photons, with their polarization determined by the optical axes of the non-linear crystals. The photon pair, emitted at a small angle, is then detected with avalanche photodiodes after passing through polarization optics. Finding two detectors firing within a short time interval indicates that these events were indeed caused by a photon pair and not stray light.

### Laser Diode

Thorlabs LDC 205 C Laser Diode Controller.

The experiment is powered by a 120-mW, 405-nm violet laser diode, the same wavelength used by Blu-ray Disc™ players. This is a Class 3B laser and can cause permanent eye damage if the beam directly enters the eye. DO NOT REMOVE THE 405 BANDSTOP FILTER AT THE END OF THE DOWNCONVERSION SOURCE (BBO CRYSTALS) OR THE ORANGE BEAM ENCLOSURES THAT PREVENT STRAY REFLECTIONS FROM LEAVING THE BEAM PATH.
Orange protective goggles are available by the door. Because this is a laser diode, all the photons in the beam have the same polarization, in this case horizontal. The laser beam incident on the BBOs (i.e., after the intermediate optics) is sometimes referred to as the "pump" beam. The laser diode is powered by a Thorlabs LDC205C laser diode controller, located on top of the optical bench roof. Turn the controller on with the switch in the lower left corner. We run the laser diode in constant current mode and cathode ground (CG) polarity. You will want to view the laser diode current, ILD. To operate the diode, use the button in the upper right corner to turn it on and turn the knob to control the current. Any time the laser diode is on, the light on the QIE room door will be flashing red. The current limit is set to 150 mA to avoid burning out the diode. If you reach this limit, the controller will beep, the red "LIMIT" LED will turn on, and you will not be able to increase the current further. However, it is okay to run the laser diode at its maximum current. ### BBOs Diagram of violet beam path optics The beam passes through some optics (two steering mirrors, two wave plates, and an iris—see alignment) before reaching a pair of nonlinear beta barium borate (β-BaB2O4 or just BBO) crystals. When a pump photon of a particular polarization enters a BBO crystal, it can undergo a process known as spontaneous parametric down-conversion, in which it is converted into two photons each with half the initial energy (and twice the wavelength = 810 nm) which exit the crystal symmetrically at a small angle. These photons are sometimes referred to as the "signal" and "idler" photons, but we can just as well call them the A and B photons, referring to the two arms of the detection setup along which they will travel. These photons are polarization-entangled, meaning that they are guaranteed to have (in this case) the same polarization. 
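A back-of-the-envelope estimate shows just how faint the down-converted beams are; the 1,000 photons-per-second rate below is an illustrative figure, not a measured value:

```python
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 810e-9  # down-converted photon wavelength, m
rate = 1_000         # photons per second (illustrative)

photon_energy = h * c / wavelength   # energy per 810-nm photon, ~2.5e-19 J
beam_power = photon_energy * rate    # ~2.5e-16 W
pump_power = 120e-3                  # the 120-mW laser diode

print(f"{beam_power:.2e} W  ({beam_power / pump_power:.1e} of the pump power)")
```

At roughly 10^-16 W, some fifteen orders of magnitude below the pump, single-photon detectors are the only practical way to register these beams.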
Note that the two beams of down-converted photons will not be visible to the naked eye because (1) 810-nm photons are infrared and (2) the beams are extremely low-power because of the conversion efficiency of the crystals. (Think about how much power corresponds to ~1,000 photons per second and compare this to the laser diode power rating.)

Each BBO only down-converts photons of a single polarization. BBO-1 (farther from the detectors) is fixed such that it down-converts horizontally polarized pump photons into two vertically polarized photons. BBO-2 (closer to the detectors) can be rotated: 0° means the down-conversion axes of the two BBOs are parallel and 270° means they are perpendicular. Because our BBOs are slightly misaligned, always use 270° instead of 90°.

Photo of violet beam path optics

Because the separation between the BBOs is so small, the down-converted photons from each BBO essentially travel along the same cone to the detectors. This means that the horizontally polarized pairs are indistinguishable from the vertically polarized pairs until we perform a polarization measurement on them. In the quantum mechanical picture, each pair of down-converted photons is in a superposition of vertical and horizontal polarization until a measurement collapses the wavefunction into one state or the other.

The evolution of the quantum state of the system from emission to down-conversion is summarized below in bra-ket notation, with the assumption that the BBOs are perpendicular. H and V refer to horizontally and vertically polarized states of a pump photon, respectively, and h and v refer to horizontally and vertically polarized states of a down-converted photon. hh and vv are short for the combined state of a pair of down-converted photons, and φ is the polarization angle of the pump beam.
$\mbox{Emission:}\quad|\psi\rangle= \cos{\phi} |H\rangle + \sin{\phi} |V\rangle$

$\mbox{BBO-1:}\quad|H\rangle \rightarrow |vv\rangle \equiv |v\rangle_A \otimes |v\rangle_B$

$\mbox{BBO-2:}\quad|V\rangle \rightarrow |hh\rangle \equiv |h\rangle_A \otimes |h\rangle_B$

$\mbox{After BBOs:}\quad|\psi\rangle= \cos{\phi} |vv\rangle + \sin{\phi} |hh\rangle$

The probability of detecting a given polarization depends on the polarization angle of the laser. You can effectively change the polarization angle of the laser to your liking by adjusting the angle of the 405-nm half-wave plate (between the two steering mirrors).

We have installed a 405-nm band-stop filter after the BBOs for laser safety reasons. Although approximately 100 mW of power is incident on the BBOs, the filter attenuates this to less than 1 mW. This power can be viewed safely in order to align the detection components.

### Detection

Detection arms from above

The detection setup consists of two identical arms separated by a small angle. The angle can be adjusted to match the trajectory of the down-converted photons and should be centered about the violet laser beam. Each of our detectors consists of a small lens which focuses the light into an optical fiber. The optical fiber runs from the lens to an avalanche photodiode (APD), which converts single photons into sizable (~1 V) electronic pulses. The APDs can detect anywhere from hundreds of photons to tens of millions of photons per second.

The APDs are powered by a homemade power supply, located above the APDs on the optical bench roof. The power supply has a master on/off switch and four switches to turn the APDs on and off individually. An alarm within the APD power supply will alert you if you are overloading the APDs. If you ever hear this, immediately turn off the APDs to prevent damage. Ambient light conditions are typically okay as long as you do not remove the long-pass filter on the detectors while the APDs are on.
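The count rates quoted for the APDs translate into astonishingly small optical powers, which is why single-photon detectors are needed at all. A quick back-of-the-envelope script (the 1,000/s rate is an illustrative number, not a measurement):

```python
# Optical power carried by a stream of single photons, to see why the
# down-converted beams are invisible and need APDs. The 1,000/s rate is
# just an illustrative value, not a measured one.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def beam_power(photons_per_s, wavelength_m):
    """Power (watts) of `photons_per_s` photons at the given wavelength."""
    return photons_per_s * H * C / wavelength_m

p = beam_power(1_000, 810e-9)
print(f"{p:.2e} W")   # ~2.5e-16 W, some 15 orders of magnitude below the pump
```

Compare this femtowatt-scale figure with the ~100 mW pump power to appreciate the dynamic range between the violet beam and the down-converted pairs.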
Note, however, that this alarm sounds for several seconds every time the APDs are turned on. This is normal.

To measure the polarization of the photon pairs, each detection arm has a half-wave plate to set the measurement basis and a polarizing beam splitter cube. The beam splitter allows horizontally polarized light to pass and reflects vertically polarized light out at 90°. This setup only allows photons to reach a given detector if they have a certain polarization (or, equivalently, their arrival at a given detector means that their wavefunctions have collapsed into a given polarization state). In other words, the beam splitter makes the horizontally polarized photons distinguishable from the vertically polarized photons.

### Coincidence Counting

Signals from the APDs are sent to a field-programmable gate array (FPGA) that has been programmed to calculate coincidences. A coincidence is the arrival of two photons at different detectors within a short coincidence window, typically 5 ns for our setup. The FPGA sends photon counts for each detector and coincidence counts for each pair of detectors to LabVIEW, which displays the data on the thermometer-like indicators on the front panel. These data can be used to calculate S.

Power supply for 4 APDs

APDs (left) send data to FPGA card (right)

Fiber optic cables coming in to APDs

Power in and data out from APDs

### Proper Start-up Procedure

Due to the idiosyncrasies of our software, it is important to follow these steps in the correct order:

1. Turn on the computer and log in.
2. Turn on the FPGA.
3. Run the LabVIEW program (latest version: `C:\QIE\QIE_Labview 2012\QIE_Counter.vi`, see using LabVIEW).

If you turn the FPGA on before the computer, you may not be able to use the mouse. If you run LabVIEW before turning on the FPGA, you will get an error message. The APDs can be turned on and off at any time, but you should be aware that turning them on while the lights are on influences your results (check this out).
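Not every coincidence is a real photon pair: two unrelated detector firings can land in the same 5 ns window. A rough model for this accidental background (with assumed singles rates; the exact formula is the pre-lab exercise and may differ by a factor depending on how the window is defined):

```python
# Rough accidental-coincidence estimate: two detectors firing independently
# at rates r_a and r_b land in the same window of width tau at a rate of
# about r_a * r_b * tau. Singles rates below are assumed for illustration;
# treat this as an order-of-magnitude check, not the pre-lab answer key.
def accidental_rate(r_a, r_b, tau_s):
    """Expected rate of uncorrelated two-detector coincidences, per second."""
    return r_a * r_b * tau_s

print(accidental_rate(1e5, 1e5, 5e-9))   # 100k/s singles, 5 ns window -> ~50/s
```

This is why keeping singles rates (and hence stray light) low matters: the accidental background grows with the product of the two singles rates.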
Therefore we encourage you not to turn the APDs on while the lights are on. It is usually best practice to turn on the main switch to the APD power supply (on the left) and then turn each APD on individually. This allows you to know which APD is overloaded should the alarm sound continuously. Remember that it is normal for the alarm to sound for a few seconds as each APD is switched on.

## Alignment

Because our detector apertures are only a few mm in diameter, proper alignment is essential to obtaining good data.

### Violet Beam Path

We are using an optical cage system between the laser diode and the BBOs to make alignment of this portion of the beam path fairly simple, as well as to ensure laser safety. The orange material around most of the beam path attenuates the laser power enough to be safely viewed by the naked eye but not so much as to obscure the view of the laser dot from the outside. If this material ever glows enough to add significant noise to your data, block the detectors' view of it with the black shields.

The following alignment procedure assumes that the height of the paper target behind the detector arms is correct. In theory, this target never needs to be moved or adjusted. However, if you notice that the target height seems drastically different from the height of the violet beam, you will need to adjust the target height to match. Ideally you would want to place the target directly in front of the BBOs and adjust its height until the violet dot hits its center. However, the detector arms will be in the way, making this a very difficult task. Therefore, do not mess with the target height unless absolutely necessary!

Use the two turning mirrors to aim the beam down the center of the optical cage to the paper target behind the detectors. Because the first mirror is farther upstream, it provides a finer adjustment of the beam spot's position on the iris.
Use the first mirror to guide the beam through the iris, then use the second mirror to center the beam on the target. Repeat the process until you have the beam accurately centered on both the BBOs and the target. Once the laser is aligned, you can open the iris all the way or leave it partially closed as you see fit.

Important: at this point it is also wise to check the tilt of the quarter-wave plate between the half-wave plate and the second mirror. Make sure that it is roughly perpendicular to the beam path. Angles greater than 45 degrees will severely limit your observable coincidence rates.

### Infrared Beam Path

Align the components on each detector arm independently of the other arm using the following procedure. During this procedure, you will have to remove the optical fibers from the APDs. Make sure the APDs are turned off before you do this. Allowing ambient light to enter the APDs directly will overload them. Be careful not to clamp anything down on top of the optical fibers during alignment. They are fragile and expensive to replace. Along similar lines: don't pull on the jacket (the black cover); instead, carefully wiggle the metal connector itself to remove the fiber from the connection. In addition, make sure that you do not scratch the front facets. The fibers are really fragile and they are also expensive, so simply do not break them!

1. The beam splitter cubes, half-wave plates and both detectors A' and B' should be removed from the cage, leaving only the two detectors A and B. Remove the long-pass filters from the detectors.
2. Attach the screw-on target to detector A/B. Remove the optical fiber from the APD and connect it to the red fiber testing laser (looks like a laser pointer). Turn on the test laser; the red beam should pass through the pinhole in the center of the screw-on target. The resulting red light cone (do not expect a well collimated beam) represents all the light that will be collected by the detector in your experiment.
The diameter of the red cone should be on the order of 1 cm. If you have a bigger cone, make sure that the fiber is connected properly to the collimators of the detector, i.e., make sure that the notches are aligned properly.

3. Detector A/B Height: Swing the detector arm to the center so that the violet beam is centered horizontally on the target. First move the plate on which the detectors are mounted parallel to the back plate with the thumb screws on the back of the detectors. Later, this will allow you to adjust the angle of the detectors in all directions with the thumb screws. Then adjust the height of the detector so that the violet beam passes through the pinhole, i.e., the red and violet beams coincide. Later on, you might be able to compensate for slight vertical misalignment by adjusting the second turning mirror to optimize count rates. However, it may be difficult to optimize this way for both detectors.
4. Detector A/B Angle: Adjust the thumb screws on the back of the detector until the red circle is well centered on the BBOs. You can use a piece of white paper to see the beam if it helps. If necessary, you can also adjust the thumb screws on the BBOs so that the back-reflection of the red circle is centered on the detector aperture. These thumb screws move both BBOs together. However, it may be difficult to align this back reflection for both arms.
5. Remove the screw-on target from the BBOs. Replace the long-pass filters on each detector and reconnect the optical fibers to their corresponding APDs. Make sure when reconnecting the fibers that the notch on the connector is lined up correctly. You will not be able to insert the fiber all the way in if it is not aligned, meaning light could possibly leak in and out.
### Detection Arm Angle

Using the fact that the laser beam exits the diode polarized within a few degrees of horizontal and that the 405-nm half-wave plate is miscalibrated such that a reading of 40° means that the optical axis is vertical, adjust the half-wave plate so that the BBOs, set at 270°, will down-convert as many photons as possible. We use 270° because this is the setup used in the actual measurement, and any other setting introduces a slight angle between the BBOs.

If you have the detector half-wave plates set at the same angle as each other, the optimal angle for the detection arms will be the angle at which AB or A'B' coincidences are maximized. Use the micrometers to vary the angle of each arm until you find a setting that maximizes coincidences. Make a note of the exact readings, although you should not need to move these arms again.

Questions: At which angle do you expect a maximum of the red-photon counts on a single arm? Is there a chance to observe coincidences for any pair of angles?

Having successfully answered the above questions, you should quickly find substantial coincidence rates above 50 counts/s. Now you may want to try to enhance the coincidence rate by changing the angles of the detectors; note the positions.

### Inserting the polarization analysing elements

The photons will be entangled in the polarization degree of freedom. Therefore, we will have to analyze the polarization with a combination of a polarizer (polarizing beam splitter cube) and a half-wave plate. For this you can either use the red alignment laser or swing the arms into the center so that you can use the blue beam as guidance.

1. Cube Position: Insert the beam splitter with the front face (marked by a dot on top of the cube) towards the BBOs. (There is only one correct position for the dot.) Adjust its position and height so that the red beam passes roughly through the center of the back face.
2. Cube Angle: Adjust the thumb screws on the cube mount so that the red back-reflection hits the center of the screw-on target. You may also use a piece of paper with a hole punched through to allow the laser to pass through, instead of the screw-on target.
3. Half-Wave Plate A/B: Insert the half-wave plate in front of the beam splitter. Adjust the position and angle with respect to the laser beam axis the same way as with the beam splitter.

For most settings of the half-wave plates, you should already see coincidences. However, you might want to reoptimize the angle settings of the detector arms. Why?

## Producing a Bell State

A Bell state is a maximally entangled state of two qubits, namely $|\psi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|hh\rangle\pm|vv\rangle)$ or $|\phi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|hv\rangle\pm|vh\rangle)$. (Note that we can only produce the ψ states because our BBOs down-convert into pairs with like polarization.) Ideally, we would like each pair of down-converted photons in this experiment to be in a Bell state. However, many factors can contribute to deviations from perfect entanglement:

1. pump beam not polarized at 45°
2. phase shift due to BBOs
3. separation between BBOs
4. angle between BBOs
5. different down-conversion efficiencies of BBOs

The first point is controlled by the 405-nm half-wave plate. You should have set this to output light polarized at 45° during the alignment procedure. However, now that the detector arms are positioned correctly, you should refine this setting using real-time data. If we assume detection efficiencies are equal between the two detectors, then you should be able to find the correct setting by equalizing coincidence measurements on AB and A'B'. This means you are outputting horizontally and vertically polarized photon pairs in equal numbers.

A small phase shift between the horizontally and vertically polarized photon pairs can occur due to dispersion and birefringence in the BBOs.
(You can also think of this as adding a slight elliptical polarization to linearly polarized light.) Let's say that instead of producing a perfect Bell state, the BBOs introduce a small relative phase shift δ: $|\psi\rangle=\frac{1}{\sqrt{2}}(|hh\rangle+e^{i\delta}|vv\rangle)$. This would decrease the number of coincidences when the detector half-wave plates are both set to 22.5°. To compensate for this, we need to delay one component of the pump beam by δ before it reaches the BBOs. We can do this with a birefringent crystal. We have chosen to use a quarter-wave plate rotated about the vertical (so that it functions as a less-than-quarter-wave plate). The quarter-wave plate is installed on a rotational mount with its optical axis parallel to the vertical. When the rotational mount reads 0°, the laser beam is perpendicular to the face of the wave plate. Rotate this wave plate so that you produce an ideal Bell state with δ = 0. For this you can choose particular settings of the half-wave plates in the analysing paths, or you can record the coincidences for a range of the two analyser half-wave plate settings (see mid-lab questions).

The separation between the BBOs is small enough to be negligible, and as noted above, the angle between the BBOs is practically eliminated when the BBOs are at 270°. We cannot control the down-conversion efficiencies of the BBOs, but they should be very similar.

### Mid-lab Questions

Before you go on, test the purity of your Bell state by mapping out the dependence of coincidence rates on the detector half-wave plate settings. To fully explore what's going on, you will need to make several plots. In particular, you should plot the coincidence rates and E as a function of the angle of one of the two half-wave plates. To better understand what is going on, rotate the other analysing half-wave plate by a certain angle and repeat the measurement. Make sure you understand what you expect to see for each plot, and ask a GSI if you are unsure.
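A small numerical model helps build intuition for these plots. The sketch below assumes the state $|\psi\rangle=\frac{1}{\sqrt{2}}(|hh\rangle+e^{i\delta}|vv\rangle)$ from this section and works in polarizer angles (remember that a half-wave plate reading of θ selects polarization 2θ); it is a prototype for your own prelab program, not an answer key:

```python
import numpy as np

def coincidence_prob(alpha, beta, delta=0.0):
    """Probability that both photons of (|hh> + e^{i delta}|vv>)/sqrt(2)
    pass polarizers at angles alpha, beta (radians from horizontal).
    Note: a half-wave plate reading of theta selects polarization 2*theta."""
    amp = (np.cos(alpha) * np.cos(beta)
           + np.exp(1j * delta) * np.sin(alpha) * np.sin(beta)) / np.sqrt(2)
    return np.abs(amp) ** 2

# Contrast of the coincidence curve when one analyser is swept:
alphas = np.linspace(0.0, np.pi, 721)
for delta in (0.0, np.pi / 6, np.pi / 2):
    r = coincidence_prob(alphas, np.pi / 4, delta)
    contrast = (r.max() - r.min()) / (r.max() + r.min())
    print(f"delta = {delta:.2f} rad -> contrast = {contrast:.3f}")  # ~|cos(delta)|

# Correlation E and the CHSH S for the ideal state (delta = 0):
def E(alpha, beta):
    r = np.pi / 2  # orthogonal analyser setting
    return (coincidence_prob(alpha, beta) + coincidence_prob(alpha + r, beta + r)
            - coincidence_prob(alpha + r, beta) - coincidence_prob(alpha, beta + r))

a, a2, b, b2 = np.deg2rad([0.0, 45.0, 22.5, 67.5])   # polarizer angles
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)   # -> 2.828..., the quantum-mechanical maximum 2*sqrt(2)
```

The contrast of the swept curve comes out as |cos δ|, which is exactly why mapping out the coincidence rates tells you how far your state is from the ideal δ = 0 Bell state.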
If you find that you do not have a reasonable Bell state, use the plots you created to figure out which parameters need adjustment. Adjust these and retake the data until you create the state you are aiming for. Alignment adjustments can also help improve counts on detectors that seem to be missing photons.

What can you say from these measurements about the phase δ of the Bell state $|\psi\rangle=\frac{1}{\sqrt{2}}(|hh\rangle+e^{i\delta}|vv\rangle)$? If unsure, use the little program you wrote for the prelab to plot the expected coincidence rate as a function of various angles α and β. Note that it is absolutely crucial that you understand what is going on at this stage before you proceed.

Calculate the contrast, (max(coincidences) − min(coincidences)) / (max(coincidences) + min(coincidences)). What can you say from this about the purity and/or fidelity of your Bell state?

Note that although it is not strictly necessary to do this experiment in the dark, ambient lighting will only increase detector noise.

## Violating Bell's Inequality

The actual data can be taken using either a two-detector or a four-detector setup. In the two-detector setup, the beam splitters are just used as polarizers and there is only one coincidence rate: AB. It is simpler to align, and detector efficiencies do not affect the data. However, because it can only detect one polarization on each arm, you must record data for 16 different settings to calculate S. The four-detector setup allows a full polarization analysis of the down-converted photons, which can be helpful in understanding the physics of the detection setup, exploring the dependence of the data on wave plate settings, and troubleshooting the equipment when your results are not as expected. Furthermore, only four settings are necessary to calculate S. However, four different detector efficiencies make analysis a bit more complicated. We suggest that you begin with the two-detector setup.
It is easier this way to tune the Bell state, and the results will be better (due to the efficiency problem).

### Two-Detector Setup

In the two-detector setup, four angle pairs are needed for each E measurement. (The E Meter in LabVIEW will be meaningless if four detectors are not used.) You essentially need to adjust the half-wave plates to redirect photons to the A and B detectors that would otherwise have been reflected by the beam splitters. If you need help figuring out which angle pairs to use, see the paper by Dehlinger and Mitchell[1], which describes this experiment with only two detectors.

### Four-Detector Setup

At this point it is not recommended to try the four-detector setup. In the four-detector setup, each angle pair corresponds to one measurement of E. Therefore, only four pairs of angles (two settings for α and two for β) are needed to calculate S. In order to achieve maximal violation of the CHSH inequality, i.e. S ≈ 2.8, you need to choose angles such that |β − α| = 11.25° and |β′ − α′| = 11.25° and (α′ − α) = (β′ − β) = 22.5° (e.g. [α / β] = [11.25/0], [11.25/−22.5], [−11.25/0], [−11.25/−22.5]). Make sure you read the correct angles. You should verify that these are the optimal angles yourself, which can be done easily using only basic quantum mechanics.

## Using the LabVIEW Program

The most recent version of the LabVIEW program is `C:\QIE\QIE_Labview 2012\QIE_Counter.vi`. Make sure the FPGA is powered on before you run LabVIEW or you will get an error message. Upon running the vi, you should see the front panel below.

Front panel of `QIE_Counter.vi`

### Count Rate Indicators

If you have the APDs powered on, the first thing you will notice is the eight large "thermometer" indicators. These display count rates on each of the four detectors (at left) and coincidence rates for each pair of detectors (at right).
The count rates are calculated by dividing the raw counts from each of the FPGA registers, shown in a column vector at the bottom of the screen, by the update period, controlled in the General Settings box. Below the coincidence rates are the accidental rates, calculated using the coincidence resolution controls on the following line. You should have derived the formula for this calculation in the pre-lab. The values in the coincidence resolution array may or may not agree with the actual coincidence window used by the FPGA. This is controlled by a physical switch on the FPGA, but it also tends to drift from its nominal value. Keep in mind that the accidental rates are only as accurate as the coincidence resolutions.

To the left of the coincidence resolutions is the E Meter. E is calculated as described in the introduction. Note that E can only be calculated if all four detectors are being used simultaneously. To the left of the E Meter are the Raw Counts indicators. You can verify that the count rates are indeed the raw counts divided by the update period.

### Status Indicators

In the upper left corner, you will see the status indicators. The text box has five possible values:

- "Initializing": LabVIEW is starting up and connecting to the FPGA.
- "Reading Counters...": LabVIEW is waiting for one Update Period to elapse.
- "Updated Counts": LabVIEW has read the FPGA registers and recalculated count rates. The green rectangle flashes once each time this occurs.
- "Closing Program...": You have clicked the Stop Program button.
- "Program Terminated.": The vi is not running.

### General Settings

"Counter Port" should be set to COM1 in order to connect to the FPGA. "Update Period" controls how long LabVIEW will wait between buffer reads. In other words, it will count photons for this amount of time and then display the resulting count rates. The "Subtract Accidental Counts?"
button toggles whether or not LabVIEW will subtract the values in the "Accidental Rates" array from their respective coincidence rates. Note that there is a lag time of one cycle between when you click this button and when LabVIEW actually starts subtracting. The "Round Count Rates in Display?" button toggles rounding in the eight thermometer indicators. This can be useful because dividing by the update period often gives fractional count rates. Note that this function also exhibits a lag time of one cycle.

### Snapshots

Snapshots allow you to take data points and save them to a file. "Snapshot Duration" is the length of time, in seconds, of the snapshot. You also have the option to enter angles for half-wave plates A and B and a comment, which will be saved with the snapshot. When you click "Take Snapshot" for the first time, a new window will pop up displaying the data you have taken. Each subsequent time you click "Take Snapshot," a row will be added to the new window. When you are done taking snapshots, click "Save Snapshots" in the main window. To clear the pop-up window for a new set of snapshots, click "Clear All." To avoid any false coincidences, we suggest that you turn off the lights and monitor before taking snapshots. If your duration is longer than three seconds, it is easier to use some kind of timer, for instance http://www.timeanddate.com/timer/# .

## After the Experiment

Make sure all of the following equipment is powered down at the end of each day:

• laser controller
• APD power supply (turn all five switches off)
• FPGA
• computer, monitor, speakers
• room lights

## Extensions

### Double slit quantum eraser

Slit in front of detector A

View of the experiment from above

View of the experiment from the front

405nm interference pattern

Before conducting the quantum eraser experiment as described by Walborn et al.[4], you should perform a single-photon double-slit experiment. For this you only need to consider detector A.
Please follow these steps to align the beam path.

1. Center detector A by turning the micrometer to 0 mm and using the inch micrometer (from now on called inchmeter) and screw-on target. Take note of the position of the inchmeter.
2. Mount an iris onto the black optical board and align it so that the violet beam passes through the iris when closed.
3. Remove the screw-on target and carefully screw the 200 μm slit on top of the 800 nm bandpass filter. (There is one particular filter which has been set further into the black tube.) Screw the filter with the slit back onto detector A as far as possible, with the slit vertically aligned.
4. Use the micrometer to find the ideal position of the detector. Take a few measurements (10 s each) and then fit a Gaussian.
5. Set the micrometer to the position of the calculated peak and adjust the angle until you reach the highest possible count rate. The iris in front of the BBOs should be closed and the one downstream opened.
6. Center your detector again (the micrometer should read 0 mm) and put the double slit right behind the iris. You should use the 0.100 mm slit width. Make sure the double slit slide is not angled. Take a piece of paper and observe the interference pattern you should be able to see.
7. Put the detector arm back into the position for the 810 nm photons. You might need to slightly adjust the position of the double slit with the turning knob. The count rate should exceed 1000 s⁻¹. Both irises should be closed.
8. Open the iris in front of the double slit until it reaches a diameter of about 1.5 mm. You should now detect more than 2000 photons per second.

Now you are able to conduct a double-slit experiment and verify the theoretical interference pattern (curve). Use the inchmeter to take measurements at different positions. Think about the number of measurements you should take and about the snapshot duration. (Ten seconds are definitely not enough!)
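The theoretical curve you are comparing against is the standard Fraunhofer double-slit pattern. In the sketch below, the 0.100 mm slit width is from the write-up, but the slit separation `D_SLIT` and slit-to-detector distance `L_DET` are placeholder values; read the real ones off your slide and bench before fitting:

```python
import numpy as np

LAM = 810e-9       # down-converted wavelength, m
A_SLIT = 0.100e-3  # single-slit width, m (given in the write-up)
D_SLIT = 0.250e-3  # slit separation, m (assumed -- check your slide)
L_DET = 0.50       # slit-to-detector distance, m (assumed -- measure it)

def intensity(x):
    """Normalized Fraunhofer intensity at transverse position x (metres):
    single-slit envelope times the two-slit interference term."""
    u = np.pi * A_SLIT * x / (LAM * L_DET)
    v = np.pi * D_SLIT * x / (LAM * L_DET)
    return np.sinc(u / np.pi) ** 2 * np.cos(v) ** 2  # np.sinc(t) = sin(pi t)/(pi t)

fringe = LAM * L_DET / D_SLIT
print(f"expected fringe spacing: {fringe * 1e3:.2f} mm")
```

The fringe spacing λL/d also tells you roughly how finely you need to step the inchmeter to resolve the pattern.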
Ask your GSI if the quarter-wave plate for the Quantum Eraser has been cut and is ready to use. If this is the case, try to reproduce the results of the Walborn et al. quantum eraser experiment.

## Troubleshooting

Insufficient single-photon counts in each detector (<10k/s):

1. Enough blue laser power? You should have at least 35 mW (corresponding to about 50 mW at the laser diode), reasonably well collimated and focused, after the BBOs before the band stop. If you decide to measure this power, please consult the staff because there are safety concerns.
2. Are the detectors properly aligned?
3. Are the fibers inserted properly into their connectors? The image from the fiber tester at the BBO should be at most twice as large as the BBOs. If this is not the case, most likely you have not inserted the fibers correctly into the detector port. Gently rotate the fiber until the notches on the fiber connector line up with the detector port and the connector slides into the port.

Low or no coincidence counts (<100/s without polarization analysis):

1. Is the half-wave plate between the steering mirrors rotated by a reasonable angle? Angles > 45 degrees can distort the blue beam and thus severely limit the number of coincidences.

Vastly different coincidence counts for VV and HH even though the blue polarization is set properly:

1. Are the two BBOs parallel to each other?

## References

1. D. Dehlinger and M.W. Mitchell, "Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory", Am. J. Phys. 70(9), 903-910 (2002).
2. J.S. Bell, "On the Einstein Podolsky Rosen paradox", Physics 1, 195-200 (1964).
3. J.F. Clauser, M.A. Horne, A. Shimony and R.A. Holt, "Proposed experiment to test local hidden-variable theories", Phys. Rev. Lett. 23, 880 (1969).
4. S. P. Walborn, M. O. Terra Cunha, S. Pádua and C. H. Monken, "Double-slit quantum eraser", Phys. Rev. A 65, 033818 (2002).
http://mathoverflow.net/questions/61882/meromorphic-1-form-and-picards-theorem
## Meromorphic 1-form and Picard's theorem

Let $D$ be the open unit disk in the complex plane and $U_1,U_2,\,\ldots\,,U_n$ be an open cover of the punctured disk $D^*= D\setminus\{0\}$. Suppose on each open set $U_j$ there is an injective holomorphic function $f_j$ such that $df_j=df_k$ on every intersection $U_j\cap U_k$.

Question: Is it true that the differentials $df_j$ glue together to a meromorphic 1-form on $D$?

Remark: If the residue is zero then it is true (with help of Picard's theorem).

- Maybe you want to assume that the $U_j$ are connected; otherwise it's not true even in the residue zero case. – Tom Goodwillie Apr 16 2011 at 4:02
- No, Tom Goodwillie, I don't see any necessity to suppose the U_j connected. But if you prefer you can suppose them connected. – MathOMan Apr 16 2011 at 8:02
- FYI: The question is copy-pasted from the Wikipedia article on the Picard theorem, see the 4th note at en.wikipedia.org/wiki/Picard_theorem – Gunnar Magnusson Apr 19 2011 at 23:31
- @Gunnar: yes, the question is the same as the conjecture in the Wikipedia article you mention. There is a good reason for that: the conjecture is due to MathOMan! This is not apparent from the article because in the bibliography MathOMan appears under his real name. (I'm divulging no secret since MathOMan says so himself in his answer below.) – Georges Elencwajg Apr 23 2011 at 15:58
- @Georges: Thank you for clearing that up. – Gunnar Magnusson May 1 2011 at 13:31

## 3 Answers

If the $U_i$'s are assumed to be connected (as in Tom Goodwillie's comment) the answer is yes. There are counterexamples if the $U_i$ are not connected.

The hypothesis implies that the differentials $df_j$ patch together to define a differential form $\omega$ on $D \setminus 0$ that is never 0 (on $D \setminus 0$).
Integration of $\omega$ defines a covering space of $D \setminus 0$: that is, we can choose a basepoint, say $* = 1/2$; elements of the covering space $Y$ consist of points $z \in D \setminus 0$ together with a value that can be obtained by integrating $\omega$ along some path in $D \setminus 0$ from $*$ to $z$.

If $\omega$ is the differential of an injective function on each of finitely many connected $U_j$'s (following Tom Goodwillie's comment), then each $U_j$ can be lifted to the covering space $Y$. This implies that the cover has finitely many sheets. Since $\pi_1(D \setminus 0)$ is abelian, $Y$ is an $n$-fold cyclic cover isomorphic to $z \mapsto z^n$. In these coordinates, the pullback of $\omega$ to $Y$ can be integrated to give a function $g$ from $Y$ to the Riemann sphere that satisfies $g(\zeta y) = g(y) + C$ for some constant $C$, where $\zeta$ is a primitive $n$th root of unity. But then $nC = 0$, so $C = 0$, so $g$ comes from a function on $D \setminus 0$; since it is finite-to-one near 0, by Picard's theorem this is a removable singularity (as a map to the Riemann sphere), it extends to a meromorphic function on $D$, and its differential is therefore meromorphic.

If the $U_j$'s are not assumed connected, the covering space $Y$ still exists, but it need not have finitely many sheets. In the infinite-sheeted case, the covering space is the universal cover of $D \setminus 0$, isomorphic to $z \mapsto \exp(z)$ from the left half-plane $\mathrm{Re}(z) < 0$ to $D \setminus 0$. The integral of the pullback of $\omega$ to the left half-plane is a function that has the form $g(z) = a z + g_0(z)$, where $g_0(z)$ is the pullback of a function $f_0$ defined on $D \setminus 0$. (The linear term takes care of the integral of $\omega$ on a loop around the origin.) The function $f_0$ must be an immersion (locally univalent function) from $D \setminus 0$ to $\mathbb C$. Such functions can be rather wild, for example, $f_0(z) = \exp(1/z)$.
The integral of $\omega$ itself could then be expressed as the multi-function $f(z) = z^a f_0(z)$, where $a$ is any complex number.

Claim: for any complex number $a$ and any locally univalent function $f_0: D \setminus 0 \to \mathbb C$ there is a finite cover $U_i$ of $D \setminus 0$ (where the $U_i$ are not connected) such that on each $U_i$, $\omega$ is the differential of a univalent function $f$.

Proof of Claim: Cover $D \setminus 0$ by countably many open sets such that the integral of $\omega$ in each set is univalent. Every cover of a 2-manifold has a refinement that is locally at most 3-to-1, so its nerve is a 2-complex. (This is one characterization of topological dimension.) It has a further refinement $U$, corresponding to the barycentric subdivision, by a covering that can be partitioned into three parts $U = A \cup B \cup C$, where the elements of $A$ are disjoint, the elements of $B$ are disjoint, and the elements of $C$ are disjoint. We can integrate $\omega$ on each of the elements of $A$, $B$ and $C$, and then add suitable constants to make the images in each of $A$, $B$ and $C$ disjoint.

In summary: we've found three holomorphic functions $f_A$, $f_B$ and $f_C$, defined on the union of $A$, the union of $B$ and the union of $C$, whose differentials all equal $\omega$, but $\omega$ has an essential singularity at the origin.

-

This is a beautiful answer! – drbobmeister Apr 16 2011 at 23:23

1 I don't understand the second sentence in the third paragraph. Why finitely many sheets? – Tom Goodwillie Apr 19 2011 at 16:50

@Tom Goodwillie: this was a lapse. I'll redo things. – Bill Thurston Apr 19 2011 at 17:06

Thanks for the answer! I apologize for answering with a delay (am on a biking trip in Corsica).
Yes, Tom Goodwillie is right about the necessity of the connectedness condition. Bill Thurston's answer, however, does not convince me. I will explain why. Call $\omega$ the holomorphic form defined on the punctured unit disk. There are two cases:

1. The residue is zero. Then there is a holomorphic function $f$ on the punctured unit disk such that $df=\omega$. Thus $df=df_j$ for every $j$. Using the connectedness of $U_j$ one can add a constant to $f_j$ such that $f=f_j$ on $U_j$. Then one concludes easily with Picard that $f$ is meromorphic on the unit disk.

2. The residue $a$ is not zero. Then integrating $\omega$ yields a cover of infinite order, because each turn around the origin adds the number $2i\pi a$. So here I do not agree with Bill Thurston, who says the order is finite. What actually happens is that the primitive of $\omega$ is of the form: holomorphic function on the punctured disk $+\; a\times$ logarithm.

Now here comes my explanation of Bill Thurston's mistake when he says that the covering space is of finite order. The best way to define the Riemann surface $R(g)$ of an analytic germ $g$ is to say that $R(g)$ is the connected component of $g$ in the total space $|\mathcal{O}|$ of the sheaf of holomorphic functions. There are two natural projections defined on $R(g)$. The first projection sends each germ to its centerpoint ("projection on the variable plane"). The second projection sends each germ to the value the germ takes at its centerpoint ("projection on the value plane"). It is the first projection that we are interested in. But when arguing that his covering space is finite above the punctured unit disk, Bill Thurston actually uses the functions $f_j$ as maps, thus taking the second projection instead of the first.

-

You're right, as Tom Goodwillie also commented to my post: it doesn't follow that the cover is finite to one. I'll redo things. – Bill Thurston Apr 19 2011 at 17:06

1 To G.
Magnusson: Yes, it is true that I stated this "conjecture" a dozen years ago in an article at Inst. Fourier (the same year that I left maths). I didn't know that meanwhile it got on Wikipedia! Last Friday, Georges Elencwajg, a former professor of mine, told me about Mathoverflow, and so I posted this forgotten question here. And shame on me that I didn't see by myself the "connectedness condition" pointed out by Tom Goodwillie and Bill Thurston! Now I will add it in the Wikipedia version. I have some naïve idea about Riemann surfaces but I lack the technique to tackle such a problem. – MathOMan Apr 21 2011 at 21:49

@MathOMan: Oh, I'm sorry, I didn't realize it was your conjecture. To be honest I thought some guy was posting problems from Wikipedia. My bad. – Gunnar Magnusson May 1 2011 at 13:28

Here is another conjecture that seems to me equivalent to my initial question. (It is a kind of "ugly" version above the value plane; the initial question, above the variable plane, the disk, is the "nice" point of view.)

Let $f : E\to \mathbb{C}$ be a connected étale space and $t:E\to E$ an isomorphism ("translation") such that $f\circ t=f+2i\pi a$, where $a$ is a non-zero complex number. Then $\mathbb{Z}$ acts on $E$ via the "translations" $t^n$. The 1-form $df$ clearly induces a 1-form on the quotient space $E/\mathbb{Z}$, which we still denote by $df$.

Conjecture: if the quotient space is biholomorphic to the punctured disk, then the pullback of $df$ does not have an essential singularity at the center of the disk (and its residue at the center is exactly $a$).

-
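As a concrete model for the nonzero-residue case discussed in the thread (my own illustration, not from the answers), the form $\omega = a\,dz/z$ exhibits exactly the period that obstructs a single-valued primitive:

```latex
% Model case on D \setminus \{0\}: \omega = a\,\frac{dz}{z}, \quad a \neq 0.
% A local primitive is f(z) = a\log z; one loop around the origin adds
\oint_{|z|=r} a\,\frac{dz}{z} = 2\pi i\,a \neq 0,
% so the primitive is single-valued only on the universal cover
% w \mapsto e^{w} of the punctured disk, i.e. the cover of infinite order
% referred to in the answers.
```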
http://physics.stackexchange.com/questions/10115/is-acoustic-pressure-a-statistical-term?answertab=oldest
# Is acoustic pressure a statistical term?

Is acoustic pressure a statistical term? Also, what about pressure in a liquid or a gas?

-

2 What do you mean by "a statistical term"? – David Zaslavsky♦ May 19 '11 at 20:43

By "statistical term" I mean a physical quantity defined through the statistical behavior of some physical system, like for example temperature. – Rajesh D May 19 '11 at 20:46

## 1 Answer

Yes, any notion of pressure normally assumes that there are many ($N\to\infty$) particles that are the sources of the pressure, so in this sense it is a statistical term. But just like with temperature, one may assign pressure or temperature to a single particle too, although for a single particle we would usually say that it is an accident that it has one momentum or energy or another, so there is still some reason to say that it's a statistical term.

-
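A toy numerical illustration of the point in the answer (my own sketch, not from the thread): if an ideal-gas "pressure measurement" is proportional to the average of $v_x^2$ over $N$ particles, its relative fluctuation shrinks as $N$ grows, which is what makes pressure a statistical quantity.

```python
import random
import statistics

random.seed(0)

def relative_fluctuation(n_particles, n_trials=200):
    """Relative spread of a mock pressure reading built from n_particles.

    Each trial averages v_x**2 over n_particles Maxwellian velocities
    (unit thermal speed); the pressure is proportional to this average.
    """
    readings = [
        sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n_particles)) / n_particles
        for _ in range(n_trials)
    ]
    return statistics.stdev(readings) / statistics.mean(readings)

for n in (10, 100, 1000):
    # expected scaling: sqrt(2 / n), i.e. roughly 0.45, 0.14, 0.045
    print(n, round(relative_fluctuation(n), 3))
```

The fluctuation falls roughly like $1/\sqrt{N}$, so only in the many-particle limit does "the" pressure become sharply defined.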
http://physics.stackexchange.com/questions/51814/wave-functions-for-three-identical-fermions?answertab=active
# Wave functions for three identical fermions

I would like to express the wave functions for three identical particles, each with orbital angular momentum $L=1$ and spin angular momentum $S=1/2$, in terms of single-particle wave functions. In other words, I would like to obtain the Clebsch-Gordan coefficients for this problem. The problem is discussed in Sakurai's Quantum Mechanics textbook around p. 375 and in Greiner and Muller's "Quantum Mechanics: Symmetries, 2nd Edition" on p. 300.

I know I have to find the spin wave functions and the orbital angular momentum wave functions separately, and then combine them to get fully antisymmetric wave functions. I have the spin wave functions (four symmetric, two mixed symmetric under exchange of particles 1 and 2, and two mixed antisymmetric under exchange of 1 and 2), but I haven't been able to get a small enough number of angular momentum wave functions to get just 20 fully antisymmetric total wave functions.

In Sakurai's book, p. 376, eq'n (6.5.20), we see that the 20 states can be decomposed into 2 states with total angular momentum $j=1/2$, $4\times 3=12$ with $j=3/2$, and 6 with $j=5/2$. Could anyone fill in how Sakurai got 6 for $j=5/2$, 12 for $j=3/2$, and 2 for $j=1/2$?

Most importantly, referring to my comments on Y Macdisi's answer below, could anyone answer the following question: Is the orbital angular momentum state with single-particle wave functions $\alpha=1$, $\beta=0$, and $\gamma=0$ related in some way to the wave function with $\alpha=-1$, $\beta=0$, and $\gamma=0$, or $\alpha=1$, $\beta=-1$, and $\gamma=-1$, and so on? I would love it if I could just keep the first set of values for $\alpha$, $\beta$, and $\gamma$, but I see no good reason to do that.

If anyone happens to know where this problem is discussed more fully, I would appreciate a reference. Or, if anyone knows how to do this, I would appreciate help in knowing what the $j=3/2$ and $j=5/2$ states could be in terms of single-particle orbital angular
momentum states and spin states. I think I actually have the normalization factors already; I just need to know what the single-particle states are.

-

1 Just for the record, I don't have Sakurai in front of me, but I think you are reading the problem incorrectly: 2 identical fermions can't be in the same state, let alone 3. You probably mean 3 different fermions in the $S_z = +\frac{1}{2}$ state. Like an up, down and strange quark, or something to that effect. – DJBunk Jan 21 at 20:05

Sorry, where did I say two or more of these fermions were in the same state? I am well aware of the Pauli exclusion principle, and, looking over my question, I can't see where I wrote anything to suggest that. Thank you for responding, whatever the case. – Impossibear Jan 21 at 22:10

In Greiner and Muller, I'm referring to problem 9.1 on p. 300. This problem gives you the basis functions of the permutation group $S_3$ in terms of single-particle wavefunctions $\alpha$, $\beta$, $\gamma$. – Impossibear Jan 21 at 22:53

I just wasn't sure what you meant by 'identical particles' except for same state? – DJBunk Jan 21 at 23:10

By identical, I meant indistinguishable, I guess. Sorry for the confusion, but identical is the terminology used in every reference I've found on the problem so far. – Impossibear Jan 21 at 23:46

## 1 Answer

The $L=1$ rep is a 3-dimensional irrep of $SU(2)$; the $S=1/2$ rep is a 2-dimensional irrep. Combining angular momentum ($L=1$) and spin ($S=1/2$) gives the tensor product rep $3 \otimes 2$, which is 6-dimensional. You are looking for the third exterior power of this: $A^3(3 \otimes 2)$, which is 20-dimensional and decomposes into $6 \oplus 4 \oplus 4 \oplus 4 \oplus 2$ irreps. These correspond to $j=5/2,3/2,3/2,3/2,1/2$.

-

Thank you, Y Macdisi. I should be more specific, reading your response: Would the $j=5/2$ have to be $L_{tot}=2$, $S_{tot}=1/2$? If so, I can't see how one gets just 6 states, from the work done in Greiner and Muller's book.
Would the $j=3/2$ be $L_{tot}=1$, $S_{tot}=1/2$ (for 10 of the 12), or $L_{tot}=0$, $S_{tot}=3/2$ (for 2 of the 12)? If so, I can't see how to get exactly 10 states from my $L_{tot}=1$ wavefunctions, in terms of single-particle momentum states $|\pm1\rangle$ or $|0\rangle$. There just seem to be too many possible choices of single-particle angular momentum states. – Impossibear Jan 22 at 15:28

By the way, from Greiner and Muller's book, it seems there are two mixed symmetric and two mixed antisymmetric wave functions, expressed in terms of single-particle wavefunctions $\alpha$, $\beta$, and $\gamma$, where each of $\alpha$, $\beta$, and $\gamma$ can take on the values $|+1\rangle$, $|0\rangle$, or $|-1\rangle$. For instance, the two orthonormal wavefunctions which are symmetric under exchange of $\alpha$ and $\beta$ are the following: $\frac{1}{2} \psi_1 = \frac{1}{2}\left[\alpha\beta\gamma + \beta\alpha\gamma-\gamma\beta\alpha-\gamma\alpha\beta\right]$ – Impossibear Jan 22 at 15:33

and $\frac{1}{\sqrt{3}}\psi_2' = \frac{1}{\sqrt{3}}\left[\frac{1}{2}\left(\alpha\beta\gamma+\beta\alpha\gamma+\gamma\beta\alpha+\gamma\alpha\beta\right)-\alpha\gamma\beta-\beta\gamma\alpha\right]$ – Impossibear Jan 22 at 15:34

There are also two more orthonormal wave functions, which are mixed antisymmetric under exchange of $\alpha$ and $\beta$. These four mixed (anti)symmetric wave functions must have either $L_{tot}=2$ or $L_{tot}=1$, it seems. They must be multiplied by the appropriate mixed (anti)symmetric spin wave functions to eventually form fully antisymmetric total wave functions corresponding to all of the $j=5/2$ states, and most of the $j=3/2$ states.
– Impossibear Jan 22 at 15:38

Another way to think of it is that, for three spin-$1/2$ particles, the two mixed symmetric spin wave functions are the following: $|1/2,-1/2\rangle = -\frac{1}{\sqrt{6}}\left(2\Downarrow\Downarrow\Uparrow-\Uparrow\Downarrow\Downarrow-\Downarrow\Uparrow\Downarrow\right)$ and $|1/2,1/2\rangle = \frac{1}{\sqrt{6}}\left(2\Uparrow\Uparrow\Downarrow-\Uparrow\Downarrow\Uparrow-\Downarrow\Uparrow\Uparrow\right)$ – Impossibear Jan 22 at 15:49
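To answer the counting question (6, 12, 2), here is a hypothetical weight-counting check I wrote (not from the thread): each of the six single-particle states $|m_l, m_s\rangle$ carries $j_z = m_l + m_s$, and a fully antisymmetric three-particle state corresponds to a 3-element subset of those six states. Tallying how often each total $J_z$ occurs and peeling off multiplets reproduces Sakurai's decomposition.

```python
from itertools import combinations
from collections import Counter

# Doubled j_z values (2*j_z = 2*m_l + 2*m_s) of the six single-particle
# states, for m_l in {-1, 0, 1} and m_s in {-1/2, +1/2}:
states = [2 * ml + ms for ml in (-1, 0, 1) for ms in (-1, 1)]

# Antisymmetric 3-fermion states <-> 3-element subsets of the 6 states;
# tally the total (doubled) J_z of each of the C(6,3) = 20 subsets.
weights = Counter(sum(subset) for subset in combinations(states, 3))

# Number of j-multiplets = (# states with J_z = j) - (# states with J_z = j+1)
mult = {two_j: weights[two_j] - weights[two_j + 2] for two_j in (5, 3, 1)}

print(mult)  # one j=5/2 multiplet, three j=3/2, one j=1/2
print({two_j: m * (two_j + 1) for two_j, m in mult.items()})  # 6, 12, 2 states
```

This gives one $j=5/2$ sextet (6 states), three $j=3/2$ quartets (12 states), and one $j=1/2$ doublet (2 states), totalling 20, matching eq'n (6.5.20).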
http://mathoverflow.net/revisions/86246/list
# Volume inequality between projections of a convex symmetric set in $\mathbb R^3$

Let $K$ be a centrally symmetric convex body in $\mathbb R^3$ with volume ${\rm vol}(K)=1$. For any subset $F \subset \lbrace1,2,3\rbrace$, let $K_F$ be the projection of $K$ in $\mathbb R^F$.

Question: What is the best constant $C$ such that $${\rm vol}(K_{\lbrace 1 \rbrace}) \leq C \cdot {\rm vol}(K_{\lbrace 1,2 \rbrace}) \cdot {\rm vol}(K_{\lbrace 1,3 \rbrace})?$$ Here, the volume of $K_F$ is computed in $\mathbb R^F$.

With some work one can prove that $C=2$ is good enough. This is elementary and follows, for example, from two applications of Lemma 3.1 in J. Bourgain and V.D. Milman, "New volume ratio properties for convex symmetric bodies in $\mathbb R^n$", Invent. Math. 88 (1987), no. 2, 319–340.

My guess is that maybe $C=1$ works, but I am not sure.
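A quick sanity check (my own, not from the question): for axis-aligned boxes the inequality holds with equality at $C=1$, so no constant below $1$ can work.

```latex
% Axis-aligned box of volume 1:
% K = \left[-\tfrac a2,\tfrac a2\right]\times\left[-\tfrac b2,\tfrac b2\right]
%     \times\left[-\tfrac c2,\tfrac c2\right], \quad abc = 1.
\begin{aligned}
{\rm vol}(K_{\{1\}}) &= a, \qquad
{\rm vol}(K_{\{1,2\}}) = ab, \qquad
{\rm vol}(K_{\{1,3\}}) = ac, \\
{\rm vol}(K_{\{1,2\}})\cdot{\rm vol}(K_{\{1,3\}}) &= a^{2}bc = a\,(abc) = a
  = {\rm vol}(K_{\{1\}}).
\end{aligned}
```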
http://math.stackexchange.com/questions/159971/sum-of-even-valued-and-odd-valued-fibonacci-numbers
# sum of even-valued and odd-valued Fibonacci numbers

I was solving Project Euler problem 2:

*By starting with 1 and 2, the first 10 terms of the Fibonacci series will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... Find the sum of all the even-valued terms in the sequence which do not exceed 4 million.*

Here is my code in Python:

```
a=0
b=1
nums = []
while True:
    x=a+b
    a=b
    b=x
    if(b>4000000):
        break
    nums.append(x)
sum=0
for x in nums:
    if(x%2==0):
        sum+=x
print sum
```

I noticed that the answer comes out to be 4613732. However, I initially made a mistake by writing x%2!=0, and the answer turned out to be 4613731 (4613732 − 1). Is this some property, or just luck?

-

## 2 Answers

Each even-valued term (which are the 2nd, 5th, and every 3rd one after) is the sum of the preceding two odd terms. There is a startup transient: 2 doesn't have two 1's before it (it would if you started with 1, 1). So the sum of the even-valued terms up to some value will be 1 more than the sum of the preceding odd terms.

-

The sequence of Fibonacci numbers mod 2 goes $odd_1, odd_2, even_3$, and in this sequence $odd_1+odd_2=even_3$. So the difference between the total of odds and evens up to any even number in the sequence is a constant, which happens to be 1. Stopping at an odd number gives a different result.

-

+1 for the last line – nischayn22 Jun 18 '12 at 19:09
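A quick check of the claim in the answers (my own sketch): summing the even-valued and odd-valued Fibonacci terms up to the same cutoff of 4 million differs by exactly 1.

```python
def fib_sums(limit):
    """Sums of even- and odd-valued Fibonacci terms (1, 2, 3, 5, ...)
    not exceeding `limit`."""
    a, b = 1, 2
    even_sum = odd_sum = 0
    while a <= limit:
        if a % 2 == 0:
            even_sum += a
        else:
            odd_sum += a
        a, b = b, a + b
    return even_sum, odd_sum

evens, odds = fib_sums(4_000_000)
print(evens, odds, evens - odds)  # 4613732 4613731 1
```

The difference is 1 exactly because the last term below the cutoff (3524578) is even; as the second answer notes, stopping at an odd term gives a different result.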
http://www.physicsforums.com/showthread.php?p=3887787
## Is it possible to combine string theory and LQG?

Good evening. I am wondering if string theory and loop quantum gravity could be combined into a single theory. I have been trying to decide which of the two I should choose as my "religion," but I feel that both are correct. Could string theory and loop quantum gravity be different manifestations of a deeper theory? Thank you in advance.

Recognitions: Science Advisor

Well, just as a bare-bones issue, string theory is a potential 'theory of everything' whereas LQG is not. By this I mean that string theory at least has the hope of describing ALL the fundamental forces and particles of nature, whereas a canonical QG theory such as LQG seeks only to quantize general relativity. So the possible relationship could be that the QG of LQG is the same as the QG obtained by string theory, but string theory is the broader theory. I don't know too much about either, so I don't know if there's some simple reason that their approaches could be incompatible. Marcus, our resident LQG guy, might be able to shed some light on this. Also, it's probably too early for you to 'choose sides' if you will. But a more detailed discussion along those lines is probably better suited for elsewhere.

Recognitions: Science Advisor

I'm hoping for some interesting action along these lines. Holography is an idea for which string theory has a very concrete proposal, while the spin networks of LQG are a form of tensor network. http://arxiv.org/abs/0905.1317 http://arxiv.org/abs/0907.2994 http://arxiv.org/abs/1102.5524

Quote by CarlosLara: Good evening. I am wondering if string theory and loop quantum gravity could be combined into a single theory.
I have been trying to decide which of the two I should choose as my "religion," but I feel that both are correct. Could string theory and loop quantum gravity be different manifestations of a deeper theory? Thank you in advance.

The only way to see this is if you specialize in both of them. Good luck with that. :-)

Recognitions: Science Advisor

It is too early for a reliable assessment. String theory is, as far as I can see, still no single, well-defined theory with an unchanging foundation. There is no single Lagrangian from which the full theory can be derived. Instead it's a web of equations, relations and dualities which point towards some underlying truth still to be uncovered. String theory seems to be, afaik, a framework for constructing (consistent) theories of supergravity + (supersymmetric) gauge theories, just like "gauge theory" is such a framework w/o gravity.

LQG as of today is not a candidate ToE, but string theory is. LQG is mostly formulated for 4-dim spacetime and symmetry group SO(3) ~ SU(2), but I don't see any reason not to start with an arbitrary graph plus labels of some (arbitrary) group G, which can very well be different from SO(3,1). LQG can be extended with additional labels for gauge and matter fields. There are attempts to understand the LQG quantization method in more than 4 dimensions and possibly for supergravity as well. So LQG seems to be a quantization method applicable to different theories, not only a single theory (the quantization method is still not fully understood, neither in the PI / spin foam nor in the canonical approach).

The difference from string theory is that you start with such a "standard" theory and apply the LQG quantization scheme, whereas in string theory you write down a string theory, calculate its vacuum plus its spectrum and derive the low-energy theory. The whole approach is different. I would say they are different paradigms, not only different theories.
There are some partial results, and some parts of one of these theories can be obtained from the other, but there is no single theory (as far as I know) from which both of these theories can be derived. For example, the following papers may be useful:

String Field Theory from Quantum Gravity, by Louis Crane

Abstract: Recent work on neutrino oscillations suggests that the three generations of fermions in the standard model are related by representations of the finite group A(4), the group of symmetries of the tetrahedron. Motivated by this, we explore models which extend the EPRL model for quantum gravity by coupling it to a bosonic quantum field of representations of A(4). This coupling is possible because the representation category of A(4) is a module category over the representation categories used to construct the EPRL model. The vertex operators which interchange vacua in the resulting quantum field theory reproduce the bosons and fermions of the standard model, up to issues of symmetry breaking which we do not resolve. We are led to the hypothesis that physical particles in nature represent vacuum changing operators on a sea of invisible excitations which are only observable in the A(4) representation labels which govern the horizontal symmetry revealed in neutrino oscillations. The quantum field theory of the A(4) representations is just the dual model on the extended lattice of the Lie group $E_6$, as explained by the quantum McKay correspondence of Frenkel, Jing and Wang. The coupled model can be thought of as string field theory, but propagating on a discretized quantum spacetime rather than a classical manifold.

The LQG-String: Loop Quantum Gravity Quantization of String Theory I. Flat Target Space, by Thomas Thiemann

Abstract: We combine I. background independent Loop Quantum Gravity (LQG) quantization techniques, II. the mathematically rigorous framework of Algebraic Quantum Field Theory (AQFT) and III.
the theory of integrable systems resulting in the invariant Pohlmeyer Charges in order to set up the general representation theory (superselection theory) for the closed bosonic quantum string on flat target space. While we do not solve the, expectedly, rich representation theory completely, we present a, to the best of our knowledge new, non-trivial solution to the representation problem. This solution exists 1. for any target space dimension, 2. for Minkowski signature of the target space, 3. without tachyons, 4. manifestly ghost-free (no negative norm states), 5. without fixing a worldsheet or target space gauge, 6. without (Virasoro) anomalies (zero central charge), 7. while preserving manifest target space Poincaré invariance and 8. without picking up UV divergences. The existence of this stable solution is exciting because it raises the hope that among all the solutions to the representation problem (including fermionic degrees of freedom) we find stable, phenomenologically acceptable ones in lower dimensional target spaces, possibly without supersymmetry, that are much simpler than the solutions that arise via compactification of the standard Fock representation of the string. Moreover, these new representations could solve some of the major puzzles of string theory such as the cosmological constant problem. The solution presented in this paper exploits the flatness of the target space in several important ways. In a companion paper we treat the more complicated case of curved target spaces.

Topological M-theory as Unification of Form Theories of Gravity, by Robbert Dijkgraaf, Sergei Gukov, Andrew Neitzke, Cumrun Vafa

Abstract: We introduce a notion of topological M-theory and argue that it provides a unification of form theories of gravity in various dimensions. Its classical solutions involve G_2 holonomy metrics on 7-manifolds, obtained from a topological action for a 3-form gauge field introduced by Hitchin.
We show that by reductions of this 7-dimensional theory one can classically obtain 6-dimensional topological A and B models, the self-dual sector of loop quantum gravity in 4 dimensions, and Chern-Simons gravity in 3 dimensions. We also find that the 7-dimensional M-theory perspective sheds some light on the fact that the topological string partition function is a wavefunction, as well as on S-duality between the A and B models. The degrees of freedom of the A and B models appear as conjugate variables in the 7-dimensional theory. Finally, from the topological M-theory perspective we find hints of an intriguing holographic link between non-supersymmetric Yang-Mills in 4 dimensions and A model topological strings on twistor space.

The cubic matrix model and a duality between strings and loops, by Lee Smolin

Abstract: We find evidence for a duality between the standard matrix formulations of M theory and a background independent theory which extends loop quantum gravity by replacing SU(2) with a supersymmetric and quantum group extension of SU(16). This is deduced from the recently proposed cubic matrix model for M theory, which has been argued to have compactifications which reduce to the IKKT and dWHN-BFSS matrix models. Here we find new compactifications of this theory whose Hilbert spaces consist of SU(16) conformal blocks on compact two-surfaces. These compactifications break the SU(N) symmetry of the standard M theory compactifications, while preserving SU(16), while the BFSS model preserves the SU(N) but breaks SU(16) to the SO(9) symmetry of the 11 dimensional light cone coordinates. These results suggest that the supersymmetric and quantum deformed SU(16) extension of loop quantum gravity provides a dual, background independent description of the degrees of freedom and dynamics of the M theory matrix models.
Recognitions: Science Advisor

Quote by QGravity: There are some partial results, and some parts of one of these theories can be obtained from the other, but there is no single theory (as far as I know) from which both of these theories can be derived ...

I tried to understand these ideas a couple of years ago, but I see two major problems: 1) the individual theories are by no means complete (so to unify strings and loops one would have to be clear about what strings and loops are; this is still not the case); 2) there seems to be no progress at all regarding unification of strings and loops over the last 10 years or so; the two fields are nearly disjoint.

Interesting paper, "Simplicial Gravity and Strings", arXiv:1110.5088: "String theory, as a theory containing quantum gravity, is usually thought to require more dimensions of spacetime than the usual 3+1. Here I argue on physical grounds that needing extra dimensions for strings may well be an artefact of forcing a fixed flat background space. I also show that discrete simplicial approaches to gravity in 3+1 dimensions have natural string-like degrees of freedom which are inextricably tied to the dynamical space in which they evolve. In other words, if simplicial approaches to 3+1 dimensional quantum gravity do indeed give consistent theories, they may essentially contain consistent background-independent string theories."

I know that simplicial gravity is not the same as LQG, but it is also background-independent, and the two may be related in some way. Smolin also found string-like objects in certain forms of LQG ("Three Roads to Quantum Gravity").

Quote by CarlosLara: ...I have been trying to decide which of the two I should choose as my "religion," but I feel that both are correct. Could string theory and loop quantum gravity be different manifestations of a deeper theory? Thank you in advance.

Smolin, who has also done research in string theory, wrote the review "How far are we from the quantum theory of gravity?"
where he compares the status of string theory and LQG (arXiv:hep-th/0303185).

I must disagree. Loop Quantum Gravity could be a theory of everything! Moreover, it can be unified with String Theory. Read this attachment... the unification has already been done. It's the Arrangement Field Theory (AFT).

Attached Files: AFT-PART2.pdf

I would like to bring to your attention the following papers.

The arrangement field theory (AFT). Part 2, http://arxiv.org/abs/1206.5665

Abstract: << In this work we apply the formalism developed in the previous paper ("The arrangement field theory") to describe the content of the standard model plus gravity. We discover a triality between Arrangement Field Theory, String Theory and Loop Quantum Gravity, which appear as different manifestations of the same theory. Finally we show how three families of fields arise naturally, and we discover a new road toward unification of gravity with gauge and matter fields. >>

The arrangement field theory (AFT), http://arxiv.org/abs/1206.3663

Abstract: << We introduce the concept of "non-ordered space-time" and formulate a quaternionic field theory over such a generalized non-ordered space. The imposition of an order over a non-ordered space appears to spontaneously generate gravity, which is revealed as a fictitious force. The same process gives rise to gauge fields that are compatible with those of the Standard Model. We suggest a common origin for gravity and gauge fields from a unique entity called the "arrangement matrix" ($M$) and propose to quantize all fields by quantizing $M$. Finally we give a proposal for the explanation of black hole entropy and the area law inside this paradigm. >>

Thanks for your attention.

Ashtekar, in a recent review (http://arxiv.org/pdf/1201.4598.pdf, page 26), makes interesting comments on unification and LQG and how it could provide a bridge to string theory:

"Unification. Finally, there is the issue of unification.
At a kinematical level, there is already an unification because the quantum configuration space of general relativity is the same as in gauge theories which govern the strong and electro-weak interactions. But the non-trivial issue is that of dynamics. To conclude, let us consider a speculation. One possibility is to use the ‘emergent phenomena’ scenario where new degrees of freedom or particles, which were not present in the initial Lagrangian, emerge when one considers excitations of a non-trivial vacuum. For example, one can begin with solids and arrive at phonons; start with superfluids and find rotons; consider superconductors and discover Cooper pairs. In loop quantum gravity, the micro-state representing Minkowski space-time will have a highly non-trivial Planck-scale structure. The basic entities will be 1-dimensional and polymer-like. One can argue that, even in absence of a detailed theory, the fluctuations of these 1-dimensional entities should correspond not only to gravitons but also to other particles, including a spin-1 particle, a scalar and an anti-symmetric tensor. These ‘emergent states’ are likely to play an important role in Minkowskian physics derived from loop quantum gravity. A detailed study of these excitations may well lead to interesting dynamics that includes not only gravity but also a select family of non-gravitational fields. It may also serve as a bridge between loop quantum gravity and string theory. For, string theory has two a priori elements: unexcited strings which carry no quantum numbers and a background space-time. Loop quantum gravity suggests that both could arise from the quantum state of geometry, peaked at Minkowski (or, de Sitter) space. The polymer-like quantum threads which must be woven to create the classical ground state geometries could be interpreted as unexcited strings. Excitations of these strings, in turn, may provide interesting matter couplings for loop quantum gravity."
If some outgrowth of loop gravity ever produces a type of string theory, it might be through the use of twistor variables. My logic: loop gravity is largely based on topological QFT, and the topological string can be defined on twistor space.

Recognitions: Science Advisor The more I think about it the more I come to the conclusion that string theory is a kind of effective theory and is not - at least not in its present formulation - based on fundamental d.o.f. Therefore I do not expect a combination but rather a certain limit (of LQG or some other theory) from which string theory could emerge.

Quote by tom.stoer: The more I think about it the more I come to the conclusion that string theory is a kind of effective theory and is not - at least not in its present formulation - based on fundamental d.o.f. Therefore I do not expect a combination but rather a certain limit (of LQG or some other theory) from which string theory could emerge.

Completely agree. The big attraction of string theory is its promise to provide a T.O.E. ...if LQG (or some other theory) had `string theory' in some limit then LQG (or some other theory) could also claim to promise a T.O.E. It was Joe Polchinski who said in 1999 "all good ideas are part of string theory"... ...it may turn out to be the other way round and that string theory is part of some more fundamental approach.

Recognitions: Science Advisor We had a long and interesting debate here in this forum when I cited a question David Gross asked a few years ago: "What is string theory?" If you are interested have a look at http://www.physicsforums.com/showthr...pointed+string

Similar Threads for: Is it possible to combine string theory and LQG?
http://mathoverflow.net/questions/65903/schur-multipliers-for-lie-groups
## Schur multipliers for Lie groups?

In the interest of pursuing the analogies between finite groups and (finite-dimensional) Lie groups, it seems natural to call the Schur multiplier of a finite group analogous to the fundamental group of a Lie group. Just as the Schur multiplier limits what groups can arise as central subgroups of a group $G$ with fixed (isomorphism type of) $Inn(G)$, does the fundamental group do the same thing for Lie groups?

One baby example of this question: If $G$ is a Lie group with a central subgroup $Z$ with $|Z|=4$ and $G/Z \cong SO(3)$, does it follow that $G$ has a subgroup of index $2$?

Also, in the above situation, if $G$ has no subgroup isomorphic to $SO(3)$ and $Z$ is cyclic, does it follow that $G \cong H$, where $H$ is the subgroup of $U(2)$ obtained by adjoining $i$ times the identity matrix to $SU(2)$?

- 2 The Schur multiplier is the second homology, so don't you want to take the second homology of the classifying space $BG$ instead? – Qiaochu Yuan May 24 2011 at 21:48
http://math.stackexchange.com/questions/1068/usage-of-dx-in-integrals/1072
Usage of dx in Integrals

All the integrals I'm familiar with have the form: $\int f(x)\mathrm{d}x$. And I understand these as the sum of infinite tiny rectangles with an area of: $f(x_i)\cdot\mathrm{d}x$. Is it valid to have integrals that do not have a differential, such as $\mathrm{d}x$, or that have the differential elsewhere than as a factor? Let me give a couple of examples of what I'm thinking of: $\int 1$ If this is valid notation, I'd expect it to sum infinite ones together, thus to go to infinity. $\int e^{\mathrm{d}x}$ Again, I'd expect this to go to infinity, as $e^0 = 1$, assuming the notation is valid. $\int (e^{\mathrm{d}x} - 1)$ This I could potentially imagine to have a finite value. Are any such integrals valid? If so, are there any interesting / enlightening examples of such integrals? - 1 – Rasmus Aug 16 '10 at 9:42 – Rasmus Aug 16 '10 at 9:46

6 Answers

First of all, Non-standard Analysis makes nonsense like $\frac{\mathrm{d}y}{\mathrm{d}x}$ into a meaningful fraction rather than just a "symbol" (whatever that means). The first and simplest example of differentials being used beyond the $\int f(x)\mathrm{d}x$ format would be multidimensional integration: $$\int_V \mathrm{d}x \mathrm{d}y \mathrm{d}z = \int \left(\int \left(\int \mathrm{d}x \right) \mathrm{d}y \right) \mathrm{d}z$$ which gives the volume of the solid $V$. The next example is Green's theorem from vector calculus, $$\int A(x,y) \mathrm{d}x + B(x,y) \mathrm{d}y = \int (\partial_1 B - \partial_2 A) \mathrm{d}x \mathrm{d}y$$ It is also possible to forget integration completely and just use equations with differentials in them to solve calculus problems. So you can see the standard format is not even close to the whole picture. If you want to integrate terms like $\int (e^{\mathrm{d}x} - 1)$ please do! There is absolutely nothing to stop you figuring out from scratch how to solve this sort of integral and making the theory rigorous. - Thanks for the answer.
I think it's closest to answering my original question. Also, especially thanks for the link to the intro to non-standard analysis. I've been interested in learning about hyperreal numbers for a while now, but every article I've looked at so far has been kind of terse. – Sami Aug 13 '10 at 3:00 2 You write «If you want to integrate terms like $\int(e^{dx}-1)$ please do!» but you don't mean integrate: you mean find a meaning for the notation. You say «how to solve this sort of integral» but that integral does not make sense according to standard definitions of the integral, so it simply does not make sense to 'solve' it. You can of course come up with a theory that does give sense to such notations, but coming up with such a thing is not computing integrals... – Mariano Suárez-Alvarez♦ Oct 21 '10 at 14:20 1 @Mariano Suárez-Alvarez, that is splitting hairs, everything you said is formally correct but there's not much to be learned from it. – anon Oct 21 '10 at 16:34 1 @muad: one cannot compute things which do not make any sense under the definitions one uses. That is not splitting hairs! – Mariano Suárez-Alvarez♦ Oct 21 '10 at 16:58 2 @muad: I have seen way too many students 'compute' things which do not make any sense with the definitions they have... I would say that recognizing when something is defined or not is one of the most important things one has to learn in order to do any kind of meaningful math. – Mariano Suárez-Alvarez♦ Oct 22 '10 at 16:26

I think your question here shows that, while you have been using these symbols, you haven't really been given a proper motivation for where they came from. Let's go back and consider how we came up with the idea of an integral. In a typical class, you will see a lot of pictures like this: [figure: a curve with the region beneath it covered by thin rectangles] We find the area under the curve by summing up the area of all these little rectangles. If we wanted to write an expression for the area, it would look like: $$\sum_{i=1}^{n} f(x_i)\,\Delta x$$ The Σ means that we are computing a sum.
We are adding the areas of the rectangles, which we have numbered 1 through n, to get the complete area under the curve. The area of each rectangle is given by multiplying the height by the width. The height is given by $f(x_i)$ because the base of the rectangle is at 0, and the top of the rectangle is where it meets the function f. The Δx represents the width of each rectangle. When we find the integral, we are taking the limit of this sum as the number of rectangles goes to infinity, and each individual rectangle becomes infinitesimally tiny. You can think of the dx as the equivalent of Δx: it represents the infinitesimally small width of each rectangle that we added up to get the area. Once you realize this, we can see why integrals only make sense when written ∫f(x)dx. Because we are adding up the areas of rectangles that have height f(x) and width dx. If you try to interpret the expressions you wrote in this way, you will see that they do not really make sense as integrals: you are not summing up rectangles, so you are not finding an area under a curve. You could, of course, define your own notation in which those expressions behave the way you expect them to, but all mathematical notation is driven by what people find useful, and what people can agree on and easily understand. Your reuse of the integral sign and dx that people are used to seeing in a particular context will probably result in few people adopting your definition. - 1 The summation formula is a beautiful analogy. – Justin L. Jul 29 '10 at 5:18 Actually, the motivation for my question was me trying to understand if there was a lim representation for integration like the one for differentiation. I imagined it to look something like: int A to B { f(x) dx } = lim b -> infinity { SUM a=1 to b { f(A + (B - A) * a / b) * (1 / b) } }. And within that definition, integrals such as int A to B { f(x) } = lim b -> infinity { SUM a=1 to b { f(A + (B - A) * a / b) } } would seem to make sense, too.
– Sami Jul 29 '10 at 12:11 @Sami: There is a limit definition for the integral. However, even for the standard Riemann integral, the rigorous definition can appear quite arcane. A less general definition may look something like this. Like I said in my answer, you can adjust the expression as you have in your comment, but the result is not useful in the same way the standard integral is; it only shares a similarity in form. – Larry Wang Aug 3 '10 at 5:22 Justin L, indeed - in the Non-standard analysis it is a beautiful definition! – anon Aug 12 '10 at 20:21

When you write $\int f(x) dx$, the whole of $\int ... dx$ is an indivisible symbol, just as the $d/dx$ is an indivisible symbol when you write $df/dx$. Of course, there are reasons why the notation is as it is, but trying to manipulate it like you suggest in $\int e^{dx}$, for example, is simply meaningless. - 2 I could imagine e^{dx} being used as notation for a Riemann-Stieltjes integral, but it's sort of weird notation. – Qiaochu Yuan Jul 29 '10 at 0:06 I'd love to know what the downvote was for! :) – Mariano Suárez-Alvarez♦ Jul 29 '10 at 1:22 We should just write $\int f$ or $\int \lambda x. f(x)$ or something, if the $\mathrm{d}x$ is meaningless and indivisible. – anon Aug 12 '10 at 19:50 1 @muad, why should we? Should we also drop the $df/dx$ notation? There is absolutely no reason why notations which are 'split' in this way should not be used. – Mariano Suárez-Alvarez♦ Oct 21 '10 at 14:17 1 Well, the immense majority of people disagree with you :) – Mariano Suárez-Alvarez♦ Oct 21 '10 at 16:59

No, it's not valid. The dx in the integral is a representation of the fact that the integral is obtained as an area: multiplying the "average" of the function value at each point by an infinitesimal interval. As the manner in which we calculate the area does not change, the notation does not change.
There are different notations that are used when the integral is over a curve, or over more than one variable (thus leading for example to volumes). The d(variable) notation is also used as a reminder that the integral is against a specific variable and not another, e.g. that $\int x/y \,\mathrm{d}x$ differs from $\int x/y \,\mathrm{d}y$. - In the context of calculus, $dx$ simply means 'integrate with respect to $x$'. Some books even omit $dx$ entirely because 'of course we're integrating with respect to $x$'. The $dx$-bit does not get a proper meaning before differential forms. - The closest thing I've seen is setting $D = \frac{d}{dx}$ and using $e^D = \displaystyle \sum_{k=0}^\infty {\frac{D^k}{k!}} = 1 + D + \frac{D^2}{2!} + \cdots$ as a new operator on a function. So you can write, for example, $e^D \left\{ e^x \right\} = e^{x+1}$. Remember in the modern sense we think of $$\int dx$$ as an operation on a function (namely finding the family of primitives), thus we don't split the symbols up. Thus $$\int 1$$ makes no sense. In other topics, one can think of the $dx$ in a different way than in the elementary task of finding primitives, such as in Riemann-Stieltjes integration, where you have $$\int_{I} f d\alpha$$ or $$\int_{a}^b f d\alpha$$ where you integrate over an interval $I$, and $f$ and $\alpha$ are functions. Similarly, you usually write $f'' = \frac{d^2}{dx^2}\left\{f\right\}$ since you are applying the operator $\frac{d}{dx}$ two times, which is an abbreviation of $\frac{d}{dx}\left\{\frac{d}{dx}\left\{f\right\}\right\}$ I'm sorry Rasmus but I know nothing about differential forms. - Huh, the exponential map of the derivative operator... I've never seen that before. So $e^D(ax^2 + bx + c) = (ax^2 + bx + c) + (2ax + b) + a$? Is there any context in which this is useful / interesting / significant? – Rahul Narain Feb 1 '12 at 5:23 In Operational Calculus. I think I saw this on Spiegel's Applied Differential Equations, but now I'm not sure. – Peter Tamaroff Feb 1 '12 at 5:42
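For polynomials, the $e^D$ identity above can be verified directly: the series $e^D p = p + p' + \frac{p''}{2!} + \cdots$ terminates, and the result is the shifted polynomial $p(x+1)$, which is exactly Rahul's example worked out. A minimal Python sketch (the coefficient-list representation and the helper names are my own choices):

```python
from math import comb

def taylor_shift(coeffs):
    """e^D applied to the polynomial coeffs[0] + coeffs[1]*x + ...
    For a polynomial the series terminates and the result is p(x + 1),
    since e^D acts on each monomial c*x^m as c*(x + 1)^m."""
    out = [0] * len(coeffs)
    for m, c in enumerate(coeffs):
        for k in range(m + 1):
            out[m - k] += c * comb(m, k)   # binomial expansion of (x + 1)^m
    return out

def evaluate(coeffs, x):
    return sum(c * x**m for m, c in enumerate(coeffs))

p = [3, 2, 1]                    # x^2 + 2x + 3, i.e. a=1, b=2, c=3
q = taylor_shift(p)
print(q)                         # [6, 4, 1] = (x+1)^2 + 2(x+1) + 3 expanded
print(evaluate(q, 5) == evaluate(p, 6))   # True: q(x) = p(x + 1)
```

With `p = [3, 2, 1]` the result `[6, 4, 1]` matches Rahul's sum $(ax^2+bx+c) + (2ax+b) + a$ for $a=1$, $b=2$, $c=3$.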
http://mathoverflow.net/questions/77702/transformation-of-a-cubic-form/77751
## Transformation of a cubic form

How can I change an integral binary form $ax^3+bx^2y+cxy^2+dy^3$ with the usual discriminant $D =b^2c^2-27a^2d^2+18abcd-4ac^3-4b^3d$ into a form $ax^3+dy^3$, which has the simple discriminant $-27a^2d^2$? Which matrix (with $\operatorname{det}=\pm 1$) transforms it? Thanks...

- 1 That is not always possible. – Felipe Voloch Oct 10 2011 at 16:23 3 Under a linear change of variables (x,y) |--> (Ax+By,Cx+Dy), the discriminant of your cubic form becomes (AD-BC)^6*disc(original). If the cubic form can be diagonalized then this formula has to be -27 times a square, as you wrote, so the discriminant of the original cubic form has to be -3 times a perfect square. Lots of examples are not like that, e.g., x^3 + x^2y + y^3 has discriminant -31. Thus this cubic form can't be diagonalized over Q. That you can diagonalize quadratic forms in characteristic 0, or more generally outside characteristic 2, is something special about degree 2. – KConrad Oct 10 2011 at 17:23 3 It is diagonalizable over $\mathbb{C}$ (if the discriminant is not zero, anyway). The space of binary cubic forms is "prehomogeneous", which means that $GL_2(\mathbb{C})$ acts transitively on the set of cubic forms with discriminant not zero. To prove this, and to find explicit matrices: Factor your cubic form over $\mathbb{C}$. $GL_2$ acts on the three roots of a binary cubic form, and this action is triply transitive, so figure out the action sending your roots to the roots of $x^3 + y^3$. Then, you can scale as appropriate to get a matrix in $SL_2$. – Frank Thorne Oct 10 2011 at 18:01 1 Please use TeX on this site. – GH Oct 10 2011 at 19:05 Let us assume that the matrix of transformation is in GL2(Z).
Indeed I would like to prove the relation, for a cubic form $F$ with Hessian form $H$ and Jacobian form $J$: $J^2+27D\,F^2-4H^3=0$. Once I can transform $F$ into $ax^3+dy^3$ it should be very easy! – ahmad Oct 11 2011 at 11:43 show 1 more comment

## 1 Answer

One can solve the problem explicitly over $\mathbb{C}$ and then try to work out the extra constraints due to working over the integers. For the complex case, and continuing on the approach in the comment by Frank Thorne, you can also use the so-called canonizant. Let $C(x,y)$ be your cubic. The canonizant here is the same as the Hessian, which classically normalized is: $$H(x,y)=\frac{1}{36} \left(\frac{\partial^2}{\partial x \partial v}-\frac{\partial^2}{\partial y\partial u}\right)^2\ C(x,y) C(u,v)\ |_{u:=x, v:=y}$$ meaning: take the derivatives, then set $u=x$ and $v=y$. If one can write $C=L_1^3+L_2^3$ where $L_1$, $L_2$ are linear forms in $x,y$, then $$H(x,y)=2 \Delta(L_1,L_2)^2\ L_1(x,y)\ L_2(x,y)$$ where $\Delta(L_1,L_2)$ is the determinant formed with the coefficients of the two linear forms. The matrix you are looking for is essentially the one which sends $(x,y)$ to $(L_1(x,y),L_2(x,y))$. So to find it you need to compute the Hessian and then factor it. This means solving a second degree equation instead of a cubic equation as suggested in Frank's comment.
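KConrad's observation that a substitution $(x,y) \mapsto (Ax+By,\,Cx+Dy)$ multiplies the discriminant by $(AD-BC)^6$ can be checked with exact integer arithmetic. A minimal Python sketch (the coefficient formulas come from expanding $F(Ax+By,\,Cx+Dy)$ by hand; the sample cubic and matrices are arbitrary choices for illustration):

```python
def disc(a, b, c, d):
    """Discriminant of the binary cubic a*x^3 + b*x^2*y + c*x*y^2 + d*y^3."""
    return b*b*c*c - 27*a*a*d*d + 18*a*b*c*d - 4*a*c**3 - 4*b**3*d

def substitute(a, b, c, d, A, B, C, D):
    """Coefficients of F(A*x + B*y, C*x + D*y), expanded by hand."""
    return (a*A**3 + b*A*A*C + c*A*C*C + d*C**3,
            3*a*A*A*B + b*(A*A*D + 2*A*B*C) + c*(2*A*C*D + B*C*C) + 3*d*C*C*D,
            3*a*A*B*B + b*(2*A*B*D + B*B*C) + c*(A*D*D + 2*B*C*D) + 3*d*C*D*D,
            a*B**3 + b*B*B*D + c*B*D*D + d*D**3)

F = (1, 1, 0, 1)                           # x^3 + x^2*y + y^3, discriminant -31
print(disc(*F))                            # -31
print(disc(*substitute(*F, 2, 1, 1, 1)))   # det = 1: still -31
print(disc(*substitute(*F, 2, 0, 1, 3)))   # det = 6: (-31) * 6**6
```

In particular a $GL_2(\mathbb{Z})$ substitution (determinant $\pm 1$) leaves the discriminant unchanged, which is why a cubic such as $x^3+x^2y+y^3$, with discriminant $-31$, can never be brought to the diagonal shape with discriminant $-27a^2d^2$.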
http://math.stackexchange.com/questions/17852/how-many-different-sets-can-be-formed-by-two-nonempty-sets/17853
# How many different sets can be formed by two nonempty sets

If A and B are two different nonempty sets, how many distinct sets can be formed from these sets using as many unions, intersections, complements and parentheses as desired?

Four sets are fundamental: $A$, $B$, $A \cup B$, $A \cap B$. Other sets are $A \cup B'$, $A' \cup B$, $A \cap B'$, $A' \cap B$, $A' \cup B'$, $A' \cap B'$. Are any other sets possible?

- You could look at cartesian products and power sets. – PEV Jan 17 '11 at 15:27 @Trevor: Sorry, cartesian products and power sets are not allowed. – Vinod Jan 17 '11 at 15:30 1 To get a clear answer, you have to assume that A and B are not just distinct but also "as independent as possible" in some sense. Otherwise, it could be the case that A = {1} and B = {1,2}, and you only get 8 possible sets. – Rahul Narain Jan 17 '11 at 16:02

## 1 Answer

$16$. The corresponding Venn diagram has four parts, and you can get any combination of those four parts.

- @Yuan: Is there a formula for that? Is it 2^(2^n), where n is the number of distinct sets? – Vinod Jan 17 '11 at 15:33 @Vinod: yes, that sounds about right. – Qiaochu Yuan Jan 17 '11 at 15:47 @Vinod $2^{2^n - 1}$ to be accurate. – Sergey Filkin Apr 8 '12 at 8:30
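The count of $16$ can be confirmed by brute force: represent each of the four Venn regions by one element (this "general position" setup is the assumption Rahul's comment requires for the maximum) and close $\{A, B\}$ under union, intersection and complement. A minimal Python sketch:

```python
def closure(universe, *gens):
    """Close the generating sets under union, intersection and complement."""
    sets = {frozenset(g) for g in gens}
    while True:
        new = set(sets)
        for s in sets:
            new.add(universe - s)        # complement
            for t in sets:
                new.add(s | t)           # union
                new.add(s & t)           # intersection
        if new == sets:
            return sets
        sets = new

U = frozenset({1, 2, 3, 4})              # one element per Venn region
A, B = frozenset({1, 2}), frozenset({2, 3})
print(len(closure(U, A, B)))             # 16
```

Replacing $A$, $B$ by $\{1\}$ and $\{1,2\}$ reproduces the $8$ sets of Rahul's degenerate example, since only three Venn regions are then nonempty.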
http://math.stackexchange.com/questions/314719/complexity-analysis-of-logarithms
Complexity analysis of logarithms

I have two functions, $f(n)=\log_2 n$ and $g(n)=\log_{10} n$. I am trying to decide whether $f(n)$ is $O(g(n))$, or $\Omega(g(n))$, or $\Theta(g(n))$. I think I should take the limit of $f(n)/g(n)$ as $n$ goes to infinity, and I think that limit is constant, so $f(n)$ must be $\Theta(g(n))$. Am I right? Thanks in advance

- The quantity $f(n)/g(n)$ is constant, that is, independent of $n$. – Gerry Myerson Feb 26 at 11:58

1 Answer

You can rely on a useful and often-forgotten result, that $(\log_a b)(\log_b c) = \log_a c$, so in your problem we'll have $$(\log_2 10)g(n) = (\log_2 10)(\log_{10} n) = \log_2 n = f(n)$$ so $f(n)$ is just a constant multiple ($\log_2 10\approx 3.32$) of $g(n)$ and hence $f(n)=\Theta(g(n))$.
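The identity in the answer is easy to sanity-check numerically: $\log_2 n / \log_{10} n$ equals the constant $\log_2 10 \approx 3.32$ for every $n$, which is precisely why $f(n)=\Theta(g(n))$. A minimal Python sketch:

```python
import math

c = math.log2(10)                        # the constant relating the two logs
for n in (2, 100, 10**6, 10**12):
    # f(n) = (log2 10) * g(n) holds for every n, up to float rounding
    assert abs(math.log2(n) - c * math.log10(n)) < 1e-9
print(round(c, 2))                       # 3.32
```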
http://mathhelpforum.com/discrete-math/175666-place-37-people-into-5-rooms-so-each-room-has-least-1-person.html
# Thread:

1. ## Place 37 people into 5 rooms so that each room has at least 1 person in it?

I know that if I forget about the condition that no room is empty, then I can line 37 people up and assign each to a room to get a sequence with repetition, which means 5 ways to give the first person a room, 5 ways for the second, etc., until I get $5^{37}$ possible ways to place 37 people in 5 rooms. But now what about the condition that no room is empty? I suppose I could subtract the subcases of 1 room empty and 37 people in 4 rooms, 2 rooms empty and 37 people in 3 rooms, etc., but this seems lengthy and I have a feeling there's a better way?

2. Originally Posted by punkstart: I know that if I forget about the condition that no room is empty, then I can line 37 people up and assign each to a room to get a sequence with repetition, which means 5 ways to give the first person a room, 5 ways for the second, etc., until I get $5^{37}$ possible ways to place 37 people in 5 rooms. But now what about the condition that no room is empty?

This is a matter of counting the number of surjections (onto functions) from a set of 37 to a set of 5. $\text{Surj}(N,K)=\displaystyle\sum\limits_{j = 0}^K {\left( { - 1} \right)^j \binom{K}{j}\left( {K - j} \right)^N }$ Here $N=37~\&~K=5$
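The inclusion-exclusion formula is easy to evaluate, and can be sanity-checked against small cases where the count is known by hand. A minimal Python sketch (the function name is my own):

```python
from math import comb

def surj(n, k):
    """Number of onto functions from an n-set to a k-set (inclusion-exclusion)."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1))

# Known small cases: 3 people into 2 rooms (2^3 - 2 = 6), 3 into 3 (3! = 6).
assert surj(3, 2) == 6 and surj(3, 3) == 6
print(surj(37, 5))   # ways to place 37 people into 5 rooms with none empty
```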
http://stats.stackexchange.com/questions/tagged/confidence-interval
# Tagged Questions

An interval of random variables, depending on observed data, which, with a fixed probability, contains an unknown parameter of interest.

### How to calculate confidence intervals of $1/\sqrt{x}$-transformed data after running a mixed linear regression in Stata? (1 answer, 43 views)
I have run a series of mixed linear regressions in Stata, some with inverse-square-root ($1/\sqrt{x}$) transformations and others with square root ($\sqrt{x}$) transformations. How do I calculate ...

### Calculating the area of a confidence ellipse within a certain region (0 answers, 37 views)
I was wondering if someone had an idea on how to calculate the blue shaded area inside my confidence ellipse. Any suggestions or places to look are greatly appreciated. Also, I am hoping to find a ...

### Nonlinear regression: Confidence intervals on transformed or untransformed parameters? (1 answer, 49 views)
Suppose I am using a standard inhibition model to find biochemical parameters that fit my data. The equation is: $y = \frac{A}{{1 + \exp \left( {\ln \left[ S \right] - \ln IC_{50}} \right)}}$ where ...

### Can parameter uncertainties be salvaged when the residuals are correlated? (0 answers, 20 views)
I have a nonlinear physical model for which I'm trying to determine parameter uncertainties using Monte Carlo. Instead of describing the nitty-gritty details, I will use a series of figures: ...

### Change in binomial proportion confidence interval (1 answer, 41 views)
I'm having trouble calculating 95% confidence intervals for a change in binomial proportion. For example, in group $A$, there are $4$ successes out of $n =20$. In group $B$, there are $12$ successes ...

### What is the confidence interval for quantile regression? And how to find other than default? (0 answers, 11 views) [migrated]
There is a way to construct the confidence interval for quantile regression: `x <- rnorm(1000)`, `y <- x + 2*rnorm(1000)`, `rqm1 <- rq(y~x)`, `summary(rqm1)`. What ...

### Is there evidence to say with 99% confidence that the proportions are different? (0 answers, 48 views)
A college placement service interviewed 1,100 graduates to determine if they were satisfied with the teaching they received. Of the 400 who had taken statistics, 225 said they were satisfied. Of the ...

### How to estimate true value and 95% bands when distribution is asymmetrical? (1 answer, 56 views)
I have a set of results of independent measurements of some physical quantity. As an example I give here real experimental data on methanol refractive index at 25 degrees Celsius published in ...

### Ratios of means - statistical comparison test using Fieller's theorem? (0 answers, 146 views)
I would really appreciate any suggestions with the following data analysis issue. Please read till the end as the problem at first may appear trivial, but after much researching, I assure you it is ...

### Why do we refer to our estimates in terms of precision? (2 answers, 72 views)
Open any statistics textbook and it will urge the need to check the 'precision of our estimates'. Take the following random variable: ...

### Method to evaluate a multiple choice type survey (0 answers, 27 views) [closed]
I have designed a survey that asks survey takers a set of questions based on a video. There are many such videos and each video has three questions. Questions have four options each. I need to evaluate ...

### Assigning weights in crowdsourcing or voting system (0 answers, 13 views)
In a multi-weighted voting system, how is each individual worker assigned a weight? What is the concept behind it?

### Error in estimation with continuous data (1 answer, 35 views)
(New to statistics) Is there a way to correlate error in a fit (MSD) to the error of a calculation performed with the parameters associated with the fit? My specific problem is dealing with spectroscopic data. I ...

### Duality between acceptance region of testing and confidence region in terms of optimality? (0 answers, 17 views)
The duality theorem between acceptance region of testing and confidence region is about their validity (i.e. satisfying the significance level of the former and the confidence level of the latter sum ...

### Does pivoting a discrete CDF provide a pivot? (0 answers, 23 views)
In Section 9.2.3 of Casella's Statistical Inference, they base their confidence interval construction for a parameter $\theta$ on a real-valued statistic $T$ with cdf $F_T(t| \theta)$. They first ...

### Computing a bootstrap confidence interval for the prediction error with the percentile and the BCa method (0 answers, 33 views)
I have two related questions regarding the computation of a non-parametric bootstrap confidence interval for the prediction error. Setting: I have a sample S from a data population P and a learner L, ...

### Confidence interval for classification error - binomial assumption vs. bootstrap resampling (1 answer, 70 views)
I am developing a classifier using a set of N patterns, where N~1000. I am using K-fold cross-validation (with K=5) and computing the probability of classification error p (typical value is p=0.03). ...

### How are confidence intervals constructed from point estimates? (0 answers, 20 views)
Wikipedia gives a brief description on constructing confidence intervals from point estimates, and in particular points out three ways, "the method of moments", "likelihood theory" and "the estimating ...

### Meaning of the following statement about confidence intervals (0 answers, 17 views)
From Wikipedia: Confidence intervals are an expression of probability and are subject to the normal laws of probability. I was wondering what the normal laws of probability mean? Are they the ...

### In testing, do we need to make the area of an acceptance region as small as possible? (0 answers, 38 views)
One criterion to assess a confidence interval is that the smaller its area, the better. For hypothesis testing, I was wondering if it is also the case that the smaller the area of the acceptance region, the ...

### How do you construct a confidence interval (by the method of pivots)? (0 answers, 25 views)
From a note: Many derivations of confidence intervals can be described in terms of ... (pivot). ... The best choice is often suggested by looking at what statistic a good hypothesis test ...

### What does it imply when an estimate is not inside its 95% confidence interval? (1 answer, 64 views)
What does it actually imply when a 95% CI does not contain an estimate (coefficient or parameter)? Is there some model assumption that has not been satisfied? Or does it mean something else? I know when ...

### Marginal effects (and confidence interval) for interaction variables (Stata) (0 answers, 97 views)
I'm trying to compute marginal effects (and their confidence intervals) for an interaction variable. I'm using Stata and it's panel data (pooled cross-sectional time ...

### How to derive a confidence interval from an F distribution? (1 answer, 63 views)
So, this is the question I'm working on: Suppose we observe a random sample of five measurements: 10, 13, 15, 15, 17, from a normal distribution with unknown mean $µ_1$ and unknown variance $σ_1^2$. ...

### Behrens–Fisher problem on Wikipedia (1 answer, 73 views) [closed]
In the opinions of the denizens of this forum, how good is this article? It makes a big point of the unsolved mathematical problem, and says little about the fact that just which math problem is to ...

### P value and confidence interval for two sample test of proportions disagree (1 answer, 92 views)
I'm using R to calculate the two-sample test for equality of proportions, where the two proportions are 350/400 and 25/25. So: ...

### Deciding if my mutant strains are significantly different from my wild type from data measured over time or from gradients (0 answers, 19 views)
I have a set of data which is the oxygen consumption of a wild type (WT) and a few mutants. I averaged the data and plotted graphs with each mutant and WT series and plotted a line of best fit. Is ...

### How to calculate a 95% CI for a random effect? (0 answers, 46 views)
The R code `intervals()` gives confidence intervals for fixed effects only in a mixed model. Is there a reason why only fixed effects' confidence intervals are provided? Is there any way to get ...

### Do the predictions of a Random Forest model have a prediction interval? (1 answer, 49 views)
If I run a randomForest model, I can then make predictions based on the model. Is there a way to get a prediction interval of each of the predictions such that I ...

### Determining confidence intervals: using partial information on possible outcomes (0 answers, 38 views)
Let's say we have a mathematical model that provides the probability of finding oil at a location in terms of a system of 10 bins with probabilities going from very low, say 2%, to 20% for the best ...

### Confidence interval for a small number of iid Poisson (1 answer, 61 views)
I want to calculate the confidence interval for $\lambda$ from a small ($n=10$) set of repeated observations from a Poisson distribution. That is, I have $X_1, \dots, X_{10}$ which I believe are ...

### HDR: highest density regions or confidence intervals (0 answers, 19 views)
I am trying to do hypothesis testing on my bootstrap panel regression parameters. I am thinking about creating confidence intervals for the bootstrap-generated parameters, but I could construct highest ...

### Need help understanding calculation about confidence interval (3 answers, 78 views)
I am currently reading Math behind A/B testing written by Amazon and got stuck. At some point they say: To determine the 95% confidence interval on each side of conversion rate, we multiply the ...

### How do you create 1000 confidence intervals of sample size 20? (0 answers, 50 views) [closed]
On Excel, how do you create 1000 confidence intervals of sample size 20? So how do you simulate 1000 of these 20 samples, to get the upper bound, lower bound, average, and standard deviation as ...
1answer 62 views ### How to find mean and standard deviation based on a given confidence interval? If a 95% confidence interval for a population mean ranges from 200 to 240 (that is, LCL=200 and UCL=240). Then, the computed numerical values of sample mean, $\bar{x}$, and sample standard deviation ... 0answers 19 views ### confidence interval with small supopulations I am currently using a large dataset (n=1.850) composed of smaller samples of several countries. I am currently aiming to describe the sample and infer to the population using simple frequencies ... 0answers 27 views ### Some clear range and interval definitions What is your or an offical (please provide link to citation) definition of range AND interval _? Or perhaps put in another way: what is the major important difference between the two terms _? In my ... 1answer 78 views ### How to calculate the standard error of the marginal effects in interactions (robust regression)? what I am interested in learning is how to calculate the std error of the marginal effects of a X variable when it is part of an interaction, especially in robust regression. There are tipically two ... 1answer 120 views ### Significant difference from regression confidence intervals I have a question about statistical significance in relation to confidence intervals from linear regression. I'm obviously far from a stats expert, and I've been searching for the answer to this, ... 2answers 100 views ### What should I do when a confidence interval includes an impossible range of values? Let's say I'm analyzing the mean number of students per class for a school district. The district has imposed a hard limit on the maximum ratio: there can never be more than 30 students in a class. ... 
1answer 95 views ### How to calculate the confidence interval of a function of a combination of two linear models We have two linear fits, one for each data-set (unfortunately they include weights but I'm willing to ignore that if there's a nice analytic solution to this). Data-set ... 1answer 93 views ### Calculate the expected number of 95% confidence interval of binomial distribution Can anyone show me how to calculate the expected number of 95% confidence interval of a binomial distribution using R, such as Bin(100,0.5). 1answer 44 views ### Trends in noisy data and the applicability of the 95% confidence interval I am performing simulations while measuring a quantity A which depends on the parameter B. I make N independent measurements of A for given values of B. I can then calculate the mean to get an ... 1answer 19 views ### How to test the significance of increase in sample interval range(s)? Suppose we have two samples of a variable taken under different conditions: e.g. A1 without medical treatment and A2 after medical treatment. These are not necessarily normally distributed. Suppose A1 ... 0answers 44 views ### How do you think about the central estimate when the confidence interval is asymmetric? I'm using a wild bootstrap to create confidence intervals around fitted values of the following model, for a specific combination of the factors, as x varies across its range. ... 0answers 66 views ### Why does the mean of the bootstrapped distribution not equal the original summary stat? I have n samples and their average. There's some correlation so I used a moving block bootstrap to get an empirical distribution of the mean. The mean of this empirical bootstrapped distribution seems ... 0answers 33 views ### A randomized block experiment [closed] A randomized block experiment was conducted to investigate the effect of different lighting patterns (the treatment factor) on the egg production of chickens. Extended daylight (14 hours) and flashing ... 
0answers 51 views ### Bootstrap Prediction Intervals My question concerns the construction of forecast prediction intervals using bootstrapping. I have a 36 month time series, which I am using to perform point forecasts for the next 12 months using ... 0answers 69 views ### Plotting confidence bands around fitted values from a binomial GLMM I have some parameter estimates and confidence intervals estimated from a set of model-averaged binomial GLMMs: two main effects and their interaction. I would like to plot [population level] fitted ... 0answers 66 views ### Confidence intervals for extreme value distributions I have wind data that i'm using to perform extreme value analysis (calculate return levels). I'm using R with packages 'evd', 'extRemes' and 'ismev'. I'm fitting GEV, Gumbel and Weibull ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8947627544403076, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/72810/the-relationship-between-group-cohomology-and-topological-cohomology-theories/74960
## The relationship between group cohomology and topological cohomology theories

I was recently trying to learn a little bit about group cohomology, but one point has been confusing me. According to Wikipedia (http://en.wikipedia.org/wiki/Group_cohomology and some other sources on the internet), given a (topological) group $G$, the group cohomology $H^n(G)$ is the same as the singular cohomology $H^n(BG)$ (with coefficients in a trivial $G$-module $M$). Moreover, it says that given any group $G$, if we don't care about its topology, we can always give it the discrete topology and look at the cohomology of $K(G,1)$. This seems to suggest that when $G$ has a topology that we do care about, we can just look at $BG$ with whatever topology $G$ is supposed to have. The relevant citation in this section is a reference to a book called Cohomology of Finite Groups, but I was wondering if this result would work for groups such as $U(1)$ which are not finite.

Moreover, it would seem that there is some natural way to define group cohomology that detects the topology of the group; for example, one might look at continuous $G$-modules and continuous cochains. However, I have heard that one has to be careful here, because in general the category of continuous $G$-modules might not have enough injectives. Also, I found this article by Stasheff (http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.bams/1183540920) which seems to suggest that for continuous cohomology we might not have the equality $H^n(BG)=H^n(G)$ between singular and group cohomology. I was wondering if someone could explain these connections to me (including what "continuous cohomology" is) and/or clarify what is happening?
It would also be great if someone could tell me how one might compute something like $H^n(U(1);M)$ where $U(1)$ carries the discrete topology. Thanks.

I quote a comment from Bill Thurston: "For a discrete group, $BG=K(G,1)$ (as homotopy types), and the homology of the group is the same as the homology of this space. For a topological group, if $G_\delta$ denotes $G$ with the discrete topology, then $K(G,1)=BG_\delta$, and often has quite different homology from $BG$. For instance, $B\mathbb{R}$ is trivial, versus $B\mathbb{R}_\delta$, with homology of rank $2^\omega$ in every dimension." – Chris Gerig Aug 13 2011 at 6:31

## 3 Answers

Short answer: you don't want to consider group cohomology as defined for finite groups for Lie groups like $U(1)$, or indeed topological groups in general. There are other cohomology theories (not Stasheff's) that are the 'right' cohomology groups, in that there are the right isomorphisms in low dimensions with various other things.

Long answer: Group cohomology, as one comes across it in e.g. Ken Brown's book (or see these notes), is all about discrete groups. The definition of group cocycles in $H^n(G,A)$, for $A$ an abelian group, can be seen to be the same as maps of simplicial sets $N\mathbf{B}G \to \mathbf{K}(A,n)$, where $\mathbf{B}G$ is the groupoid with one object and arrow set $G$, $N$ denotes its nerve, and $\mathbf{K}(A,n)$ is the simplicial set corresponding (under the Dold-Kan correspondence) to the chain complex $\ldots \to 1 \to A \to 1 \to \ldots$, where $A$ is in position $n$ and all other groups are trivial. Coboundaries between cocycles are just homotopies between maps of simplicial sets.

The relation between $N\mathbf{B}G$ and $K(G,1)$ is that the latter is the geometric realisation of the former, and the geometric realisation of $\mathbf{K}(A,n)$ is an Eilenberg-MacLane space $K(A,n)$, which represents ordinary cohomology ($H^n(X,A) \simeq [X,K(A,n)]$, where $[-,-]$ denotes homotopy classes of maps).
This boils down to the fact that simplicial sets and topological spaces encode the same homotopical information. It helps that $N\mathbf{B}G$ is a Kan complex, and so the naive homotopy classes are the right homotopy classes, and so we have $$sSet(N\mathbf{B}G,\mathbf{K}(A,n))/homotopy \simeq Top(BG,K(A,n))/homotopy = [BG,K(A,n)]$$ In fact this isomorphism is a homotopy equivalence of the full hom-spaces, not just up to homotopy. If we write down the same definition of cocycles with a topological group $G$, then this gives the 'wrong' cohomology. In particular, we should have the interpretation of $H^2(G,A)$ as isomorphic to the set of (equivalence classes of) extensions of $G$ by $A$, as for discrete groups. However, we only get semi-direct products of topological groups this way, whereas there are extensions of topological groups which are not semi-direct products - they are non-trivial principal bundles as well as being group extensions. Consider for example $\mathbb{Z}/2 \to SU(2) \to SO(3)$. The reason for this is that when dealing with maps between simplicial spaces, as $N\mathbf{B}G$ and $\mathbf{K}(A,n)$ become when dealing with topological groups, it is not enough to just consider maps of simplicial spaces; one must localise the category of simplicial spaces, that is add formal inverses of certain maps. This is because ordinary maps of simplicial spaces are not enough to calculate the space of maps as before. We still have $BG$ as the geometric realisation of the nerve of $\mathbf{B}G$, and so one definition of the cohomology of the topological group $G$ with values in the discrete group $A$ is to consider the ordinary cohomology $H^n(BG,A) = [BG,K(A,n)]$. However, if $A$ is also a non-discrete topological group, this is not really enough, because to define cohomology of a space with values in a non-discrete group, you should be looking at sheaf cohomology, where the values are taken in the sheaf of groups associated to $A$. 
For discrete groups $A$ this gives the same result as cohomology defined in the 'usual way' (say by using Eilenberg-MacLane spaces). So the story is a little more complicated than you supposed.

The 'proper' way to define cohomology for topological groups, with values in an abelian topological group (at least with some mild niceness assumptions on our groups), was given by Segal in

G. Segal, Cohomology of topological groups, in: "Symposia Mathematica, Vol. IV (INDAM, Rome, 1968/69)", Academic Press (1970) 377–387

and later rediscovered by Brylinski (it is difficult to find a copy of Segal's article) in the context of Lie groups in this article.

@David: Re your short answer, one often does want to study $BG^d$ for $G$ a Lie group with the discrete topology, e.g. in the context of flat bundles and in algebraic K-theory via Quillen's definition $K_*(R)=\pi_*(BGL(R)^d_+)$. – Paul Aug 14 2011 at 7:53

Flach's article "Cohomology of topological groups ..." contains a definition of the cohomology of a topological group $G$ with values in a sheaf on topological spaces (including the case where the sheaf is represented by a topological $G$-module) in terms of the classifying topos $BG$. – Jakob Aug 24 at 12:53

Continuous cohomology of a topological group $G$ used to mean using the usual chain complex of multivariable functions $F:G^n \to A$ with the usual coboundary, except that $F$ is required to be continuous. In the smooth case, continuous or smooth cochains give the same cohomology.
In the Lie case, van Est established very nice relations between this cohomology, the Lie algebra cohomology, and the purely topological cohomology of the underlying space $G$.

Most of the time, $H^n(G)$ means $H^n(BG)$ where $G$ is given the discrete topology, and topologists tend to write $K(G,1)$ rather than $BG$ to emphasize that $G$ is considered as a discrete group. This is because when $G$ comes with a natural topology, $BG$ is typically assumed to take the topology into account; specifically, there is a fibration with base $BG$, contractible total space, and fiber $G$ with its given topology, and this specifies $BG$ up to homotopy type.

One way to study $K(U(1),1)$ is to observe that there is a short exact sequence of (discrete) groups $0\to \mathbb{Z}\to \mathbb{R}\to U(1)\to 0$. Continuous cohomology is more involved, taking the topology into account, and is often defined by simplicial methods.
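The cocycle description of $H^n(G,A)$ in the first answer, and the interpretation of $H^2(G,A)$ as classifying extensions, can be made completely concrete for discrete groups. Here is a small sketch of my own (standard material, not taken from the answers): the binary "carry" function is a 2-cocycle on $\mathbb{Z}/2$ with values in $\mathbb{Z}/2$, and the group law it builds on the set $\mathbb{Z}/2 \times \mathbb{Z}/2$ yields $\mathbb{Z}/4$, i.e. the nontrivial extension $0 \to \mathbb{Z}/2 \to \mathbb{Z}/4 \to \mathbb{Z}/2 \to 0$.

```python
# The "carry" function f : Z/2 x Z/2 -> Z/2, the carry bit of binary addition.
def f(a, b):
    return (a + b) // 2

G = [0, 1]

# 2-cocycle identity (trivial action): f(h,k) - f(g+h,k) + f(g,h+k) - f(g,h) = 0 mod 2.
for g in G:
    for h in G:
        for k in G:
            delta = f(h, k) - f((g + h) % 2, k) + f(g, (h + k) % 2) - f(g, h)
            assert delta % 2 == 0

# Twisted multiplication on the set Z/2 x Z/2 built from the cocycle f.
def mul(p, q):
    (a, g), (b, h) = p, q
    return ((a + b + f(g, h)) % 2, (g + h) % 2)

# The element (0, 1) has order 4, so the extension is Z/4, not the product Z/2 x Z/2.
x, order = (0, 1), 1
while x != (0, 0):
    x = mul(x, (0, 1))
    order += 1
assert order == 4
```

Since this cocycle is not a coboundary, the resulting group is not the direct product; it represents the nonzero class in $H^2(\mathbb{Z}/2;\mathbb{Z}/2)\cong\mathbb{Z}/2$.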
http://mathhelpforum.com/discrete-math/116981-growth-function-prove-2-a.html
# Thread:

1. ## growth function and proof 2

Can you help with the proof of this question?

2. To solve this question you need two things: (1) to know the definitions of big-O and $\Omega$, and (2) to actually understand them, so that they make sense to you. I remember when I first learned about big-O and similar things, the definitions were just strings of symbols. I knew what each symbol meant and I could manipulate them formally, but I had no mental picture. For example, I could not say, "Well, this function grows infinitely, so it looks like it is bounded from below by this other function. Let me now find the actual constants from the definition of $\Omega$". It takes staring at the definition for some time, trying different examples, etc., before you actually "get it". So, to start, could you write the definitions of big-O and $\Omega$ and describe what your difficulty is?

3. Big-O: Let $f$ and $g$ be functions from the set of integers or the set of real numbers to the set of real numbers. We say that $f(x)$ is $O(g(x))$ if there are constants $C$ and $k$ such that $|f(x)|<C|g(x)|$ whenever $x>k$.

Big-Omega: Let $f$ and $g$ be functions from the set of integers or the set of real numbers to the set of real numbers. We say that $f(x)$ is $\Omega(g(x))$ if there are positive constants $C$ and $k$ such that $|f(x)|>C|g(x)|$ whenever $x>k$.

4. Excellent. Well, the definitions basically say that [$f\in O(g)$ with constants $C, k$] is the same thing as [$g\in\Omega(f)$ with constants $1/C, k$].

Trivia: $\Omega$ is called Omega, where the stress is on the second syllable. It means "O mega", i.e., big O in Greek. This is opposed to "o micron", which is small o. Omicron is written in the same way as the Latin "o". Big Theta is written $\Theta$.

5. Today I tried to solve this proof but I can't do it, and I am very tired now. If you help me solve this problem, I will be very happy.

6.
I think you will easily get it after rest. There is nothing to be done here; the most difficult fact is that $|a|<C|b|$ if and only if $|b|>(1/C)|a|$.
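The duality stated in post 4 is easy to test numerically. A small sketch (the helper names are mine, and checking finitely many sample points is of course only a sanity check, not a proof):

```python
def is_O_witness(f, g, C, k, samples):
    """Do C and k witness f in O(g), i.e. |f(x)| < C|g(x)| for all sampled x > k?"""
    return all(abs(f(x)) < C * abs(g(x)) for x in samples if x > k)

def is_Omega_witness(f, g, C, k, samples):
    """Do C and k witness f in Omega(g), i.e. |f(x)| > C|g(x)| for all sampled x > k?"""
    return all(abs(f(x)) > C * abs(g(x)) for x in samples if x > k)

f = lambda x: 3 * x**2 + 5
g = lambda x: x**2
samples = range(1, 1000)

C, k = 4, 3
assert is_O_witness(f, g, C, k, samples)            # f in O(g) with C = 4, k = 3 ...
assert is_Omega_witness(g, f, 1 / C, k, samples)    # ... hence g in Omega(f) with 1/C, k
```

The same constants $C, k$ that witness $f\in O(g)$ turn into the constants $1/C, k$ witnessing $g\in\Omega(f)$, exactly as in post 4.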
http://mathhelpforum.com/calculus/42227-another-integration-partial-fractions.html
# Thread:

1. ## Another Integration by Partial Fractions

Hello,

$\int\frac{-x^2-x-1}{5x^2+7x-6}dx$ = $\int[\frac{A}{5x-3} + \frac{B}{x+2}]dx$ and $B(5x-3) + A(x+2) = -x^2-x-1$

I'm having trouble solving for $A$ and $B$. Is there a method for determining them? Or should I be approaching the problem differently altogether?

Austin Martin

2. Originally Posted by auslmar

Since the degrees of the numerator and the denominator are the same, you need to use long division before partial fractions! Remember that if the degree of the numerator is greater than or equal to that of the denominator, ALWAYS use long division. I don't know how to write out long division with LaTeX, but

$\frac{-x^2-x-1}{5x^2+7x-6}=-\frac{1}{5}+\frac{\frac{2}{5}x-\frac{11}{5}}{5x^2+7x-6}=-\frac{1}{5}+\frac{1}{5}\left( \frac{2x-11}{5x^2+7x-6} \right)$

Now we can use partial fractions on $\left( \frac{2x-11}{5x^2+7x-6} \right)$.

Good luck

3. Originally Posted by TheEmptySet

Ah yes! I completely forgot about the long division bit. Thanks for the help. I'll work it out!
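To double-check the long division above with exact arithmetic, and to finish the step the thread stops before, here is a sketch of my own (the cover-up method below is one standard way to get $A$ and $B$):

```python
from fractions import Fraction

def lhs(x):
    return Fraction(-x**2 - x - 1, 5 * x**2 + 7 * x - 6)

def rhs(x):
    # result of the long division: -1/5 + (1/5)(2x - 11)/(5x^2 + 7x - 6)
    return Fraction(-1, 5) + Fraction(2 * x - 11, 5 * (5 * x**2 + 7 * x - 6))

# exact agreement at integer points away from the poles x = 3/5 and x = -2
for x in [1, 2, 5, 10, -1, -3]:
    assert lhs(x) == rhs(x)

# cover-up method on (2x - 11)/((5x - 3)(x + 2)) = A/(5x - 3) + B/(x + 2)
x0 = Fraction(3, 5)                      # root of 5x - 3
A = (2 * x0 - 11) / (x0 + 2)             # evaluates to -49/13
x1 = Fraction(-2)                        # root of x + 2
B = (2 * x1 - 11) / (5 * x1 - 3)         # evaluates to 15/13
assert A == Fraction(-49, 13) and B == Fraction(15, 13)

# sanity check of the partial-fraction identity at sample points
for x in [1, 2, 5, 10, -1, -3]:
    assert Fraction(2 * x - 11, 5 * x**2 + 7 * x - 6) == A / (5 * x - 3) + B / (x + 2)
```

So the decomposition is $-\frac{1}{5}+\frac{1}{5}\left(\frac{-49/13}{5x-3}+\frac{15/13}{x+2}\right)$, which can then be integrated term by term.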
http://mathhelpforum.com/pre-calculus/56170-parabola-question.html
# Thread:

1. ## parabola question

Hi guys, I need help finding the equation of the parabola with focus (1,6) and directrix y = 0. I know the distance from the focus to the directrix will be 6, so the distance from the vertex to the focus will be 3. I just don't know how to use the parabola formula to determine the equation. Cheers.

2. Originally Posted by jvignacio

again, see here

3. Originally Posted by Jhevon

I just made an attempt. Is the answer $y = \frac{x^2-2x+37}{12}$ correct?
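One way to check the answer in post 3 (my own sketch, not from the thread) is to verify numerically that points on the curve $y = \frac{x^2-2x+37}{12}$ are equidistant from the focus $(1,6)$ and the directrix $y=0$, which is exactly the defining property of the parabola:

```python
from math import hypot, isclose

def y_curve(x):
    return (x**2 - 2 * x + 37) / 12.0   # the candidate equation from post 3

focus = (1.0, 6.0)

for x in [-5.0, -1.0, 0.0, 1.0, 2.5, 7.0]:
    y = y_curve(x)
    dist_focus = hypot(x - focus[0], y - focus[1])
    dist_directrix = abs(y)             # distance to the line y = 0
    assert isclose(dist_focus, dist_directrix)
```

The distances agree at every sample point, so the posted answer does satisfy the focus-directrix definition.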
http://mathhelpforum.com/differential-geometry/134361-following-function-uniformly-continuous-print.html
# Is the following function uniformly continuous?

• March 17th 2010, 06:38 PM
Pinkk
Is the following function uniformly continuous?

Let $f(x)=x^{2}\sin(\frac{1}{x}),\ x\ne 0;\ f(0)=0$. Is $f$ uniformly continuous on $\mathbb{R}$? Now this one I'm completely stumped on and I don't even know where to begin.

• March 17th 2010, 06:46 PM
Drexel28
Quote: Originally Posted by Pinkk

Note that the function is differentiable and that

$f'(x)=\begin{cases}0 & \mbox{if} \quad x=0 \\ 2x\sin\left(\tfrac{1}{x}\right) -\cos\left(\tfrac{1}{x}\right) & \mbox{if} \quad x\ne 0\end{cases}$

and so, since $|\sin t|\leqslant |t|$ gives $\left|x\sin\left(\tfrac{1}{x}\right)\right|\leqslant 1$, we get $|f'(x)|\leqslant 2\left|x\sin\left(\tfrac{1}{x}\right)\right|+\left|\cos\left(\tfrac{1}{x}\right)\right|\leqslant 2+1=3$. Thus, your function has a bounded derivative and is thus Lipschitz and so trivially uniformly continuous.

• March 17th 2010, 07:07 PM
Pinkk
Ah okay, makes sense: if $|f'|\le M$ for some $M>0$ and $|b-a|<\frac{\epsilon}{M}$, then by the mean value theorem $|f(b)-f(a)| = |f'(x)||b-a|$ for some $x$ between $a$ and $b$, so $|f(b)-f(a)|\le M|b-a| < M\cdot\frac{\epsilon}{M}=\epsilon$, if I'm not mistaken. Thanks again, man.

• March 17th 2010, 07:08 PM
Drexel28
Quote: Originally Posted by Pinkk

Good call.

All times are GMT -8.
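A quick randomized sanity check of the Lipschitz property (my own sketch, not part of the thread). I use the constant 3, which bounds $|2x\sin(\tfrac{1}{x})-\cos(\tfrac{1}{x})|$ via $|\sin t|\leqslant |t|$; any uniform bound on $|f'|$ serves as a Lipschitz constant:

```python
from math import sin
import random

def f(x):
    return x * x * sin(1.0 / x) if x != 0 else 0.0

# spot-check |f(b) - f(a)| <= L |b - a| with L = 3, a uniform bound for |f'| on R
random.seed(0)
L = 3.0
for _ in range(10_000):
    a = random.uniform(-50.0, 50.0)
    b = random.uniform(-50.0, 50.0)
    assert abs(f(b) - f(a)) <= L * abs(b - a) + 1e-9
```

No counterexample turns up, consistent with the mean value theorem argument in the thread.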
http://mathoverflow.net/questions/79374?sort=oldest
## Symbol map in Getzler calculus

I hope someone can help me, although this question is rather specific. I am reading John Roe's chapter on Getzler symbols in "Elliptic operators, topology and asymptotic methods" to understand the proof of the Atiyah-Singer index theorem.

• So, for the differential operators on functions of $M$ (say, $D(M)$), there is a symbol map to the space of constant coefficient operators (with respect to a Cartesian chart) on $T_p M$, named $C(TM)$. (This map seems quite natural, because $C_m(TM)$ is isomorphic to $D_m(M)/D_{m-1}(M)$.)

• By the same principle, there is a symbol map from the Clifford algebra $Cl(TM)$ to the exterior algebra $\Lambda^\bullet TM$.

Now, regarding differential operators on the spinor bundle $\Sigma M$, we have $D(\Sigma M)=D(M) \otimes Cl(TM)$. And so I thought: looking at the above, I would consider the symbol map from $D(\Sigma M)$ to sections of $C(TM) \otimes \Lambda^\bullet TM$. But that is not the one constructed in the Getzler calculus. Rather, the symbol map used there maps $D(\Sigma M)$ to sections of $P(TM) \otimes \Lambda^\bullet TM$, where $P(TM) = \mathbb{C}[TM]\otimes C(TM)$, the space of constant coefficient operators on $TM$ with polynomial coefficients.

And here is my question: why is that? I guess this choice is part of the ingenuity of the whole construction, but can someone give me a motivation? What does this symbol map do that the one I would have thought of cannot?

Just for interest, if someone stumbles across this question later: the symbol map defined in Roe's book mentioned above is flawed in the sense that it does not have the properties needed for the proof. This can be fixed, however, and also in a quite natural way. So my question basically came up because of the flaw in the book.
– Kofi Nov 25 2011 at 21:42

## 1 Answer

Actually you can imitate the Getzler calculus using your symbol map $D(\Sigma M) \to C(TM) \otimes \bigwedge TM$ and you will still prove an interesting theorem, just not the Atiyah-Singer index theorem. In fact the theorem you will prove is basically Weyl's asymptotic formula for the eigenvalues of the Laplacian. The problem is that your symbol map doesn't use the Clifford module structure of the fibers of the spinor bundle in any essential way.

The natural way to fix this problem is to change the symbol map so that Clifford multiplication by a tangent vector becomes a first order operator instead of a 0th order operator, and indeed this is a property possessed by the Getzler symbol. Note that this gives the Dirac operator $D = \sum_i c(\partial_i) \partial_i$ Getzler order 2. Since the local approach to the Atiyah-Singer index theorem is based on the heat equation (due to the McKean-Singer formula), it is crucial that $D^2$ also have Getzler order 2. It's not obvious how to construct a symbol calculus in which both $D$ and $D^2$ have order 2, and achieving this motivates many of the difficult aspects of the Getzler calculus. -
http://math.stackexchange.com/questions/45406/prime-factorization-rules?answertab=votes
# Prime factorization rules

When you have a number like 81, is it safe to assume that if the number can't be divided by 2 or 3, it's prime if it ends with a 1? -

## 2 Answers

On a related note, you should realize that a number which ends in a 1 is NEVER divisible by 2, since it is odd. Also, ending with a 1 has nothing to do with the number being prime, and in fact there are infinitely many such primes by Dirichlet's theorem on primes in arithmetic progressions, since 11, 21, 31, 41, ... is an arithmetic progression. In fact, the primes which end in 1 have the same relative density as those ending in 3, 7, or 9 (the only other possibilities which lead to infinitely many primes, since all others are divisible by either 2 or 5). -

1 Moreover there are infinitely many composites ending in $1$. More precisely, knowing that a large number has final decimal digit $1$ does not affect its "probability" of being prime, a consequence of the Prime Number Theorem in Arithmetic Progressions. – Pete L. Clark Jun 14 '11 at 22:43

1 +1: @Rofler: Nice answer! – amWhy Jun 14 '11 at 22:49

+1 @Rofler: Well put. – Jack Henahan Jun 14 '11 at 23:25

Definitely not, the smallest two examples are $91=7\times 13$ and $121=11\times 11$. (I am not counting $1$ as an example, though technically it qualifies!) In fact, there are infinitely many non-primes of the required type, that is, ending in a $1$ and not divisible by $3$. There are also infinitely many primes of that type. With smallish numbers, a substantial proportion of the numbers of the required type are prime. But after a while, the primes of that type "thin out". If you pick a huge number of that type "at random", then with high probability it will be a non-prime.

Comment: You can "roll your own" counterexamples. 
Suppose that one of the following holds:

(i) $a$ and $b$ end in $1$ and are not divisible by $3$, or

(ii) $a$ and $b$ end in $9$ and are not divisible by $3$, or

(iii) one of $a$ and $b$ ends in a $3$, the other in a $7$, and neither is divisible by $3$.

Then $a \times b$ ends in a $1$, is not divisible by $3$, and is not prime. So for example pick $a=11$, $b=31$. Their product $341$ ends in a $1$, is not divisible by $3$, and is not prime. -
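These "roll your own" recipes are easy to check by machine. A minimal sketch in Python (the three sample pairs are my own picks, one per rule; only the first appears in the answer above):

```python
def is_prime(n):
    # Trial division; fine for numbers this small.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# One pair per rule: (i) both factors end in 1, (ii) both end in 9,
# (iii) one ends in 3, the other in 7; none is divisible by 3.
for a, b in [(11, 31), (19, 29), (13, 17)]:
    n = a * b
    assert n % 10 == 1        # the product ends in 1
    assert n % 3 != 0         # ... is not divisible by 3
    assert not is_prime(n)    # ... and is certainly not prime
    print(f"{a} * {b} = {n}")
```

The first pair reproduces the $11 \times 31 = 341$ example from the answer; the other two illustrate rules (ii) and (iii) with $551$ and $221$.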
http://mathoverflow.net/questions/11456/unpointed-brown-representability-theorem/11538
## Unpointed Brown representability theorem

The classical Brown representability theorem states: Denote by $hCW_*$ the homotopy category of pointed CW-complexes. Let $F : hCW_* \to Set_*$ be a contravariant functor. Then $F$ is representable if and only if

• $F$ respects coproducts, i.e. $F(\vee_{i \in I} X_i) = \prod_{i \in I} F(X_i)$ for all families $X_i$ of pointed CW-complexes.

• $F$ satisfies a sort of Mayer–Vietoris axiom: if $X$ is a pointed CW-complex which is the union of two pointed subcomplexes $A,B$, then the canonical map $F(X) \to F(A) \times_{F(A \cap B)} F(B)$ is surjective1.

What about omitting the base points? So let $F : hCW \to Set$ be a contravariant functor that satisfies the analogous properties (replace the wedge sum by the disjoint union). Is $F$ then representable? I'm not sure whether we can just copy the proof of the pointed case (which can be found, e.g., in Switzer's book "Algebraic Topology - Homology and Homotopy", Representability Theorems). For example, $F(pt)$ can be anything (in contrast to the pointed case); it will be the set of path components of the classifying space. Besides, the proof uses homotopy groups and in particular the famous theorem of Whitehead, which deal with pointed CW-complexes. Nevertheless, I hope that $F$ is representable ... what do you think?

As a first step, we may define for every $i \in F(pt)$ the subfunctor $F_i$ of $F$ by $F_i(Y) = \{f \in F(Y) : \forall y : pt \to Y : f|_{y} = i \in F(pt)\}$, which should be thought of as the connected component associated to $i$. Then it's not hard to show that $F_i$ satisfies the same properties as $F$ and that $F_i = [-,X_i]$ implies $F = [-,\coprod_i X_i]$. In other words, we may assume that $F(pt)=pt$ (so that the classifying space will be connected).

1 You can't expect it to be bijective, cf. 
question about categorical homotopy colimits -

1 The answer to this question turns out to be negative. The reader who comes across this question should see the following question and the answers by Karol Szumiło: mathoverflow.net/questions/104866/… – Tyler Lawson Aug 20 at 14:58

## 3 Answers

This is a copy of my answer to http://mathoverflow.net/questions/104866/brown-representability-for-non-connected-spaces/ which I repost here per request in the comment.

A negative answer to the question can be concluded from this paper:

Freyd, Peter; Heller, Alex, Splitting homotopy idempotents. II. J. Pure Appl. Algebra 89 (1993), no. 1-2, 93–106.

This paper introduces a notion of conjugacy idempotent. It is a triple `$(G, g, b)$` consisting of a group `$G$`, an endomorphism `$g \colon G \to G$` and an element `$b \in G$` such that for all `$x \in G$` we have `$g^2(x) = b^{-1} g(x) b$`. The theory of conjugacy idempotents can be axiomatized by equations, so there is an initial conjugacy idempotent `$(F, f, a)$`. The Main Theorem of the paper says (among other things) that `$f$` does not split in the quotient of the category of groups by the conjugacy congruence.

Now `$f$` induces an endomorphism `$B f \colon B F \to B F$` which is an idempotent in `$\mathrm{Ho} \mathrm{Top}$` and it follows (by the Main Lemma of the paper) that it doesn't split. It is then easily concluded that `$(B f)_+ \colon (B F)_+ \to (B F)_+$` doesn't split in `$\mathrm{Ho} \mathrm{Top}_*$`. The map `$(B f)_+$` induces an idempotent of the representable functor `$[-, (B F)_+]_*$` which does split since this is a `$\mathrm{Set}$` valued functor. Let `$H \colon \mathrm{Ho} \mathrm{Top}_*^\mathrm{op} \to \mathrm{Set}$` be the resulting retract of `$[-, (B F)_+]_*$`. It is half-exact (i.e. satisfies the hypotheses of Brown representability) as a retract of a half-exact functor. However, it is not representable since a representation would provide a splitting for `$(B f)_+$`. 
The same argument with `$B f$` in place of `$(B f)_+$` shows the failure of Brown representability in the unbased case. -

Yes, Brown representability holds for such functors. There are not really any material differences between this and the proof of Brown representability in the pointed case.

EDIT: My previous version of this was not rigorous enough. I was trying to be clever and get away with just simple cell attachments, which only work if you already know that the functor is represented by a space. Sorry for the delay in reworking, but this particular proof has enough details that it takes time to write up.

As you say, you begin by decomposing such functors, so without loss of generality $F(pt)$ is a single point. Start with `$X_{-1}$` as a point. Assume you've inductively constructed an $(n-1)$-dimensional complex `$X_{n-1}$` with an element `$x_{n-1} \in F(X_{n-1})$` so that, for all CW-inclusions $Z \to Y$ of finite CW complexes with $Y$ formed by attaching a $k$-cell for $k < n$, the map ```$$ [Y,X_{n-1}] \to [Z,X_{n-1}] \times_{F(Z)} F(Y) $$``` is surjective.

Now, define a "problem" of dimension $n$ to be a CW-inclusion $Z \to Y$ where $Y$ is a subspace of $\mathbb{R}^\infty$ formed by attaching a single $n$-cell to $Z$, together with an element of ```$$ Map(Z,X_{n-1}) \times_{F(Z)} F(Y). $$``` The fact that $Y$ has a fixed embedding in $\mathbb{R}^\infty$ means that there is a set of problems $S$, whose elements are tuples `$(Z_s,Y_s,f_s,y_s)$` with `$f_s$` a map `$Z_s \to X_{n-1}$` and `$y_s$` a compatible element of $F(Y_s)$.

Let `$X_n$` be the pushout of the diagram ```$$ X_{n-1} \leftarrow \coprod_{s \in S} Z_s \rightarrow \coprod_{s \in S} Y_s $$``` where the lefthand maps are defined by the maps `$f_s$` and the righthand maps are the given CW-inclusions. 
This is a relative CW-inclusion formed by attaching a collection of $n$-cells; therefore, `$X_n$` still has the extension property for relative cell inclusions of dimension less than $n$.

The space `$X_n$` is homotopy equivalent to the homotopy pushout of the given diagram, which is formed by gluing together mapping cylinders. Specifically, `$X_n$` is weakly equivalent to the space ```$$ X_{n-1} \times \{0\} \cup (\coprod_S Z_s \times [0,1]) \cup (\coprod_S Y_s \times \{1\}) $$``` which decomposes into two CW-subcomplexes: ```$$ A = X_{n-1} \times \{0\} \cup (\coprod Z_s \times [0,1/2]) $$``` which deformation retracts to `$X_{n-1}$`, and ```$$ B = (\coprod Z_s \times [1/2,1]) \cup (\coprod Y_s \times \{1\}) $$``` which deformation retracts to `$\coprod Y_s$`, with intersection `$A \cap B \cong \coprod Z_s$`.

The Mayer-Vietoris property and the coproduct axiom then imply that there is an element `$x_n \in F(X_n)$` whose restriction to $A$ is `$x_{n-1}$` and whose restriction to $B$ is `$\prod y_s$`.

Taking colimits, you have a CW-complex $X$ with an element $x \in F(X)$ (constructed using a mapping telescope + Mayer-Vietoris argument) so that, for all CW-inclusions $Z \to Y$ obtained by attaching a single cell, the map ```$$ [Y,X] \to [Z,X] \times_{F(Z)} F(Y) $$``` is surjective.

Now you need to show that for any finite CW-complex $K$, $[K,X] \to F(K)$ is a bijection. First, surjectivity is straightforward by induction on the skeleta of $K$. More specifically, for any $K$ with subcomplex $L$, element of $F(K)$, and map $L \to X$ realizing the restriction to $F(L)$, you induct on the cells of $K\setminus L$. Then, injectivity: if you have two elements $K \to X$ with the same images in $F(K)$, you apply the above-proven stronger surjectivity property to the inclusion `$K \times \{0,1\} \to K \times [0,1]$` to show that there is a homotopy between said maps. -

$Y_n$ is the disjoint union of $X_{n-1}$ and spheres, right? 
– Martin Brandenburg Jan 12 2010 at 15:28

No, it's the wedge because of the simplifying assumption F(pt) = pt. If you take a disjoint union then `$Y_n$` no longer agrees with `$X_{n-1}$` on low spheres. – Tyler Lawson Jan 12 2010 at 15:41

hmm, how do you construct elements in $F(Y_n)$ when there is no wedge-axiom? I think the disjoint union works here since $[S^k,-]$ commutes with disjoint unions (since $S^k$ is path-connected) and $[S^k,S^n]$ has exactly one element for $k<n$. – Martin Brandenburg Jan 12 2010 at 15:50

You cut each sphere you're attaching into two discs, one of which is joined to $X_{n-1}$ at the center. Then $Y_n$ is a union of a coproduct of discs, along a coproduct of equators, with something that retracts down to $X_{n-1}$. As for the disjoint union, the set $[S^k, X_{n-1} \coprod S^n]$ is a disjoint union of a point and $[S^k, X_{n-1}]$ for $k < n$, and in particular does not agree with it. – Tyler Lawson Jan 12 2010 at 16:01

I don't see why this should work, because $F(S^n) \to F(D^n_+) \times F(D^n_-)$ is not injective; then there is no reason why the naive preimages of elements of $F(S^n)$ are preimages. – Martin Brandenburg Jan 13 2010 at 15:59

An answer to this is started in Boardman's Stable Operations in Generalized Cohomology (Handbook of Algebraic Topology, also available via Steve Wilson's homepage; Wilson and Boardman - together with Johnson - cowrote the follow-up paper in the same volume). Theorem 3.6 states:

Let $h(-)$ be an ungraded cohomology theory as above [same conditions as in the question except that it lands in abelian groups]. 
Then:

(a) $h(-)$ is represented in $\operatorname{Ho}$ [the homotopy category of spaces homotopy equivalent to CW-complexes] by an $H$-space $H$, with a universal class $\iota \in h(H,o) \subset h(H)$ that induces an isomorphism $\operatorname{Ho}(X,H) \cong h(X)$ of abelian groups by $f \mapsto h(f)\iota$ for all $X$;

(b) For any cohomology theory $k(-)$, operations $\theta : h(-) \to k(-)$ correspond to elements $\theta \iota \in k(H)$.

The proof given depends on references, which is why I say that the answer is "started" in this paper. The argument goes:

1. Brown representability gives a based connected space representing $h(-,o)$ on based connected spaces.
2. West shows that $h(-,o)$ is represented on all based spaces (i.e. drops the connectedness assumption).
3. Then the "disjoint basepoint" trick represents $h(-)$ on all spaces.

Now that I look at it, the essence of the "disjoint basepoint" trick probably does need the additional assumption of the functor landing in abelian groups, since it uses the split short exact sequence $$0 \to h(X,o) \to h(X) \to h(o) \to 0$$ to relate relative and absolute cohomology. Thus $h(X) \cong h(X^+,o)$. So for abelian groups you are fine. Is it necessary that you land in Set? -
http://mathoverflow.net/questions/48656/is-delignes-central-extension-sofic
## Is Deligne’s central extension sofic?

In P. Deligne, Extensions centrales non résiduellement finies de groupes arithmétiques, C. R. Acad. Sci. Paris, série A-B, 287, 203–208, 1978, Deligne proves the existence of a certain central extension of a residually finite group. See the section on Deligne's central extension in Cornulier's paper for a quick discussion of the group.

A countable, discrete group $\Gamma$ is *sofic* if for every $\epsilon>0$ and finite subset $F$ of $\Gamma$ there exists an $(\epsilon,F)$-almost action of $\Gamma$. See, for example, Theorem 3.5 of the nice survey of Pestov. Gromov asked whether all countable discrete groups are sofic. It is now widely believed that there should be a counterexample to this.

Is Deligne's central extension sofic?

This question is related to the one here, but is not sharpened enough to be an answer. (In fact, in its original form not even to be a question! Thank you Henry and Andreas.) -

2 A few points: you haven't said anything about $H$, for example it could be the trivial subgroup. In general, the largest such $H$ I thought was called the finite residuum of $G$ (although that doesn't seem very popular with Google), and is the intersection of all finite index subgroups of $G$. The distinction you make between "every finite index subgroup" and "every finite index normal subgroup" turns out not to matter - you can always drop by a finite index to gain normality (this doesn't depend on finite generation). – ndkrempel Dec 8 2010 at 18:12

4 The interest of Deligne's example is not that it isn't residually finite. The interest is that it's a non-residually finite group which is a central extension of a residually finite (indeed, linear) group. – HW Dec 8 2010 at 18:15

2 (The arxiv link is broken. Here is a working one: arxiv.org/abs/0804.3968 ) – Greg Graviton Dec 8 2010 at 18:26

Thank you, Greg, Henry and ndkrempel! 
– Jon Bannon Dec 8 2010 at 21:08

## 2 Answers

If I understand your question correctly then I think you've already answered it! The 'above property' is precisely 'not being residually finite'. To see this, just consider the intersection of all finite-index subgroups: if it's trivial, your group is residually finite; if not, then that intersection is your subgroup $H$.

As we found in a previous question of yours, Baumslag's group $B=\langle a,b\mid (a^b)^{-1}a(a^b)a^{-2}\rangle$, which is certainly not residually finite, is sofic. -

1 Incidentally, I think the question in the title is still an interesting one. – HW Dec 8 2010 at 18:15

Hi Henry. Sorry about the nonsense! Please have another look. – Jon Bannon Dec 8 2010 at 20:35

1 Gladly! Unfortunately, I think it's pretty unlikely that I can say anything useful about the modified question. – HW Dec 8 2010 at 21:30

Thanks again, Henry. – Jon Bannon Dec 8 2010 at 21:37

Deligne's central extension is certainly an interesting group to consider. I can say something about the question whether this group is hyperlinear. For hyperlinearity, you ask for approximation of the group law by unitary matrices instead of permutations. This is (as you of course know) weaker than being sofic, but being sofic implies that the group is hyperlinear. Hence, it is a necessary condition and a natural generalization.

In the paper Examples of hyperlinear groups without factorization property, Groups Geom. Dyn. 4 (2010), no. 
1, 195–208, I observed that a central extension of a group $G$ by an abelian group $A$ is hyperlinear if and only if the twisted group von Neumann algebra $L_{\phi \circ \alpha} G$ is embeddable, where $$\alpha \colon G \times G \to A$$ is the 2-cocycle which classifies the central extension, and $\phi$ belongs to a dense set in the Pontrjagin dual of $A$. This is the same as asking for an approximation of the group laws of $G$ by unitaries on a finite-dimensional Hilbert space, twisting the multiplication with the cocycle $\phi \circ \alpha$.

The twisted setup is natural anyway, and I propose to call an $S^1$-valued 2-cocycle on a group $G$ hyperlinear if you can find such an approximation by unitaries. It seems natural to consider the possibility that there are $S^1$-valued 2-cocycles even on residually finite groups which are not hyperlinear. However, I do not have any examples. On the other hand, if $G$ is residually finite and you can at the same time approximate $\phi \circ \alpha$ for sufficiently many $\phi$'s by suitable almost-2-cocycles defined on the finite quotients, you are in business. This would show that the central extension is at least hyperlinear. I do not know about sofic in place of hyperlinear, since in the combinatorial setup it is not possible to disintegrate the central extension over the Pontrjagin dual of $A$. -

Thanks for sharing your ideas on this, Andreas. – Jon Bannon Dec 8 2010 at 21:35
http://mathhelpforum.com/algebra/210760-formula-help.html
# Thread:

1. ## Formula help.

If two persons A and B start at the same time in opposite directions from two points and arrive at the two opposite points in 'a' and 'b' hours, respectively, after having met, then A's speed/B's speed $= \sqrt{b}/\sqrt{a}$. Can you help with how this formula came up? Appreciate your help.

2. ## Re: Formula help.

Is it a physics problem? Is it an accelerated movement? I'm not sure what you meant. I guess you meant A starts from a and gets to b and B starts from b and gets to a, but in that case I get to Vb/Va = -1. Also, by root did you mean square root? Please, explain a little bit more or write the exercise here because it seems to me that something is missing...

3. ## Re: Formula help.

No, this is a basic math problem... Yes, the root refers to square root.

4. ## Re: Formula help.

I've been trying to find a way, still seems very confusing. Do you have any more information about the starting and ending points? Sorry to keep asking but up to now I keep thinking that something is missing. However, I'll tell you what I have thought so far and maybe it can help:

A and B go in opposite directions and they're going to meet. When they meet, they keep moving and reach two different points at two different times. Let's say that 0 is our meeting point, A moves to +x and B to -x. I define two points that are going to be reached by each person: $P_a$ and $P_b$. I know that a is the time A takes to reach $P_a$ and same thing with b. Finally, I get two equations:

$a \cdot V_a = P_a$

$-b \cdot V_b = -P_b$

If I sum both of them:

$a \cdot V_a - b \cdot V_b = P_a - P_b$

Then I got stuck.

5. ## Re: Formula help.

I have an example problem using this formula... A man starts from B to K, another from K to B at the same time. After passing each other they complete their journeys in 3 1/3 and 4 4/5 hours, respectively. Find the speed of the second man if the speed of the first is 12 km/hr. 
1st man's speed / 2nd man's speed $= \sqrt{4\tfrac{4}{5}} / \sqrt{3\tfrac{1}{3}} = 6/5$, so the 2nd man's speed $= 10$ km/hr.

6. ## Re: Formula help.

Finally I think I got it. I'll take your first example: person A goes +x and meets B at a certain time. B goes -x. That time would be the same for both because they started at the same time.

$A \rightarrow M \leftarrow B$

where M is the meeting point. The distance between A and M and between B and M is unknown, since we don't know their velocities, but we can obtain them using the second part of the problem, where A gets to where B was.

A: $M \rightarrow B$

B: $A \leftarrow M$

The problem says that after they meet (or after they get to M), after "a" hours A arrives to where B was and after "b" hours B arrives where A was. Now we can find an expression for each distance:

$\overline{AM} = b \cdot v_B$

$\overline{BM} = a \cdot v_A$

So, what we've done here is get each segment's length by multiplying velocity by time. Notice I'm using the second part of the problem, after the meeting, because I have enough information there. Person A goes from M to B in a hours. Now we need to find the time it takes A and B to get to M, which, I repeat, is the same for both. Let's call it $t_M$. By cross-multiplying:

M to A in b hours: $\overline{AM} \longrightarrow b$

$\overline{BM} \longrightarrow t_M = \frac{\overline{BM} \cdot b}{\overline{AM}} = a \cdot b \cdot \frac{v_A}{b \cdot v_B} = a \cdot \frac{v_A}{v_B}$

M to B in a hours: $\overline{BM} \longrightarrow a$

$\overline{AM} \longrightarrow t_M = \frac{\overline{AM} \cdot a}{\overline{BM}} = a \cdot b \cdot \frac{v_B}{a \cdot v_A} = b \cdot \frac{v_B}{v_A}$

Finally, both results are equal, so:

$a \cdot \frac{v_A}{v_B} = b \cdot \frac{v_B}{v_A}$

$\frac{v_A^2}{v_B^2} = \frac{b}{a}$

$\frac{v_A}{v_B} = \sqrt{\frac{b}{a}}$

That's it! I tried to be as clear as possible but, please, if you didn't understand anything ask me and I'll try to make it clearer. 
EDIT: maybe it's not quite relevant to point this out since it's a math problem, but the cross-multiplying would be impossible if the velocity changed through time. In this case, it was constant.

7. ## Re: Formula help.

@russo! Great help! Thanks!

8. ## Re: Formula help.

@Russo: I have a question, sorry for asking this very late... how do we conclude that the time taken by A and B to reach M is the same when we don't know their speeds up to the meeting point M?

9. ## Re: Formula help.

Because if two things meet at the same point, it must be at the same time. Let me give you an example. Imagine you are walking down the street and you meet someone you know. The moment you start talking, both of you can look at your watches and expect to see the same time. That's why, when the problem says that both start at the same time (let's say t = 0), at the precise moment they meet they're both at t = te. Also, when you're talking about encounters you can assume that they're at the same time and position (unless they are in different reference systems).

10. ## Re: Formula help.

Thanks, that is nice to learn!
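As a numeric sanity check on the derivation above (a minimal sketch; the variable names are mine, not from the thread): with $v_A = 12$ km/hr, $a = 3\tfrac13$ hr and $b = 4\tfrac45$ hr, the formula should give $v_B = 10$ km/hr, and both expressions for the time $t_M$ to the meeting point should agree.

```python
import math

v_A = 12.0       # first man's speed, km/hr
a = 3 + 1/3      # hours A needs after the meeting point
b = 4 + 4/5      # hours B needs after the meeting point

# v_A / v_B = sqrt(b / a)  =>  v_B = v_A / sqrt(b / a)
v_B = v_A / math.sqrt(b / a)
print(round(v_B, 6))  # -> 10.0

# Both expressions for the time to the meeting point must agree:
t_M_1 = a * v_A / v_B
t_M_2 = b * v_B / v_A
assert math.isclose(t_M_1, t_M_2)
print(round(t_M_1, 6))  # -> 4.0, the hours travelled before meeting
```

The check also recovers a fact the derivation uses implicitly: both walkers spend the same 4 hours before the meeting.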
http://physics.stackexchange.com/questions/39651/what-if-gamma-rays-in-electron-microscope?answertab=active
# What if $\gamma$-rays in an electron microscope?

I was reading about electron microscopes and learned that the electrons used have a wavelength far shorter than that of visible light. But the question I can't find an answer to is this: if gamma radiation has the shortest wavelength of all, why can't it be used to resolve even finer details in microscopy? -

## 3 Answers

As X-rays & $\gamma$-rays have very short wavelengths, one could think of building an X-ray or gamma-ray microscope. But the problem arises in focusing them. They can't be focused the way visible light is focused in a microscope, using refractive convex lenses, which provides a magnification of about 2000. Another problem with gamma rays is that they have very high ionizing power and interact with matter to the maximum extent, thereby destroying it (causing atomic decay).

On the other hand, we have electron microscopes, which work on the principle of the wave nature of moving electrons. Electrons accelerated through a potential difference of 50 kV have a wavelength of about 0.0055 nm (according to the de Broglie relation of wave-particle duality: $\lambda=\frac{h}{\sqrt{2meV}}=\frac{1.227}{\sqrt{V}}$ nm, with $V$ in volts). This is about $10^5$ times less than the wavelength of visible light, thereby multiplying the attainable magnification by about $10^5$.

If you've read enough about electron microscopes, you'll know that electrons can be focused easily using electric & magnetic fields, rather than resorting to something more complex... :) Even if physicists tried to focus gamma rays, their production and maintenance would be far too difficult and expensive, since $\gamma$-rays are typically produced by means of radioactive decays, which are biologically hazardous... - ok.. that focussing point of yours makes sense..!! 
– rafiki Oct 12 '12 at 15:52

There are phase plates and other techniques being developed to focus X-rays and thus create usable microscopes which promise resolution better than what is possible with refractive optics and visible light. However, these are not yet commercially available. Further, given the high penetrating power of gamma radiation, no equivalents are being explored with gamma radiation. Another reason could be the availability of electron microscopes, with what is now simpler technology, that can provide resolution down to tens of picometres. -

I think the main problem would be the high transmission of gamma radiation. It is almost unaffected by matter, so you cannot imprint information about your sample on it very efficiently. Then, there are also many other practical difficulties -- it would probably be rather complicated to create directed gamma beams, and you would need to use plenty of radioactive material and screen it well from radiating anywhere. -

i think instruments must be there to measure that precision and blocking radiation is much of a trivial job these days... given the device and observer need no contact and can be handled via robots... do u have some confirmed info on your first argument? – rafiki Oct 12 '12 at 12:27
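The de Broglie figure quoted in the first answer is easy to reproduce. A quick sketch using the non-relativistic formula with SI constants rounded to four digits:

```python
import math

h = 6.626e-34   # Planck constant, J*s
m = 9.109e-31   # electron rest mass, kg
e = 1.602e-19   # elementary charge, C
V = 50e3        # accelerating voltage, V

# lambda = h / sqrt(2 m e V), non-relativistic approximation
lam_nm = h / math.sqrt(2 * m * e * V) * 1e9
print(f"{lam_nm:.4f} nm")  # -> 0.0055 nm, matching the answer

# The shortcut from the answer, lambda = 1.227 / sqrt(V) nm, agrees:
assert abs(lam_nm - 1.227 / math.sqrt(V)) < 1e-4
```

(At 50 kV the relativistic correction is only about 2%, so the non-relativistic value quoted in the answer is a reasonable approximation.)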
http://mathoverflow.net/questions/89094?sort=oldest
## Is the ring of quaternionic polynomials factorial?

Denote by $\mathbb{H}[x_1,\dots,x_n]$ the ring of polynomials in $n$ variables with quaternionic coefficients, where the variables commute with each other and with the coefficients. Two polynomials $P,Q\in \mathbb{H}[x_1,\dots,x_n]$ are similar if $P=a Q b$ for some $a,b\in \mathbb{H}$.

A ring $\mathbb{K}$ is factorial if the equality $P_1\cdot\dots\cdot P_n=Q_1\cdot\dots\cdot Q_m$, where $P_1,\dots, P_n,Q_1,\dots,Q_m\in \mathbb{K}$ are irreducible (and noninvertible) elements, implies that $n=m$ and there is a permutation $s\in S_n$ such that $P_k$ is similar to $Q_{s(k)}$ for each $k=1,\dots,n$.

By [1, Theorem 1] and [2, Theorem 2.1] it follows that $\mathbb{H}[x]$ is factorial. Is the ring $\mathbb{H}[x,y]$ factorial?

This question is a continuation of the following ones:

http://mathoverflow.net/questions/79063/when-the-determinant-of-a-2x2-polynomial-matrix-is-a-square

http://mathoverflow.net/questions/62820/pythagorean-5-tuples

[1] Oystein Ore, Theory of non-commutative polynomials, Annals of Math. (II) 34, 1933, 480-508.

[2] Graziano Gentili and Daniele C. Struppa, On the Multiplicity of Zeroes of Polynomials with Quaternionic Coefficients, Milan J. Math. 76 (2008), 15-25, DOI 10.1007/s00032-008-0093-0.

## 1 Answer

Just to remove the question from the 'Unanswered' list: $(x-i)\cdot((x+i)(y+j)+1)=((y+j)(x+i)+1)\cdot(x-i)$, hence $\mathbb{H}[x,y]$ is not factorial.
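The identity in the answer can be checked by hand, since $x$ and $y$ commute with everything: $(x-i)(x+i)=(x+i)(x-i)=x^2+1$ is central, so both sides expand to $(x^2+1)(y+j)+x-i$. It can also be spot-checked by machine; below is a sketch using a minimal quaternion class (the class and the sample points are my own, not from the post):

```python
from itertools import product

class Quat:
    """Quaternion a + b*i + c*j + d*k with the standard multiplication table."""
    def __init__(self, a, b=0, c=0, d=0):
        self.q = (a, b, c, d)

    def __add__(self, other):
        return Quat(*(s + t for s, t in zip(self.q, other.q)))

    def __sub__(self, other):
        return Quat(*(s - t for s, t in zip(self.q, other.q)))

    def __mul__(self, other):
        a1, b1, c1, d1 = self.q
        a2, b2, c2, d2 = other.q
        return Quat(a1*a2 - b1*b2 - c1*c2 - d1*d2,
                    a1*b2 + b1*a2 + c1*d2 - d1*c2,
                    a1*c2 - b1*d2 + c1*a2 + d1*b2,
                    a1*d2 + b1*c2 - c1*b2 + d1*a2)

    def __eq__(self, other):
        return self.q == other.q

i, j, one = Quat(0, 1), Quat(0, 0, 1), Quat(1)

# x, y are central variables, so specialising them to integers spot-checks
# the polynomial identity  (x-i)((x+i)(y+j)+1) = ((y+j)(x+i)+1)(x-i).
for xv, yv in product(range(-3, 4), repeat=2):
    x, y = Quat(xv), Quat(yv)
    lhs = (x - i) * ((x + i) * (y + j) + one)
    rhs = ((y + j) * (x + i) + one) * (x - i)
    assert lhs == rhs
```

Specialising the central variables only spot-checks the identity, of course; the clean argument is the centrality of $x^2+1$ noted above.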
http://mathhelpforum.com/differential-geometry/140929-solved-bijection-between-b-b-b.html
# Thread: 1. ## [SOLVED] bijection between [a,b], [a,b) and (a,b)

This is another exercise from Charles Chapman Pugh's book, Real Mathematical Analysis, that I haven't been able to solve: to find a bijection between the intervals [a,b], [a,b) and (a,b).

2. Originally Posted by becko
This is another exercise from Charles Chapman Pugh's book, Real Mathematical Analysis, that I haven't been able to solve: to find a bijection between the intervals [a,b], [a,b) and (a,b).
Do you have to actually find them, or just show they exist?

3. While you can't "write them down", it's not hard to tell you how to form such a bijection. Of course, they can't be continuous functions, so don't expect to be able to write them as a simple "formula". For the [a, b] to [a, b) case, for example, map all irrationals in [a, b] (except a and b themselves, if they happen to be irrational) to themselves. Now write all the rational numbers in the interval in an ordered list (of course, the rationals are countable, so that can be done), starting with b first and a second (even if a and b are irrational). Map each $x_n$ in that list to $x_{n+1}$.

4. We will biject $\left[ {0,1} \right] \leftrightarrow \left( {0,1} \right)$ by the following: $f(0)=\frac{1}{2}~\&~f(1)=\frac{1}{3}$. If $x=\frac{1}{n}$ for some $n\ge 2$, then $f(x)=\frac{1}{n+2}$; otherwise, $f(x)=x$. Now define $g(x)=(b-a)x+a$; as a linear function, $g$ is a bijection, so $\left[ {0,1} \right] \leftrightarrow \left[ {a,b} \right]$. Can you show that $g \circ f \circ g^{ - 1} :\left[ {a,b} \right] \leftrightarrow \left( {a,b} \right)$?

5. I have to find the bijections explicitly, not just prove they exist. I don't expect a formula, but just a description of the bijection.

6. Thanks Plato and HallsofIvy. Of course, shift the rationals! That solves it!
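Plato's recipe can be spot-checked with exact rational arithmetic. The sketch below is my own rendering of that idea; in particular the shift by two on the sequence $0, 1, \tfrac12, \tfrac13, \dots$ is chosen so that the images of $0$, $1$ and the points $\tfrac1n$ stay pairwise distinct:

```python
from fractions import Fraction

def f(x):
    """Bijection [0,1] -> (0,1): move 0, 1 and the points 1/n one step along
    the sequence 1/2, 1/3, 1/4, ..., and fix every other point.
    Expects a Fraction in [0, 1]."""
    if x == 0:
        return Fraction(1, 2)
    if x == 1:
        return Fraction(1, 3)
    if x.numerator == 1:                 # x = 1/n with n >= 2
        return Fraction(1, x.denominator + 2)
    return x

samples = [Fraction(0), Fraction(1)] \
        + [Fraction(1, n) for n in range(2, 50)] \
        + [Fraction(2, 3), Fraction(3, 7)]
images = [f(x) for x in samples]

assert len(set(images)) == len(samples)      # injective on the sample
assert all(0 < y < 1 for y in images)        # lands strictly inside (0,1)
assert f(Fraction(2, 3)) == Fraction(2, 3)   # off the sequence: fixed
```

Composing with $g(x)=(b-a)x+a$ as in Plato's post then transports this to a bijection $[a,b]\leftrightarrow(a,b)$.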
http://mathoverflow.net/questions/114884/semiflows-and-continuous-symmetries
## Semiflows and continuous symmetries

Given a differential equation on a Banach space $\mathcal{X}$ of the form $\frac{d u}{d t} = F(u)$, it is often the case that $F$ is equivariant under translations, i.e. that $T_\alpha F(u) = F(T_\alpha u)$, where $(T_\alpha f)(x) = f(x+\alpha)$. The set of translation operators $\{T_\alpha : \alpha \in \mathbb{R}\}$ is a (unitary?) representation of the Lie group $\mathbb{R}$; its infinitesimal generator is $\tau = \frac{d}{d x}$.

Assume that $u_*$ is an equilibrium of the differential equation, so $F(u_*)=0$. My question is whether and when it is possible to write a solution to the above differential equation as $u(t) = T_{\alpha(t)} (u_* + v(t))$, where $v$ is transversal to $\tau u_*$ in the following way: take a linear form $\phi_* \in \mathcal{X}^*$ for which $\phi_* (\tau u_* ) = 1$; then $v \in \mathcal{H} = \{ v \in \mathcal{X}: \phi_*(v) = 0 \}$.

Loosely speaking, this means that we can split the action of the semiflow generated by the differential equation into a part transversal to the direction of the translation group action at $u_*$, followed by a translation. This specific form of the solution was stated as an Ansatz in Haragus & Iooss (Springer, 2011), 'Local bifurcations, center manifolds and normal forms in infinite-dimensional dynamical systems', p. 56.

Thanks in advance for any (useful) reply! Frits Veerman, Universiteit Leiden, The Netherlands

- Just a small marginal comment: unitary only makes sense if you have a Hilbert space... – András Bátkai Nov 29 at 13:38

- Very true. I have to admit that the applications I have in mind are all in a Hilbert space setting; I'm just looking for the most general context in which the above might be true. Thanks for your remark. – Frits Veerman Nov 30 at 14:15
http://mathhelpforum.com/number-theory/214383-euclidean-algorithm-more-than-two-variables-print.html
# Euclidean Algorithm, more than two variables

• March 7th 2013, 04:57 AM sirellwood
Euclidean Algorithm, more than two variables
Hi all. OK, so I'm more than comfortable with the Euclidean Algorithm when I have only two numbers, so that it is of the form an + bm = r. But now I have the equation 225a + 360b + 432c + 480d = 3, and I need to find integers a, b, c and d to satisfy this. I know the gcd is 3, and I realise how you can get to that by finding the gcd x of two of the numbers, then finding the gcd of x and one of the other numbers, and so forth. But now I am stuck wondering how I go about using the method of back substitution to find the integers a, b, c and d? Thanks!

• March 8th 2013, 04:42 AM BobP
Re: Euclidean Algorithm, more than two variables
We can get a solution with three applications of the Euclidean Algorithm; I don't know if it is possible to do any better. To begin with we may as well divide throughout by 3: 75a + 120b + 144c + 160d = 1. Next, group as 3(25a + 48c) + 40(3b + 4d) = 1. There are probably other groupings available; you just have to see that the three pairs of numbers (3,40), (25,48), (3,4) are each relatively prime. Now rewrite as 3A + 40B = 1 and use the EA to find values for A and B. A = -13, B = 1 are solutions (and there will be an infinity of others). That gets you 25a + 48c = -13 and 3b + 4d = 1. Solve these to get (again with an infinity of other possibilities) a = 299, b = -1, c = -156 and d = 1.

• March 8th 2013, 10:12 AM Zeno
Re: Euclidean Algorithm, more than two variables
I have a question. Will this always work? The original equation given was 225a + 360b + 432c + 480d = 3. Can I swap the 3 out for any arbitrary whole number n and still be guaranteed to always get a solution using this method? Like so: 225a + 360b + 432c + 480d = n. And then just choose any n on a whim (assume a whole-number choice).
• March 8th 2013, 10:42 AM Jame
Re: Euclidean Algorithm, more than two variables
n must be the gcd or a multiple of the gcd.

• March 9th 2013, 12:27 AM ILikeSerena
Re: Euclidean Algorithm, more than two variables
Quote: Originally Posted by sirellwood
Hi all. OK, so I'm more than comfortable with the Euclidean Algorithm when I have only two numbers, so that it is of the form an + bm = r. But now I have the equation 225a + 360b + 432c + 480d = 3, and I need to find integers a, b, c and d to satisfy this. I know the gcd is 3, and I realise how you can get to that by finding the gcd x of two of the numbers, then finding the gcd of x and one of the other numbers, and so forth. But now I am stuck wondering how I go about using the method of back substitution to find the integers a, b, c and d? Thanks!

Hi sirellwood! :)

You can extend the Euclidean Algorithm for this. The algorithm just keeps track of a set of equations that are true, reducing the remainder step by step, until it can be reduced no further. Like BobP, I'll reduce your equation first to 75a + 120b + 144c + 160d = 1. Then the initial set of equations is:

$\begin{matrix}a & b & c & d \end{matrix}$ $\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix}$

Select the equation with the lowest remainder, which is 75. Multiply that equation by a number and add it to each of the other equations. Select the number to multiply with such that the absolute value of the new remainder becomes as low as possible.
Like this:

$\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} \begin{matrix} * \\ -2 \\ -2 \\ -2 \end{matrix}$

The result is:

$\begin{bmatrix}1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ -2 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 75 \\ -30 \\ -6 \\ 10 \end{bmatrix}$

Two more iterations, and you'll get:

$\begin{bmatrix}-29&0&14&1 \\ 8&1&-5&0 \\ 16&0&-5&-3 \\ -6&0&2&1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ -2 \end{bmatrix}$

In other words, you get the solution:

$75 \cdot (-29) + 120 \cdot 0 + 144 \cdot 14 + 160 \cdot 1 = 1$

You can construct other solutions by adding any of the other equations, multiplied by any number, to it.

• March 9th 2013, 04:55 PM Zeno
Re: Euclidean Algorithm, more than two variables
ILikeSerena, can you explain the two iterations you left out? I'm just learning matrix multiplication and I don't understand how you created that final matrix. Thanks.

• March 9th 2013, 05:09 PM ILikeSerena
Re: Euclidean Algorithm, more than two variables
Quote: Originally Posted by Zeno
ILikeSerena, can you explain the two iterations you left out? I'm just learning matrix multiplication and I don't understand how you created that final matrix. Thanks.

Oh, I've already thrown those iterations away. Let's see if I can walk you through it. Note that the matrix is just a shorthand notation for a set of 4 equations.

$\begin{bmatrix}1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ -2 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 75 \\ -30 \\ -6 \\ 10 \end{bmatrix}$

In this set of equations, the lowest remainder is -6. To get the remainder 75 as low as possible, you need to add -6 a total of 12 times.
Since we're really talking about a set of equations, we need to multiply the third row by 12 and add the result to row one. What do you think the first row will become?

• March 9th 2013, 07:25 PM Zeno
Re: Euclidean Algorithm, more than two variables
Quote: Originally Posted by ILikeSerena
Oh, I've already thrown those iterations away. Let's see if I can walk you through it. Note that the matrix is just a shorthand notation for a set of 4 equations.

$\begin{bmatrix}1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ -2 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 75 \\ -30 \\ -6 \\ 10 \end{bmatrix}$

In this set of equations, the lowest remainder is -6. To get the remainder 75 as low as possible, you need to add -6 a total of 12 times. Since we're really talking about a set of equations, we need to multiply the third row by 12 and add the result to row one. What do you think the first row will become?

I have no clue what I'm doing here with matrix multiplication. I actually took a course in linear algebra about 30 years ago, and I did pretty well in that course, but apparently I've forgotten everything. OK, let me take a stab at this. Row three appears to be:

$\begin{bmatrix}-2 & 0 & 1 & 0 \end{bmatrix}$

So if I multiply that by 12 I should get:

$\begin{bmatrix}-24 & 0 & 12 & 0 \end{bmatrix}$

And then adding that to row one I should get:

$\begin{bmatrix}-23 & 0 & 12 & 0 \\ -2 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ -2 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} ? \\ ? \\ ? \\ ? \end{bmatrix}$

I forget how to multiply matrices; I'll take a stab at it:

$\begin{bmatrix}-23 & 0 & 12 & 0 \\ -2 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ -2 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 3 \\ -30 \\ -6 \\ 10 \end{bmatrix}$

Is that right? What do I do next then? Add 3 times row 4 to row 2, and 2 times row 3 to row 4?
$\begin{bmatrix}-23 & 0 & 12 & 0 \\ -8 & 1 & 0 & 3 \\ -2 & 0 & 1 & 0 \\ -6 & 0 & 2 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \\ -6 \\ -2 \end{bmatrix}$

Now what? Start again with the -2? Add 1 times row 4 to row 1, and -3 times row 4 to row 3? Let's see if that works:

$\begin{bmatrix}-29 & 0 & 14 & 1 \\ -8 & 1 & 0 & 3 \\ 16 & 0 & -5 & -3 \\ -6 & 0 & 2 & 1 \end{bmatrix} \begin{bmatrix} 75 \\ 120 \\ 144 \\ 160 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ -2 \end{bmatrix}$

I have no clue what I just did, but my final matrix appears to be different from the final matrix you got. But the coefficients for a, b, c, and d came out the same. I'll have to mess around with this some more when I'm not so tired. How do I know that I'm done? What's that -2 doing still dangling there?

• March 10th 2013, 12:02 AM ILikeSerena
Re: Euclidean Algorithm, more than two variables
It seems I don't have to explain anything. You've got it down! :D The reason your result is different is that you took different steps than I did. But that doesn't matter, as long as you did not make mistakes. To check for mistakes, you can verify each row. For instance, row 2 is [-8 1 0 3]. Multiply its entries respectively by [75 120 144 160], add them all up, and verify that the result is the same as the remainder, which is [ 0 ]. The key result is the row that has the remainder 1, which is the first row. It gives you a solution. Once you have that you can stop. Or else you can stop when you can't get the remainder any lower. However, there is more than one solution. Each row with remainder zero can be added to the row with the solution. Doing so will still yield 1 as remainder and get you another solution. That -2 is still dangling because you didn't clean it up yet. It's not really necessary, but if you want you can add row 1 twice to row 4, and then it will be zero as well.
The following is a three-parameter family of solutions. Let lcm be the least common multiple of the coefficients in your original equation, that is, lcm = lcm(75,120,144,160). Then

$a = -29 + \frac{lcm}{75} n_1$

$b = 0 + \frac{lcm}{120} n_2$

$c = 14 + \frac{lcm}{144} n_3$

$d = 1 - \frac{lcm}{160} n_1 - \frac{lcm}{160} n_2 - \frac{lcm}{160} n_3$

where $n_1, n_2, n_3$ are arbitrary integers. If you substitute these solutions in the original equation, you should see that the equation is satisfied for any choice of $n_1, n_2, n_3$.

• March 10th 2013, 10:56 AM Zeno
Re: Euclidean Algorithm, more than two variables
Quote: Originally Posted by ILikeSerena
That -2 is still dangling because you didn't clean it up yet. It's not really necessary, but if you want you can add row 1 twice to row 4, and then it will be zero as well.

I was actually thinking that last night but was too tired to continue. So it should be possible then to always clean a problem like this up to all zeros save for the first 1. Another question: is this matrix algebra basically doing the same thing as Euclid's Algorithm?

Quote: Originally Posted by ILikeSerena
The following is a three-parameter family of solutions. Let lcm be the least common multiple of the coefficients in your original equation, that is, lcm = lcm(75,120,144,160). Then

$a = -29 + \frac{lcm}{75} n_1$

$b = 0 + \frac{lcm}{120} n_2$

$c = 14 + \frac{lcm}{144} n_3$

$d = 1 - \frac{lcm}{160} n_1 - \frac{lcm}{160} n_2 - \frac{lcm}{160} n_3$

where $n_1, n_2, n_3$ are arbitrary integers. If you substitute these solutions in the original equation, you should see that the equation is satisfied for any choice of $n_1, n_2, n_3$.

That's nice additional information in case you ever need to come up with specific values for a, b, c, or d in a particular situation.

~~~~~

This wasn't even my problem, but I enjoyed going through it.
Ironically, I just happen to be studying Euclid's Elements right now, and I also need to get back into Linear Algebra again, so this problem served two purposes for me. (Rock)

• March 10th 2013, 11:08 AM ILikeSerena
Re: Euclidean Algorithm, more than two variables
Quote: Originally Posted by Zeno
Another question: is this matrix algebra basically doing the same thing as Euclid's Algorithm?

This is Euclid's algorithm, although normally it is applied to 2 variables and 2 equations. The matrix notation is circumstantial and only intended to organize what we're doing. I wouldn't even really call it matrix algebra. The notation is useful to keep track of systems of equations without writing everything out.

• March 11th 2013, 07:41 AM sirellwood
Re: Euclidean Algorithm, more than two variables
OK, so looking at people's replies, I see that no one is leaving it in the form 225a + 360b + 432c + 480d = 3. I left it like this, worked backwards through the algorithm, and produced the answers a = 3, b = -28, c = -196, d = 196. Plugging these back into 225a + 360b + 432c + 480d = 3, it satisfies the equation. But I am wondering if this isn't acceptable, seeing as no one suggested doing it this way? Thanks

• March 11th 2013, 11:08 AM ILikeSerena
Re: Euclidean Algorithm, more than two variables
Quote: Originally Posted by sirellwood
OK, so looking at people's replies, I see that no one is leaving it in the form 225a + 360b + 432c + 480d = 3. I left it like this, worked backwards through the algorithm, and produced the answers a = 3, b = -28, c = -196, d = 196. Plugging these back into 225a + 360b + 432c + 480d = 3, it satisfies the equation. But I am wondering if this isn't acceptable, seeing as no one suggested doing it this way? Thanks

The solutions are the same whether we divide by 3 or not. Dividing by 3 makes the numbers smaller, which is easier to deal with.
The algorithm also works with the original equation, although you won't get to a remainder of 1; instead you'll find a remainder of 3.
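The pairwise reduction described in this thread (BobP's grouping, and Jame's remark that the right-hand side must be a multiple of the gcd) can be automated in a few lines: run the two-number extended Euclidean algorithm repeatedly and fold the coefficients through. A minimal sketch; the function names are my own:

```python
def egcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def multi_egcd(nums):
    """Return (g, coeffs) with g = gcd(nums) = sum(c*n for c, n in zip(coeffs, nums))."""
    g, coeffs = nums[0], [1]
    for n in nums[1:]:
        g, x, y = egcd(g, n)
        # The old combination summed to the old gcd; scale it by x and append y,
        # so the new combination sums to g = old_g*x + n*y.
        coeffs = [c * x for c in coeffs] + [y]
    return g, coeffs

nums = [225, 360, 432, 480]
g, coeffs = multi_egcd(nums)
assert g == 3
assert sum(c * n for c, n in zip(coeffs, nums)) == 3
```

As Jame notes, any reachable right-hand side n is a multiple of the gcd; scaling every coefficient by n/g then gives a solution for that n.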
http://unapologetic.wordpress.com/2007/10/19/chain-homotopies-and-homology/?like=1&source=post_flair&_wpnonce=b0679ce6ea
# The Unapologetic Mathematician ## Chain Homotopies and Homology Sorry about going AWOL yesterday, but I got bogged down in writing another exam. Okay, so we’ve set out chain homotopies as our 2-morphisms in the category $\mathbf{Kom}(\mathcal{C})$ of chain complexes in an abelian category $\mathcal{C}$. We also know that each of these 2-morphisms is an isomorphism, so decategorifying amounts to saying that two chain maps are “the same” if they are chain-homotopic. Many interesting properties of chain maps are invariant under chain homotopies, which means that they descend to properties of this decategorified version. Alternately, some properties are defined by 2-functors, which means that if we apply a chain homotopy we change our answer by a 2-morphism in the target 2-category, which must itself be an isomorphism. I like to call these “homotopy covariants”, rather than “invariants”. Anyway, then the decategorification of this property is an invariant, and what I said before applies. The big one of these properties we’re going to be interested in is the induced map on homology. Let’s consider chain complexes $A$ and $B$, chain maps $f$ and $g$ from $A$ to $B$, and let’s say there’s a chain homotopy $\alpha:f\Rightarrow g$. The chain maps induce maps $\widetilde{f}:H_\bullet(A)\rightarrow H_\bullet(B)$ and $\widetilde{g}:H_\bullet(A)\rightarrow H_\bullet(B)$. I assert that $\widetilde{f}=\widetilde{g}$. To see this, first notice that passing to the induced map is linear. That is, $\widetilde{f}-\widetilde{g}=\widetilde{f-g}$. So all we really need to show is that a null-homotopic map induces the zero map on homology. But if $\alpha:f\Rightarrow0$ makes $f$ null-homotopic, then $f_n=d^B_{n+1}\circ\alpha_n+\alpha_{n-1}\circ d^A_n$. When we restrict $f_n$ to the kernel of $d^A_n$, this just becomes $d^B_{n+1}\circ\alpha_n$, which clearly lands in the image of $d^B_{n+1}$, which is zero in $H_n(B)$, as we wanted to show. 
Now if we have chain maps $f:A\rightarrow B$ and $g:B\rightarrow A$ along with chain homotopies $g\circ f\Rightarrow1_A$ and $f\circ g\Rightarrow1_B$, we say that $A$ and $B$ are “homotopy equivalent”. Then the induced maps on homology $\widetilde{f}:H_\bullet(A)\rightarrow H_\bullet(B)$ and $\widetilde{g}:H_\bullet(B)\rightarrow H_\bullet(A)$ are inverses of each other, and so the homologies of $A$ and $B$ are isomorphic.

This passage from covariance to invariance is the basis for why Khovanov homology works. We start with a 2-category $\mathcal{T}ang$ of tangles (which I’ll eventually explain more thoroughly). Then we pick a ring $R$ and consider the 2-category $\mathbf{Kom}(R\mathbf{-mod})$ of chain complexes over the abelian category of $R$-modules. We construct a 2-functor $K:\mathcal{T}ang\rightarrow\mathbf{Kom}(R\mathbf{-mod})$ that picks a chain complex for each number of free ends, a chain map for each tangle, and a chain homotopy for each ambient isotopy of tangles. Then two isotopic tangles are assigned homotopic chain maps — the chain map is a “tangle covariant”. When we pass to homology, we get tangle invariants, which turn out to be related to well-known knot invariants.

Posted by John Armstrong | Category theory

## 3 Comments »

1. Why call this 2-category “Kom”? I thought the Germans lost World War II and math terminology was mainly English now. Anyway: if one wants still more fun, one can notice that there are chain homotopies between chain homotopies, and so on ad infinitum. So, Kom is really a strict ∞-category. And one can explain this by noticing that a chain complex in C (at least a “one-sided” chain complex) is itself a sort of strict ∞-category: precisely a strict ∞-category internal to C. From this viewpoint, chain maps are functors, chain homotopies are natural transformations, and so on. But I’m very glad you’re not baffling your readers with that sort of stuff, at least not yet. Comment by | October 19, 2007 | Reply

2.
For some reason, the references I’m using to refresh my memory say “Kom”. So I followed their lead for now. Besides, why not switch it up in the language department now and then? We still use “F” for sheaves, don’t we? Comment by | October 19, 2007 | Reply

3. [...] is a homotopy, then the Poincaré lemma gives us a chain homotopy from to as chain maps, which tells us that the maps they induce on homology are identical. That is, passing to homology [...] Pingback by | December 6, 2011 | Reply
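The key computation in the post, that a null-homotopic map $f_n=d^B_{n+1}\circ\alpha_n+\alpha_{n-1}\circ d^A_n$ sends cycles into boundaries and hence induces zero on homology, can be sanity-checked with explicit matrices. The particular complex and homotopy below are my own toy choices, not from the post:

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# A chain complex  A_2 --d2--> A_1 --d1--> A_0  with every A_n = Z^3:
d1 = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]   # kernel = span(e2, e3)
d2 = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]   # image = span(e2), inside ker d1
assert matmul(d1, d2) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]   # d∘d = 0

# H_1 = ker d1 / im d2 is one-dimensional, spanned by the class of e3.
random.seed(0)
alpha1 = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]  # A_1 -> A_2
alpha0 = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]  # A_0 -> A_1

# The null-homotopic map f_1 = d2∘alpha1 + alpha0∘d1:
f1 = matadd(matmul(d2, alpha1), matmul(alpha0, d1))

v = [0, 0, 1]                       # a cycle representing a nonzero class in H_1
assert matvec(d1, v) == [0, 0, 0]
w = matvec(f1, v)
assert w[0] == 0 and w[2] == 0      # w lies in span(e2) = im d2, so [w] = 0 in H_1
```

On the cycle v the second term of f_1 vanishes outright, so f_1(v) = d2(alpha1(v)) is a boundary, exactly the argument in the post, with any random choice of homotopy.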
http://mathoverflow.net/questions/20267/invariant-quadratic-forms-of-irreducible-representations/20296
Invariant quadratic forms of irreducible representations

Let $G$ be a finite group, and $k$ be a field of characteristic zero (not necessarily algebraically closed!). Let $\rho : G \to \mathrm{End}_k \left(k^n\right)$ be an irreducible representation of $G$ over $k$. Consider the vector space $S=\left\lbrace H\in \mathrm{End}_k\left(k^n\right) \mid \rho\left(g\right)^T H\rho\left(g\right)=H\text{ for any }g\in G\right\rbrace$ $=\left\lbrace \sum\limits_{g\in G}\rho\left(g\right)^T H\rho\left(g\right)\mid H\in \mathrm{End}_k\left(k^n\right)\right\rbrace$ and its subspace $T=\left\lbrace H\in S\mid H\text{ is a symmetric matrix}\right\rbrace$.

It is easy to show that, if we denote our representation of $G$ on $k^n$ by $V$, then the elements of $S$ uniquely correspond to homomorphisms of representations $V\to V^{\ast}$ (namely, $H\in S$ corresponds to the homomorphism $v\mapsto\left(w\mapsto v^THw\right)$), while the elements of $T$ uniquely correspond to $G$-invariant quadratic forms on $V$ (namely, $H\in T$ corresponds to the quadratic form $v\mapsto v^THv$).

(1) In the case when $k=\mathbb C$, Schur's lemma yields $\dim S\leq 1$, with equality if and only if $V\cong V^{\ast}$ (which holds if and only if $V$ is a real or quaternionic representation). Thus, $\dim T\leq 1$, and it is known that this is an equality if and only if $V$ is a real representation. (Except for the equality parts, this all pertains to the more general case when $k$ is algebraically closed of characteristic zero.)

(2) In the case when $k=\mathbb R$, it is easily seen that $T\neq 0$ (that's the famous nondegenerate unitary form, which in the case $k=\mathbb R$ is a quadratic form), and I think I can show (using the spectral theorem) that $\dim T=1$. As for $S$, it can have dimension $>1$.

(3) I am wondering what can be said about other fields $k$; for instance, $k=\mathbb Q$.
If $k\subseteq\mathbb R$, do we still have $\dim T=1$ as in the $\mathbb R$ case? In fact, $T\neq 0$ can be shown in the same way.

1 Answer

There are certainly examples over $k=\mathbb{Q}$ where $\dim T\ge2$. Let's take the cyclic group $G$ of order $5$ and the representation space $$V=\{(a_0,\ldots,a_4)\in\mathbb{Q}^5:a_0+\cdots +a_4=0\}$$ where $G$ acts by cyclic permutation. Two linearly independent elements of $T$ are given by $$\left(\begin{array}{rrrrr} 2&-1&0&0&-1\\ -1&2&-1&0&0\\ 0&-1&2&-1&0\\ 0&0&-1&2&-1\\ -1&0&0&-1&2\end{array}\right)$$ and $$\left(\begin{array}{rrrrr} 2&0&-1&-1&0\\ 0&2&0&-1&-1\\ -1&0&2&0&-1\\ -1&-1&0&2&0\\ 0&-1&-1&0&2\end{array}\right)$$ (these define quadratic forms on $V$ since they annihilate the all-one vector). The point here is that this representation splits into two over $\mathbb{R}$. I think the dimension of $T$ in general will be the number of irreducible representations it splits into over $\mathbb{R}$.

Thanks! I should have thought about that - another case when the irreducibility of cyclotomic polynomials over $\mathbb Q$ comes in handy. – darij grinberg Apr 4 2010 at 13:03

If $V$ is the representation space, then the space $T$ corresponds to $G$-maps from $\mathrm{Sym}^2(V)$ to $k$ considered as a trivial $G$-space. Thus $\dim T$ is the number of copies of the trivial representation inside $\mathrm{Sym}^2(V)$. This won't change when one passes from $k$ to an extension field. – Robin Chapman Apr 4 2010 at 15:59
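The two matrices in the answer can be checked mechanically: both are circulant, and conjugating a circulant matrix by the cyclic-shift permutation matrix returns it unchanged. A quick sketch; the helper functions and spot-checks are my own:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def circulant(first_row):
    """Circulant matrix: entry (i, j) is first_row[(j - i) mod n]."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

H1 = circulant([2, -1, 0, 0, -1])   # first matrix in the answer
H2 = circulant([2, 0, -1, -1, 0])   # second matrix in the answer

# Permutation matrix of the 5-cycle generating G (acting on Q^5):
P = [[1 if j == (i + 1) % 5 else 0 for j in range(5)] for i in range(5)]

for H in (H1, H2):
    assert matmul(matmul(transpose(P), H), P) == H   # rho(g)^T H rho(g) = H
    assert transpose(H) == H                         # symmetric
    assert [sum(row) for row in H] == [0] * 5        # annihilates the all-one vector
```

This verifies invariance on the ambient 5-dimensional permutation representation; since both matrices kill the all-one vector, they restrict to the $G$-invariant quadratic forms on $V$ described in the answer.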
http://physics.stackexchange.com/questions/48617/how-to-draw-electric-fields-correctly
How to draw electric fields correctly? Draw a negatively charged plate A and a positively charged plate B with an equal charge. Draw the field. (the image is the given answer) I however do not understand why this is. Is there any reason why the arrows between the plates are not placed on the same 'height' but on 2 different heights? Also, the field lines above B and below A are going into 2 directions (which is linked with my previous question), which to me seems weird. What my idea of a correct answer would be: Between the plates all the fields have arrows (on the same height) pointing to A, above plate B all the arrows point upward and below plate A all arrows point downwards (however, not sure about the last statement since the field not between the plates is 0 I'd say). I have just learned about fields and field lines, and I still find them hard to grasp, so pardon me for this elementary question. Also, is it a coincidence that the field lines seem to go through the $+$'es perfectly but are not aligned perfectly with the $-$'es? side note: Before this question you are asked to only draw a field A, it is only after you've answered that they ask you to draw B in the same picture too. This might have to do with the way it is drawn in this picture. - The diagram is confusing. It is drawing two sets of field lines: one set due to plate A (as if plate B didn't exist) and another due to plate B (as if plate A didn't exist). It is not showing the total field. This doesn't represent the total field if both plates are present! – Michael Brown Jan 8 at 15:10 @MichaelBrown How would the total field deviate from this one? Is my guess correct? Or wouldn't you have to draw field lines above B and below A (since those compensate each other)? – user14445 Jan 8 at 15:11 The electric field is a vector field $\vec{E}$: it has a magnitude and direction.
If a charge distribution $A$ produces a field $\vec{E}_A$ and charge $B$ produces $\vec{E}_B$ the total field is the vector sum $\vec{E}=\vec{E}_A + \vec{E}_B$. In this particular example the fields reinforce between the plates (same direction) and cancel outside of the plates (opposite direction). – Michael Brown Jan 8 at 15:17 @MichaelBrown You should gather those comments into an answer, since there is not much more to say. Otherwise I'd answer and maybe surpass you in rep earned this year to date ;) – Chris White Jan 8 at 15:30 @ChrisWhite Bah, you're already well ahead of me in total and you're sure to widen the gap when my holidays are over. :) – Michael Brown Jan 8 at 15:49 1 Answer (as per Chris White's suggestion) The diagram is confusing. It is drawing two sets of field lines: one set due to plate A (as if plate B didn't exist) and another due to plate B (as if plate A didn't exist). It is not showing the total field. This doesn't represent the total field if both plates are present! The electric field is a vector field $\vec{E}$: it has a magnitude and direction. If a charge distribution A produces a field $\vec{E}_A$ and charge B produces $\vec{E}_B$ the total field is the vector sum $\vec{E}=\vec{E}_A+\vec{E}_B$. In this particular example the fields reinforce between the plates (same direction) and cancel outside of the plates (opposite direction). - Thank you for this answer – user14445 Jan 8 at 15:50
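The superposition argument can be sketched numerically. The following is an illustrative Python snippet (not from the thread) using the ideal infinite-plate result that each plate alone contributes a uniform field of magnitude $\sigma/2\epsilon_0$, pointing away from a positive plate and toward a negative one; the charge density and plate positions are made-up numbers:

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
sigma = 1e-6       # made-up surface charge density, C/m^2

def plate_field(sigma_plate, plate_y, y):
    """y-component of the field of one infinite plate at height plate_y:
    magnitude |sigma|/(2*eps0), pointing away from the plate if its charge
    is positive and toward it if negative."""
    away = 1.0 if y > plate_y else -1.0
    return away * sigma_plate / (2 * EPS0)

def E_A(y):  # negative plate A at y = 0
    return plate_field(-sigma, 0.0, y)

def E_B(y):  # positive plate B at y = 1
    return plate_field(+sigma, 1.0, y)

# The two contributions reinforce between the plates and cancel outside.
for y, region in [(1.5, "above B"), (0.5, "between"), (-0.5, "below A")]:
    total = E_A(y) + E_B(y)
    print(f"{region:8}: total E_y = {total:+.3e} V/m")
```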
http://wiki.stat.ucla.edu/socr/index.php/AP_Statistics_Curriculum_2007_Limits_CLT
# AP Statistics Curriculum 2007 Limits CLT

### Motivation

The following example motivates the need to study the sampling distribution of the sample average, i.e., the distribution of $\overline{X_n}={1\over n}\sum_{i=1}^n{X_i}$, as we vary the sample {$X_1, X_2, X_3, \cdots , X_n$}. Suppose there are 10 world-renowned laboratories, each of which is charged with conducting the same experiment (e.g., sequencing the genome of Drosophila, the fruit fly) using the same protocols. One of the outcomes of this study that the sponsoring agency is interested in could be the average rate of occurrence of the ATC codon in 10,000 base-pairs in the Drosophila genome. After completing the sequencing of the genome, each lab selects a random segment of 1,000,000 base-pairs and counts the number of ATC codons in every segment of 10,000 base-pairs (there are 100 such segments). Finally they compute the average of the 100 counts they obtained. The funding/sponsoring agency receives 10 average counts from the 10 distinct laboratories. Most likely there will be differences between these averages. The funding agency poses the most important question: Can we predict how much variation (i.e., discrepancy) there will be among the 10 lab averages, if we had only been able to fund/conduct one experiment at one site (due to resource/budgetary limitations)? In other words, if the sponsoring organization could only support one lab to carry out the experiment, can they estimate the possible errors that may be committed by using the sample average (of 100 samples) obtained from the chosen lab? The answer is yes. They can accurately estimate the real count of ATC codons in the Drosophila genome from a single lab experiment, as the sampling distribution of the average (across labs) is known to be (approximately) Normal! You can see a number of applications of the Central Limit Theorem here.
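The lab-averages scenario is easy to simulate. Here is a hedged sketch (not part of the original SOCR materials) in which each "lab" averages 100 draws from a deliberately non-Normal native distribution; the mean of the native distribution and the number of labs are made-up illustration values. The spread of the lab averages comes out close to $\sigma/\sqrt{n}$, which is exactly what the CLT lets the sponsoring agency predict:

```python
import random
import statistics

random.seed(0)

# The native distribution is deliberately non-Normal: an exponential with
# mean 5 stands in for a single per-segment measurement.  (The mean and
# the n = 100 segments per lab are made-up numbers for illustration.)
MEAN, N_SEGMENTS, N_LABS = 5.0, 100, 10

def lab_average():
    """One lab's report: the average of 100 per-segment measurements."""
    return statistics.mean(random.expovariate(1 / MEAN)
                           for _ in range(N_SEGMENTS))

averages = [lab_average() for _ in range(N_LABS)]
print("lab averages:", [round(a, 2) for a in averages])

# CLT prediction for the spread of the averages: sigma / sqrt(n).
# For an exponential distribution sigma = MEAN, so this is 5/10 = 0.5.
print("observed stdev:", round(statistics.stdev(averages), 3),
      " CLT prediction:", MEAN / N_SEGMENTS ** 0.5)
```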
### General Statement of the Central Limit Theorem

The Central Limit Theorem (CLT) argues that the distribution of the sum or average of independent observations from the same random process (with finite mean and variance) will be approximately Normally distributed (i.e., a bell-shaped curve). That is, the CLT expresses the fact that any sum or average of (many) independent and identically-distributed random variables will tend to be distributed according to a particular Attractor Distribution; the Normal Distribution effectively represents the core of the universe of all (nice) distributions.

### Symbolic Statement of the Central Limit Theorem

The formal statement of the CLT is described here. However, undergraduate and graduate classes use the following statement of the central limit theorem:

Let $X_1, X_2, \cdots, X_n$ be a random sample (IID) from a (native) distribution with well-defined and finite mean $\mu_X$ and variance $\sigma_X^2$. Then as $n$ increases, the sampling distributions of the sample average $\overline{X_n}={1\over n}\sum_{i=1}^n{X_i}$ and the total sum $T_n=\sum_{i=1}^n{X_i}$ approach Normal distributions with corresponding means and variances:

$\mu_{\overline{X_n}}=\mu_X; \quad \sigma_{\overline{X_n}}^2={\sigma_X^2\over n}$

$\mu_{T_n}=n\times\mu_X; \quad \sigma_{T_n}^2={n\times\sigma_X^2}$

In essence, the CLT implies that the Normal Distribution is the center of the universe of all nice distributions. This is the reason why we encounter frequent estimates involving arithmetic-averaging: the pathway from a nice distribution to the Normal distribution is paved by sample averages. In other words, the CLT provides a unifying framework for all (nice) distributions, the way the Grand Unifying Theory attempts to unite the theory behind the three fundamental forces in physics.

### Are There CLTs for Other Sample Statistics?

The ramifications of the CLT go beyond the scope of this interpretation.
For example, one frequently wonders if there are other types of population-parameters or sample-statistics that yield similar limiting behavior. • How large does the sample size have to be to ensure normality of the sample average or total sum? • Does the convergence depend on the characteristics of the native distribution (e.g., shape, center, dispersion)? • How about weighted averages, non-linear combinations or more general functions of the random sample? Many other interesting questions are frequently asked by people exposed to the CLT. Some may have known theoretical answers (exact or approximate); other questions may be better addressed empirically by simulations and experiments (See the SOCR CLT Applet with Activity). ### CLT Applications A number of applications of the Central Limit Theorem are included in the SOCR CLT Activity. To start the SOCR CLT Experiment • Go to SOCR Experiments • Select the SOCR Sampling Distribution CLT Experiment from the drop-down list of experiments in the left panel. The image below shows the interface to this experiment. Notice the main control widgets on this image (boxed in blue and pointed to by arrows). The generic control buttons on the top allow you to do one or multiple steps/runs, stop and reset this experiment. The two tabs in the main frame provide graphical access to the results of the experiment (Histograms and Summaries) or the Distribution selection panel (Distributions). Remember that choosing sample-sizes <= 16 will animate the samples (second graphing row), whereas larger sample-sizes (N>20) will only show the updates of the sampling distributions (bottom two graphing rows). • In the Sampling Distribution CLT Experiment, select Q-quadratic distribution (under the distribution tab). Set the sample sizes (n1 and n2) first to 2 and then to 4. Observe the shape of the sampling distribution -- it will become first tri-modal (n=2) and then five-modal (for n=4), respectively. 
As the sample sizes exceed 5, these multiple modes will merge into one, and the sampling distribution will become unimodal. Of course, the CLT guarantees that the sampling distribution of the average will ultimately become Normal, as the sample size increases. ### References • Dinov, ID, Christou, N, and Sanchez, J (2008) Central Limit Theorem: New SOCR Applet and Demonstration Activity. Journal of Statistics Education, Volume 16, Number 2.
http://unapologetic.wordpress.com/2012/07/16/the-higgs-mechanism-part-1-lagrangians/
# The Unapologetic Mathematician ## The Higgs Mechanism part 1: Lagrangians This is part one of a four-part discussion of the idea behind how the Higgs field does its thing. Wow, about six months’ hiatus as other parts of my life have taken precedence. But I drag myself slightly out of retirement to try to fill a big gap in the physics blogosphere: how the Higgs mechanism works. There’s a lot of news about this nowadays, since the Large Hadron Collider has announced evidence of a “Higgs-like” particle. As a quick explanation of that, I use an analogy I made up on Twitter: “If Mirror-Spock exists, he has a goatee. We have found a man with a goatee. We do not yet know if he is Mirror-Spock.” So, what is the Higgs boson? Well, it’s the particle expression of the Higgs field. That doesn’t explain anything, so we go one step further. What is the Higgs field? It’s the (conjectured) thing that gives some other particles (some of their) mass, in certain situations where normally we wouldn’t expect there to be any mass. And then there’s hand-waving about something like the ether that particles have to push through or shag carpet that they have to rub against that slows them down and hey, mass. Which doesn’t really explain anything, but sort of sounds like it might and so people nod sagely and then either forget about it all or spin their misconceptions into a new wave of Dancing Wu-Li Masters. I think we can do better, at least for the science geeks out there who are actually interested and not allergic to a little math. A couple warnings and comments before we begin. First off: I’m not going to go through this in my usual depth because I want to cram it into just three posts, albeit longer ones than usual, even though what I will say touches on all sorts of insanely cool mathematics that disappointingly few people see put together like this. 
Second: Ironically, that seems to include a lot of the physicists, who are generally more concerned with making predictions than with understanding how the underlying theory connects to everything else; and it's totally fine, honestly, that they're interested in different aspects than I am. But I'm going to make a relatively superficial pass over describing the theory as physicists talk about it rather than go into those underlying structures. Lastly: I'm not going to describe the actual Higgs particle or field as they exist in the Standard Model; that would require quantum field theory and all sorts of messy stuff like that, when it turns out that the basic idea already shows up in classical field theory, which is a lot easier to explain. Even within classical field theory I'm going to restrict myself to a simpler example of the sort of thing that happens. Because reasons.

That all said, let's dive in with Lagrangian mechanics. This is a subject that you probably never heard about unless you were a physics major or maybe a math major. Basically, Newtonian mechanics works off of the three laws that were probably drilled into your head by the end of high school science classes:

Newton's Laws of Motion

1. An object at rest tends to stay at rest; an object in motion tends to stay in that motion.
2. Force applied to an object is proportional to the acceleration that object experiences. The constant of proportionality is the object's mass.
3. Every action comes paired with an equal and opposite reaction.

It's the second one that gets the most use since we can write it down in a formula: $F=ma$. And for most forces we're interested in the force is a conservative vector field, meaning that it's the (negative) gradient (fancy word for "derivative" that comes up in more than one dimension) of a potential energy function: $F=-\nabla U$.
What this means is that things like to move in the direction that potential energy decreases, and they "feel a force" pushing them in that direction. Upshot for Newton: $ma=-\nabla U$.

Lagrangian mechanics comes at this same formula with a different explanation: objects like to move along paths that (locally) minimize some quantity called "action". This principle unifies the usual topics of high school Newtonian physics with things like optics where we say that light likes to move along the shortest path between two points. Indeed, the "action" for light rays is just the distance they travel! This also explains things like "the angle of incidence equals the angle of reflection"; if you look at all paths between two points that bounce off of a mirror, the one that satisfies this property has the shortest length, making it a local minimum for the action.

Let's set this up for a body moving around in some potential field to show you how it works. The action of a suggested path $q(t)$ (the body is at the point $q(t)$ at time $t$) over a time interval $t_1\leq t\leq t_2$ is:

$\displaystyle S[q]=\int\limits_{t_1}^{t_2}\frac{1}{2}mv(t)^2-U(q(t))\,dt$

where $v(t)=\dot{q}(t)$ is the velocity vector of the particle, $v(t)^2$ is the square of its length, and $U(x)$ is a potential function depending only on the position of the particle. Don't worry: there's a big scary integral here, but we aren't going to actually do any integration. The function on the inside of the integral is called the Lagrangian function, and we calculate the action $S$ of the path $q$ by integrating the Lagrangian over the time interval we're concerned with. We write this as $S[q]$ with square brackets to emphasize that this is a "functional" that takes a function $q$ and gives a number back. Of course, as mathematicians there's really nothing inherently special about functions taking functions as arguments, but for beginners it helps keep things straight.
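As a concrete check on this definition, here is a small numerical sketch (not from the post): discretize the integral for a mass in uniform gravity, $U(q)=mgq$, and compare the action of the true free-fall path with a wiggled version of it. The true path comes out smaller, as the least-action principle predicts. All numbers are illustrative:

```python
import math

# Discretize S[q] = ∫ (½ m v² − U(q)) dt for a 1-D path with U(q) = m g q.
m, g = 1.0, 9.8
t1, t2, N = 0.0, 1.0, 1000
dt = (t2 - t1) / N
ts = [t1 + i * dt for i in range(N + 1)]

def action(q):
    """Riemann-sum approximation of the action of the sampled path q."""
    S = 0.0
    for i in range(N):
        v = (q[i + 1] - q[i]) / dt            # finite-difference velocity
        q_mid = 0.5 * (q[i] + q[i + 1])       # midpoint position
        S += (0.5 * m * v * v - m * g * q_mid) * dt
    return S

# True free-fall path with q(0) = q(1) = 0, i.e. q(t) = (g/2) t (1 − t),
# versus the same path wiggled by a variation vanishing at the endpoints.
true_path = [0.5 * g * t * (1 - t) for t in ts]
wiggled = [q + 0.1 * math.sin(math.pi * t) for q, t in zip(true_path, ts)]

print("S[true]    =", action(true_path))
print("S[wiggled] =", action(wiggled))   # larger: the true path minimizes S
```

For this Lagrangian the exact action of the true path is $-g^2/24$, and any endpoint-fixing variation increases it.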
Now, what happens if we “wiggle” the path a bit? What if we calculate the action of $q'=q+\delta q$, where $\delta q$ is some “small” function called the “variation” of $q$? We calculate:

$\displaystyle S[q']=\int\limits_{t_1}^{t_2}\frac{1}{2}m(\dot{q}'(t))^2-U(q'(t))\,dt$

Taking the derivative is linear, so we see that $\dot{q}'=\dot{q}+\delta\dot{q}$; “the variation of the derivative is the derivative of the variation”. Plugging this in:

$\displaystyle\begin{aligned}S[q']&=\int\limits_{t_1}^{t_2}\frac{1}{2}m(\dot{q}(t)+\delta\dot{q}(t))^2-U(q(t)+\delta q(t))\,dt\\&=\int\limits_{t_1}^{t_2}\frac{1}{2}m(\dot{q}(t)^2+2\dot{q}(t)\cdot\delta\dot{q}(t)+\delta\dot{q}(t)^2)-U(q(t)+\delta q(t))\,dt\\&\approx\int\limits_{t_1}^{t_2}\frac{1}{2}m(\dot{q}(t)^2+2\dot{q}(t)\cdot\delta\dot{q}(t))-\left[U(q(t))+\nabla U(q(t))\cdot\delta q(t)\right]\,dt\end{aligned}$

where we’ve thrown away terms involving second and higher powers of $\delta q$; the variation is small, so the square (and cube, and …) is negligible. So what’s the difference between this and $S[q]$? What’s the variation of the action?

$\displaystyle\delta S=S[q']-S[q]=\int\limits_{t_1}^{t_2}m\dot{q}(t)\cdot\delta\dot{q}(t)-\nabla U(q(t))\cdot\delta q(t)\,dt$

where again we throw away negligible terms. Now we can handle the first term here using integration by parts:

$\displaystyle\begin{aligned}\delta S=S[q']-S[q]&=\int\limits_{t_1}^{t_2}-m\ddot{q}(t)\cdot\delta q(t)-\nabla U(q(t))\cdot\delta q(t)\,dt\\&=\int\limits_{t_1}^{t_2}-\left[m\ddot{q}(t)+\nabla U(q(t))\right]\cdot\delta q(t)\,dt\end{aligned}$

“Wait a minute!” those of you paying attention will cry out, “what about the boundary terms!?” Indeed, when we use integration by parts we should pick up $m\dot{q}(t_2)\cdot\delta q(t_2)-m\dot{q}(t_1)\cdot\delta q(t_1)$, but we will assume that we know where the body is at the beginning and the end of our time interval, and we’re just trying to figure out how it gets from one point to the other.
That is, $\delta q$ is zero at both endpoints. So, now we apply our Lagrangian principle: bodies like to move along action-minimizing paths. We know how action changes if we “wiggle” the path by a little variation $\delta q$, and this should remind us about how to find local minima: they happen when no matter how we change the input, the “first derivative” of the output is zero. Here the first derivative is the variation in the action, throwing away the negligible terms. So, what condition will make $\delta S$ zero no matter what function we put in for $\delta q$? Well, the other term in the integrand will have to vanish:

$\displaystyle m\ddot{q}(t)+\nabla U(q(t))=0$

But this is just Newton’s second law from above, coming back again! Everything we know from Newtonian mechanics can be written down in Lagrangian mechanics by coming up with a suitable action functional, which usually takes the form of an integral of an appropriate Lagrangian function. But lots more things can be described using the Lagrangian formalism, including field theories like electromagnetism. In the presence of a charge distribution $\rho$ and a current distribution $j$, we take the potentials $\phi$ and $A$ as fundamental and start with the action (suppressing the space and time arguments so we can write $\rho$ instead of $\rho(x,t)$):

$\displaystyle S[\phi,A]=\int_{t_1}^{t_2}\int_{\mathbb{R}^3}-\rho\phi+j\cdot A+\frac{\epsilon_0}{2}E^2-\frac{1}{2\mu_0}B^2\,dV\,dt$

When we vary with respect to $\phi$ and insist that the variation of $S$ be zero we get Gauss’ law:

$\displaystyle\nabla\cdot E=\frac{\rho}{\epsilon_0}$

Varying the components of $A$ we get Ampère’s law with Maxwell’s correction:

$\displaystyle\nabla\times B=\mu_0j+\epsilon_0\mu_0\frac{\partial E}{\partial t}$

The other two of Maxwell’s equations come automatically from taking the potentials as fundamental and coming up with the electric and magnetic fields from them.

## 12 Comments »

1. [...]
is part two of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1 [...] Pingback by | July 17, 2012 | Reply 2. “I think we can do better, at least for the science geeks out there who are actually interested and not allergic to a little math.” This is what’s missing. Thanks for filling the gap. When I remember what topic needs a treatment like this, I might ask you to cover it. Comment by | July 17, 2012 | Reply 3. Quick question: what theorem lets us state U( q+ delta\q ) = U(q) + grad\U( q* delta\q )? I know it says a scalar function at a point a little further away is a nearby value plus a gradient, something like a first order taylor approximation, but I don’t understand the reason the argument of the gradient term is a product of the value and the hyperreal. Is this theorem something we see in infinitesimal calculus? Comment by | July 18, 2012 | Reply 4. It’s an approximation, not an equality. One way to say it is to take Taylor’s theorem and stop after the first degree term. More intuitively, in one dimension we know that $\displaystyle f'(x)\approx\frac{f(x+\Delta x)-f(x)}{\Delta x}$ which we can rearrange to write $\displaystyle f'(x)\Delta x\approx f(x+\Delta x)-f(x)$ or $f(x+\Delta x)\approx f(x)+f'(x)\Delta x$ and what I’ve used above is just a higher-dimensional version of this approximation. Comment by | July 18, 2012 | Reply • Crap I didn’t parse the parenthesis correctly in my head. Now I get it. I thought the argument of the gradient was at the point q(t)* delta\q. Excellent post by the way. Certainly hit the creepy/cool factor when you derived Maxwell’s equations from an electrical action. Comment by | July 18, 2012 | Reply 5. [...] is part three of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1 and Part 2 [...] Pingback by | July 18, 2012 | Reply 6. 
To be fair, Jalil, almost any field equations (within a very broad class) can come from picking the right Lagrangian, so it’s not really all that surprising that Maxwell’s equations can arise from this sort of context. And it’s not really clear why the Lagrangian I picked is “the right one” other than the fact that it gives back the equations I wanted to get in the first place. It’s sort of begging the question, in a way. The cool part starts coming in when you choose a Lagrangian for some complete other reason (like in part 3) and it turns out to give a set of field equations you already knew from somewhere else. Comment by | July 18, 2012 | Reply 7. [...] is part four of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1, Part 2, and Part 3 [...] Pingback by | July 19, 2012 | Reply 8. This is very good and awakens my enthusiasm for the A potential. Note I did not say “reawakens.” But — and I think this is the ultimate dumb question — why should a body “want” to minimize the integral of some funny quantity? It seems to me the answer, if any, should be something about how we “want” to write dynamic laws. Maybe Lagrangian-style thinking means taking the path of least (human) confusion. Digression: I just had the eerie feeling that Emmy Noether might be behind me, staring at my back. Comment by | July 19, 2012 | Reply 9. This is a good point, and it goes partly to the anthropomorphic way we speak about physics. Newtonian physics is rife with objects that “want” to keep moving, or that “want” to minimize energy. Unfortunately, taking that language out doesn’t really solve the problem entirely; the underlying question is “why does Lagrangian mechanics/field theory work?” To some extent, the Feynman path-integral formulation of quantum field theory gives a partial answer to this question, but in the end it just pushes the goalposts back another step. 
Ultimately, I have to say that I don’t really know, and I don’t think anyone really does. Figuring it out is why people study fundamental physics in the first place. Comment by | July 19, 2012 | Reply 10. [...] using classical field theory. The Unapologetic Mathematician tells us about it in “The Higgs Mechanism part 1: Lagrangians,” July 16, “The Higgs Mechanism part 2: Examples of Lagrangian Field Equations,” [...] Pingback by | July 19, 2012 | Reply 11. [...] The Higgs Mechanism part 1: Lagrangians [...] Pingback by | July 20, 2012 | Reply
http://physics.stackexchange.com/questions/tagged/non-locality
# Tagged Questions

The non-locality tag has no wiki summary.

### Is this field redefinition for free scalar field theory non-local?
The action of free scalar field theory is as follows: $$S=\int d^4 x \frac{\dot{\phi}^2}{2}-\frac{\phi(m^2-\nabla^2)\phi}{2}.$$ I have been thinking to redefine field as ...

### Is spacetime an illusion?
In consistent histories, for gauge theories, can the projection operators used in the chains be not gauge invariant? In quantum gravity, for a projection operator to be gauge invariant means it has ...

### CHSH violation and entanglement of quantum states
How is the violation of the usual CHSH inequality by a quantum state related to the entanglement of that quantum state? Say we know that there exist Hermitian and unitary operators $A_{0}$, $A_{1}$, ...

### Sub and super multiplicativity of norms for understanding non-locality
In relation to various problems in understanding entanglement and non-locality, I have come across the following mathematical problem. It is most concise by far to state in its most mathematical form ...

### Bell polytopes with nontrivial symmetries
Take $N$ parties, each of which receives an input $s_i \in \{1, \dots, m_i\}$ and produces an output $r_i \in \{1, \dots, v_i\}$, possibly in a nondeterministic manner. We are interested in joint ...

### Why can't noncontextual ontological theories have stronger correlations than commutative theories?
EDIT: I found both answers to my question to be unsatisfactory. But I think this is because the question itself is unsatisfactory, so I reworded it in order to allow a good answer. One take on ...
http://everythingscience.co.za/grade-10/01-skills-for-science/01-skills-for-science-02.cnxmlplus
# Mathematical skills

You should be comfortable with scientific notation and how to write scientific notation. You should also be able to easily convert between different units and change the subject of a formula. In addition, concepts such as rate, direct and indirect proportion, fractions and ratios and the use of constants in equations are important.

## Rounding off

Certain numbers may take an infinite amount of paper and ink to write out. Not only is that impossible, but writing numbers out to a high precision (many decimal places) is very inconvenient and rarely gives better answers. For this reason we often estimate the number to a certain number of decimal places. Rounding off a decimal number to a given number of decimal places is the quickest way to approximate a number. For example, if you wanted to round-off 2,652 527 2 to three decimal places then you would first count three places after the decimal. Next you mark this point with a $|$: $2,652|5272$. All numbers to the right of $|$ are ignored after you determine whether the number in the third decimal place must be rounded up or rounded down. You round up the final digit (make the digit one more) if the first digit after the $|$ is greater than or equal to 5 and round down (leave the digit alone) otherwise. So, since the first digit after the $|$ is a 5, we must round up the digit in the third decimal place to a 3 and the final answer of 2,652 527 2 rounded to three decimal places is 2,653. In a calculation that has many steps, it is best to leave the rounding off right until the end. This ensures that your answer is more accurate.

## Scientific notation

In science one often needs to work with very large or very small numbers.
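The half-up rounding rule described above can be sketched with Python's `decimal` module (using a decimal point rather than the decimal comma of the text; the helper name is ours):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value, places):
    """Round a decimal string to `places` decimal places, rounding up
    when the next digit is 5 or more (the rule described in the text)."""
    quantum = Decimal(1).scaleb(-places)   # e.g. places=3 -> Decimal('0.001')
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)

print(round_half_up("2.6525272", 3))   # 2.653 (digit after the cut is 5: round up)
print(round_half_up("2.6524272", 3))   # 2.652 (digit after the cut is 4: round down)
```

`Decimal` is used rather than `round()` because Python's built-in `round()` uses round-half-to-even, which does not match the rule in the text.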
These can be written more easily (and more compactly) in scientific notation, in the general form:

$N \times 10^{n}$

where $N$ is a decimal number between 0 and 10 that is rounded off to a few decimal places. $n$ is known as the exponent and is an integer. If $n>0$ it represents how many times the decimal place in $N$ should be moved to the right. If $n<0$, then it represents how many times the decimal place in $N$ should be moved to the left. For example 3,24 × 10³ represents 3240 (the decimal moved three places to the right) and 3,24 × 10⁻³ represents 0,003 24 (the decimal moved three places to the left). If a number must be converted into scientific notation, we need to work out how many times the number must be multiplied or divided by 10 to make it into a number between 1 and 10 (i.e. the value of $n$) and what this number between 1 and 10 is (the value of $N$). We do this by counting the number of decimal places the decimal comma must move. For example, write the speed of light (299 792 458 m·s⁻¹) in scientific notation, to two decimal places. First, we find where the decimal comma must go for two decimal places (to find $N$) and then count how many places there are after the decimal comma to determine $n$. In this example, the decimal comma must go after the first 2, but since the number after the 9 is 7, $N=3,00$. $n=8$ because there are 8 digits left after the decimal comma. So the speed of light in scientific notation, to two decimal places is 3,00 × 10⁸ m·s⁻¹. We can also perform addition, subtraction, multiplication and division with scientific notation.
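The speed-of-light conversion above can be checked with Python's `e`-format, which writes numbers in exactly this $N \times 10^n$ form (with a decimal point rather than a comma):

```python
c = 299_792_458            # speed of light in m/s
print(f"{c:.2e}")          # -> 3.00e+08, i.e. 3,00 × 10⁸ in the text's notation

# The earlier examples in both directions:
print(f"{3240:.2e}")       # -> 3.24e+03
print(f"{0.00324:.2e}")    # -> 3.24e-03
```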
The following two worked examples show how to do this:

### Example 1: Addition and subtraction with scientific notation

#### Question

$1,99 \times 10^{-26} + 1,67 \times 10^{-27} - 2,79 \times 10^{-25} = ?$

#### Answer

##### Make all the exponents the same

To add or subtract numbers in scientific notation we must make all the exponents the same: $1,99 \times 10^{-26} = 0,199 \times 10^{-25}$ and $1,67 \times 10^{-27} = 0,0167 \times 10^{-25}$.

##### Carry out the addition and subtraction

Now that the exponents are the same we can simply add or subtract the $N$ part of each number: $0,199 + 0,0167 - 2,79 = -2,5743$

##### Write the final answer

To get the final answer we put the common exponent back: $-2,5743 \times 10^{-25}$

Note that we follow the same process if the exponents are positive. For example, $5,1 \times 10^{3} + 4,2 \times 10^{4} = 4,71 \times 10^{4}$.

### Example 2: Multiplication and division with scientific notation

#### Question

$1,6 \times 10^{-19} \times 3,2 \times 10^{-19} \div 5 \times 10^{-21} = ?$

#### Answer

##### Carry out the multiplication

For multiplication and division the exponents do not need to be the same. For multiplication we add the exponents and multiply the $N$ terms: $1,6 \times 10^{-19} \times 3,2 \times 10^{-19} = (1,6 \times 3,2) \times 10^{-19+(-19)} = 5,12 \times 10^{-38}$

##### Carry out the division

For division we subtract the exponents and divide the $N$ terms. Using our result from the previous step we get: $5,12 \times 10^{-38} \div 5 \times 10^{-21} = (5,12 \div 5) \times 10^{-38-(-21)} = 1,024 \times 10^{-17}$

##### Write the final answer

The answer is: $1,024 \times 10^{-17}$

Note that we follow the same process if the exponents are positive. For example: $5,1 \times 10^{3} \times 4,2 \times 10^{4} = 21,42 \times 10^{7} = 2,142 \times 10^{8}$
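The arithmetic in the worked examples can be sanity-checked numerically. This short sketch is our own addition (not part of the textbook) and uses decimal points in place of the text's decimal commas:

```python
import math

# Example 1: make the exponents the same, then combine the N parts
example1 = 1.99e-26 + 1.67e-27 - 2.79e-25
print(f"{example1:.4e}")  # close to -2.5743e-25

# Example 2: add exponents when multiplying, subtract when dividing;
# note the exponent arithmetic: (-19) + (-19) - (-21) = -17
example2 = 1.6e-19 * 3.2e-19 / 5e-21
print(f"{example2:.3e}")  # close to 1.024e-17

# The positive-exponent examples work the same way
assert math.isclose(5.1e3 + 4.2e4, 4.71e4)
assert math.isclose(5.1e3 * 4.2e4, 2.142e8)
```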
http://www.newton.ac.uk/programmes/KIT/seminars/2010121310001.html
# KIT

## Seminar

### An Eulerian surface hopping method for the Schrödinger equation with conical crossings

Jin, S (Wisconsin-Madison)

Monday 13 December 2010, 10:00-11:00

Seminar Room 1, Newton Institute

#### Abstract

In nucleonic propagation through conical crossings of electronic energy levels, the codimension-two conical crossings are the simplest energy-level crossings, which affect the Born-Oppenheimer approximation in the zeroth-order term. The purpose of this paper is to develop the surface hopping method for the Schrödinger equation with conical crossings in the Eulerian formulation. The approach is based on the semiclassical approximation governed by the Liouville equations, which are valid away from the conical crossing manifold. At the crossing manifold, electrons hop to another energy level with the probability determined by the Landau-Zener formula. This hopping mechanism is formulated as an interface condition, which is then built into the numerical flux for solving the underlying Liouville equation for each energy level. While a Lagrangian particle method requires the number of particles to increase in time, or a large number of statistical samples in a Monte Carlo setting, the advantage of an Eulerian method is that it relies on a fixed number of partial differential equations with computational accuracy that is uniform in time. We prove the positivity and $l^{1}$-stability and illustrate by several numerical examples the validity and accuracy of the proposed method.
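The Landau-Zener hopping probability mentioned in the abstract has a standard textbook form. The sketch below is our own illustration of that classic two-level formula, not necessarily the exact expression used in the paper:

```python
import math

def landau_zener_probability(coupling, sweep_rate, hbar=1.0):
    """Textbook Landau-Zener transition probability
    P = exp(-2*pi*|H12|^2 / (hbar*|d(E1-E2)/dt|)),
    where `coupling` is the off-diagonal element H12 and `sweep_rate`
    is the rate of change of the diabatic energy gap E1 - E2."""
    gamma = coupling**2 / (hbar * abs(sweep_rate))
    return math.exp(-2.0 * math.pi * gamma)

# Fast sweep or weak coupling: hopping probability close to 1
print(landau_zener_probability(coupling=0.01, sweep_rate=10.0))
```

In the adiabatic limit (strong coupling, slow sweep) the probability of hopping tends to zero, which is the regime where the semiclassical Liouville description on a single level is accurate.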
http://www.neverendingbooks.org/index.php/the-f_un-folklore.html
# The F_un folklore

Posted by lieven on Saturday, 7 June 2008

All esoteric subjects have their own secret (sacred) texts. If you opened the Da Vinci Code (or, even better, the original The Holy Blood and the Holy Grail) you will know about a mysterious collection of documents, known as the "Dossiers secrets", deposited in the Bibliothèque nationale de France on 27 April 1967, which is rumoured to contain the mysteries of the Priory of Sion, a secret society founded in the middle ages and still active today...

The followers of F-un, for $\mathbb{F}_1$ the field of one element, have their own collection of semi-secret texts, surrounded by whispers, of which they try to decode every single line in search of enlightenment. Fortunately, you do not have to search the shelves of the Bibliothèque nationale in Paris, but rather the depths of the internet, to find them as huge, bandwidth-unfriendly, scanned documents.

The first is the set of lecture notes "Lectures on zeta functions and motives" by Yuri I. Manin, from a course given in 1991. One can download a scanned version of the paper from the homepage of Katia Consani as a huge 23.1 Mb file. Of F-un relevance is the first section, "Absolute Motives?", in which "...we describe a highly speculative picture of analogies between arithmetics over $\mathbb{F}_q$ and over $\mathbb{Z}$, cast in the language reminiscent of Grothendieck's motives. We postulate the existence of a category with tensor product $\times$ whose objects correspond not only to the divisors of the Hasse-Weil zeta functions of schemes over $\mathbb{Z}$, but also to Kurokawa's tensor divisors. This neatly leads to the introduction of an "absolute Tate motive" $\mathbb{T}$, whose zeta function is $\frac{s-1}{2\pi}$, and whose zeroth power is "the absolute point" which is the base for Kurokawa's direct products.
We add some speculations about the role of $\mathbb{T}$ in the "algebraic geometry over a one-element field", and in clarifying the structure of the gamma factors at infinity." (loc. cit. p. 1-2) I'd welcome links to material explaining this section to people knowing no motives.

The second is the unpublished paper "Cohomology determinants and reciprocity laws: number field case" by Mikhail Kapranov and A. Smirnov. This paper features in blog posts at the Arcadian Functor, in John Baez' Weekly Finds and in yesterday's post at Noncommutative Geometry. You can download every single page (of 15) as a separate file from here. But, in order to help spread the F_un gospel, I've made these scans into a single PDF file which you can download as a 2.6 Mb PDF.

In the introduction they say: "First of all, it is an old idea to interpret combinatorics of finite sets as the $q \rightarrow 1$ limit of linear algebra over the finite field $\mathbb{F}_q$. This had led to frequent consideration of the folklore object $\mathbb{F}_1$, the "field with one element", whose vector spaces are just sets. One can postulate, of course, that $\mathbf{spec}(\mathbb{F}_1)$ is the absolute point, but the real problem is to develop non-trivial consequences of this point of view."

They manage to deduce higher reciprocity laws in class field theory within the theory of $\mathbb{F}_1$ and its field extensions $\mathbb{F}_{1^n}$. But first, let us explain how they define linear algebra over these absolute fields.

Here is a first principle: in doing linear algebra over these fields, there is no additive structure but only scalar multiplication by field elements. So, what are vector spaces over the field with one element? Well, as scalar multiplication with 1 is just the identity map, we have that a vector space is just a set. Linear maps are just set maps and, in particular, a linear isomorphism of a vector space onto itself is a permutation of the set.
That is, linear algebra over $\mathbb{F}_1$ is the same as the combinatorics of (finite) sets. A vector space over $\mathbb{F}_1$ is just a set; the dimension of such a vector space is the cardinality of the set. The general linear group $GL_n(\mathbb{F}_1)$ is the symmetric group $S_n$, the identification being via permutation matrices (having exactly one 1 in every row and column).

Some people prefer to view an $\mathbb{F}_1$ vector space as a pointed set, the special element being the 'origin' $0$, but as $\mathbb{F}_1$ doesn't have a zero, there is also no zero-vector. Still, in later applications (such as defining exact sequences and quotient spaces) it is helpful to have an origin. So, for any set $S$, let us write $S^{\bullet} = S \cup \{ 0 \}$. Clearly, linear maps between such 'extended' spaces must be maps of pointed sets, that is, sending $0 \rightarrow 0$.

The field with one element $\mathbb{F}_1$ has a field extension of degree $n$ for any natural number $n$, which we denote by $\mathbb{F}_{1^n}$, and using the above notation we define this field as:

$\mathbb{F}_{1^n} = \mu_n^{\bullet}$

with $\mu_n$ the group of all $n$-th roots of unity. Note that if we choose a primitive $n$-th root $\epsilon_n$, then $\mu_n \simeq C_n$ is the cyclic group of order $n$.

Now what is a vector space over $\mathbb{F}_{1^n}$? Recall that we only demand that the units of the field act by scalar multiplication, so each 'vector' $\vec{v}$ determines an $n$-set of linearly dependent vectors $\epsilon_n^i \vec{v}$. In other words, any $\mathbb{F}_{1^n}$-vector space is of the form $V^{\bullet}$ with $V$ a set on which the group $\mu_n$ acts freely. Hence, $V$ has $N=d \cdot n$ elements and there are exactly $d$ orbits for the action of $\mu_n$ by scalar multiplication. We call $d$ the dimension of the vector space, and a basis consists of choosing one representative for each orbit. That is, $B = \{ b_1,\ldots,b_d \}$ is a basis if (and only if) $V = \{ \epsilon_n^j b_i~:~1 \leq i \leq d, 1 \leq j \leq n \}$.
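These definitions are concrete enough to check by machine. The sketch below is our own illustration (not from the post): it models an $\mathbb{F}_{1^n}$-vector space as a free $\mu_n$-set, encoding the vector $\epsilon_n^j b_i$ as the pair $(i,j)$, and confirms that the number of orbits is the dimension:

```python
from itertools import product

def f1n_space(n, d):
    """An F_{1^n}-vector space of dimension d: a free mu_n-set with
    N = d*n elements. The vector eps_n^j b_i is encoded as the pair
    (i, j) with 0 <= i < d (orbit index) and 0 <= j < n (root of unity)."""
    return set(product(range(d), range(n)))

def scalar_multiply(k, v, n):
    """Scalar multiplication by eps_n^k just shifts the exponent j mod n."""
    i, j = v
    return (i, (j + k) % n)

def orbits(V, n):
    """Orbits of the mu_n-action; choosing one representative per orbit
    gives a basis, so the number of orbits is the dimension d."""
    return {frozenset(scalar_multiply(k, v, n) for k in range(n)) for v in V}

V = f1n_space(n=3, d=4)
print(len(V), len(orbits(V, 3)))  # N = 12 elements, dimension d = 4
```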
So, vector spaces are free $\mu_n$-sets, and hence a linear map $V^{\bullet} \rightarrow W^{\bullet}$ is a $\mu_n$-map $V \rightarrow W$. In particular, a linear isomorphism of $V$, that is, an element of $GL_d(\mathbb{F}_{1^n})$, is a $\mu_n$-bijection sending any basis element $b_i \rightarrow \epsilon_n^{j(i)} b_{\sigma(i)}$ for a permutation $\sigma \in S_d$.

An $\mathbb{F}_{1^n}$-vector space $V^{\bullet}$ is a free $\mu_n$-set $V$ of $N=n \cdot d$ elements. The dimension $dim_{\mathbb{F}_{1^n}}(V^{\bullet}) = d$ and the general linear group $GL_d(\mathbb{F}_{1^n})$ is the wreath product of $S_d$ with $\mu_n^{\times d}$, the identification being as matrices with exactly one non-zero entry (an $n$-th root of unity) in every row and every column.

This may appear to be a rather sterile theory, so let us give an extremely important example, which will lead us to our second principle for developing absolute linear algebra. Let $q=p^k$ be a prime power and let $\mathbb{F}_q$ be the finite field with $q$ elements. Assume that $q \equiv 1 \pmod{n}$. It is well known that the group of units $\mathbb{F}_q^{\ast}$ is cyclic of order $q-1$, so by the assumption we can identify $\mu_n$ with a subgroup of $\mathbb{F}_q^{\ast}$.

Then $\mathbb{F}_q = (\mathbb{F}_q^{\ast})^{\bullet}$ is an $\mathbb{F}_{1^n}$-vector space of dimension $d=\frac{q-1}{n}$. In other words, $\mathbb{F}_q$ is an $\mathbb{F}_{1^n}$-algebra. But then, any ordinary $\mathbb{F}_q$-vector space of dimension $e$ becomes (via restriction of scalars) an $\mathbb{F}_{1^n}$-vector space of dimension $\frac{e(q-1)}{n}$.

Next time we will introduce more linear algebra definitions (including determinants, exact sequences, direct sums and tensor products) in the realm of the absolute fields $\mathbb{F}_{1^n}$, and remark that we have to alter the known definitions, as we can only use the scalar multiplication. To guide us, we have the second principle:
To guide us, we have the second principle : all traditional results of linear algebra over $\mathbb{F}_q$ must be recovered from the new definitions under the vector-space identification $\mathbb{F}_q = (\mathbb{F}_q^{\ast})^{\bullet} = \mathbb{F}_{1^n}$ when $n=q-1$. (to be continued) ## Comments Sun, 06/08/2008 - 05:45 LOL! I was hoping you would mention more secret texts, which I was unaware of ... and yes, a basic guide to motives would be a real treasure ... but then they are so vaguely defined one might think a physicist had invented them. ## Add new comment ### Filtered HTML • Web page addresses and e-mail addresses turn into links automatically. • Allowed HTML tags: <a> <em> <strong> <cite> <blockquote> <code> <ul> <ol> <li> <dl> <dt> <dd> • Lines and paragraphs break automatically. ### Plain text • No HTML tags allowed. • Web page addresses and e-mail addresses turn into links automatically. • Lines and paragraphs break automatically.
http://polylogblog.wordpress.com/2009/09/27/bite-sized-stream-ams-sketching/
# Bite-sized streams: AMS Sketching

September 27, 2009 in educational

My Biased Coin recently discussed a new paper extending some work I'd done a few years back. I'll briefly mention this work at the end of the post, but first here's another bite-sized result from [Alon, Matias, Szegedy] that is closely related.

Consider a numerical stream that defines a frequency vector $\mathbf{f}=(f_1, \ldots, f_n)$ where $f_i$ is the frequency of $i$ in the stream. Here's a simple sketch algorithm that allows you to approximate $F_2(\mathbf{f})=\sum_i f_i^2$. Let $\mathbf{x}=(x_1, \ldots, x_n)\in \{-1,1\}^n$ be a random vector where the $x_i$ are 4-wise independent and unbiased. Consider $r=\mathbf{x}\cdot \mathbf{f}$ and note that $r$ can be computed incrementally as the stream arrives (given that $\mathbf{x}$ is implicitly stored by the algorithm). By the weaker assumption of 2-wise independence, we observe that $E[x_i x_j]=0$ if $i\neq j$ and so:

$E[r^2]=\sum_{i,j\in [n]} f_i f_j E[x_i x_j]=F_2$

By the assumption of 4-wise independence, we also observe that $E[x_i x_j x_k x_l]=0$ unless $i=j, k=l$ or $i=k, j=l$ or $i=l, j=k$ and so:

$\mbox{Var}[r^2]= \sum_{i,j,k,l\in [n]} f_i f_j f_k f_l E[x_i x_j x_k x_l]-F_2^2=4\sum_{i<j} f_i^2f_j^2\leq 2 F_2^2$

Hence, if we repeat the process with $O(\epsilon^{-2} \log \delta^{-1})$ independent copies of $\mathbf{x}$, it's possible to show (via Chebyshev and Chernoff bounds) that by appropriately averaging the results, we get a value that is within a factor $(1+\epsilon)$ of $F_2$ with probability at least $1-\delta$. Note that it was lucky that we didn't need full coordinate independence, because that would have required $O(n)$ bits just to remember $\mathbf{x}$. It can be shown that remembering $O(\log n)$ bits is sufficient if we only need 4-wise independence.

BONUS! The recent work…

Having at least 4-wise independence seemed pretty important to getting a good bound on the variance of $r^2$.
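The basic estimator above can be sketched in a few lines. This is our own illustration: it draws the signs from a random degree-3 polynomial over a prime field (a standard way to get approximately 4-wise independence from $O(\log n)$ random bits; exact constructions take a little more care) and averages $r^2$ over independent copies:

```python
import random

P = 2**31 - 1  # a Mersenne prime; item ids are assumed to be below P

def fourwise_sign(coeffs, i):
    """+/-1 sign for item i from a random degree-3 polynomial over GF(P).
    Taking the parity of the residue is only approximately unbiased,
    which is fine for a sketch like this."""
    a, b, c, d = coeffs
    h = ((a * i + b) * i + c) * i + d
    return 1 if (h % P) % 2 == 0 else -1

def ams_estimate(stream, copies=30):
    """Average of r^2 over independent sketches; E[r^2] = F_2."""
    seeds = [[random.randrange(P) for _ in range(4)] for _ in range(copies)]
    counters = [0] * copies
    for item in stream:
        for t, s in enumerate(seeds):
            counters[t] += fourwise_sign(s, item)
    return sum(r * r for r in counters) / copies

random.seed(0)
stream = [1] * 30 + [2] * 40 + [3] * 50  # F_2 = 900 + 1600 + 2500 = 5000
print(ams_estimate(stream))  # unbiased estimate, close to 5000 on average
```

A full implementation would take the median of several such means to get the high-probability $(1+\epsilon)$ guarantee; plain averaging only controls the variance.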
However, it was observed in [Indyk, McGregor] that the following also works. First pick $\mathbf{y,z}\in \{-1,1\}^{\sqrt{n}}$ where $\mathbf{y,z}$ are independent and the coordinates of each are 4-wise independent. Then let the coordinate values of $\mathbf{x}$ be $\{y_i z_j:1\leq i,j\leq\sqrt{n}\}$. It's no longer the case that the coordinates of $\mathbf{x}$ are 4-wise independent, but it's still possible to show that $\mbox{Var}[r^2]=O(F_2^2)$, and this is good enough for our purposes. In follow-up work by [Braverman, Ostrovsky] and [Chung, Liu, Mitzenmacher], it was shown that you can push this idea further and define $\mathbf{x}$ based on $k$ random vectors of length $n^{1/k}$. The culmination of this work shows that the variance increases to at most $3^k F_2^2$ and the resultant algorithm uses $\tilde{O}(\epsilon^{-2} 3^k \log \delta^{-1})$ space.

At this point, you'd be excused for asking why we all cared about such a construction. The reason is that it's an important technical step in solving the following problem: given a stream of tuples from $[n]^k$, can we determine if the $k$ coordinates are independent? In particular, how "far" is the joint distribution from the product distribution defined by considering the frequency of each value in each coordinate separately? When "far" is measured in terms of the Euclidean distance, a neat solution is based on the above analysis. Check out the papers for the details. If you want to measure independence in terms of the variational distance, check out [Braverman, Ostrovsky]. In the case $k=2$, measuring independence in terms of the KL-divergence gives the mutual information. For this, see [Indyk, McGregor] again.
## 6 comments

The AMS sketch can be interpreted as Johnson-Lindenstrauss, or a +1/-1 version of it. Do these weaker-independence versions of AMS lead to any improvement in the JL space? How do your "distance-from-independent" results translate to that space?

polylogblog (September 28, 2009 at 11:28 am): AMS and JL are indeed related, but note that AMS doesn't immediately give JL because of some details that I glossed over when I referred to "appropriately averaging." This involves taking both medians and means. Had it just been means then the correspondence would have been stronger… AMS will map m points p1, …, pm in R^n to q1, …, qm in R^k where k=O(eps^{-2} log m) such that qi and qj can be used to estimate l2(pi-pj) up to a (1+eps) factor. However, it's not the case that it is sufficient to use a scaled l2(qi-qj) for the estimate. Take a look at http://www.sigmod.org/pods/proc01/online/p93.pdf for another result in this direction.

The relationship between "distance-from-independence" and metric embeddings is an interesting question that may be good to think about. I'd initially thought something "deeper" was going on when Piotr and I could get a 1+eps approx for l2 but only a O(log n) approx for l1. But now Braverman and Ostrovsky have a 1+eps approx for l1, so that aspect is less mysterious than I thought.

I indeed had Dimitris's paper in mind, but had forgotten that AMS do the median of means. So am I correct in concluding that l2(qi-qj) would work with truly random +1/-1 bits, and AMS magically allows us to use much more limited randomness, if we are willing to do something slightly more complicated? And your work allows us to further reduce the randomness needed. Are there any known results on how far one can hope to go, i.e.
lower bounds on the amount of randomness needed?

Yes, that's a fair summary, although I'd be wary about thinking of the recent work as further reducing the randomness, since the number of truly random bits is still going to be $\Omega(\log n)$ for each $\mathbf{x}$. In fact, since the number of $\mathbf{x}$'s that we need to use grows with the variance, we would probably end up using more random bits. As for a limit on how far randomness can be reduced: AMS show that any deterministic algorithm that approximates $F_2$ within a factor 1.1 requires $\Omega(n)$ bits of space. From this we can deduce that any randomized algorithm using $o(n/s)$ space requires $\log s$ random bits.
http://unapologetic.wordpress.com/2010/08/02/pulling-back-and-pushing-forward-structure/?like=1&source=post_flair&_wpnonce=9916d21231
# The Unapologetic Mathematician

## Pulling Back and Pushing Forward Structure

Remember that we defined measurable functions in terms of inverse images, as we did for topological spaces. So it should be no surprise that we move a lot of measurable structure around between spaces by "pulling back" or "pushing forward".

First of all, let's say that $(Y,\mathcal{T})$ is a measurable space and consider a function $f:X\to Y$. We can always make $f$ into a measurable function by pulling back the $\sigma$-ring $\mathcal{T}$. For each measurable subset $E\subseteq Y$ we define the preimage $f^{-1}(E)=\{x\in X\vert f(x)\in E\}$ as usual, and define the pullback $f^{-1}(\mathcal{T})$ to be the collection of subsets of $X$ of the form $f^{-1}(E)$ for $E\in\mathcal{T}$. Taking preimages commutes with arbitrary setwise unions and setwise differences, and $f^{-1}(\emptyset)=\emptyset$, and so $f^{-1}(\mathcal{T})$ is itself a $\sigma$-ring. Every point $x\in X$ gives us a point $f(x)\in Y$, and every point $f(x)\in Y$ is contained in some measurable set $E\in\mathcal{T}$. Thus $x$ is contained in the set $f^{-1}(E)\in f^{-1}(\mathcal{T})$, and so we find that $(X,f^{-1}(\mathcal{T}))$ is a measurable space. Clearly, $f^{-1}(\mathcal{T})$ contains the preimage of every measurable set $E\in\mathcal{T}$, and so $f:(X,f^{-1}(\mathcal{T}))\to(Y,\mathcal{T})$ is measurable.

Measures, on the other hand, go the other way. Say that $(X,\mathcal{S},\mu)$ is a measure space and $f:(X,\mathcal{S})\to(Y,\mathcal{T})$ is a measurable function between measurable spaces. Then we can define a new measure $\nu$ on $Y$ by "pushing forward" the measure $\mu$. Given a measurable set $E\subseteq Y$, we know that its preimage $f^{-1}(E)\subseteq X$ is also measurable, and so we can define $\nu(E)=\mu(f^{-1}(E))$. It should be clear that this satisfies the definition of a measure. We'll write $\nu=f(\mu)$ for this measure.
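On finite spaces everything here can be checked directly. The sketch below is our own illustration (not part of the post): it builds the pushforward $f(\mu)$ from a map of finite sets and verifies the change-of-variables identity $\int g\,d(f(\mu)) = \int (g\circ f)\,d\mu$ discussed next:

```python
# A finite measure space: a measure is a dict point -> nonnegative weight.
X = ["a", "b", "c", "d"]
mu = {"a": 1.0, "b": 2.0, "c": 0.5, "d": 1.5}

# A map f : X -> Y; on finite sets every subset is measurable.
f = {"a": 1, "b": 1, "c": 2, "d": 3}

def pushforward(mu, f):
    """nu(E) = mu(f^{-1}(E)), computed pointwise on Y."""
    nu = {}
    for x, w in mu.items():
        nu[f[x]] = nu.get(f[x], 0.0) + w
    return nu

def integral(g, m):
    """Integral of g against a finite measure m."""
    return sum(g(p) * w for p, w in m.items())

nu = pushforward(mu, f)
g = lambda y: y * y
lhs = integral(g, nu)                  # int g d(f(mu))
rhs = integral(lambda x: g(f[x]), mu)  # int (g o f) dmu
print(nu, lhs, rhs)  # the two integrals agree
```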
If $f:X\to Y$ is a measurable function, and if $\mu$ is a measure on $X$, then we have the equality

$\displaystyle\int g\,d(f(\mu))=\int(g\circ f)\,d\mu$

in the sense that if either integral exists, then the other one does too, and their values are equal. As usual, it is sufficient to prove this for the case of $g=\chi_E$ for a measurable set $E\subseteq Y$. Linear combinations will extend it to simple functions, the monotone convergence theorem extends it to non-negative measurable functions, and general functions can be decomposed into positive and negative parts.

Now, if $\chi_E$ is the characteristic function of $E$, then $\left[\chi_E\circ f\right](x)=1$ if $f(x)\in E$ (that is, if $x\in f^{-1}(E)$) and $0$ otherwise. That is, $\chi_E\circ f=\chi_{f^{-1}(E)}$. We can then calculate

$\displaystyle\int\chi_E\,d(f(\mu))=\left[f(\mu)\right](E)=\mu(f^{-1}(E))=\int\chi_{f^{-1}(E)}\,d\mu=\int(\chi_E\circ f)\,d\mu$

As a particular case, applying the previous result to the function $g\chi_E$ shows us that

$\displaystyle\begin{aligned}\int\limits_Eg(y)\,d\left[f(\mu)\right](y)&=\int\limits_Eg\,d(f(\mu))\\&=\int g\chi_E\,d(f(\mu))\\&=\int(g\circ f)(\chi_E\circ f)\,d\mu\\&=\int(g\circ f)\chi_{f^{-1}(E)}\,d\mu\\&=\int\limits_{f^{-1}(E)}(g\circ f)\,d\mu\\&=\int\limits_{f^{-1}(E)}g(f(x))\,d\mu(x)\end{aligned}$

We can go back and forth between either side of this equation by the formal substitution $y=f(x)$.

Finally, we can combine this with the Radon-Nikodym theorem. If $f:X\to Y$ is a measurable function from a measure space $(X,\mathcal{S},\mu)$ to a totally $\sigma$-finite measure space $(Y,\mathcal{T},\nu)$ such that the pushed-forward measure $f(\mu)$ is absolutely continuous with respect to $\nu$, then we can select a non-negative measurable function

$\displaystyle\phi=\frac{d(f(\mu))}{d\nu}:Y\to\mathbb{R}$

so that

$\displaystyle\int g(f(x))\,d\mu(x)=\int g(y)\phi(y)\,d\nu(y)$

again, in the sense that if one of these integrals exists then so does the other, and their values are equal. The function $\phi$ plays the role of the absolute value of the Jacobian determinant.

Posted by John Armstrong | Analysis, Measure Theory
http://mathoverflow.net/questions/53990/frobenius-elements-from-the-point-of-view-of-etale-fundamental-groups
## Frobenius elements from the point of view of étale fundamental groups

The goal of this question is to find a "geometric" definition of Frobenius element in $\text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. Here are two definitions that don't work, but that should help explain what I mean.

Fix an algebraic closure of $\mathbb{Q}$ and, for a prime $p$, fix an algebraic closure of $\mathbb{F}_p$. If there were a canonical morphism $f : \mathbb{Q} \to \mathbb{F}_p$, then the induced map $f_{\ast} : \pi_1(\text{Spec } \mathbb{F}_p) \to \pi_1(\text{Spec } \mathbb{Q})$ of étale fundamental groups (basepoints the algebraic closures above, suppressed) would be an obvious way to define Frobenius elements: we would just take the image of a generator of $\pi_1(\text{Spec } \mathbb{F}_p) \simeq \hat{\mathbb{Z}}$. (The general motivation being: "gee, if I had a category $C$ and a functor, let's call it $\pi_1 : C \to \text{Grp}$, wouldn't it be nice if I could find an object, let's call it $S^1$, with $\pi_1(S^1) \simeq \mathbb{Z}$, so I could probe elements of $\pi_1(X)$ using morphisms $S^1 \to X$," and it seems for schemes that $\pi_1(S^1) \simeq \hat{\mathbb{Z}}$ is the next best thing.) But of course this is nonsense since no such $f$ exists.

The next obvious thing (from my perspective, being someone who knows nothing about all this étale stuff) is to look at the canonical morphism $f : \mathbb{Z} \to \mathbb{F}_p$, which induces a map $f_{\ast} : \pi_1(\text{Spec } \mathbb{F}_p) \to \pi_1(\text{Spec } \mathbb{Z})$, and then to look at the canonical morphism $g : \mathbb{Z} \to \mathbb{Q}$ and the induced map $g_{\ast} : \pi_1(\text{Spec } \mathbb{Q}) \to \pi_1(\text{Spec } \mathbb{Z})$.
Perhaps the Frobenius elements at $p$ are nothing more than the inverse image under $g_{\ast}$ of the image of a generator of $\pi_1(\text{Spec } \mathbb{F}_p)$ under $f_{\ast}$ (up to conjugacy to account for changes in basepoint). But this is also nonsense, since $\pi_1(\text{Spec } \mathbb{Z})$ is trivial.

So what is the correct version of this construction? Here's my guess: instead of using $\mathbb{Z}$, we have to use the localization $R = \mathbb{Z}_{(p)}$. As before there are morphisms $f : R \to \mathbb{F}_p, g : R \to \mathbb{Q}$ inducing maps $f_{\ast} : \pi_1(\text{Spec } \mathbb{F}_p) \to \pi_1(\text{Spec } R)$ and $g_{\ast} : \pi_1(\text{Spec } \mathbb{Q}) \to \pi_1(\text{Spec } R)$, and maybe now something like the statement "the Frobenius elements in $\pi_1(\text{Spec } \mathbb{Q})$ at $p$ are the inverse image under $g_{\ast}$ of the image of a generator of $\hat{\mathbb{Z}}$ under $f_{\ast}$ (up to conjugacy)" is finally true. Is it? And what does $g_{\ast}$ look like?

Qiaochu: is it helpful to remark that Gal(Q-bar/Q) doesn't have a Frob_p because Q-bar is ramified at p? Usually one looks at the quotient Gal(Q_S/Q), the maximal extension unramified outside some set of primes S, and then (and only then) you have conj classes Frob_p for all p not in S. – Kevin Buzzard Feb 2 2011 at 0:09

@Kevin: yes, that is more or less what I'm getting from the other answers. I have read a few expositions which vaguely say things like "well, there are Frobenius elements - only they're not elements, they're conjugacy classes - only they're not conjugacy classes, they're unions of conjugacy classes - but their trace in a Galois representation still makes sense!" and trying to figure out what kind of construction could account for this, and it seems clear-ish now. – Qiaochu Yuan Feb 2 2011 at 0:35

## 3 Answers

Your guess is right.
$\pi_1(Spec(R))$ is the automorphism group of the maximal extension $\mathbb{Q}^{p\textrm{-}ur} \subset \overline{\mathbb{Q}}$ unramified at $p$. (This is a special case of SGA 1 Exp. V Prop 8.2 about the fundamental group of normal schemes.) The morphism $g_*$ is simply the restriction map $Gal(\overline{\mathbb{Q}}/\mathbb{Q}) \to Gal(\mathbb{Q}^{p\textrm{-}ur}/\mathbb{Q})$.

I think you must write $\mathbf{Q}$ as $\mathrm{colim}_n \mathbf{Z}[\frac{1}{n}]$.

- I don't understand why this was down-voted. This answer is correct, if somewhat abbreviated: if $p \nmid n$ then there is a map $\mathbb Z[1/n] \to \mathbb F_p$, as well as a map $\mathbb Z[1/n] \to \mathbb Q$. These induce maps $G_{\mathbb F_p} \to \pi_1(\mathbb Z[1/n])$ and $G_{\mathbb Q} \to \pi_1(\mathbb Z[1/n])$ (the former being injective and the latter surjective), and these are precisely the maps giving rise to the Frobenius elements. – Emerton Feb 2 2011 at 2:02

Your picture is right. The one thing I'll say (and this is somewhat tangential to your question) is that, for me, the analogue of $\pi_1(S^1)$ viewed as $\pi_1(\mathbb{C}\setminus\{0\})$ is not quite $\pi_1(\operatorname{Spec}\mathbb{F}_p)$, but rather the tame fundamental group of $\operatorname{Spec}\mathbb{Z}_p^{nr}$, where $\mathbb{Z}_p^{nr}$ is the maximal unramified extension of $\mathbb{Z}_p$ (i.e. its universal cover: it's simply connected and therefore analogous to a disk around the origin in $\mathbb{C}$). In other words, $\pi_1(S^1)$ classifies $n$-fold covers of the circle (or of a punctured disk around the origin). Similarly, the tame fundamental group classifies $n$-fold covers of the 'punctured disk' $\operatorname{Spec}\mathbb{Z}_p^{nr}\setminus\{p\}$. The only hitch is that $n$ has to be prime to $p$ in this setting.
This part of the Galois group is in the "geometric" direction (nothing's happening to the residue field of the point), while the part where the Frobenii live is in the arithmetic direction (all about extensions of $\mathbb{F}_p$).
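Putting the question and the accepted picture together, the construction can be summarized in a short display (this is my paraphrase of the notation above, not a quote from any answer):

```latex
% Frobenius via the etale fundamental group of Spec Z_(p):
% f : Z_(p) -> F_p  and  g : Z_(p) -> Q  induce
\begin{align*}
f_{\ast} &: \pi_1(\operatorname{Spec} \mathbb{F}_p) \simeq \hat{\mathbb{Z}}
   \longrightarrow \pi_1(\operatorname{Spec} \mathbb{Z}_{(p)})
   \simeq \operatorname{Gal}(\mathbb{Q}^{p\text{-}\mathrm{ur}}/\mathbb{Q}), \\
g_{\ast} &: \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})
   \twoheadrightarrow \operatorname{Gal}(\mathbb{Q}^{p\text{-}\mathrm{ur}}/\mathbb{Q}),
\end{align*}
% and the "Frobenius elements" at p form the preimage
% g_*^{-1}( f_*(Frob_p) ), a union of conjugacy classes,
% since g_* is only defined up to conjugation (choice of basepoint).
```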
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.957116425037384, "perplexity_flag": "head"}
http://mathoverflow.net/questions/37825/what-are-jacob-luries-key-insights
## What are Jacob Lurie’s key insights?

This question is inspired by this Tim Gowers blogpost. I have some familiarity with the work of Jacob Lurie, which contains ideas which are simply astounding; but what I don't understand is which key insight allowed him to begin his programme and achieve things which nobody had been able to achieve before. People had looked at $\infty$-categories for years, and the idea of $(\infty,n)$-categories is not in itself new. What was the key new idea which started "Higher Topos Theory", the proof of the Baez-Dolan cobordism hypothesis, "Derived Algebraic Geometry", etc.?

- There seems to be an implicit assumption in the above paragraph that Jacob had only one key idea, and I strongly disagree with that. I'd rather say that he has a rare combination of technical mastery and drive to view ideas in a broad context. – S. Carnahan♦ Sep 6 2010 at 0:28
- I don't believe in superheroes in math- I believe in super ideas. I don't believe that Jacob Lurie was just trying the same thing everyone else was trying, and it happened to work for him because of greater intelligence or technical ability. There must have been key new ideas he brought to the table, and if Gromov is to be believed about the number of great ideas a mathematician has in life, then the number of these essentially new ideas is small. Most important, there's insight to be gained in knowing what these ideas are, because they led to breakthrough where so many had been stuck. – Daniel Moskovich Sep 6 2010 at 0:53
- Lots of math is done using a lot of good ideas put together, rather than a superidea. Believing in superideas seems to me just as naive as believing in superheroes. Certainly someone who works more efficiently or thinks more clearly would be able to get more good work done without needing to posit some special qualities of genius. What was Euler's superidea?
(None of the above is meant to refer to Jacob's work specifically.) – Noah Snyder Sep 6 2010 at 3:45
- Can you, please, change the title to a more neutral form, perhaps along the lines of "What are some key ideas behind higher topos theory"? Strange as this may sound, I think that using a personal name in the title is responsible for all the talk of superheroes and other silliness, which distracts from answering the question mathematically. – Victor Protsak Sep 6 2010 at 9:31
- As an example of the kind of symbiosis that the internet makes possible so nicely, I shall now link from my blog post to this discussion. – gowers Sep 6 2010 at 10:39

## 6 Answers

My answer would be that his insight was firstly that it pays to take what Grothendieck said in his various long manuscripts extremely seriously, and then to devote a very large amount of thought, time and effort. Many of the methods in HTT have been available from the 1980s, and the importance of quasi-categories as a way to boost higher dimensional category theory was obvious to Boardman and Vogt even earlier. Lurie then put in an immense amount of work writing down in detail what the insights from that period looked like from a modern viewpoint. It worked as the progress since that time had provided tools ripe for making rapid progress on several linked problems. His work since HTT continues the momentum that he has built up.

As far as 'abstracter than thou' goes, I believe that Grothendieck's ideas in Pursuing Stacks were not particularly abstract and Lurie's continuation of that trend is not either. Once you see that there are some good CONCRETE models for $\infty$-categories, the geometry involved gets quite concrete as well. Simplicial sets are not particularly abstract things, although they can be a bit scary when you meet them first. Quasi-categories are then a simple-ish generalisation of categories, but where you can use both categorical insights and homotopy insights.
That builds a good intuition about infinity categories... now bring in modern algebraic topology with spectra, etc. becoming available.

- This is a great answer! – Daniel Moskovich Sep 6 2010 at 12:30
- Just a point of clarification: I am rather certain that he had not looked at any of Grothendieck's long manuscripts until well after his work on derived algebraic geometry had borne some fruit (in particular, after his infinity topoi preprint and his Ph.D.). This is of course assuming that by "long manuscripts" you did not mean published works like SGA. What you said in your answer does not strictly contradict what I'm saying, but I think it could be a source of confusion. – S. Carnahan♦ Sep 6 2010 at 15:59
- That seems highly likely. The background ideas that, from about 1984 onwards, benefitted from some of the intuitions of, for instance, 'Pursuing Stacks' (PS) rarely used explicitly the constructions of that manuscript. They rather used the general context of those documents. AG sent PS to us in Bangor and Ronnie Brown helped in its distribution by providing (with explicit permission from AG) for copies to be sent to those people who requested it. It was discussed in seminars in various places with some very good mathematicians exploring some of the ideas from their own viewpoints. – Tim Porter Sep 7 2010 at 5:59
- (continued) This also happened with others of the lengthy manuscripts he wrote at about that time, although without Bangor's involvement (so I am not sure of the mechanisms). AG's esquisse d'une programme is another document that is very influential. This process of course is not restricted to this context but is one of the many ways maths grows through its transmission. (If you do not know the story of our involvement in PS you may like to look at Ronnie's webpage: bangor.ac.uk/~mas010/pstacks.htm where he tells the story from his own point of view.)
– Tim Porter Sep 7 2010 at 6:07

I think one of the key insights underlying derived algebraic geometry and Lurie's treatment of elliptic cohomology is taking some ideas of Grothendieck really seriously. Two manifestations:

1) One of the points of the scheme theory initiated by Grothendieck is the following: if one takes intersection of two varieties just on the point-set level, one loses information. One has to add the possibility of nilpotents (somewhat higher information) to preserve the information of intersection multiplicities and get the "right" notion of a fiber product. Now one of the points of derived algebraic geometry (as explained very lucidly in the introduction to DAG V) is that for homological purposes this is not really the right fiber product - you need to take some kind of homotopy fiber product. This is because one still loses information by taking quotients - one should add isomorphisms instead and view it on a categorical level. Thus, you can take a meaningful intersection of a point with itself, for example. This is perhaps an instance where the homological revolution, which went through pure mathematics last century, benefits from a second wave, a homotopical revolution - if I am allowed to overstate this a bit.

2) Another insight of Grothendieck and his school was how important it is to represent functors in algebraic geometry - regardless of what you want at the end. [As Mazur reports, Hendrik Lenstra was once sure that he did want to solve Diophantine equations and did not want to represent functors - and later he was amused that he represented functors to solve Diophantine equations.]
And this is Lurie's approach to elliptic cohomology and tmf: Hopkins and Miller showed the existence of a certain sheaf of $E_\infty$-ring spectra on the moduli stack of elliptic curves. Lurie showed that this represents a derived moduli problem (of oriented derived elliptic curves).

Also his solution of the cobordism hypothesis has a certain flavor of Grothendieck: you have to put things in a quite general framework to see the essence. This philosophy also shines quite clearly through his DAG, I think.

Besides, I do not think there is a single key insight in Higher Topos Theory besides the feeling that infinity-categories are important and that you can find analogues to most of classical category theory in quasi-categories. Then there are lots of little (but every single one amazing) insights into how this transformation from classical to infinity-category theory works.

- Of course one suspects that selectivity also has a role to play, in dealing with "big abstraction". "Abstracter than thou" is not in itself a potent heuristic, despite the success Grothendieck had with it: these days I suppose one also wonders what is the catalyst when big abstract machines are cranked up. There is still something arcane about that, it seems. – Charles Matthews Sep 6 2010 at 10:58
- Rather than "abstracter than thou", Jacob has, I think, a specific fixed abstract framework through whose lens he views mathematics. I don't understand, though, which specific quality of this particular abstract framework makes it better than all others - i.e. where exactly the conceptual breakthrough occurred. This response partially answers. – Daniel Moskovich Sep 6 2010 at 11:06

One of the things that Jacob Lurie finds important is to "think invariantly": Do not use model-dependent definitions. Do not prove model-dependent results. Here's an example to illustrate what I mean: "The $\infty$-category of spectra is the free stable $\infty$-category with colimits generated by a single object".
That's a nice model-independent definition of the $\infty$-category of spectra.

- The first time I saw oo-category, I was super-confused. Here I was thinking somebody meant "egg-category" instead of "infinity-category"! – drvitek Sep 5 2010 at 23:39
- There's a unicode character ∞ that should work well enough on most browsers, so there's no particular need to resort to ASCII art to name ∞-categories. – David Eppstein Sep 5 2010 at 23:53
- In particular, in html, you can typeset it as `&infin;`. – Theo Johnson-Freyd Sep 6 2010 at 0:31
- And note that for Lurie oo-category means (oo,1)-category... – David Roberts Sep 6 2010 at 0:32

My understanding (which isn't very deep and so someone should please correct me if I'm wrong) is that one key idea in the Hopkins-Lurie proof of the cobordism hypothesis was to generalize the cobordism hypothesis to allow more general types of cobordism categories. This stronger result then becomes easier to prove because they could do a double-induction argument that moves between the (stably n-)framed case (which is the usual cobordism hypothesis) and the oriented case. Furthermore, it means that the final results they get are stronger, because you get a classification not just of framed TFTs, but of TFTs with more general kinds of structures on the bordism categories.

I think one of the insights leading to his successes, which is of course not unique to him, is that when doing higher category theory it is useful to add invertible higher cells going all the way up to the top before you try to add noninvertible ones anywhere. This is along the same lines as Noah's answer: the cobordism hypothesis, which was originally stated in terms of n-categories, becomes easier once you generalize it to a stronger statement about (∞,n)-categories, since then induction becomes easier/possible.

Daniel,
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9618592262268066, "perplexity_flag": "middle"}
http://m6c.org/w/tag/reverse-mathematics/
# Computable roots of computable functions

Posted on February 14, 2012

Here are several interesting results from computable analysis:

Theorem 1. If $f$ is a computable function from $\mathbb{R}$ to $\mathbb{R}$ and $\alpha$ is an isolated root of $f$, then $\alpha$ is computable.

Corollary 2. If $p(x)$ is a polynomial over $\mathbb{R}$ with computable coefficients, then every root of $p(x)$ is computable.

Theorem 3. There is an effective closed subset of $\mathbb{R}$ which is nonempty (in fact, uncountable) but which has no computable point.

Theorem 4. There is a computable function from $\mathbb{R}$ to $\mathbb{R}$ which has uncountably many roots but no computable roots.

Continue reading →

Posted in Results worth knowing

# The logic of Reverse Mathematics

Posted on December 20, 2011

This post is about a research idea I have been thinking about which is quite different from my usual research. It’s an example of a project with an “old fashioned” feel to it, as if it could have been studied 50 years ago. It’s almost a toy problem, so I haven’t spent too long digging through references yet. For all I know it was solved 50 years ago.

Continue reading →

Posted in Musings, Research

### About

• Site maintained by Carl Mummert
• Part of Boole's Rings
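Returning to Theorem 1 and Corollary 2 above: the idea that an isolated root is computable can be illustrated by searching for it with exact rational arithmetic, so every sign test is exact and each decimal digit of the root can be produced on demand. The sketch below is my own illustration in Python (the function name and interface are mine, not from the post); it brackets the positive root of $x^2-2$:

```python
from fractions import Fraction

def isolate_root(f, lo, hi, digits):
    """Bisection with exact rational endpoints: given f(lo) < 0 < f(hi),
    shrink the interval until it pins the root down to `digits` decimals."""
    lo, hi = Fraction(lo), Fraction(hi)
    while hi - lo > Fraction(1, 10 ** digits):
        mid = (lo + hi) / 2
        if f(mid) == 0:          # exact arithmetic, so this test is reliable
            return mid, mid
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo, hi

# x^2 - 2 has computable coefficients, so its positive root sqrt(2) is
# computable (Corollary 2); here we extract it to 6 decimal places.
lo, hi = isolate_root(lambda x: x * x - 2, 1, 2, 6)
print(float(lo))  # ≈ 1.41421...
```

Using `Fraction` rather than floats matters for the analogy: there is no rounding error, so the algorithm really does compute arbitrarily good rational bounds on the root, which is exactly what "computable real" means.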
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9549940824508667, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/207910/prove-convergence-of-the-sequence-z-1z-2-cdots-z-n-n-of-cesaro-means
# Prove convergence of the sequence $(z_1+z_2+\cdots + z_n)/n$ of Cesaro means

Prove that if $\lim_{n \to \infty}z_{n}=A$ then: $$\lim_{n \to \infty}\frac{z_{1}+z_{2}+\cdots + z_{n}}{n}=A$$

I was thinking of splitting it as: $$(z_{1}+z_{2}+\cdots+z_{N-1})+(z_{N}+z_{N+1}+\cdots+z_{n})$$ where $N$ is a value of $n$ for which $|A-z_{n}|<\epsilon$, then taking the limit of this sum divided by $n$, and noting that the second sum is as close as you wish to $nA$ while the first is as close as you wish to $0$. Not sure if this helps....

- Possibly easier to first show for $A=0$ – Thomas Andrews Oct 5 '12 at 19:19
- @Thomas Andrews I edited the question, adding my idea. Could you tell me what you think, please? – Mykolas Oct 5 '12 at 19:29
- Changed the title. There is no series here. – Did Oct 5 '12 at 20:54
- Maybe this question was asked here before (I did not search), but at least I remember that we had questions such that this can be obtained as a consequence of the results from those questions, see, e.g., If $\sigma_n=\frac{s_1+s_2+\cdots+s_n}{n}$ then $\limsup\sigma_n \leq \limsup s_n$ and limit of quotient of two series. – Martin Sleziak Oct 6 '12 at 9:28

## 1 Answer

It seems like a homework problem, hence I'll just give a hint: $$\frac{z_1+z_2+\cdots +z_n}{n}-A=\frac {(z_1-A)+(z_2-A)+\cdots +(z_n-A)}{n}$$

Now use the definition of limit: for every $\epsilon > 0$ there exists $N_0 \in \mathbb N$ such that $|z_m-A| < \epsilon \ \forall m \geq N_0$. Also remember the triangle inequality: $|a_1+a_2+\cdots +a_n| \leq |a_1| + |a_2| +\cdots +|a_n|$

Can you find proper $a_i$ in terms of, say, the $z_i$'s?

- Thank you for help. It's not homework. Would $a_{i}$ be $\frac{z_{i}}{n}$, and since for $n \to\infty$ all sums go to $0$, it would be proved? @TheJoker – Mykolas Oct 5 '12 at 19:40
- and would my first idea be wrong? – Mykolas Oct 5 '12 at 19:44

You can take $a_i=\frac{z_i-A}{n}$. Now use the fact that the limit of $z_i$ is $A$.
About your approach: it is true, just write it rigorously :-). Also there is no difference between mine and your approach, as your next step would be the same. – TheJoker Oct 5 '12 at 19:51
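A quick numerical sanity check of the statement (my own illustration, not part of the proof): take $z_n = A + 1/n$, which converges to $A$, and watch the Cesaro means follow it — more slowly, as the splitting argument above suggests.

```python
# z_n = A + 1/n converges to A, so by the theorem the Cesaro means
# (z_1 + ... + z_n)/n must converge to A as well -- just more slowly.
A = 5.0
N = 100_000
z = [A + 1.0 / n for n in range(1, N + 1)]

running_sum = 0.0
cesaro = []
for n, zn in enumerate(z, start=1):
    running_sum += zn
    cesaro.append(running_sum / n)

# For this z, the n-th Cesaro mean equals A + H_n/n (H_n the n-th
# harmonic number); H_n grows like log n, so the error H_n/n -> 0.
print(abs(z[-1] - A), abs(cesaro[-1] - A))
```

The printed errors show the Cesaro mean lagging behind the sequence itself, which is exactly the behavior the $\epsilon$–$N_0$ proof controls: the early terms contribute an error that dies off only like $1/n$.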
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9556264877319336, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/207683-help-bisection-method.html
# Thread:

1. ## HELP- Bisection Method

Using the bisection method, find a solution to x^2-x-2=0 to one decimal place. Use starting values 1 and 2 and show all necessary iterations.

2. ## Re: HELP- Bisection Method

Let f(x) = x^2-x-2

Start with the value 1. f(1) = 1^2 - 1 - 2 = -2 < 0

Next, we evaluate the value 2. f(2) = 2^2 - 2 - 2 = 0

Hence we know that there exists a y in the interval [1,2] such that f(y) = 0. Let's test more points, like 1.5: f(1.5) = (1.5)^2 - 1.5 - 2 = 2.25 - 1.5 - 2 < 0

Hence we know that there exists a y in the interval [1.5,2] such that f(y) = 0. You can continue this process iteratively to get your solution.

EDIT: omg, I made a terrible arithmetic error! sorry!

3. ## Re: HELP- Bisection Method

How do i know when i have my answer ? O.o

4. ## Re: HELP- Bisection Method

This is odd. You are specifically required to use the "bisection method" but you have no idea what that is? Very strange!

The mathematical idea is that if a continuous function, f, is positive for one value of x, x= a, say, and negative for another, x= b, say, then there must be a value of x between a and b where f(x)= 0. Since it could be any point between a and b, the simplest thing to do is to choose the midpoint: (a+ b)/2. When you evaluate f((a+ b)/2), there are three possibilities:

(1) f((a+ b)/2)= 0, in which case you are done! You have found a solution.

(2) f((a+ b)/2)> 0. Since f(b) was negative, we now know a solution is between (a+ b)/2 and b, and you can now find the midpoint of those.

(3) f((a+ b)/2)< 0. Since f(a) was positive, we now know a solution is between a and (a+ b)/2, and again you bisect that interval.

For example, suppose we want to solve $f(x)= x^2- 2$ and we choose to start with a= 1 and b= 2. That's a good choice because f(1)= 1- 2= -1< 0 and f(2)= 4- 2= 2> 0. One is positive and the other negative so we know there must be a solution in that interval. The midpoint is (1+ 2)/2= 3/2. f(3/2)= 9/4- 2= 9/4- 8/4= 1/4> 0. That's positive so there must be a root between 3/2 and 1. The midpoint of (3/2, 1) is (1+ 3/2)/2= 5/4. $f(5/4)= 25/16- 2= 25/16- 32/16= -7/16$.
That's negative so there must be a root between 5/4 and 3/2. Repeat until you get two consecutive points closer together than the acceptable error.

I used this example, rather than the problem you give, $f(x)= x^2- x- 2= 0$ with starting values x= 1 and x= 2, because a rather peculiar thing happens at x= 2! What is f(2)?

5. ## Re: HELP- Bisection Method

Originally Posted by NoIdea123
How do i know when i have my answer ? O.o

Do you understand what a "solution to an equation" means? 2 is a solution (NOT O.o- those are letters!) because it satisfies the equation: 2^2- 2- 2= 4- 4= 0. I seriously wonder if you have copied the problem correctly. $x^2- x- 2= (x- 2)(x+ 1)= 0$ so x= 2 and x= -1. There is really no point in using the bisection method for that. Not even for "practice" if one of the suggested starting points is the only solution in that interval.

6. ## Re: HELP- Bisection Method

Sorry, correction. x^3-x-2=0

7. ## Re: HELP- Bisection Method

The process is essentially the same. You need to have a basic idea of where a root is. To do this, evaluate the function at two different points and show that there is a change in sign. To go from negative to positive, or vice versa, the function needs to go through 0 at some point in between. Then continuously halve the interval to make two new intervals, evaluate the value of the midpoint which will then give you the endpoints of both intervals, see which of those intervals has a sign change. You can then disregard the other half, and repeat the process. Stop when you have enough digits of accuracy.
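The procedure described in post 7 can be sketched in a few lines of Python (the thread itself contains no code; the function names and the stopping tolerance below are my own choices):

```python
def bisect(f, a, b, tol):
    """Halve [a, b] until it is shorter than tol, always keeping the half
    that still has a sign change. Assumes f(a) and f(b) have opposite signs."""
    assert f(a) * f(b) < 0
    while b - a > tol:
        mid = (a + b) / 2
        if f(mid) == 0:
            return mid                     # landed exactly on a root
        if f(a) * f(mid) < 0:
            b = mid                        # sign change in the left half
        else:
            a = mid                        # sign change in the right half
    return (a + b) / 2

f = lambda x: x**3 - x - 2
root = bisect(f, 1, 2, 0.05)   # f(1) = -2 < 0 and f(2) = 4 > 0 bracket a root
print(round(root, 1))  # 1.5 to one decimal place
```

The tolerance 0.05 guarantees the final interval determines the first decimal place, which answers the original question "how do I know when I have my answer?": stop once the bracketing interval is shorter than the accuracy you were asked for.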
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9040878415107727, "perplexity_flag": "middle"}
http://ocw.mit.edu/courses/mathematics/18-s997-introduction-to-matlab-programming-fall-2011/root-finding/the-secant-method/
# The Secant Method

While Newton's method is fast, it has a big downside: you need to know the derivative of \(f\) in order to use it. In many "real-life" applications, this can be a show-stopper as the functional form of the derivative is not known. A natural way to resolve this would be to estimate the derivative using \begin{equation} \label{eq:dervative:estimate} f'(x)\approx\frac{f(x+\epsilon)-f(x)}{\epsilon} \end{equation} for \(\epsilon\ll1\).

The secant method uses the previous iteration to do something similar. It approximates the derivative using the previous approximation. As a result it converges a little slower (than Newton's method) to the solution: \begin{equation} \label{eq:3} x_{n+1}=x_n-f(x_n) \frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})}. \end{equation}

Since we need to remember both the current approximation and the previous one, we can no longer have such a simple code as that for Newton's method. At this point you are probably asking yourself why we are not saving our code into a file, and it is exactly what we will now learn how to do.

## Coding in a File

Instead of writing all your commands at the command prompt, you can type a list of commands in a file, save it and then have MATLAB® "execute" all of the commands as if you had typed them into the command prompt. This is useful when you have more than very few lines to write because inevitably you are bound to make a small mistake every time you write more than 5 lines of code. By putting the commands in a file you can correct your mistakes without introducing new ones (hopefully). It also makes it possible to "debug" your code, something we will learn later. For guided practice and further exploration of how to debug, watch Video Lecture 6: Debugging.
MATLAB files have names that end with `.m`, and the name itself must comprise only letters and numbers with no spaces. The first character must be a letter, not a number.

Open a new file by clicking on the white new-file icon in the top left of the window, or select from the menu File\(\rightarrow\)New\(\rightarrow\)Script. Copy the Newton method code for \(\tanh(x)=x/3\) into it. Save it and give it a name (NewtonTanh.m for example). Now on the command prompt you "run" the file by typing the name (without the `.m`) and pressing Enter. A few points to look out for:

• You can store your files wherever you want, but they have to be in MATLAB's "search path" (or in the current directory). To add the directory you want to the path select File\(\rightarrow\)Set path… select "Add Folder", select the folder you want, click "OK" then "Save". To check if your file is in the path you can type `which NewtonTanh` and the result should be the path to your file.
• If you choose a file-name that is already the name of a MATLAB command, you will effectively "hide" that command as MATLAB will use your file instead. Thus, before using a nice name like `sum`, or `find`, or `exp`, check: use `which` to see if it is already defined.
• The same warning (as the previous item) applies to variable names: a variable will "hide" any file or command with the same name.
• If you get strange errors when you try to run your file, make sure that there are no spaces or other non-letters in your filename, and that the file is in the path.
• Remember that after you make changes to your file, you need to save it so that MATLAB will be aware of the changes you made.

For guided practice and further exploration of how to use MATLAB files, watch Video Lecture 3: Using Files.

Exercise 7. Save the file as SecantTanh.m and modify the code so that it implements the Secant Method. You should increase the number of iterations because the Secant Method doesn't converge as quickly as Newton's method.
Notice that here it is not enough to use `x` like in Newton's method, since you also need to remember the previous approximation \(x_{n-1}\). Hint: Use another variable (perhaps called `PrevX`).

## Convergence

Different root-finding algorithms are compared by the speed at which the approximate solution converges (i.e., gets closer) to the true solution. An iterative method \(x_{n+1}=g(x_n)\) is defined as having \(p\)-th order convergence if for a sequence \(x_n\) where \(\lim_{n\rightarrow\infty}x_n=\alpha\) exists then \begin{equation} \label{eq:convergence:order} \lim_{n\rightarrow\infty}\frac{|{x_{n+1}-\alpha}|}{|{x_n-\alpha}|^p} = L \ne 0. \end{equation}

Newton's method has (generally) second-order convergence, so in Eq. (3) we would have \(p=2\), but it converges so quickly that it can be difficult to see the convergence (there are not enough terms in the sequence). The secant method has an order of convergence between 1 and 2. To discover it we need to modify the code so that it remembers all the approximations. The following code is Newton's method but it remembers all the iterations in the list `x`. We use `x(1)` for \(x_1\) and similarly `x(n)` for \(x_n\):

````
x(1)=2; % This is our first guess, put into the first element of x
for n=1:5 % we will iterate 5 times using n to indicate the current
          % valid approximation
    x(n+1)=x(n)-(tanh(x(n))-x(n)/3)/(sech(x(n))^2-1/3); %here we
          % calculate the next approximation and
          % put the result into the next position
          % in x.
end
x % sole purpose of this line is to show the values in x.
````

The semicolon (`;`) at the end of line 4 tells MATLAB not to display the value of `x` after the assignment (also in line 1). Without the lonely `x` on line 9 the code would calculate `x`, but not show us anything.
After running this code, `x` holds the 6 approximations (including our initial guess) with the last one being the most accurate approximation we have:

````x = 2.0000 3.1320 2.9853 2.9847 2.9847 2.9847````

Notice that there is a small but non-zero distance between x(5) and x(6):

````>> x(6)-x(5) ans = 4.4409e-16````

This distance is as small as we can hope it to be in this case. We can try to verify that we have second order convergence by calculating the sequence defined in Eq. (3). To do that we need to learn more about different options for accessing the elements of a list like \(x\). We have already seen how to access a specific element; for example to access the 3rd element we write `x(3)`. MATLAB can access a sublist by giving it a list of indexes instead of a single number:

````>> x([1 2 3]) ans = 2.0000 3.1320 2.9853````

We can use the colon notation here too:

````x(2:4) ans = 3.1320 2.9853 2.9847````

Another thing we can do is perform element-wise operations on all the items in the list at once. In the lines of code below, the commands preceding the plot command are executed to help you understand how the plot is generated:

````>> x(1:3).^2 ans = 4.0000 9.8095 8.9118 >> x(1:3)*2 ans = 4.0000 6.2640 5.9705 >> x(1:3)-x(6) ans = -0.9847 0.1473 0.0006 >> x(2:4)./(x(1:3).^2) ans = 0.7830 0.3043 0.3349 >> plot(log(abs(x(1:end-2)-x(end))),log(abs(x(2:end-1)-x(end))),'.')````

The last line makes the following plot (except for the green line, which is \(y=2x\)):

MATLAB can calculate roots through Newton's method, and verification of convergence is graphed.

The main point here is that the points are more or less on the line y=2x, which makes sense: Taking the logarithm of the sequence in (3) leads to \begin{equation} \label{eq:convergence:plots} \log|{x_{n+1}-\alpha}| \approx \log L + p\log|{x_{n}-\alpha}| \end{equation} for \(n\gg1\), which means that the points \((\log|{x_{n}-\alpha}|, \log|{x_{n+1}-\alpha}|)\) will converge to a line with slope \(p\).
The periods in front of `*`, `/`, and `^` are needed (as in the code above) when the operation could have a linear-algebra meaning but what is wanted is an element-by-element operation. Since matrices can be multiplied and divided by each other in a way that is not element-by-element, we use the point-wise version of these operators when we are not interested in the linear-algebra operation.

Exercise 8. Internalize the differences between the point-wise and regular versions of the operators by examining the results of the following expressions, which use the variables `A=[1 2; 3 4]`, `B=[1 0; 0 2]`, and `C=[3;4]`. Note: some commands may result in an error message. Understand what the error is and why it was given.

- `A*B` vs. `A.*B` vs. `B.*A` vs. `B*A`
- `2*A` vs. `2.*A`
- `A^2` vs. `A*A` vs. `A.*A` vs. `A.^2` vs. `2.^A` vs. `A^A` vs. `2^A`. The last one here might be difficult to understand: it is matrix exponentiation.
- `A/B` vs. `A\B` vs. `A./B` vs. `A.\B`
- `A*C` vs. `A*C'` vs. `C*A` vs. `C'*A`
- `A\C` vs. `A\C'` vs. `C/A` vs. `C'/A`

Homework 2. Modify your secant method code so that it remembers the iterations (perhaps save it in a new file?). Now plot the points that, according to (4), should be on a line with slope $$p$$. What is $$p$$?
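The matrix-versus-point-wise distinction in Exercise 8 is not MATLAB-specific. As a small illustration using plain Python lists (no linear-algebra library assumed), here are the two products for `A=[1 2; 3 4]` and `B=[1 0; 0 2]`:

```python
A = [[1, 2], [3, 4]]
B = [[1, 0], [0, 2]]

# matrix product, like A*B in MATLAB: entry (i,j) is the dot product
# of row i of A with column j of B
matmul = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]

# element-wise product, like A.*B in MATLAB: matching entries multiplied
elemwise = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]

print(matmul)    # [[1, 4], [3, 8]]
print(elemwise)  # [[1, 0], [0, 8]]
```

The two results differ already for these tiny matrices, which is the point of the exercise.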
http://mathoverflow.net/questions/28519/references-for-modern-proof-of-newlander-nirenberg-theorem/57169
References for “modern” proof of Newlander-Nirenberg Theorem

Hi, I'm starting to prepare a graduate topics course on Complex and Kahler manifolds for January 2011. I want to use this course as an excuse to teach the students some geometric analysis. In particular, I want to concentrate on the Hodge theorem, the Newlander-Nirenberg theorem, and the Calabi-Yau theorem. I have many excellent references (and have lectured before) on the Hodge and CY theorems. However, for the Newlander-Nirenberg theorem, I am finding it hard to find a "modern" treatment. I recall going through the original paper in my graduate student days, but I hope that there is a more streamlined version floating around somewhere. (I want to consider the general smooth case, not the easy real-analytic version). Besides the original paper, so far I can only find these references: J. J. Kohn, "Harmonic Integrals on Strongly Pseudo-Convex Manifolds, I and II" (Annals of Math, 1963) and L. Hormander, "An introduction to complex analysis in several variables" (Third Edition, 1990). Both are easier to follow than the original paper. But my question is: are there any other proofs in the literature, preferably from books rather than papers? The standard texts on complex and Kahler geometry that I have looked at don't have this. - Have you looked at the paper by Nijenhuis and Wolf in the Annals in the early sixties where they also prove existence of pseudoholomorphic curves? – Mohan Ramachandran Mar 2 2011 at 21:23

5 Answers

There is a proof due to Malgrange which can be found in Nirenberg's Lectures on Linear Partial Differential Equations. I am not sure that one can call the proof modern, but it is the simplest proof that I know. - Thanks for this one too. I will check it out.
– Spiro Karigiannis Jul 7 2010 at 13:42

Hi Spiro: I have had much the same difficulties as you, but I now know a modern proof. At heart, the original proof is an application of the implicit function theorem. More specifically, let $U$ be a polydisk in $C^n$ and consider the sequence of Banach manifolds $Diff^{k,\alpha}(U,C^n) \to AC^{k-1,\alpha}(U) \to (A^{0,2})^{k-2,\alpha}(U,TU).$ These are respectively the diffeomorphisms $U\to C^n$ of class $(k,\alpha)$, the almost complex structures on $U$ of class $(k-1,\alpha)$ and the $(0,2)$ forms on $U$ with values in the holomorphic tangent bundle, of class $(k-2,\alpha)$. The first map is the pullback of the standard complex structure, and the second is the Frobenius integrability form $\phi \mapsto \overline \partial \phi - \frac 12 [\phi\wedge \phi].$ The object is to show that the first map is locally surjective onto the inverse image of $0$ by the second. These spaces are Banach manifolds, and if you can show that the sequence of derivatives (respectively at the identity, at the standard complex structure and at 0) is split exact, the result follows from the implicit function theorem. This sequence of derivatives is the Dolbeault sequence on $U$ (in the appropriate class), and it is split exact, though this is NOT obvious. There is an error in the original paper, or rather in the paper of Chern's that it depends on, but the result is true. The remainder of the mess in the original proof is due to the authors writing out the Picard iteration in the specific case, rather than isolating the needed result. I am working on getting this written up with Milena Pabiniak, a graduate student here at Cornell. Write me at [email protected] if you are interested in seeing details. John Hubbard - Mister Hubbard, welcome to MO!
– Victor Protsak Mar 2 2011 at 21:05 @John: Something using the Banach space implicit function theorem and a split short exact sequence definitely sounds like a modern proof to me. I would certainly love to see the details of this. I'll contact you. Thanks! – Spiro Karigiannis Mar 4 2011 at 12:31

It's covered in Demailly's too-little-known book, Complex analytic and differential geometry, though the proof given there is apparently modelled on the references you cited. Edit: I just noticed that the MathOnline link currently seems to be non-functional, so here's a link to Demailly's webpage. - Thanks. Indeed, I did not know about this book. It has a ton of useful other stuff too. I'd still be interested to hear about any other references, though. – Spiro Karigiannis Jun 17 2010 at 15:39 1 +1 for the mathonline plug! :) – Kevin Lin Jun 17 2010 at 17:32

This is not quite an answer to your question, but you might consult the book by Donaldson and Kronheimer "The geometry of 4-manifolds". In chapter 2 they prove an integrability theorem for holomorphic vector bundles, the point being that this can be regarded as a simpler version of the Newlander-Nirenberg theorem, and (in my view) very suitable for your course. You might also want to mention the following simple example for instructional purposes: the nilpotent Lie group H^3 x R where H^3 is the Heisenberg group has an obvious left-invariant almost-complex structure whose Nijenhuis tensor vanishes. Although not a complex Lie group, it is easy to find independent local complex coordinates z_1, z_2. I suspect that there are similar classes of almost-complex examples where the integration is elementary. - Thanks, Simon. I am indeed considering the simpler version from Donaldson-Kronheimer. My course plans may be a little bit too ambitious.
– Spiro Karigiannis Jul 26 2010 at 13:40

In my continued searches for modern proofs of Newlander-Nirenberg, I found this great source: it's "Applications of Partial Differential Equations to Some Problems in Geometry", a set of lecture notes by Jerry Kazdan, which are available on his website at UPenn. The proof of Newlander-Nirenberg in here is based on Malgrange's proof, which is also in Nirenberg's book (mentioned by Morris Kalka in his answer above), but these notes by Kazdan cover a lot of basic geometric analysis theorems, so they're an excellent resource. I wish I knew about these when I was in graduate school... - Hi Spiro, thank you for the reference. By the way, have you got some examples where you can do the explicit calculation following their proofs? – Jay Nov 16 at 17:33
http://crypto.stackexchange.com/questions/1155/how-does-one-provide-a-secure-and-authentic-communication-channel
# How does one provide a secure and authentic communication channel?

Let us assume two participants, Alice and Bob, who perform a given protocol, which is a sequence of messages exchanged between them. My question is: How can I provide a secure and authentic communication channel using cryptography, so that when Bob sends a message "M" to Alice, the latter will be able to know whether or not the message "M" is from Bob? Is there a solution without using certificates? -

## 3 Answers

"Authentic" is defined with regard to an identity. A message from Bob is authentic only insofar as Bob is distinct from Charlie. There must be something which, from the outside, makes Bob and Charlie two different entities. In a computerized world with networks, Alice can "see" other people only through the data they send. Moreover, everybody can buy a computer, so what Bob can compute, so can Charlie. The only remaining way for Bob to be distinct from Charlie is what Bob knows. That's how identities are defined in cryptography: you are what you know. So Alice can make sure that a given message $M$ comes from Bob (and not Charlie) only if Bob does some computation over the message $M$ which involves some data that Bob knows, but not Charlie.

A basic, traditional setting is when the "secret" known by Bob is a sequence of bytes which Alice knows too. That sequence of bytes can then be used as key in a Message Authentication Code: this is a kind of checksum, which Bob can compute over $M$, and that Alice can recompute over $M$ since she also knows the "secret"; that's how Alice verifies the MAC. But Charlie cannot compute the MAC since he does not know the secret. The shared secret can also be used as key for symmetric encryption, so that the message contents can remain confidential: there again, Alice can decrypt the data encrypted by Bob (since she knows the secret) but Charlie cannot.
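As a concrete sketch of the MAC idea described above (HMAC-SHA-256 is chosen here as one common MAC; the key and messages are made up for illustration):

```python
import hmac
import hashlib

secret = b"shared-secret-known-only-to-alice-and-bob"  # illustrative key

def tag(message):
    # Bob computes the MAC over the message with the shared secret
    return hmac.new(secret, message, hashlib.sha256).digest()

msg = b"From Bob: meet at noon"
t = tag(msg)

# Alice recomputes the tag and compares in constant time
assert hmac.compare_digest(t, tag(msg))

# Charlie, without the secret, cannot produce a valid tag; a tampered
# message also fails verification
assert not hmac.compare_digest(t, tag(b"From Bob: meet at midnight"))
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking information through comparison timing.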
Combining symmetric encryption and a MAC while using the same secret as key for both can be somewhat perilous; there are subtleties. The traditional scheme has the drawback of having a shared secret between Alice and Bob: since identities are "what we know", this means that Alice and Bob are really the same entity, "Bobalice". The scheme above is about Bobalice talking to himself/herself. That's not a useless model: that's what happens when you encrypt some data so that you can store it, and decrypt it back later; i.e. you are talking with yourself through time. In practice, for two distinct human beings, this means that Alice can authenticate Bob's message, but cannot prove it, because whatever MAC Bob computes, Alice could have computed as well. Also, setting up a shared secret between two people can be hard in some contexts (they must have met beforehand, or something like that). Finally, in the context of software (which is, by definition, quite dumb), there can be some issues if an attacker tries to send back to Alice one of her own messages: Alice knows that a message $M$ is from Bob because only Alice and Bob know the secret value which is needed to compute the MAC, and (that's the critical point) Alice remembers that she did not send that specific message -- so it must be from Bob. The "remember" bit can be tricky; a simple way is to add the sender name in each message (so that a message from Alice always begins by "From Alice").

To go further, we need public key cryptography. This leads to the following protocol: Bob has a key pair for an asymmetric encryption algorithm (e.g. RSA) or a key exchange algorithm (e.g. Diffie-Hellman). The "public" part of the key is known by everybody (that's what "public" means), including Alice. The "private" part of the key is known by Bob only (it is "private"). Alice chooses a random sequence of bytes $K$ and encrypts $K$ with Bob's public key; she sends the result to Bob.
Bob uses his private key to decrypt $K$, at which point Alice and Bob have a shared secret ($K$) and they can use it as above, with symmetric encryption and a MAC. With public key cryptography, Bob is distinct from everybody else, including Alice; he has his very own private key. Bob's identity is defined as "whoever controls the private key corresponding to that public key". This still requires Alice to be able to know, in a reliable way, Bob's public key, where "reliable" means that Alice will not be induced into mistakenly using Charlie's public key. That's where certificates come into play: a certificate is a piece of data which is signed by an authority and says: "this is Bob's public key". It is a way to distribute public keys (and their binding to identities) in a verifiable fashion. There can be other ways. For instance, in the SSH protocol, a client (which tries to connect to a remote server) knows the server public key by remembering it (the client stores a local copy of the public key). This requires a specific bootstrap procedure, for the first time the client connects to a given server, but afterwards the server key is known to the client, and the client just uses it to authenticate the server. -

There needs to be some secret or private information which Bob knows (and a possible attacker doesn't), and we need a way for Alice to check that Bob has this information. This could be a private key, where Alice knows the corresponding public key (and that it is owned by Bob). Then Bob can use the private key to sign the message M, or some messages used during the negotiation of the channel, to make sure that there is no man-in-the-middle attack. The customary way to make sure that Alice knows that Bob is the owner is by providing a certificate which states this, but any other way (i.e. they did meet before) would work, too. Another possibility would be a shared secret, like a password.
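The answers mention key exchange (e.g. Diffie-Hellman) as one way for Alice and Bob to arrive at a shared secret. A toy sketch with textbook-sized numbers follows (far too small to be secure; real deployments use standardized groups of 2048 bits or more):

```python
import secrets

# Toy Diffie-Hellman public parameters (textbook-sized, NOT secure)
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice's public value, sent to Bob in the clear
B = pow(g, b, p)   # Bob's public value, sent to Alice in the clear

k_alice = pow(B, a, p)   # Alice combines Bob's public value with her secret
k_bob = pow(A, b, p)     # Bob performs the symmetric computation

assert k_alice == k_bob  # both ends now hold the same shared secret K
```

An eavesdropper sees only `p`, `g`, `A`, and `B`; recovering the shared value from those is the discrete-logarithm problem, which is hard at real key sizes.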
Of course, Bob can't simply send the password with the message (at least, if he isn't sure about Alice's identity and the confidentiality of the connection), as then an interceptor can read the password. But Bob and Alice both can derive a key from the password, and use this key as the authentication key for a MAC (message authentication code), accompanying the message. (You can also derive an encryption key from this password, to also get confidentiality.) More elaborate protocols allow that Alice doesn't have Bob's password itself, but only a "password verifier", which allows checking that Bob has the password, but is not enough to authenticate a message by itself. All three methods are available with the SSL or TLS protocols: the first is usually done with certificates, but also works without them (or using self-signed certificates), as long as Alice can somehow verify Bob's public key. The second is known as "pre-shared key", the third one as SRP (both TLS extensions). -
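Deriving a MAC key from a password, as this answer describes, is normally done with a deliberately slow key-derivation function. A sketch using PBKDF2 from Python's standard library (the password, salt, and iteration count here are illustrative choices, not a recommendation):

```python
import hashlib
import hmac

password = b"correct horse battery staple"   # illustrative password
salt = b"fixed-salt-for-demo"                # in practice: random, stored with the data

# Derive a 32-byte key; the high iteration count slows down brute-force
# guessing of the password
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# Use the derived key as the authentication key for a MAC
mac = hmac.new(key, b"hello Alice", hashlib.sha256).hexdigest()
print(len(key), mac[:16])
```

Both sides run the same derivation with the same salt and iteration count, so they end up with the same MAC key without the password itself ever crossing the wire.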
So, what we do is generate a pair of MAC keys as well (one to authenticate the traffic from Alice to Bob, and one to authenticate the traffic from Bob to Alice). Now, when Bob sends a message to Alice, he takes his copy of the "Bob to Alice" MAC key, and uses it to compute the MAC of his message; he then appends that MAC to the encrypted message. And, when Alice gets this message, she takes her copy of the "Bob to Alice" MAC key, and computes the MAC of Bob's message. She then compares the MAC that she computed with the MAC she finds in the message. If the two MACs match, she accepts the message. Here is why this works: she knows that only someone who knows the MAC key can generate a MAC correctly. She also knows that there are only two people who know that key; herself and Bob. She also knows that she'll never create a message based on that key (she uses another MAC key when sending a message to Bob), and so the message must have come from Bob (and wasn't modified in transit). -
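The directional-key idea can be sketched as follows; deriving both MAC keys from one session secret with distinct labels is a common pattern, though real protocols such as TLS define their own key schedule (all names and values below are illustrative):

```python
import hashlib
import hmac

session_secret = b"session-secret-from-the-handshake"  # illustrative only

# One MAC key per direction, so Alice's own messages can never be
# mistaken for Bob's (the labels are made up for this sketch)
k_b2a = hmac.new(session_secret, b"bob-to-alice", hashlib.sha256).digest()
k_a2b = hmac.new(session_secret, b"alice-to-bob", hashlib.sha256).digest()

def send_from_bob(msg):
    # Bob tags his message with the Bob-to-Alice key
    return msg, hmac.new(k_b2a, msg, hashlib.sha256).digest()

def alice_accepts(msg, mac):
    # Alice recomputes with her copy of the same directional key
    return hmac.compare_digest(mac, hmac.new(k_b2a, msg, hashlib.sha256).digest())

msg, mac = send_from_bob(b"the plan is on")
assert alice_accepts(msg, mac)

# A tag computed with the wrong-direction key is rejected, so a
# reflected message cannot masquerade as Bob's
wrong = hmac.new(k_a2b, msg, hashlib.sha256).digest()
assert not alice_accepts(msg, wrong)
```

This mirrors the reasoning in the answer: Alice never creates tags under the Bob-to-Alice key, so a valid tag under that key must have come from Bob.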
http://mathoverflow.net/questions/52099/cohomology-theory-for-algebraic-groups/52102
## cohomology theory for algebraic groups

Is there a cohomology theory for algebraic groups which captures the variety structure and restricts to the ordinary group cohomology under certain conditions? -

## 5 Answers

For a full treatment of the foundations it's best to consult Part I of the book Representations of Algebraic Groups by J.C. Jantzen (2nd ed., AMS, 2003), even though it's not easily available online. Rational (or Hochschild) cohomology has been well developed, including the broader scheme framework (Demazure-Gabriel book and Jantzen). What CPS and van der Kallen do in their important paper is to relate indirectly the algebraic group cohomology with finite group cohomology for related finite groups of Lie type. This theme has been much further developed in many later papers, but is subtle. For the algebraic groups themselves, this kind of cohomology theory has also been studied in many papers; but relating it to abstract group cohomology for the algebraic (rather than finite) groups such as the special linear group is not at all obvious. By the way, the Inventiones paper and some others by CPS et al. are freely available online through http://gdz.sub.uni-goettingen.de (just do a quick search for Parshall).

ADDED: Maybe I can answer the original question in more detail and respond to Ralph's further question. For an affine group scheme `$G$` over a field `$k$`, rational (Hochschild) cohomology is defined as usual in terms of derived functors of the fixed point functor. But everything is done in the category of rational `$G$`-modules; for an affine algebraic group over an algebraically closed field and finite dimensional modules this means that representing matrices have coordinate functions in `$k[G]$`. Hochschild realized that for groups with added structure, one must use injective resolutions (there are usually not enough projectives).
In any case, rational cohomology tends to diverge a lot from the usual group cohomology. In characteristic 0, you are essentially getting Lie algebra cohomology. Studying rational `$G$`-modules is equivalent to studying modules for the Hopf dual of `$k[G]$` (hyperalgebra, or algebra of distributions). So the answer to Ralph's question is yes: the notions of cohomology agree. Jantzen's main focus is on prime characteristic and reductive algebraic groups, where powers of the Frobenius map yield kernels which are finite group schemes. Roughly speaking, injectives for `$G$` are direct limits of injectives for finite dimensional hyperalgebras, starting with the restricted enveloping algebra of the Lie algebra of `$G$` (whose cohomology usually differs from the ordinary Lie algebra cohomology). Relating rational cohomology of `$G$` to ordinary cohomology of finite subgroups gets even more subtle, as discussed above. By now there is a lot of literature on these matters but many unanswered questions. -

@Jim: Let $G$ be a finite group scheme (defined over the field $k$) and let $A$ be the corresponding cocommutative hopf algebra (i.e. A is the dual hopf algebra of the coordinate ring $k[G]$ of $G$). Is there a connection between the rational cohomology of $G$ and the cohomology of $A$ defined as $Ext_A(k,-)$? – Ralph Jan 14 2011 at 23:27 4 The first edition of Jantzen's book is available online: gen.lib.rus.ec/… – Dmitri Pavlov Jan 15 2011 at 4:41 1 @Dmitri: This can be useful for the immediate purpose, since the foundational Part I in the original 1987 Academic Press edition is essentially unchanged in the newer edition (though the longer Part II has been greatly expanded and partly rewritten). – Jim Humphreys Jan 15 2011 at 13:31 Jim, thanks for this information. It follows in particular that properties of hopf algebra cohomology (like cup products, Steenrod operations, Tate cohomology, etc.) carry over to the cohomology of group schemes.
– Ralph Jan 15 2011 at 20:46

Yes, it's called "rational cohomology" -- not to be confused with cohomology with rational coefficients... see e.g. "Rational and Generic cohomology" by Cline, Parshall, Scott and van der Kallen, Inventiones. By using google I have even found a link, make sure it is legal for you to download this file: http://www.digizeitschriften.de/main/dms/gcs-wrapper/?gcsurl=http%253A%252F%252Flocalhost%253A8086%252Fgcs%252Fgcs%253F%2526%2526%2526%2526%2526%2526%2526%2526%2526action%253Dpdf%2526metsFile%253DPPN356556735_0039%2526divID%253Dlog12%2526pdftitlepage%253Dhttp%25253A%25252F%25252Fwww.digizeitschriften.de%25252Fmain%25252Fdms%25252Fpdf-titlepage%25252F%25253FmetsFile%25253DPPN356556735_0039%252526divID%25253Dlog12%2526targetFileName%253D_log12.pdf -

This is largely redundant with Jim Humphreys' answer, but I thought I'd add the following remarks. Ordinary group cohomology is defined via derived functors, but can be described using cocycles -- this amounts to taking an explicit free resolution of the trivial module. In the setting of an algebraic group, you can also describe cohomology via cocycles; here the cocycles you should take are regular functions. More precisely: If $G$ is a (linear) algebraic group over a field $k$, and if $V$ is a finite dimensional linear representation of $G$ as $k$-algebraic group ("rational repr"), one can consider the group $C^i(G,V)$ of all regular functions $$\prod^iG=G \times \cdots \times G \to V;$$ using the "usual" boundary mappings for group cohomology, $C^\bullet(G,V)$ can be viewed as a complex. The key feature is that the cohomology of the complex $C^\bullet(G,V)$ coincides with the derived functor cohomology of $V$ in the category of rational representations of $G$.
(I'm suppressing here the correct definition of $C^\bullet(G,V)$ for infinite dimensional rational representations $V$ of $G$). This point of view makes (more?) clear how this "algebraic" cohomology can diverge from "ordinary" cohomology. Consider e.g. the additive group $G = \mathbf{G}_a$ over $k$, and consider the trivial repr. $V = k$. The algebraic cohomology $H^1(\mathbf{G}_a,k)$ identifies with the set of additive regular functions $\mathbf{G}_a \to k$; this is 1-dimensional if $k$ has char. 0, while if $k$ has char. $p>0$ this cohomology has a $k$-basis of the form $\{T^{p^i} \mid i \ge 0\}$ (for a suitable regular function $T:\mathbf{G}_a \to k$). On the other hand, the "ordinary" first cohomology for the group $k = \mathbf{G}_a(k)$ is just the set of all "abstract" group homomorphisms $k \to k$. In general, there are many such homomorphisms which are not regular functions (e.g. take $p$th-roots of the function $T$ in positive characteristic). -

Thanks all for the valuable information. - gurs, are you sim? – Yemon Choi Jan 16 2011 at 10:30 also: MO policy prefers that comments not be left as "answers" – Yemon Choi Jan 16 2011 at 10:31

Many thanks to Professors Humphreys and McNinch for their insightful answers. - If you like some of the answers, perhaps you might accept one of them? or do they not quite answer your question? – Yemon Choi Jan 16 2011 at 10:29
http://math.stackexchange.com/questions/136732/showing-that-a-set-is-closed-in-the-base-variety-of-a-family-v-to-b
# Showing that a set is closed in the base variety of a family $V\to B$.

We work over a field $k$. Let $B$ be an algebraic variety over $k$. Suppose we are given a family of subvarieties of $\textbf P_k^n$ with base $B$, by which I mean a subvariety $V\subset B\times\textbf P_k^n$ together with the projection $\pi:V\to B$. The members of the family are the fibers $V_b\subset\textbf P_k^n$. Suppose we also have a subvariety $X\subset\textbf P_k^n$.

$\textbf{Question}\,\,1$: How to show that the locus $L=\{b\in B\,\,|\,\,V_b\cap X\neq \emptyset \}$ is closed in $B$? I just observed that $\pi$ is a closed map (it is proper), and noticed that $L=\pi(Z)$ with $Z=\{(b,x)\in B\times X\,\,|\,\,x\in X\cap V_b\}$. But how to show then that $Z$ is closed in $B\times X$?

$\textbf{Question}\,\,2.$ Also, (forgetting about $X$) if we are given a second family $\pi':W\to C$, I would like to see that the locus $L'=\{(b,c)\,\,|\,\,V_b\cap W_c\neq\emptyset\}$ is closed in $B\times C$. My problem here - and also above - is that I cannot translate in a nice way the nonempty condition, which is the only one I have. Moreover, if I try to make $L,L'$ explicit, I just come up with infinite unions of closed sets, while I would like intersections, for instance. Thank you for any advice. [Note: these problems come from Joe Harris, "Algebraic Geometry. A first course."] -

## 2 Answers

For Question 1, observe that $Z=(B\times X)\cap V$. (Just notice that $x\in V_b$ means $(b, x)\in V$.) Question 2: consider the map $$f : (B\times C)\times {\bf P}^n \to (B\times {\bf P}^n)\times (C\times {\bf P}^n), \quad (b,c,x)\mapsto ((b,x), (c,x)).$$ Then $L'$ is the image of $f^{-1}(V\times W)$ by the projection $(B\times C)\times {\bf P}^n\to B\times C$. - Oh. So $x\in V_b\cap W_c$ means $((b,x),(c,x))\in V\times W$. Now I see. Thank you very much!
– atricolf Apr 25 '12 at 17:55

Question 1: If $\rho:V\to \mathbb P^n_k$ is the other projection, you have $L=\pi(\rho^{-1} (X))$.

Question 2: You have a morphism $p:B\times \mathbb P^n_k\times C\times \mathbb P^n_k\to \mathbb P^n_k\times \mathbb P^n_k$. Consider the diagonal $\Delta \subset \mathbb P^n_k\times \mathbb P^n_k$ and its inverse image $D=p^{-1}(\Delta)\subset B\times \mathbb P^n_k\times C\times \mathbb P^n_k$. Then if you look at $\pi\times \pi':V\times W\to B\times C$ you have $$L'=(\pi\times \pi')(D)$$ - Thank you! it's very clear! – atricolf Apr 25 '12 at 18:01
http://mathoverflow.net/questions/22883?sort=votes
Etale coverings of certain open subschemes in Spec O_K

Although my number theory is really weak, I'm trying to understand the notion of etale coverings in this context. I think this could provide a very interesting point of view. Let $U$ be an open subscheme of $\textrm{Spec} \ \mathbf{Z}$. The complement of $U$ is a divisor $D$ on $\textrm{Spec} \ \mathbf{Z}$. Q. Can we classify the etale coverings of $U$? (Say of a given degree...) This is what I (think I) know. Given a finite etale morphism $\pi:V\longrightarrow U$, the normalization of $X$ in the function field $L$ of $V$ is $\textrm{Spec} \ O_L$. This is a nontrivial fact, of course. So, in view of the previous remark, can we also say what $V$ itself should be? I'm guessing this has to do with the etale fundamental group $\pi_1(U)$. The latter is, I believe, finitely generated. I think that has to do with the fact that there are only finitely many ramified covers of a certain degree, right? Of course, we can complicate things by replacing $\textrm{Spec} \ \mathbf{Z}$ by `$\textrm{Spec} \ O_K$`. Example. Take $U= \textrm{Spec} \ \mathbf{Z} - \{(2)\}$ and let $V\rightarrow U$ be a finite etale morphism. Suppose that $V$ is connected and let $K$ be its function field. The normalization of $\textrm{Spec} \ \mathbf{Z}$ in $K$ is of course $O_K$. The extension `$\mathbf{Z}\subset O_K$` is unramified outside $(2)$ and (possibly) ramified at $(2)$. Can one give a description of $V$ here? EDIT. I just realized one can also ask oneself a similar question for $\mathbf{P}^1_{\mathbf{C}}$. Or even better, for any Riemann surface $X$. - 3 V is just O_K[1/2], right? – Kevin Buzzard Apr 28 2010 at 18:14 2 In general any open subset of the spectrum of a Dedekind domain is necessarily affine, and finite étale covers of affine schemes (being finite) are necessarily affine.
You can work out this kind of question by thinking of a few things: since you're concerned with bases $S = Spec \mathcal{O}_K$, you know that all finite étale covers $X\to S$ are going to be $Spec$ of some finite flat $O_K$-algebra... Then go back to the definition of unramified (i.e. calculate the stalk map for $Spec(B)\to Spec(A)$; it's just the localization of $A\to B$ at a prime of $B$). – GD Apr 28 2010 at 18:38 2 @Ariyan: by definition if S-->T is finite and T is affine, then S is affine. – Kevin Buzzard Apr 28 2010 at 19:36 3 @Giovanni: for a general Dedekind domain $A$, why should every open subset of the spectrum be affine? Why is the complement of the closed point corresponding to a maximal ideal $m$ affine (obvious if $m$ is torsion in the class group, but more generally...)? A good example: for the complement of the origin in an elliptic curve, removing a rational non-torsion point yields an open affine (via Riemann-Roch) but not a basic open affine (= inverting some nonzero element). This is an embarrassingly elementary (but "useless") question about general Dedekind domains. I checked with two colleagues, as stumped as me. – BCnrd Apr 28 2010 at 21:00 5 @BCnrd: Any nonempty open U is the complement of a finite set S of closed points. Let A[1/S] be the set of x in Frac(A) whose valuation at each prime outside S is nonnegative. Then Spec A[1/S] --> Spec A is an open immersion with image U since this can be checked on a covering by basic open affines on which the primes in S become principal. – Bjorn Poonen Apr 29 2010 at 23:30

4 Answers

As Kevin points out, $V$ is indeed $\mathcal{O}_K[\frac{1}{2}]$ in your example. Your link to the fundamental group is also correct. $\pi_1(U)$ is the Galois group of the maximal extension of $\mathbb{Q}$ unramified outside 2 (since you can restrict attention to the connected étale $\mathbb{Q}$-algebras)*.
More generally, in your original question, these are replaced by $\mathcal{O}_L$ with the primes above the support of $D$ inverted, and the Galois group of the maximal extension of $\mathbb{Q}$ unramified outside the support of $D$. These groups can get pretty horrendous, and so number-theorists tend to (at least in my view) study the more well-behaved but still very mysterious maximal pro-$p$-quotients of these groups. Now these pro-$p$ groups are not only finitely generated by work of Shafarevich ("Extensions with prescribed ramification points"), they are $d$-generated, where $d$ is the cardinality of the support of $D$ (for "tame" and not silly $D$). More impressively, the relation rank is also calculated/bounded (in this case, it's also $d$!!!), frequently leading to conclusions about when these groups are finite or infinite. The "more complicated" starting point of $\mathcal{O}_K$ is in a sense not actually much more complicated. You're still asking for the Galois group of the maximal extension of $K$ unramified outside a finite set of primes, and Shafarevich's work gives formulas for the relation rank and generator rank for the pro-$p$-quotients. The answer to your overall classification question is thus pretty difficult, and consumes entire subdisciplines of number theory. As an example, Christian Maire has constructed number fields with a trivial class group but infinite unramified extensions -- you'd have to have a complete understanding of when this could happen before you could hope to prescribe all unramified extensions of a given degree, with or without certain primes inverted. There are certain cases where this can be done via, e.g., root discriminant bounds, but the story is far from being complete at this point. As in Lars's answer, the situation is much much better understood for $\mathbb{P}_\mathbb{C}^1$ and Riemann surfaces than it is for number fields.
Etaleness doesn't pick up on whether or not infinite primes ramify. Everything's fine if you start with a totally imaginary field. - Thanks. Should be fixed up now -- anything dumb left is more likely ignorance than a typo. – Cam McLeman Apr 28 2010 at 21:21 This is great. Thanks. How is the situation on the 2-dimensional analogue? That is, what if we replace Spec $\mathbf{Z}$ by $\mathbf{P}^1_{\mathbf{Z}}$? Can one say something about the maximal pro-p quotients of the fundamental group? I think this is where things will get really interesting. For now, I will just focus on the "easy" 1-dimensional case, though. – Ariyan Javanpeykar Apr 29 2010 at 21:25 I'm surprised that nobody has pointed out the nice article by Mazur on the étale cohomology of number fields: numdam.org/numdam-bin/… – Chandan Singh Dalawat Sep 5 2010 at 3:02

As to your addendum regarding the fundamental group of Riemann surfaces, the situation is as follows: If $X$ is a smooth projective algebraic curve of genus $g$ over an algebraically closed field $K$ of characteristic $0$, and if $U\subset X$ is obtained by removing $r$ distinct closed points, then $\pi_1(U)$ is the profinite completion of the surface group \[\left<a_1,\ldots, a_g, b_1,\ldots, b_g, c_1,\ldots, c_r \mid [a_1,b_1]\cdot\ldots\cdot[a_g,b_g]c_1\cdot\ldots\cdot c_r = 1\right>.\] Phrased more traditionally in terms of Galois theory of fields, if $K$ is the function field of $X$, then this group is precisely the Galois group of the maximal algebraic extension of $K$ which is unramified with respect to all valuations of $K$ except those corresponding to the $r$ points that were removed.
More concretely: the finite coverings of $U$ arise by taking finite extensions $L$ of $K$, unramified except possibly at the removed points, and taking the normalization of $U$ in $L$ (that is, $U$ is an affine curve given by a ring $A$ contained in $K$; the covering will be the spectrum of the integral closure of $A$ in $L$). Edit: If $K$ is not algebraically closed and $U$ is geometrically connected, then there is a short exact sequence \[1\rightarrow \pi_1(U\times_K \bar{K}) \rightarrow \pi_1(U)\rightarrow Gal(\bar{K}/K)\rightarrow 1\] (one has to pick compatible base points), and if $X$ has a $K$-rational point, then this sequence splits, so the structure is "known", as far as the Galois group of the base is known. - This is really clear. Thanks. – Ariyan Javanpeykar Apr 29 2010 at 21:29

Here is a result concerning Riemann surfaces which might be relevant (cf. your EDIT). Fix a Riemann surface $X$ and a discrete closed subset $D\subset X$, and put $X_0 = X\setminus D$. Let $\mathcal RevRam(X;D)$ be the category whose objects are finite ramified coverings of $X$ (= proper non-constant morphisms to $X$), étale over $X_0$. And let $\mathcal RevEt(X_0)$ be the category whose objects are finite étale coverings of $X_0$. Theorem: The restriction functor $\mathcal RevRam(X;D) \to \mathcal RevEt(X_0)$ is an equivalence of categories. This is similar to Lars's interesting answer, but the difference is that there is no assumption of algebraicity on the Riemann surfaces involved (for example, $X$ could be an arbitrary open subset of $\mathbb C$). - 1. Shouldn't the objects of RevRam(X,D) be finite $\textbf{possibly}$ ramified coverings of $X$? 2. I'm guessing a morphism in RevRam(X,D) from $(Y,\pi)$ to $(Y^\prime,\pi^\prime)$ is a commutative diagram, and similarly for RevEt$(X_0)$. 3. How does the restriction functor act on morphisms? Given $(Y,\pi)$ in RevRam(X,D), I send this to $(\pi^{-1}(X_0),\pi|_{\pi^{-1}(X_0)})$, right? I don't see how I get a morphism in RevEt$(X_0)$, though.
Unless I demand something extra for morphisms in RevRam(X,D). I think this is really interesting. If true, can't one show eq. of cat. for sm. proj. varieties? – Ariyan Javanpeykar Apr 29 2010 at 21:37 Yes, dear Ariyan, you're right: "ramified" here means "POSSIBLY ramified". (This is abuse of language.) And a morphism $F:(Y,\pi) \to (Y', \pi ')$ in the category $\mathcal RevRam (X;D)$ gets sent to its restriction $F _0 : Y_0=\pi^{-1} (X_0) \to Y'_0=(\pi ')^{-1} (X_0)$, a morphism in the category $\mathcal Rev (X_0)$. Amusingly, I had asked myself in how much detail I should post my answer. I opted for terseness for the sake of fluidity of style, but maybe fluidity turned into sloppiness... – Georges Elencwajg May 1 2010 at 11:19 As to your last question: yes, there is a vast generalization of the above to higher dimensional varieties, due to Grauert-Remmert. You can read about it in SGA1 in the very interesting Exposé 12 by Mrs. M.Raynaud (not to be confused with Mr. Michel Raynaud ! ). – Georges Elencwajg May 1 2010 at 11:53 @Ariyan and BCnrd: Cf. Exercise 1.9, Section 4.1 of Q. Liu's wonderfully written book, "Algebraic Geometry and Arithmetic Curves". @Ariyan: Exercise: re-write the one-dimensional (i.e. classical) idele class group as a certain complement of $Spec\mathcal{O}_K$...cf. the intro to "Global class field theory" by Kato-Saito. -
http://mathoverflow.net/revisions/9294/list
## Return to Answer

Revision 2 (added 1651 characters in body): Addendum: Here are some further simple considerations which unify some of the other examples given. For a topological space $X$, consider the specialization relation: a point $x$ specializes to the point $y$ if $y$ lies in the closure of $\{x\}$. This implies that any sequence which converges to $x$ also converges to $y$. (If in the previous sentence we replace "sequence" by "net", we get a characterization of the specialization relation.) The specialization relation is always reflexive and transitive, so is a quasi-order. Note that a topological space is $T_1$, or separated, iff the specialization relation is simply equality. Thus in a space which is not separated, there exist distinct points $x$ and $y$ such that every net which converges to $x$ also converges to $y$. If $X$ is first countable, we may replace "net" by "sequence". A topological space $X$ satisfies the $T_0$ separation axiom, or is a Kolmogorov space, if for any distinct points $x,y \in X$, there is an open set containing exactly one of $x$ and $y$. A space is Kolmogorov iff the specialization relation is anti-symmetric, i.e., is a partial ordering. Thus in a non-Kolmogorov space, there exist distinct points $x$ and $y$ such that a net converges to $x$ iff it converges to $y$. (If $X$ is first countable...) An example of a first-countable non-Kolmogorov space is a pseudo-metric space which is not a metric space (a pseudo-metric is like a metric except that $\rho(x,y) = 0 \iff x = y$ is weakened to $\rho(x,x) = 0$). In particular, the topology defined by a semi-norm which is not a norm always gives such examples.

Revision 1: Here are two relevant facts: 1) In a Hausdorff space, a sequence converges to at most one point. 2) A first-countable space in which each sequence converges to at most one point is Hausdorff. See e.g. pages 4 to 5 of http://math.uga.edu/~pete/convergence.pdf for the (easy) proofs of these facts, together with the definition of first-countable. See p. 6 for an example showing that 2) does not hold with the hypothesis of first-countability dropped. It seems like a worthwhile exercise to use 2) to find spaces that have the property you want. For instance, the cofinite topology on a countably infinite set is first-countable and not Hausdorff, so there must be non-uniquely convergent sequences.
http://mathhelpforum.com/advanced-applied-math/148690-looks-simple-integral-but-fake.html
# Thread:

1. ## Looks Simple Integral But it is fake

Hi, can anyone give me a way to solve $\int \frac{x^2}{1+x^4}\,dx$?

2. ## Thought

I'm considering multiplying both top and bottom by $x^{-2}$, then letting $x = e^u$, which I believe converts to a familiar function (but you may also have to integrate by parts).

3. Originally Posted by parkhid: "Hi, can anyone give me a way to solve $\int \frac{x^2}{1+x^4}\,dx$?" Another way is to expand the integrand using partial fractions: $\frac{x^2}{1+x^4}=\frac{1}{2\sqrt{2}}\cdot\frac{x}{x^2-\sqrt{2}x+1}-\frac{1}{2\sqrt{2}}\cdot\frac{x}{x^2+\sqrt{2}x+1}$

4. You can also use complex line integration. See here for a very similar sort of integral.

5. Originally Posted by Ackbeet: "You can also use complex line integration. See here for a very similar sort of integral." There is only a minor detail: the integral proposed by parkhid is indefinite, and its solution is a family of functions... the integral of your example is definite, and its solution [if it exists...] is a real [or complex] number... Kind regards, $\chi$ $\sigma$

6. Very true. My bad.

Copyright © 2005-2013 Math Help Forum. All rights reserved.
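The partial-fraction identity in post 3 can be sanity-checked numerically before integrating term by term. A quick sketch (the helper names below are mine, not from the thread):

```python
import math

def integrand(x):
    """Left-hand side: x^2 / (1 + x^4)."""
    return x**2 / (1 + x**4)

def partial_fractions(x):
    """Right-hand side of the decomposition from post 3."""
    r2 = math.sqrt(2)
    return (1 / (2 * r2)) * (x / (x**2 - r2 * x + 1)
                             - x / (x**2 + r2 * x + 1))

# The two expressions agree at arbitrary sample points,
# since 1 + x^4 = (x^2 - sqrt(2)x + 1)(x^2 + sqrt(2)x + 1).
for x in (-3.0, -0.5, 0.25, 1.0, 7.5):
    assert math.isclose(integrand(x), partial_fractions(x), rel_tol=1e-12)
```

Each summand on the right then integrates to a logarithm plus an arctangent after completing the square.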
http://quant.stackexchange.com/questions/4497/exposition-of-growth-in-a-perpetuity/4501
# Exposition of Growth in a Perpetuity

Something that's bugged me since I've ever learned anything about finance: Philosophically(?) speaking, why does growth subtract from a perpetuity's return? I know the mathematical explanation, but it's so counter-intuitive. Why should a perpetuity with 0% expected growth be valued at the same proportion as one with high growth when holding the natural (risk-free) rate constant? Maybe if one uses the stock market as a transposition, you see that P/Es are higher for companies with higher expected growth, reducing the net rate, explaining the $r - g$ in the denominator, but what about a natural rate of $X$ and a growth rate $g > X$? So, we're supposed to get paid for the privilege of holding this asset now? - Take the dividend discount model for example. In your example the dividend growth rate $g$ would be greater than the investor's required return $r$. So the required return should definitely be larger than the dividend growth rate. Finally, you should end up with the price of the stock after all. :-) – vanguard2k Nov 9 '12 at 9:26

## 1 Answer

I think the formula you refer to is $$PV=\frac{C}{r-g}$$ If that's the case, then you do not subtract growth: the $-g$ makes the denominator smaller, and hence the present value larger. The initial formula $PV=\frac{C}{r}$ assumes no evolution in $C$, but the other one assumes that the payment will grow over time, hence yes, you get paid for that.
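The answer's point can be seen numerically: truncating the discounted payment stream reproduces $C/(r-g)$, and raising $g$ raises the value rather than lowering it. A sketch with made-up numbers (`perpetuity_pv` is my name, not from the thread):

```python
def perpetuity_pv(c, r, g=0.0, n_terms=5000):
    """PV of a perpetuity paying c, c(1+g), c(1+g)^2, ... at t = 1, 2, 3, ...
    discounted at rate r; the sum converges to c/(r - g) when r > g."""
    return sum(c * (1 + g) ** t / (1 + r) ** (t + 1) for t in range(n_terms))

c, r = 100.0, 0.08
flat    = perpetuity_pv(c, r, g=0.00)   # close to c/r       = 1250.0
growing = perpetuity_pv(c, r, g=0.03)   # close to c/(r - g) = 2000.0
assert growing > flat                   # growth increases the value
```

When $g \geq r$ the sum diverges, which is why the closed form only makes sense for $r > g$.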
http://mathoverflow.net/revisions/36998/list
## Return to Question

Revision 2 (edit summary: "typo"): Let A be a C*-algebra, let H be a Hilbert space, and let $T:A\rightarrow B(H)$ be a completely bounded (cb) map (that is, the dilations to maps $M_n(A)\rightarrow M_n(B(H))$ are uniformly bounded). We can write $T$ as $T_1-T_2+iT_3-iT_4$ where each $T_i$ is completely positive. If $T$ is hermitian in that $T(x^*)^* = T(x)$ for all $x\in A$, then $T=T_1-T_2$. We can order the hermitian cb maps $A\rightarrow B(H)$ by saying that $T\geq S$ if $T-S$ is completely positive. I'm interested in criteria by which we can recognise that $T\geq S$. Even special cases would be good (for example, I'm happy to assume that $T$ is completely positive). An old paper of Arveson ("Subalgebras of C*-algebras") shows that if $T$ and $S$ are both completely positive, and $T$ has the minimal Stinespring dilation $T(x) = V^*\pi(x)V$, then $T\geq S\geq 0$ if and only if $S(x) = V^*\pi(x)AV$ where $0\leq A\leq1$ is a positive operator in the commutant of $\pi(A)$. This is nice, but suppose all I know is that $T(x)=V^*\pi(x)V$ and $S(x) = U^*\pi(x)U$ (notice that the representation $\pi$ is the same). Can I "see" if $T\geq S$ by looking at $U$ and $V$? What if $S$ is only cb, so $S(x)=A\pi(x)B$? Maybe that's too much to hope for, but anything vaguely in this direction would be interesting.

Revision 1 (original title: "Ordering of completely bounded maps"): Let A be a C*-algebra, let H be a Hilbert space, and let $T:A\rightarrow B(H)$ be a completely bounded (cb) map (that is, the dilations to maps $M_n(A)\rightarrow M_n(B(H))$ are uniformly bounded). We can write $T$ as $T_1-T_2+iT_3-iT_4$ where each $T_i$ is completely bounded. If $T$ is hermitian in that $T(x^*)^* = T(x)$ for all $x\in A$, then $T=T_1-T_2$. We can order the hermitian cb maps $A\rightarrow B(H)$ by saying that $T\geq S$ if $T-S$ is completely positive. I'm interested in criteria by which we can recognise that $T\geq S$.
Even special cases would be good (for example, I'm happy to assume that $T$ is completely positive). An old paper of Arveson ("Subalgebras of C*-algebras") shows that if T and S are both completely positive, and T has the minimal Stinespring dilation `$T(x) = V^*\pi(x)V$`, then $T\geq S\geq 0$ if and only if `$S(x) = V^*\pi(x)AV$` where $0\leq A\leq1$ is a positive operator in the commutant of $\pi(A)$. This is nice, but suppose all I know is that `$T(x)=V^*\pi(x)V$` and `$S(x) = U^*\pi(x)U$` (notice that the representation $\pi$ is the same). Can I "see" if $T\geq S$ by looking at $U$ and $V$? What if S is only cb, so $S(x)=A\pi(x)B$? Maybe that's too much to hope for, but anything vaguely in this direction would be interesting.
http://mathematica.stackexchange.com/questions/tagged/homework+geometry
# Tagged Questions

### Wolfram Alpha's Mysterious Trig Abilities [closed] (1 answer, 102 views)

This is a very straightforward question: I'm trying to simplify [sin(2pi*t + pi/4) + sin(2pi*t - pi/4)], and failing at it: ...

### Finding unit tangent, normal, and binormal vectors for a given r(t) (3 answers, 842 views)

For my Calc III class, I need to find $T(t)$, $N(t)$, and $B(t)$ for $t=1, 2$, and $-1$, given $r(t)=\{t,t^2,t^3\}$. I've got Mathematica, but I've never used it before and I'm not sure how to coerce ...

### Using triangulation (1 answer, 596 views)

I have been presented with 3 known points and the power densities at those points. I need to use those points to find the location of the actual antenna which is generating the signals. Power ...
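For the first question above, the sum identity $\sin(A+B)+\sin(A-B) = 2\sin A\cos B$ with $B = \pi/4$ collapses the expression to $\sqrt{2}\,\sin(2\pi t)$. A quick numerical sanity check of that simplification (my sketch, not from the site):

```python
import math

def original(t):
    """The expression as posted: sin(2*pi*t + pi/4) + sin(2*pi*t - pi/4)."""
    return (math.sin(2 * math.pi * t + math.pi / 4)
            + math.sin(2 * math.pi * t - math.pi / 4))

def simplified(t):
    """Candidate closed form: sqrt(2) * sin(2*pi*t)."""
    return math.sqrt(2) * math.sin(2 * math.pi * t)

# Agreement at arbitrary sample points supports the identity.
for t in (0.0, 0.1, 0.37, 1.25):
    assert math.isclose(original(t), simplified(t), abs_tol=1e-12)
```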
http://mathoverflow.net/questions/13750/what-are-the-applications-of-hypergraphs/13754
## What are the Applications of Hypergraphs

Hypergraphs are like simple graphs, except that instead of having edges that only connect 2 vertices, their edges are sets of any number of vertices. This happens to mean that all graphs are just a subset of hypergraphs. It strikes me as odd, then, that I have never heard of any algorithms based on hypergraphs, or of any important applications, for modeling real-world phenomena, for instance. I guess that the superficial explanation is that it's a much more complex structure than a regular graph, and given this and its generality it's harder to make neat algorithms for, but I would expect there to be something! Has anyone heard of a hypergraph-based algorithm, or application? It perplexes me that ordinary graphs can be so wonderfully useful, but their big brothers have nothing to offer. - 8 A general comment here is that a hypergraph is an absurdly general object; it's basically an element of the power set of the power set of a set. For example, topologies and measurable spaces are both (technically) special cases of hypergraphs. So any theorems or applications necessarily need to focus on special cases. – Qiaochu Yuan Feb 2 2010 at 4:21 2 Calling it a subset of the powerset makes it seem less intimidating =p. – Harry Gindi Feb 2 2010 at 12:00 3 Here are two partial explanations for why algorithms based on hypergraphs are less common than algorithms based on graphs. 1. Some polynomial-time algorithms for graphs turn into NP-complete problems when you try to generalize them to hypergraphs (e.g., finding a perfect matching). 2. We often use graphs to model symmetric binary relations, and symmetric binary relations appear much more frequently than symmetric ternary relations (and beyond). – gowers Mar 31 2012 at 21:20 1 @Harry Gindi: for me, it's just the other way round. Hypergraph? Cool. Subset of the powerset?
Arrgh, complex stuff way over my head. :) – Felix Goldberg Sep 20 at 8:56 1 Hypergraphs also crop up in the physics of many-body systems. Usual graphs are only good for modelling pairwise interactions. But oftentimes (for example in statistical physics and effective theories) one works with general interactions that depend on more than two particles. In this situation there is usually some restriction on the number of vertices a hyperedge can contain, and this important class of hypergraphs is no longer absurdly general. – Marek Sep 20 at 9:03

## 10 Answers

Hypergraphs and various properties that we can prove about them are the basis of many techniques that are used in modern mathematics. I will mention the proof of the Density Hales-Jewett Theorem by Tim Austin. The multidimensional Szemerédi theorem is another key-word you might want to look up. The Furstenberg-Katznelson theorem can be proven using hypergraph methods. The mathematics subject classification is 05C65. And more importantly, take a look at What's new and search for "hypergraphs" to see a lot of other results that involve hypergraph methods in their proofs. One more thing: for real-world applications, hypergraph methods appear in various places, including declustering problems, which are quite important to scale up the performance of parallel databases. - Here's another example for "real world" applications: research.microsoft.com/en-us/um/people/denzho/… – Felix Goldberg Sep 20 at 8:58

One example: A 3-uniform hypergraph is the natural way to model the variable/clause structure of a 3-SAT instance. Since 3-SAT is one of the most important algorithmic problems in computational complexity theory, hypergraphs play an important role there.
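The clause/variable modelling just described can be made concrete: each 3-SAT clause yields one hyperedge on its three variables. A toy sketch (the DIMACS-style signed-integer encoding is my choice, not from the answer):

```python
# A 3-SAT instance as a 3-uniform hypergraph: vertices are variables,
# each clause contributes one hyperedge of exactly 3 vertices.
# Literals are signed integers in DIMACS style: 3 means x3, -3 means NOT x3.
clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4)]

vertices = {abs(lit) for clause in clauses for lit in clause}
hyperedges = [frozenset(abs(lit) for lit in clause) for clause in clauses]

assert vertices == {1, 2, 3, 4}
assert all(len(e) == 3 for e in hyperedges)  # the hypergraph is 3-uniform
```

Structural properties of this hypergraph (e.g. how clauses overlap on shared variables) are exactly what many hardness and satisfiability arguments exploit.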
For just one of many possible examples, take a look at the following paper of Feige, Kim, and Ofek: http://research.microsoft.com/en-us/um/redmond/groups/theory/feige/homepagefiles/witness5.9.06.pdf - Hypergraphs (with no uniformity restriction) are also the natural way to model clause sets of general SAT. Each hyperedge represents the single set of literals that is forbidden by some clause. These structures have also been studied in constraint satisfaction, under the name microstructure complements. The hyperedges are there colourfully called "nogoods", a name coined by Stallman and Sussman in 1976 (yes, the Richard Stallman). hdl.handle.net/1721.1/6255 – András Salamon Jul 20 2010 at 17:34

I've done some work which made me appreciate the view that labelled hypergraphs are one of the most widely appropriate, general ways to represent data on stateful machines. In computer science, we commonly want to divide up state into a number of possibly overlapping data structures, which will contain and be referenced by pointers. This lends itself to the following representation: data structures are hyperedges. Non-pointer data within data structures are labels of the associated hyperedge. And pointers are represented by vertices, possibly (not always!) needing an attribute to indicate which hyperedge is the source and which is the target of the pointer. Computation, then, is graph rewriting. As Qiaochu says, hyperedges are absurdly general. Likewise, the notion of data. To make this useful, one needs to constrain the form the hyperedges take. What is nice is that, likewise, programming languages must constrain the way they represent state, and one can often cleanly map the programming-driven constraints into reasonable constraints on the hypergraphs. The idea crops up again and again in the literature on graph transformations.
A good stepping-off point is Drewes, Habel, & Kreowski, 1997, Hyperedge Replacement Graph Grammars, in Rozenberg, Handbook of Graph Grammars and Computing by Graph Transformations. - Matroids and, more generally, greedoids are special classes of hypergraphs. For these classes greedy algorithms give optimal solutions for optimization problems, and have low polynomial time complexity. Special cases are

• Kruskal's algorithm for finding minimal spanning trees, and
• Dijkstra's algorithm for finding shortest paths,

both in weighted graphs. There are many other matroid algorithms. See for example

• Bixby and Cunningham's chapter in the Handbook of Combinatorics, volume 1, or
• Jungnickel's book Graphs, networks and algorithms.

- Hypergraphs have been very useful algorithmically for the following "Steiner tree problem": given a graph (V, E) with a specified "required/terminal" vertex subset R of V and a cost for each edge, find a minimum-cost set of edges which connects all the terminals (and includes whatever subset of V \ R you like). Any minimal solution is a tree all of whose leaves are terminals (a so-called Steiner tree). Hypergraphs are useful because there is a "full component decomposition" of any Steiner tree into subtrees; the problem of reconstructing a min-cost Steiner tree from the set of all possible full components is the same as the min-cost spanning connected hypergraph problem (a.k.a. min hyper-spanning tree problem) for a hypergraph whose vertex set is R. That's the approach used by many modern algorithms for the Steiner tree problem (whether they are integer-program-based exact algorithms that are actually implemented, or non-implemented approximation algorithms with good provable approximation guarantees). I like this application since one must view the hypergraph as "like a graph" (want it to be connected and acyclic) and not like a set system.
This approach was used implicitly starting around 1990 by Zelikovsky and brought out more explicitly around 1997 by (I think) Warme and Prömel & Steger. A very cute paper using this approach is coming out at STOC 2010 by Byrka et al. As an $\epsilon$-shameless self-reference, there is more information in my thesis, which then delves into linear programming relaxations for this approach. - Directed hypergraphs are used to model chemical reaction networks. This is closely related to the biological application Peter Arndt mentions in his answer. The reaction network and the underlying hypergraph are related via the stoichiometry matrix, which is a matrix consisting of ones, zeros, and minus ones that generalizes the adjacency matrix of a graph. One obvious question you might ask about such a network is "are there any feedback loops?" This translates into the mathematical problem of finding directed hypercircuits in a directed hypergraph. This turns out to be an NP-complete problem (as shown in this paper by Can Özturan) and so gives another example of the type gowers mentions in his comment. - • Hypergraphs can be used to model some concurrent processes. "it's a much more complex structure than a regular graph" Hypergraphs could be represented as ordinary graphs, if one represents each hyperedge with an additional ordinary node and ordinary edges which connect the new node with the nodes incident to the hyperedge. It makes me feel that hypergraphs aren't a strict subset. - Hypergraphs can arise as Bruhat-Tits buildings of groups, see e.g. here. Some real-world applications: In this article the authors list some applications to biology. Their nice starting example is that if one wants to model a chemical reaction one can write A-->B for a process which transforms A into B and see this as the edge of a graph.
Sometimes such a process only works in the presence of some catalyzer (A+C-->B+C), making it a relation between three instead of two ingredients and giving a 2-edge of a hypergraph.

-

probably many thms and applications of math that don't explicitly refer to hypergraphs are actually related to them implicitly & could be recast in those terms, because hypergraphs are equivalently just "sets of sets". in this way they're also often interchangeable with/analogous to a 2d boolean array in computer science (and how ubiquitous is that structure in both software and mathematics? in computer science it might be referred to as a "design pattern" or even just a simple "discrete structure").

here is one key appearance/application of hypergraphs not mentioned so far. the erdos-rado sunflower lemmas[1], a key discovery of extremal graph/set theory, are about an intrinsic order or emergent "structure" to "random" hypergraphs if certain somewhat modest constraints are satisfied. this lemma shows up in numerous important lower bound proofs in monotone circuit theory in computer science, including new versions that strengthen or generalize the lemma.[2] because of their particular role in these "bottleneck"-type proofs it's not outlandish to conjecture that variations might be crucial in some future-established comp sci complexity class separations.

[1] erdos-rado sunflowers survey/refs, TCS.se
[2] The Monotone Complexity of k-Clique on Random Graphs by Rossman, containing new stronger lemmas on "quasi sunflowers"
[3] Razborovs method of approximations by WT Gowers

-

footnote— technically a hypergraph is equiv to the comp sci discrete structure of a 2d array of booleans, or an unordered list of lists. but note in comp sci often lists that can be implemented as unordered are implemented as ordered ones because the array ordering is actually more efficient to implement than a set.
– vzn Sep 20 at 4:31 I believe a hypergraph can implement, or at least represent the transition states of, a nondeterministic Turing machine. Can't yet find any literature demonstrating that though, which makes me wonder. I have an open question about this over on StackOverflow right now, with no takers as of this writing: http://stackoverflow.com/questions/9953981/can-a-hypergraph-represent-a-nondeterministic-turing-machine -
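The reaction-network encoding mentioned in one of the answers above can be made concrete with a small sketch. The species and reactions below are invented for illustration (mirroring the A + C --> B + C catalyzer example), not taken from any cited model: each reaction is a directed hyperedge with a tail set of reactants and a head set of products, and the stoichiometry matrix records the net consumption and production of each species.

```python
# Each reaction is a directed hyperedge: (tail set of reactants, head set of products).
# Species and reactions are hypothetical, chosen to mirror the A + C --> B + C example.
species = ["A", "B", "C"]
reactions = [
    ({"A", "C"}, {"B", "C"}),  # A + C --> B + C (C acts as a catalyst)
    ({"B"}, {"A"}),            # B --> A
]

def stoichiometry_matrix(species, reactions):
    """Rows indexed by species, columns by reactions; entry = produced - consumed."""
    idx = {s: i for i, s in enumerate(species)}
    mat = [[0] * len(reactions) for _ in species]
    for j, (tails, heads) in enumerate(reactions):
        for s in tails:
            mat[idx[s]][j] -= 1
        for s in heads:
            mat[idx[s]][j] += 1
    return mat

# The catalyst C nets out to a zero entry in its column, so the matrix alone
# cannot distinguish "C is required" from "C is uninvolved" -- information
# that the directed hypergraph keeps.
matrix = stoichiometry_matrix(species, reactions)
```

This also illustrates why the hypergraph is strictly more informative than its stoichiometry matrix for catalyzed reactions.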
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.106.028701
# Synopsis: Band together

#### Engineering the Electronic Band Structure for Multiband Solar Cells

N. López, L. A. Reichertz, K. M. Yu, K. Campman, and W. Walukiewicz

Published January 10, 2011

The sun offers us light for free, but most of the spectrum is missed by semiconductor solar cells, which are typically only efficient at converting light to electricity over a window of wavelengths. Connecting a series of different semiconductors can capture more of the solar spectrum, but these devices are complex and expensive to fabricate. Now, in a paper appearing in Physical Review Letters, Nair López Martínez and colleagues at Lawrence Berkeley National Laboratory and RoseStreet Labs Energy, both in the US, present a prototype solar cell in which they fold the light sensitivity of many semiconductors into a single material.

A semiconductor will absorb light most efficiently at the energy of its band gap—the energy it takes to excite an electron from the valence to the conduction band. To get around the limitation of a single band gap, López et al. engineered a semiconductor alloy, GaNAs, which has both an intermediate and a wide band gap and is therefore sensitive to both low- and high-energy light. In their test of the first "proof of concept" solar cell based on an intermediate band semiconductor, the team shows their device has a promising efficiency over a wide spectrum of the sun's light. – Jessica Thomas
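The band-gap picture above amounts to a one-line conversion between gap energy and the longest absorbable wavelength, λ = hc/E. The sketch below uses the convenient constant hc ≈ 1239.84 eV·nm; the 1.42 eV figure is the textbook gap of plain GaAs, quoted only for illustration and not taken from the paper.

```python
HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV·nm

def cutoff_wavelength_nm(band_gap_ev):
    """Longest photon wavelength (nm) a semiconductor with the given gap (eV) can absorb."""
    return HC_EV_NM / band_gap_ev

# A 1.42 eV gap (roughly GaAs) absorbs out to about 873 nm, in the near infrared;
# longer-wavelength photons pass through unconverted, which is why a single
# band gap misses part of the solar spectrum.
edge = cutoff_wavelength_nm(1.42)
```

A material with several gaps, as in the intermediate-band design, effectively combines several such absorption edges in one device.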
http://math.stackexchange.com/questions/177932/is-there-always-a-telescopic-series-associated-with-a-rational-number?answertab=active
# Is there always a telescopic series associated with a rational number?

Here is something I thought up while I was bored and my, erm, fish were busy: Given a rational number $p\in(0,1)$, are there always positive integers $n$, $c_m$, and $w_m$ such that $$p=\sum_{k=1}^\infty \frac1{\prod\limits_{m=1}^n (c_m k+w_m)}?$$ If so, how do you find these integers?

I got curious about this when I started playing around with telescopic series. As you know, the good thing about them is that they are easily evaluated through clever rewriting of the general term, after which there is much cancellation, leaving behind a few terms that add up to the desired result. That got me wondering whether a rational number always has a telescopic series representation. I don't really know how this might be proven (and I'm not that good at math), so I wish somebody could enlighten me. Thanks!

- – draks ... Aug 2 '12 at 9:27 @draks, I have seen that, but the numerators in that answer are $\neq 1$, while I am asking for a telescopic series whose general term has $1$ as the numerator. Otherwise, you can always just multiply whatever telescopic series with an appropriate rational factor. – Timmy Turner Aug 2 '12 at 9:30 ok, good luck and +1 – draks ... Aug 2 '12 at 9:34
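For what it's worth, one small instance of the desired shape can be checked exactly. Taking $n=2$, $(c_1,w_1)=(1,1)$ and $(c_2,w_2)=(1,2)$ gives the general term $\frac1{(k+1)(k+2)}=\frac1{k+1}-\frac1{k+2}$, which telescopes to $p=\frac12$. A sketch with exact rational arithmetic (an illustration of the question's format, not an answer to it):

```python
from fractions import Fraction

def partial_sum(N):
    """Sum of 1/((k+1)(k+2)) for k = 1..N; telescopes to 1/2 - 1/(N+2)."""
    return sum(Fraction(1, (k + 1) * (k + 2)) for k in range(1, N + 1))

# Each partial sum equals 1/2 - 1/(N+2) exactly, so the series converges to 1/2.
assert partial_sum(10) == Fraction(1, 2) - Fraction(1, 12)
assert partial_sum(1000) == Fraction(1, 2) - Fraction(1, 1002)
```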
http://mathoverflow.net/questions/63923/shuffle-hopf-algebra-how-to-prove-its-properties-in-a-slick-way/63928
## Shuffle Hopf algebra: how to prove its properties in a slick way?

Let $k$ be a commutative ring with $1$, and let $V$ be a $k$-module. Let $TV$ be the $k$-module $\bigoplus\limits_{n\in\mathbb N}V^{\otimes n}$, where all tensor products are over $k$.

We define a $k$-linear map $\mathrm{shf}:\left(TV\right)\otimes\left(TV\right)\to TV$ by $\mathrm{shf}\left(\left(a_1\otimes a_2\otimes ...\otimes a_i\right)\otimes\left(a_{i+1}\otimes a_{i+2}\otimes ...\otimes a_n\right)\right) = \sum\limits_{\sigma\in\mathrm{Sh}\left(i,n-i\right)} a_{\sigma^{-1}\left(1\right)} \otimes a_{\sigma^{-1}\left(2\right)} \otimes ... \otimes a_{\sigma^{-1}\left(n\right)}$ for every $n\in \mathbb N$ and $a_1,a_2,...,a_n\in V$. Here, $\mathrm{Sh}\left(i,n-i\right)$ denotes the set of all $\left(i,n-i\right)$-shuffles, i. e. of all permutations $\sigma\in S_n$ satisfying $\sigma\left(1\right) < \sigma\left(2\right) < ... < \sigma\left(i\right)$ and $\sigma\left(i+1\right) < \sigma\left(i+2\right) < ... < \sigma\left(n\right)$.

We define a $k$-linear map $\eta:k\to TV$ by $\eta\left(1\right)=1\in k=V^{\otimes 0}\subseteq TV$.

We define a $k$-linear map $\Delta:TV\to \left(TV\right)\otimes\left(TV\right)$ by $\Delta\left(a_1\otimes a_2\otimes ...\otimes a_n\right) = \sum\limits_{i=0}^n \left(a_1\otimes a_2\otimes ...\otimes a_i\right)\otimes\left(a_{i+1}\otimes a_{i+2}\otimes ...\otimes a_n\right)$ for every $n\in \mathbb N$ and $a_1,a_2,...,a_n\in V$.

We define a $k$-linear map $\varepsilon:TV\to k$ by $\varepsilon\left(x\right)=x$ for every $x\in V^{\otimes 0}=k$, and $\varepsilon\left(x\right)=0$ for every $x\in V^{\otimes n}$ for every $n\geq 1$.

Then the claim is:

1. The $k$-module $TV$ becomes a Hopf algebra with multiplication $\mathrm{shf}$, unit map $\eta$, comultiplication $\Delta$ and counit $\varepsilon$.
It even becomes a graded Hopf algebra with $n$-th graded component $V^{\otimes n}$. 2. The antipode $S$ of this Hopf algebra satisfies $S\left(v_1\otimes v_2\otimes ...\otimes v_n\right) = \left(-1\right)^n v_n\otimes v_{n-1}\otimes ...\otimes v_1$ for every $n\in \mathbb N$ and any $v_1,v_2,...,v_n\in V$. I call this Hopf algebra the shuffle Hopf algebra, although I am not sure whether this is the standard notion. What I know is that the algebra part of it is called the shuffle algebra (note that it is commutative), while the coalgebra part of it is called the tensor coalgebra or deconcatenation coalgebra. Question: Is there a slick, or at least a not-too-long proof (I'm speaking of <10 pages in detail) for the statements 1 and 2? The best I can come up with is this here: For 1, we WLOG assume that $V$ is a finite free $k$-module (because all we have to prove are some identities involving finitely many elements of $V$; now we can see these elements as images of a map from a finite free $k$-module $W$, and by functoriality it is thus enough to prove these identities in $W$). Then, we have $V^{\ast\ast}\cong V$, and we notice that the graded dual of our above graded Hopf algebra (we don't know that it is a graded Hopf algebra yet, but at least it has the right signature) is the tensor Hopf algebra of $V^{\ast}$, for which Hopf algebraicity is much easier to show. (Note that this only works with the graded dual, not with the standard dual, because $TW$ is free but not finite free.) For 2, we prove that $v_1\otimes v_2\otimes ...\otimes v_n\mapsto \left(-1\right)^n v_n\otimes v_{n-1}\otimes ...\otimes v_1$ is indeed a $\ast$-inverse of $\mathrm{id}$ by checking the appropriate equalities combinatorially (i. e., showing that positive and negative terms cancel out). These things are ultimately not really difficult, but extremely annoying to write up. 
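The cancellation claimed in 2 can at least be checked mechanically on small words. The sketch below represents elements of $TV$ as integer linear combinations of words (tuples), which suffices for identities on a basis when $V$ is free; it is a brute-force verification, not a proof.

```python
from collections import defaultdict
from itertools import combinations

def shuffle(u, v):
    """shf(u ⊗ v) for basis words u, v: a dict mapping words to coefficients.
    Each (len(u), len(v))-shuffle chooses which positions receive u's letters."""
    n = len(u) + len(v)
    out = defaultdict(int)
    for pos in combinations(range(n), len(u)):
        pos, it_u, it_v = set(pos), iter(u), iter(v)
        word = tuple(next(it_u) if k in pos else next(it_v) for k in range(n))
        out[word] += 1
    return dict(out)

def antipode(w):
    """The claimed formula: S(v_1 ⊗ ... ⊗ v_n) = (-1)^n v_n ⊗ ... ⊗ v_1."""
    return {tuple(reversed(w)): (-1) ** len(w)}

def convolve_S_id(w):
    """(S * id)(w): sum over deconcatenations w = w'w'' of shf(S(w') ⊗ w'')."""
    total = defaultdict(int)
    for i in range(len(w) + 1):
        for u, c in antipode(w[:i]).items():
            for word, m in shuffle(u, w[i:]).items():
                total[word] += c * m
    return {word: c for word, c in total.items() if c}

# Hopf axiom S * id = η ∘ ε: the identity on the empty word, zero elsewhere.
assert convolve_S_id(()) == {(): 1}
assert convolve_S_id((1, 2)) == {}
assert convolve_S_id((1, 2, 3)) == {}
assert convolve_S_id((1, 1)) == {}  # repeated letters: multiplicities still cancel
```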
Somehow it seems to me that there are simpler proofs, but I am unable to find any proof of this at all in the literature (except for the "obviously" kind of proof). One reason why I am thinking that there are simpler proofs is that the similar statements for the tensor Hopf algebra (this is another Hopf algebra with underlying $k$-module $TV$; it has the same counit and unit map as the shuffle Hopf algebra, but the multiplication is the standard tensor algebra multiplication, and the comultiplication is the so-called shuffle comultiplication) are significantly easier to prove. In particular, 2 holds verbatim for the tensor Hopf algebra, but the proof is almost trivial (since $v_1\otimes v_2\otimes ...\otimes v_n$ equals $v_1\cdot v_2\cdot ...\cdot v_n$ in the tensor Hopf algebra). What would Grothendieck do? Is there a good functorial interpretation, i. e., is the algebraic group induced by the shuffle Hopf algebra (since it is commutative) of any significance? - Persi Diaconis gave a talk at Berkeley a couple weeks ago on shuffling (of cards) and Hopf algebras. Unfortunately I don't remember much of what he said. – Michael Lugo May 4 2011 at 15:53 Let me just add a little further complication: one could also start with a graded vector space $V = \bigoplus_{k \in \mathbb{Z}} V^k$ instead of $V$. Then the same kind of construction goes through with zillions of annoying additional signs. This is really needed for several purposes. The only proofs I know mean at some point to sit down and actually do it... :( – Stefan Waldmann May 4 2011 at 16:11 I haven't been able to "sit down and actually do it" for some parts of 1 actually (which is why I trick around with functoriality, freeness and graded duality). – darij grinberg May 4 2011 at 16:18 @Stefan: If one gives the proofs correctly that Darij asks for, then there should be nothing extra needed with the signs.
The trick when working with super(-style) vector spaces is to work with a "generalized element" formalism that allows, at the end of the day, all calculations to occur in regular vector spaces with no weird signs. For details, see the article by Deligne and Morgan in Quantum fields and strings: a course for mathematicians, MR1701597. – Theo Johnson-Freyd May 9 2011 at 3:01

## 8 Answers

Hi Darij,

### A geometric way of thinking about the shuffle algebra

A geometric way of seeing $T(V)$ with the shuffle product is by considering functions on the loop space of $V^*$ (i.e. the space of continuous maps from $S^1$ to $V^*$) that are given by iterated integrals. You'll see that the product of two iterated integrals is precisely the shuffle product. Moreover, this way you can see the coproduct coming from the concatenation of loops and the antipode from reversing orientation (if I remember well). But here you have to be in a situation when $V^{**}\cong V$.

Let me be a bit more precise. For convenience I will work with $T(V^*)$ equipped with the shuffle product $\star$, the deconcatenation coproduct $\Delta$, and $S$ as you defined it. Now let me consider the algebra $\mathcal A$ of functionals on $L(V^*)=C_*^0(S^1,V^*)$ (the subscript $*$ means that I ask that $0$ is sent to $0$). There is an algebra monomorphism $T(V^*)\to \mathcal{A}$ given as follows: $$\xi_1\otimes\cdots\otimes \xi_n\mapsto \left(\gamma\mapsto \int_{0<t_1<\cdots<t_n<1}\xi_1(\gamma(t_1))\dots\xi_n(\gamma(t_n))dt_1\dots dt_n\right)$$ Now observe that the composition of loops and the orientation reversal define algebra morphisms $\Delta_A:\mathcal A\to\mathcal A\otimes\mathcal A$ and $S_A:\mathcal A\to\mathcal A$. The point is that $\Delta_A$ and $S_A$ do not really satisfy the axioms you want (e.g.
coassociativity of $\Delta_A$) BUT their restriction onto the image of $T(V^*)$ does (you have to use an avatar of Stokes' Theorem to see this - or, shortly: iterated integrals are not sensitive to reparametrization), and actually coincides with $\Delta$ and $S$ (very simple computation).

Below are a few algebraic considerations (independent of the above answer).

### The antipode

The main point concerning the antipode is that any connected filtered bialgebra is a filtered Hopf algebra, the antipode being defined as $S(x)=\sum_{k\geq0}(\eta\circ\epsilon-\mathrm{id})^{*k}(x)$. Here $*$ denotes the convolution product. In your example the bialgebra you consider is actually graded, so the result applies. You can find the above claim (and its proof) in these lecture notes (I think you are going to like them) by Dominique Manchon (Corollary II.3.2).

### The group algebra of a free group

The degree completion of $T(V)$ is the structure ring of the pro-unipotent completion of a free group.

I hope this can help.

-

It is not that clear to me how your formula for $S$ implies my part 2. But thanks for reminding me of the Manchon paper at a moment when I am actually near to a good printer! – darij grinberg May 4 2011 at 19:21

1 The formula for $S$ is equivalent to the following recursive one. For $x\in \ker(\epsilon)$ one has $$S(x)=-x-S(x')\star x''$$ where $x'\otimes x''$ is the Sweedler notation for the reduced coproduct. Then you can prove by induction that it gives your easier formula. But I think that the loop interpretation is nicer (one does not need to compute a lot). – DamienC May 4 2011 at 19:31

That recursive formula is actually my argument - one still must show that everything cancels out nicely. – darij grinberg May 4 2011 at 19:42

I will look into the loop integrals tomorrow. I need to be in good shape to understand analysis... – darij grinberg May 4 2011 at 19:43

There is not that much analysis!
The point is that iterated integrals are simply integrals over simplices (of products of pull-backs of linear forms). Now if you want to express an integral over the cartesian product of two simplices in terms of linear combinations of integrals over simplices, then you'll see that shuffles appear very naturally :-) – DamienC May 4 2011 at 19:52

Suppose $V$ is a free $k$-module with base $X = \{x_i, i\in I\}$. I like to take as a definition of the shuffle algebra $Sh(X)$ that it is the topological dual of the Hopf algebra $k\langle\langle X\rangle\rangle$ of non commutative series in the variables $x_i$ (i.e. the completed tensor algebra $\widehat{T(V^*)}$ wrt the augmentation ideal). This last algebra is easy to understand. Then one can derive the formula for the shuffle product and we find the same as yours. And your points 1 and 2 are obvious by duality.

Note that in this point of view, instead of looking at a morphism of algebras $\varphi: Sh(X) \to A$ we just look at the corresponding generating series $\Phi = \sum_m \varphi(m) m^*$ ($m$ running over a basis). The fact that "$\varphi$ is a morphism of algebras" translates as "$\Phi$ is diagonal in $A\langle\langle X\rangle\rangle$" (i.e. $\Delta \Phi = \Phi\otimes \Phi$ and $\varepsilon(\Phi)= 1$).

-

2 This is more or less my argument (you are using topological duals where I am using graded duals), except I didn't notice that part 2 too follows by duality from its analogue in the tensor algebra. +1 for this nice observation, but the proof is still not of the simplicity I strived for... -- So the algebraic group corresponding to the shuffle Hopf algebra is the multiplicative group of diagonal power series in $X$.
If the characteristic of $k$ is zero, this should be isomorphic to the additive group of primitive power series in $X$. Interesting. – darij grinberg May 4 2011 at 16:32

I believe the right way to consider this algebra is to view it as the free zinbiel algebra. A zinbiel algebra has a single operation $\circ$ which must satisfy $$(x \circ y) \circ z = x \circ (y \circ z + z \circ y).$$ The zinbiel operation $\circ$ in your algebra is the sum over all of the $(p,q)$-shuffles which in the notation of the question have $\sigma^{-1}(1) < \sigma^{-1}(i+1)$. So the commutative product defined in the question is $a\cdot b = a \circ b + b \circ a$.

The answer to your question now comes from a result which will state that the free zinbiel algebra viewed as a commutative algebra is itself free; then you just need to check that your shuffle coproduct is defined on the generators. This will show that it is a bialgebra. An analogous result is that the free associative algebra is a free Lie algebra with the associated Lie bracket.

One way to prove this result would be to decompose the Zinbiel operad as a left module for the commutative operad. But I imagine that the result is already in the literature somewhere. I guess that there are other names for zinbiel algebras, perhaps shuffle algebras or something similar. They do occur naturally, for instance if you want to decompose the direct product of two simplices (which isn't a simplex) into simplices in a natural way; see p. 278 of Allen Hatcher's Algebraic Topology.

-

2 I had never heard of that before, and I'm not sure if I love or hate the name. At least it is more creative than calling it co-Leibniz. – Dan Petersen May 9 2011 at 11:49

1 Very interesting, but the shuffle algebra $TV$ is not the free commutative algebra over $V$. (I am wondering whether it is the free commutative algebra over something, though - maybe over the free Lie algebra of $V$ in some way?) – darij grinberg May 9 2011 at 12:22

2 @Dan: I don't like the name, although perhaps it's more agreeable in French.
It does make sense in some ways as co-Leibniz could be interpreted as the cooperad which is the linear dual of Leibniz. So perhaps it would be nice to have a convention for the naming of Koszul duals. Trying to come up with names for self dual operads could be fun! – James Griffin May 9 2011 at 12:48

1 Zinbiel operad is a free module over Com - that surely has been proved by many people, but in particular the technique of my paper arxiv.org/abs/0907.4958 applies in a very straightforward way to do that. – Vladimir Dotsenko May 26 2011 at 12:40

1 @Darij: $T(V)$ equipped with the shuffle product is isomorphic to the symmetric algebra generated by Lyndon words. Or, if you prefer, it is the symmetric algebra $S(L^c(V))$ of the free Lie CO-algebra $L^c(V)$ of $V$. This is dual to the standard statement that $T(W)$ equipped with the concatenation product and shuffle coproduct is isomorphic as a Hopf algebra to $U(L(W))$ (take $W=V^*$). – DamienC Sep 14 2011 at 20:26

Well, assume again that $V$ is a free $k$-module with base $X=\{x_i, i\in I\}$. One has to avoid the fact that $k\langle\langle X\rangle\rangle$ is NOT a Hopf algebra, be it with shuffle or concatenation, except when $X=\emptyset$, because you have to take Sweedler's dual and cannot consider the complete dual. A striking (but limited to free - finite or infinitely generated - $V$) proof of the first statement goes as follows:

a) The Hopf algebra $k\langle X\rangle$ with concatenation as product and co-shuffle as coproduct is graded - in finite dimensions - over $\mathbb{N}^{(I)}$.

b) Then, the shuffle Hopf algebra is exactly the graded dual of it, with the pairing given by $$\langle x_{i_1}\otimes\ldots\otimes x_{i_n}\mid x_{j_1}\otimes\ldots\otimes x_{j_m}\rangle=\delta_{i_1,j_1}\ldots \delta_{i_n,j_n}$$ if $n=m$, and $0$ if $n\not=m$.

c) (For statement 2.) the antipode is just $S^*=S$ (the adjoint of $S$).

-

Thanks. This is a nice variation of the proof from YBL's post.
(I think $K\left\langle \left\langle X \right\rangle \right\rangle$ is a Hopf algebra in an appropriate category of topological vector spaces with some kind of completed tensor products, but I don't know this category. Or is there a good reason why there is no such category?) – darij grinberg Jul 21 at 17:39

Thanks, and to finish the proof in the general case, one can proceed as follows: Let $V$ be a $k$-module (free or not) and $X=\{x_i, i\in I\}$ (finite or not) a generating family of $V$ with $\gamma : F=k^{(I)}\rightarrow V$ the canonical map. Then, the tensor extension $$T(\gamma) : k\langle X\rangle \rightarrow T(V)$$ is an onto mapping of algebras with unit and of co-algebras with counit between the shuffle algebras and concatenation co-algebras. From this, one gets without effort the bi-algebra structure of $T(V)$. The antipode is derived as previously for the proof of $log_*$. – Duchamp Gérard H. E. Jul 21 at 22:05

Oh well, there is maybe some way out (I learned it from Martin Bordmann at some point). It does not avoid all computations but works slightly more transparently than just brute force... The main point is that the deconcatenation $\Delta$ makes the tensor algebra $TV$ the cofree coalgebra in a certain subcategory of coalgebras (with a unique group-like element $1$ and nilpotent augmentation ideal). Then the idea is to *define* the shuffle multiplication as a coalgebra morphism $\mu\colon TV \otimes TV \longrightarrow TV$ which we only need to specify on cogenerators, where one sets $\mu\colon \xi\otimes\eta \mapsto \mathrm{pr}_V(\xi)\epsilon(\eta) + \epsilon(\xi)\mathrm{pr}_V(\eta)$ for $\xi, \eta \in TV$ with $\mathrm{pr}_V$ being the projection onto $V$. Being a coalgebra morphism, the associativity of $\mu$ can then again be checked on cogenerators only, which is pretty easy since $\mu \circ (\mathrm{id} \otimes \mu)$ as well as $\mu \circ (\mu \otimes \mathrm{id})$ are still both coalgebra morphisms and the relevant coalgebras are cofree.
The advantage of this point of view is that one can elaborate on it a bit to also include the symmetrized versions as well as the graded versions of it. Beside that, one learns something important about the cofreeness (at least in this restricted category). OOPS: YBL's answer just popped in: I think this is essentially the same idea.

-

1 With that approach (which, I think, Loday-Valette use in math.unice.fr/~brunov/Operads.pdf ) one would have to prove that the $\mu$ is really my $\mathrm{shf}$. I don't understand how this can be done, except by invoking the usual duality handwaving (it is easy for the dual case). Also, it doesn't say much about the explicit form of $S$. – darij grinberg May 4 2011 at 16:39

I understand. That is indeed a shortcoming of this way of constructing things. The only hope is that one actually does not need the explicit formula for the shuffle multiplication except on (co-)generators. At least in some applications this is the case, and the above approach is useful. As I said, unwinding this in the graded case is a nightmare; I don't even know the explicit formula for $\mathrm{shf}$ unless I'm allowed to use the $\pm$ sign to hide the problem ;) – Stefan Waldmann May 4 2011 at 16:46

I really hope it is the formula I gave transformed by the Koszul rule. "Hope" because I have never seen the Koszul rule abstractly formulated and proven, but somehow everybody seems to believe in it. – darij grinberg May 4 2011 at 16:47

Yeah, this always gets put under the carpet. In any case: thanks for the link, these lecture notes look very nice. – Stefan Waldmann May 4 2011 at 16:56

They are very nice when they actually prove things. ;) – darij grinberg May 4 2011 at 17:43

I have come across a nice way to think about this Hopf algebra. Let $v_1\otimes\ldots\otimes v_n$ be a monomial in $TV$. Rather than thinking of it as just a word, think of it as a path of $n$ steps in $V$.
Geometrically (if $V$ is defined over $\mathbb{R}$) it could be a piecewise linear path $\mu:[0,n]\rightarrow V$, where $\mu(k) = v_1+\ldots+v_k$. Of course it's not really a geometric thing, it's a sequence of points, so we don't need anything to be over $\mathbb{R}$.

The coproduct comes from splitting up the path at the integer values (including $0$ and $n$). The coassociativity is obvious: both sides of the equation split a path into three. The counit sends any non-trivial path to $0$, the trivial path to $1$.

The product is not the concatenation of paths, of course. Let $\mu$ and $\nu$ be paths of length $n$ and $m$ respectively. Their direct product $\mu\times\nu$ has domain $[0,n]\times[0,m]$ and codomain $V\times V$, not the tensor product. So how do we get a path from this direct product? Well, we take paths in $[0,n]\times[0,m]$ where each step is a positive unit step in either the horizontal or vertical direction. The image of such a step is of the form $(v,0)$ or $(0,v)$; forget the zeros and we have a new path of the same type as we started with. But this required choice: we chose a path in $[0,n]\times[0,m]$. What possible choices were there? Well, the set of such paths is easily seen as the set of shuffles! And so the product is given by summing all the possible paths. The associativity of the product is now easily seen, as the product of three paths involves a summation over all possible paths in not a square but now a cube. The unit is the trivial path.

So how about the bialgebra relation? Well, this now has a simple combinatorial description. Take two paths $\mu$ and $\nu$; on one side of the bialgebra compatibility we have to take their product and then their coproduct. Their product is indexed by paths through a square; taking the coproduct splits such a path at a point, so taking the product then the coproduct gives a sum over paths in a square with a chosen point.
Reindex: a sum over all integer points of the square with a path from $(0,0)$ to that point, followed by a path from the point to $(n,m)$, the other corner of the square. Now onto the other side of the equation: take the coproducts of each path; this is indexed by an integer point in $[0,n]$ and an integer point in $[0,m]$, which taken together is an integer point in the square. Now take the products of the paths, left side with left side and right side with right side. We get precisely what we hoped for: all possible paths from $(0,0)$ to the point in the square, followed by a path from the point to $(n,m)$.

So we have a bialgebra, and the antipode just reverses the paths.

Now I should stress that all I have done is to make the combinatorics apparent as paths in a square; if you want a quick proof, just do the combinatorics.

-

This is very beautiful! I would wish for a more detailed elaboration of "the antipode just reverses the paths" though, as I don't understand that claim in this form. – darij grinberg May 26 2011 at 11:42

1 Well, a path is a series of steps; the antipode applies those steps in the reverse order with a sign added for each step. At least that's what the formula in the question does :-). However actually showing that this reversal defines a good antipode seems to be much harder than I had thought. I think I have a proof, but it's really just checking the combinatorics by hand and certainly isn't enlightening. I'll try to think of a better proof involving paths. – James Griffin May 26 2011 at 15:52

I think there is a nice way to prove the antipode identity, but I've not quite worked it out. You can view the convolution of the antipode and the identity as a kind of summation over foldings of the paths; for each point of the path you look at the two paths leaving that point, then you shuffle them together; now when you compare the 'foldings' arising from the two neighbouring points you find that they cancel out. Perhaps.
– James Griffin May 30 2011 at 9:27

For 1: If you are willing to accept facts about the bar construction, consider $A = \Sigma^{-1} V\oplus k$ and define an algebra structure on $A$ by insisting that the product on $\Sigma^{-1} V$ is zero. This makes $A$ into a commutative algebra. Now, we have $BA$ the bar construction on $A$, and $BA$ is the same as your $TV$.

A map $C \to BA$ of graded coalgebras is completely determined by the projection $C\to BA \to A$ of degree $-1$. Given a map $f: C\to A$ of degree $-1$, it extends to a coalgebra map $C\to BA$ provided that it satisfies $0 = m(f\otimes f)\Delta$ where $m$ is the product on $A$, and $\Delta$ is the coproduct on $C$.

Now, check that if $A$ is a commutative algebra, then the map $BA\otimes BA \to A$ given by $[a]\otimes 1 \mapsto a$, and $1\otimes [a] \mapsto a$, and zero on all other tensor factors, satisfies the condition. Then, check that the induced map $BA\otimes BA \to BA$ is the shuffle product. Similarly, check that it is associative by projecting to $A$.

-

I don't know so much about the antipode (or perhaps Hopf algebras in general) but what you describe looks an awful lot like the bar construction. It has the same product, coproduct, unit, and counit. In fact, it will always be of this form. There are quite general versions of the bar construction, and the above looks like the bar construction of a monad applied to $V$ since $V$ does not have a monoidal structure to begin with, or maybe the bar construction on $TV$ since $TTV \cong TV$. Most things that are true about the bar construction have slick proofs. (I think, or you can appeal to some general facts about the bar construction)

-
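DamienC's iterated-integral answer above can be tested exactly in the simplest setting: take the straight path $\gamma(t)=t$ and monomial integrands $t^{e_i}$, so the depth-$n$ iterated integral over the simplex $0<t_1<\cdots<t_n<1$ has the closed form $\prod_{k}\frac1{e_1+\cdots+e_k+k}$. The identity "product of two iterated integrals = sum over their shuffles" then becomes a finite identity of rational numbers, which the sketch below verifies with exact arithmetic (an illustration of the claim, not part of the original answers):

```python
from fractions import Fraction
from itertools import combinations

def iterated(exponents):
    """Exact value of ∫_{0<t_1<...<t_n<1} t_1^{e_1} ... t_n^{e_n} dt_1...dt_n,
    computed by integrating from the inside out: ∏_k 1/(e_1+...+e_k+k)."""
    val, running = Fraction(1), 0
    for e in exponents:
        running += e + 1
        val /= running
    return val

def shuffles(u, v):
    """All interleavings of the exponent words u and v (with multiplicity)."""
    n = len(u) + len(v)
    for pos in combinations(range(n), len(u)):
        it_u, it_v = iter(u), iter(v)
        yield tuple(next(it_u) if k in pos else next(it_v) for k in range(n))

# The product of two iterated integrals is the sum over their shuffles.
for u, v in [((1,), (1,)), ((1, 0), (2,)), ((0, 1), (2, 3))]:
    assert iterated(u) * iterated(v) == sum(iterated(w) for w in shuffles(u, v))
```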
# The Unapologetic Mathematician ## More about hom Many times we’ll be interested in an abelian group on which more than one ring has an action, or on which a ring has two different actions. Most interesting are the cases when these different actions commute with each other. That is, if $M$ carries an $R$-module structure with action $r\cdot m$ and an $S$-module structure with action $s*m$, then it’s really nice if $r\cdot(s*m)=s*(r\cdot m)$. We’ll keep track of this sort of thing by hanging subscripts off of the module’s name to keep track of mutually commuting actions. In this case we’ll say ${}_{RS}M$ to denote two commuting left actions, one by $R$ and one by $S$. Every module over a commutative ring has both a left and a right action, and these clearly commute with each other because the ring is commutative. That is, if $M$ is an $R$-module for a commutative ring $R$, we automatically get two commuting $R$ actions, written ${}_RM_R$. Another example we’ll see in more generality later is the tensor powers of an abelian group $A$. The group itself is a module over ${\rm End}(A)$. Now the tensor power $A^{\otimes n}$ also is a module over ${\rm End}(A)$. We just define $f\cdot(a_1\otimes...\otimes a_n)=f(a_1)\otimes...\otimes f(a_n)$ and extend linearly. But we also have a homomorphism from the symmetric group $S_n$ to the monoid of endomorphisms of $A^{\otimes n}$. For a permutation $\sigma$ we define $\sigma\cdot(a_1\otimes...\otimes a_n)=a_{\sigma^{-1}(1)}\otimes...\otimes a_{\sigma^{-1}(n)}$, shuffling around the factors. This extends to a unique homomorphism $\mathbb{Z}[S_n]\rightarrow{\rm End}(A^{\otimes n})$, which gives another module structure on $A^{\otimes n}$. These two actions, one of $\mathbb{Z}[S_n]$ and one of ${\rm End}(A)$, commute with each other. 
Now that I have some examples down, I want to consider how these extra module structures play with $\hom_{R-{\rm mod}}$ — which tells us the homomorphisms between left $R$-modules — and $\hom_{{\rm mod}-R}$ — which does the same for right $R$ modules. I’ll mostly treat the left module case because the other side is very similar. The first thing to make clear is that $\hom_{R-{\rm mod}}$ eats up the $R$-module structure. If we take left modules ${}_RM$ and ${}_RN$ then $\hom_{R-{\rm mod}}({}_RM,{}_RN)$ is just an abelian group with no $R$-module structure at all. The interesting things happen when we’ve got extra module structures floating around. If $N$ also has a right $S$-module structure for another ring $S$, then we get a right $S$-module structure on the homomorphisms. We can define $\left[f\cdot s\right](m)=f(m)\cdot s$. On the right side of the equation we’re using the given action of $S$ on $N$. It’s not too hard to verify that this defines a right $S$ action on $\hom_{R-{\rm mod}}({}_RM,{}_RN_S)$. On the other hand, if $M$ has a right $S$-module structure, we get a left $S$ action on the homomorphisms. We define $\left[s\cdot f\right](m)=f(m\cdot s)$. Let’s verify this one carefully: $\left[(s_1s_2)\cdot f\right](m)=f(m\cdot(s_1s_2))=f((m\cdot s_1)\cdot s_2)=$ $\left[s_2\cdot f\right](m\cdot s_1)=\left[s_1\cdot(s_2\cdot f)\right](m)$ There are a number of similar cases, which you should check through: • $\hom_{R-{\rm mod}}({}_{RS}M,{}_RN)$ is a right $S$-module. • $\hom_{R-{\rm mod}}({}_RM_S,{}_RN)$ is a left $S$-module. • $\hom_{R-{\rm mod}}({}_RM,{}_{RS}N)$ is a left $S$-module. • $\hom_{R-{\rm mod}}({}_RM,{}_RN_S)$ is a right $S$-module. • $\hom_{{\rm mod}-R}({}_SM_R,N_R)$ is a right $S$-module. • $\hom_{{\rm mod}-R}(M_{RS},N_R)$ is a left $S$-module. • $\hom_{{\rm mod}-R}(M_R,{}_SN_R)$ is a left $S$-module. • $\hom_{{\rm mod}-R}(M_R,N_{RS})$ is a right $S$-module. 
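For completeness, the check that $\left[f\cdot s\right](m)=f(m)\cdot s$ really defines a right action (asserted above to be "not too hard to verify") runs along the same lines as the left-action verification:

```latex
\left[f\cdot(s_1s_2)\right](m) = f(m)\cdot(s_1s_2)
                               = \left(f(m)\cdot s_1\right)\cdot s_2
                               = \left[f\cdot s_1\right](m)\cdot s_2
                               = \left[(f\cdot s_1)\cdot s_2\right](m)
```

Note that no flip of $s_1$ and $s_2$ occurs here, which is why an action on the second slot carries through with its handedness intact.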
In general, $\hom_{R-{\rm mod}}$ eats a left $R$-module structure from each module we stick in. Extra module structures in the second slot carry through, while extra module structures in the first slot get flipped over from right to left and back. The same goes for $\hom_{{\rm mod}-R}$, except it eats a right $R$-module structure from each slot.

One explicit example of this effect: over a commutative ring $R$, every left module is also a right module and vice-versa. There’s really no difference between $\hom_{R-{\rm mod}}$ and $\hom_{{\rm mod}-R}$ here, so we’ll just write $\hom_R$. Now we’re looking at $\hom_R({}_RM_R,{}_RN_R)$, so one structure (say the left one, for now) on each module gets eaten, leaving a right $R$-module structure on each slot. The second slot carries through and the first slot flips over, giving a left and a right action of $R$ on $\hom_R(M,N)$. This will come in very handy when we start considering modules over fields.

Posted by John Armstrong | Ring theory

## 4 Comments »

1. [...] has deeper structure itself. For example, the set of homomorphisms between two abelian groups is itself an abelian group, because abelian groups are modules over the commutative ring . More generally, the set of [...]

Pingback by | August 13, 2007 | Reply

2. Why is $\sigma$ inverted in the definition of $\mathbb{Z}[S_n]\rightarrow{\rm End}(A^{\otimes n})$?

Comment by MathOutsider | October 21, 2007 | Reply

3. It’s so that the permutation $(1\,2\,3)$ sends $a_1\otimes a_2\otimes a_3$ to $a_3\otimes a_1\otimes a_2$. That is, what was in slot 1 is now in slot 2, what was in slot 2 is now in slot 3, and what was in slot 3 is now in slot 1.

Comment by | October 21, 2007 | Reply

4. [...] categories. In fact, since we’re working over a field (which is a commutative ring) the properties of -functors tell us that is enriched over [...]
Pingback by | May 19, 2008 | Reply
# Chapter 16   2D Straight Skeleton and Polygon Offsetting

Fernando Cacciola

## 16.1   Definitions

### 16.1.1   2D Contour

A 2D contour is a closed sequence (a cycle) of 3 or more connected 2D oriented straight line segments called contour edges. The endpoints of the contour edges are called vertices. Each contour edge shares its endpoints with at least two other contour edges. If the edges intersect only at the vertices, and are at most coincident along a line but do not cross one another, the contour is classified as simple.

A contour is topologically equivalent to a disk, and if it is simple, it is said to be a Jordan Curve. Contours partition the plane into two open regions: one bounded and one unbounded. If the bounded region of a contour is a singly-connected set, the contour is said to be strictly-simple.

The orientation of a contour is given by the order of the vertices around the region they bound. It can be Clockwise (CW) or Counter-clockwise (CCW). The bounded side of a contour edge is the side facing the bounded region of the contour. If the contour is oriented CCW, the bounded side of an edge is its left side.

A contour with a null edge (a segment of length zero given by two consecutive coincident vertices), or with edges not connected to the bounded region (an antenna: 2 consecutive edges going back and forth along the same line), is said to be degenerate (collinear edges are not considered a degeneracy).

### 16.1.2   2D Polygon with Holes

A 2D polygon is a contour. A 2D polygon with holes is a contour, called the outer contour, having zero or more contours, called inner contours, or holes, in its bounded region. The intersection of the bounded region of the outer contour and the unbounded regions of each inner contour is the interior of the polygon with holes. The orientation of the holes must be opposite to the orientation of the outer contour, and there cannot be any intersection between any two contours. A hole cannot be in the bounded region of any other hole.
A polygon with holes is strictly-simple if its interior is a singly-connected set. The orientation of a polygon with holes is the orientation of its outer contour. The bounded side of any edge, whether of the outer contour or a hole, is the same for all edges. That is, if the outer contour is oriented CCW and the holes CW, both contour and hole edges face the polygon interior to their left. Throughout the rest of this chapter the term polygon will be used as a shortcut for polygon with holes.

Figure:  Examples of strictly simple polygons: One with no holes and two edges coincident (left) and one with 2 holes (right).

Figure:  Examples of non-simple polygons: One folding into itself, that is, non-planar (left), one with a vertex touching an edge (right), and one with a hole crossing into the outside (bottom)

### 16.1.3   Inward Offset of a Non-degenerate Strictly-Simple Polygon with Holes

For any 2D non-degenerate strictly-simple polygon with holes called the source, there can exist a set of 0, 1 or more inward offset polygons with holes, or just offset polygons for short, at some Euclidean distance t > 0 (each being strictly simple and non-degenerate). Any contour edge of such an offset polygon, called an offset edge, corresponds to some contour edge of the source polygon, called its source edge. An offset edge is parallel to its source edge and has the same orientation. The Euclidean distance between the lines supporting an offset edge and its source edge is exactly t. An offset edge is always located to the bounded side of its source edge (which is an oriented straight line segment).

An offset polygon can have fewer, the same number of, or more sides than its source polygon. If the source polygon has no holes, no offset polygon has holes. If the source polygon has holes, any of the offset polygons can have holes itself, but it might as well have no holes at all (if the distance is sufficiently large). Each offset polygon has the same orientation as the source polygon.
Figure:  Offset contours of a sample polygon ### 16.1.4   Straight Skeleton of a 2D Non-degenerate Strictly-Simple Polygon with Holes The 2D straight skeleton of a non-degenerate strictly-simple polygon with holes [AAAG95] is a special partitioning of the polygon interior into straight skeleton regions corresponding to the monotone areas traced by a continuous inward offsetting of the contour edges. Each region corresponds to exactly 1 contour edge. These regions are bounded by angular bisectors of the supporting lines of the contour edges and each such region is itself a non-convex non-degenerate strictly-simple polygon. Figure:  Straight skeleton of a complex shaggy contour Figure:  Other examples: A vertex-event (left), the case of several collinear edges (middle), and the case of a validly simple polygon with tangent edges (right). #### Angular Bisecting Lines and Offset Bisectors Given two points and a line passing through them, the perpendicular line passing through the midpoint is the bisecting line (or bisector) of those points. Two non-parallel lines, intersecting at a point, are bisected by two other lines passing through that intersection point. Two parallel lines are bisected by another parallel line placed halfway in between. Given just one line, any perpendicular line can be considered the bisecting line (any bisector of any two points along the single line). The bisecting lines of two edges are the lines bisecting the supporting lines of the edges (if the edges are parallel or collinear, there is just one bisecting line). The halfplane to the bounded side of the line supporting a contour edge is called the offset zone of the contour edge. Given any number of contour edges (not necessarily consecutive), the intersection of their offset zones is called their combined offset zone. 
Any two contour edges define an offset bisector, as follows: If the edges are non-parallel, their bisecting lines can be decomposed as 4 rays originating at the intersection of the supporting lines. Only one of these rays is contained in the combined offset zone of the edges (which one depends on the possible combinations of orientations). This ray is the offset bisector of the non-parallel contour edges.

If the edges are parallel (but not collinear) and have opposite orientation, the entire and unique bisecting line is their offset bisector. If the edges are parallel but have the same orientation, there is no offset bisector between them.

If the edges are collinear and have the same orientation, their offset bisector is given by a perpendicular ray to the left of the edges which originates at the midpoint of the combined complement of the edges. (The complement of an edge/segment is the pair of rays along its supporting line which are not part of the segment, and the combined complement of N collinear segments is the intersection of the complements of each segment.) If the edges are collinear but have opposite orientation, there is no offset bisector between them.

#### Faces, Edges and Vertices

Each region of the partitioning defined by a straight skeleton is called a face. Each face is bounded by straight line segments, called edges. Exactly one edge per face is a contour edge (corresponding to a side of the polygon) and the rest of the edges, located in the interior of the polygon, are called skeleton edges, or bisectors. The bisectors of the straight skeleton are segments of the offset bisectors as defined previously. Since an offset bisector is a ray of a bisecting line of 2 contour edges, each skeleton edge (or bisector) is uniquely given by two contour edges. These edges are called the defining contour edges of the bisector. The intersection points of the edges are called vertices.
Although in a simple polygon only 2 edges intersect at a vertex, in a straight skeleton 3 or more edges intersect at any given vertex. That is, vertices in a straight skeleton have degree ≥ 3.

A contour vertex is a vertex for which 2 of its incident edges are contour edges. A skeleton vertex is a vertex whose incident edges are all skeleton edges.

A contour bisector is a bisector whose defining contour edges are consecutive. Such a bisector is incident upon 1 contour vertex and 1 skeleton vertex and touches the input polygon at exactly 1 endpoint. An inner bisector is a bisector whose defining contour edges are not consecutive. Such a bisector is incident upon 2 skeleton vertices and is strictly contained in the interior of the polygon.

## 16.2   Representation

This CGAL package represents a straight skeleton as a specialized Halfedge Data Structure (HDS) whose vertices embed 2D points (see the StraightSkeleton_2 concept in the reference manual for details). Its halfedges, by considering the source and target points, implicitly embed 2D oriented straight line segments (a halfedge per se does not embed a segment explicitly). A face of the straight skeleton is represented as a face in the HDS. Both contour and skeleton edges are represented by pairs of opposite HDS halfedges, and both contour and skeleton vertices are represented by HDS vertices.

In an HDS, a border halfedge is a halfedge which is incident upon an unbounded face. In the case of the straight skeleton HDS, such border halfedges are oriented such that their left side faces the exterior of the polygon. Therefore, the opposite halfedge of any border halfedge is oriented such that its left side faces the interior of the polygon.

This CGAL package requires the input polygon (with holes) to be non-degenerate, strictly-simple, and oriented counter-clockwise. The skeleton halfedges are oriented such that their left side faces inward the region they bound.
That is, the vertices (both contour and skeleton) of a face are circulated in counter-clockwise order. There is one and only one contour halfedge incident upon any face.

The contours of the input polygon are traced by the border halfedges of the HDS (those facing outward), but in the opposite direction. That is, the vertices of the contours can only be traced from the straight skeleton data structure by circulating the border halfedges, and the resulting vertex sequence will be reversed w.r.t. the input vertex sequence.

A skeleton edge, according to the definition given in the previous section, is defined by 2 contour edges. In the representation, each one of the opposite halfedges that represent a skeleton edge is associated with one of the opposite halfedges that correspond to one of its defining contour edges. Thus, the 2 opposite halfedges of a skeleton edge link the edge to its 2 defining contour edges.

Starting from any border contour halfedge, circulating the structure walks through border contour halfedges and traces the vertices of the polygon's contours (in opposite order). Starting from any non-border contour halfedge, circulating the structure walks counter-clockwise around the face corresponding to that contour halfedge. The vertices around a face always describe a non-convex non-degenerate strictly-simple polygon.

A vertex is the intersection of contour and/or skeleton edges. Since a skeleton edge is defined by 2 contour edges, any vertex is itself defined by a unique set of contour edges. These are called the defining contour edges of the vertex. A vertex is identified by its set of defining contour edges. Two vertices are distinct if they have differing sets of defining contour edges. Note that vertices can be distinct even if they are geometrically embedded at the same point.

The degree of a vertex is the number of halfedges around the vertex incident upon (pointing to) the vertex.
As with any halfedge data structure, there is one outgoing halfedge for each incoming (incident) halfedge around a vertex. The degree of the vertex counts only incoming (incident) halfedges. In a straight skeleton, the degree of a vertex is not only the number of incident halfedges around the vertex but also the number of defining contour halfedges. The vertex itself is the point where all the defining contour edges simultaneously collide.

Contour vertices have exactly two defining contour halfedges, which are the contour edges incident upon the vertex, and 3 incident halfedges. One and only one of the incident halfedges is a skeleton halfedge. The degree of a contour vertex is exactly 3.

Skeleton vertices have at least 3 defining contour halfedges and 3 incident skeleton halfedges. If more than 3 edges collide simultaneously at the same point and time (like in any regular polygon with more than 3 sides), the corresponding skeleton vertex will have more than 3 defining contour halfedges and incident skeleton halfedges. That is, the degree of a skeleton vertex is ≥ 3 (the algorithm initially produces nodes of degree 3 but in the end all coincident nodes are merged to form higher degree nodes). All halfedges incident upon a skeleton vertex are skeleton halfedges.

The defining contour halfedges and incident halfedges around a vertex can be traced using the circulators provided by the vertex class. The degree of a vertex is not cached and cannot be directly obtained from the vertex, but you can calculate this number by manually counting the number of incident halfedges around the vertex.

Each vertex stores a 2D point and a time, which is the Euclidean distance from the vertex's point to the lines supporting each of the defining contour edges of the vertex (the distance is the same to each line).
Unless the polygon is convex, the vertex is not equidistant from the defining edges themselves, as it would be in a medial axis; therefore, the time of a skeleton vertex does not correspond to the distance from the polygon to the vertex (so it cannot be used to obtain the depth of a region in a shape, for instance). If the polygon is convex, the straight skeleton is exactly equivalent to the polygon's Voronoi diagram and each vertex's time is its distance to the defining edges. Contour vertices have time zero.

Figure:  Straight Skeleton Data Structure

## 16.3   API

The straight skeleton data structure is defined by the StraightSkeleton_2 concept and modeled in the Straight_skeleton_2<Traits,Items,Alloc> class. The straight skeleton construction algorithm is encapsulated in the class Straight_skeleton_builder_2<Gt,Ss>, which is parameterized on a geometric traits class (Straight_skeleton_builder_traits<Kernel>) and the straight skeleton class (Ss). The offset contours construction algorithm is encapsulated in the class Polygon_offset_builder_2<Ss,Gt,Container>, which is parameterized on the straight skeleton class (Ss), a geometric traits class (Polygon_offset_builder_traits<Kernel>) and a container type where the resulting offset polygons are generated.

To construct the straight skeleton of a polygon with holes the user must: (1) Instantiate the straight skeleton builder. (2) Enter one contour at a time, starting from the outer contour, via the method enter_contour. The input polygon with holes must be non-degenerate, strictly-simple and counter-clockwise oriented (see the definitions at the beginning of this chapter). Collinear edges are allowed. The insertion order of each hole is unimportant but the outer contour must be entered first. (3) Call construct_skeleton once all the contours have been entered. You cannot enter another contour once the skeleton has been constructed.
To construct a set of inward offset contours the user must: (1) Construct the straight skeleton of the source polygon with holes. (2) Instantiate the polygon offset builder passing in the straight skeleton as a parameter. (3) Call construct_offset_contours passing the desired offset distance and an output iterator that can store a boost::shared_ptr of Container instances into a resulting sequence (typically, a back insertion iterator).

Each element in the resulting sequence is an offset contour, given by a boost::shared_ptr holding a dynamically allocated instance of the Container type. Such a container can be any model of the VertexContainer_2 concept, for example, a CGAL::Polygon_2, or just a std::vector of 2D points.

The resulting sequence of offset contours can contain both outer and inner contours. Each offset hole (inner offset contour) would logically belong in the interior of some of the outer offset contours. However, this algorithm returns a sequence of contours in arbitrary order and there is no indication whatsoever of the parental relationship between inner and outer contours. On the other hand, each outer contour is counter-clockwise oriented while each hole is clockwise-oriented. And since offset contours do form simple polygons with holes, it is guaranteed that no hole will be inside another hole, no outer contour will be inside any other contour, and each hole will be inside exactly 1 outer contour.

Parental relationships are not automatically reconstructed by this algorithm because this relation is not directly given by the input polygon with holes and doing it robustly is a time-consuming operation. A user can reconstruct the parental relationships as a post-processing operation by testing each inner contour (which is identified by being clockwise) against each outer contour (identified as being counter-clockwise) for insideness.
This algorithm requires exact predicates but not exact constructions. Therefore, the Exact_predicates_inexact_constructions_kernel should be used.

### 16.3.1   Exterior Skeletons and Exterior Offset Contours

This CGAL package can only construct the straight skeleton and offset contours in the interior of a polygon with holes. However, constructing exterior skeletons and exterior offsets is possible:

Say you have some polygon made of 1 outer contour C0 and 1 hole C1, and you need to obtain some exterior offset contours. The interior region of a polygon with holes is connected while the exterior region is not: there is an unbounded region outside the outer contour, and one bounded region inside each hole. To construct an offset contour you need to construct a straight skeleton. Thus, to construct exterior offset contours for a polygon with holes, you need to construct, separately, the exterior skeleton of the outer contour and the interior skeleton of each hole.

Constructing the interior skeleton of a hole is directly supported by this CGAL package; you just need to input the hole's vertices in reversed order as if it were an outer contour. Constructing the exterior skeleton of the outer contour is possible by means of the following trick: place the contour as a hole of a big rectangle (call it frame). If the frame is sufficiently separated from the contour, the resulting skeleton will be practically equivalent to a real exterior skeleton.

To construct exterior offset contours in the inside of each hole you just use the skeleton constructed in the interior, and, if required, reverse the orientation of each resulting offset contour. Constructing exterior offset contours in the outside of the outer contour is just a little bit more involved: Since the contour is placed as a hole of a frame, you will always obtain 2 offset contours for any given distance; one is the offset frame and the other is the offset contour.
Thus, from the resulting offset contour sequence, you always need to discard the offset frame, easily identified as the offset contour with the largest area.

It is necessary to place the frame sufficiently away from the contour. If it is not, it could occur that the outward offset contour collides and merges with the inward offset frame, resulting in 1 instead of 2 offset contours. However, the proper separation between the contour and the frame is not directly given by the offset distance at which you want the offset contour. That distance must be at least the desired offset plus the largest Euclidean distance between an offset vertex and its original.

This CGAL package provides a helper function to compute the required separation: compute_outer_frame_margin. If you use this function to place the outer frame you are guaranteed to obtain an offset contour corresponding exclusively to the frame, which you can always identify as the one with the largest area and which you can simply remove from the result (to keep just the relevant outer contours).
Figure:  Exterior skeleton obtained using a frame (left) and 2 sample exterior offset contours (right)

### 16.3.2   Example

```
#include<vector>
#include<iterator>
#include<iostream>
#include<iomanip>
#include<string>

#include<boost/shared_ptr.hpp>

#include<CGAL/basic.h>
#include<CGAL/Cartesian.h>
#include<CGAL/Polygon_2.h>
#include<CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include<CGAL/Straight_skeleton_builder_2.h>
#include<CGAL/Polygon_offset_builder_2.h>
#include<CGAL/compute_outer_frame_margin.h>

//
// This example illustrates how to use the CGAL Straight Skeleton package
// to construct an offset contour on the outside of a polygon
//

// This is the recommended kernel
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;

typedef Kernel::Point_2 Point_2;
typedef CGAL::Polygon_2<Kernel> Contour;
typedef boost::shared_ptr<Contour> ContourPtr;
typedef std::vector<ContourPtr> ContourSequence ;

typedef CGAL::Straight_skeleton_2<Kernel> Ss;
typedef Ss::Halfedge_iterator Halfedge_iterator;
typedef Ss::Halfedge_handle Halfedge_handle;
typedef Ss::Vertex_handle Vertex_handle;

typedef CGAL::Straight_skeleton_builder_traits_2<Kernel> SsBuilderTraits;
typedef CGAL::Straight_skeleton_builder_2<SsBuilderTraits,Ss> SsBuilder;

typedef CGAL::Polygon_offset_builder_traits_2<Kernel> OffsetBuilderTraits;
typedef CGAL::Polygon_offset_builder_2<Ss,OffsetBuilderTraits,Contour> OffsetBuilder;

int main()
{
  // A star-shaped polygon, oriented counter-clockwise as required for outer contours.
  Point_2 pts[] = { Point_2(-1,-1)
                  , Point_2(0,-12)
                  , Point_2(1,-1)
                  , Point_2(12,0)
                  , Point_2(1,1)
                  , Point_2(0,12)
                  , Point_2(-1,1)
                  , Point_2(-12,0)
                  } ;

  std::vector<Point_2> star(pts,pts+8);

  // We want an offset contour in the outside.
  // Since the package doesn't support that operation directly, we use the following trick:
  // (1) Place the polygon as a hole of a big outer frame.
  // (2) Construct the skeleton on the interior of that frame (with the polygon as a hole)
  // (3) Construct the offset contours
  // (4) Identify the offset contour that corresponds to the frame and remove it from the result

  double offset = 3 ; // The offset distance

  // First we need to determine the proper separation between the polygon and the frame.
  // We use this helper function provided in the package.
  boost::optional<double> margin = CGAL::compute_outer_frame_margin(star.begin(),star.end(),offset);

  // Proceed only if the margin was computed (an extremely sharp corner might cause overflow)
  if ( margin )
  {
    // Get the bbox of the polygon
    CGAL::Bbox_2 bbox = CGAL::bbox_2(star.begin(),star.end());

    // Compute the boundaries of the frame
    double fxmin = bbox.xmin() - *margin ;
    double fxmax = bbox.xmax() + *margin ;
    double fymin = bbox.ymin() - *margin ;
    double fymax = bbox.ymax() + *margin ;

    // Create the rectangular frame
    Point_2 frame[4]= { Point_2(fxmin,fymin)
                      , Point_2(fxmax,fymin)
                      , Point_2(fxmax,fymax)
                      , Point_2(fxmin,fymax)
                      } ;

    // Instantiate the skeleton builder
    SsBuilder ssb ;

    // Enter the frame
    ssb.enter_contour(frame,frame+4);

    // Enter the polygon as a hole of the frame (NOTE: as it is a hole we insert it in the opposite orientation)
    ssb.enter_contour(star.rbegin(),star.rend());

    // Construct the skeleton
    boost::shared_ptr<Ss> ss = ssb.construct_skeleton();

    // Proceed only if the skeleton was correctly constructed.
    if ( ss )
    {
      // Instantiate the container of offset contours
      ContourSequence offset_contours ;

      // Instantiate the offset builder with the skeleton
      OffsetBuilder ob(*ss);

      // Obtain the offset contours
      ob.construct_offset_contours(offset, std::back_inserter(offset_contours));

      // Locate the offset contour that corresponds to the frame
      // That must be the outermost offset contour, which in turn must be the one
      // with the largest unsigned area.
      ContourSequence::iterator f = offset_contours.end();
      double lLargestArea = 0.0 ;
      for (ContourSequence::iterator i = offset_contours.begin(); i != offset_contours.end(); ++ i )
      {
        double lArea = CGAL_NTS abs( (*i)->area() ) ; // Take abs() as Polygon_2::area() is signed.
        if ( lArea > lLargestArea )
        {
          f = i ;
          lLargestArea = lArea ;
        }
      }

      // Remove the offset contour that corresponds to the frame.
      offset_contours.erase(f);

      // Print out the skeleton
      Halfedge_handle null_halfedge ;
      Vertex_handle   null_vertex ;

      // Dump the edges of the skeleton
      for ( Halfedge_iterator i = ss->halfedges_begin(); i != ss->halfedges_end(); ++i )
      {
        std::string edge_type = (i->is_bisector())? "bisector" : "contour";
        Vertex_handle s = i->opposite()->vertex();
        Vertex_handle t = i->vertex();
        std::cout << "(" << s->point() << ")->(" << t->point() << ") " << edge_type << std::endl;
      }

      // Dump the generated offset polygons
      std::cout << offset_contours.size() << " offset contours obtained\n" ;
      for (ContourSequence::const_iterator i = offset_contours.begin(); i != offset_contours.end(); ++ i )
      {
        // Each element in the offset_contours sequence is a shared pointer to a Polygon_2 instance.
        std::cout << (*i)->size() << " vertices in offset contour\n" ;
        for (Contour::Vertex_const_iterator j = (*i)->vertices_begin(); j != (*i)->vertices_end(); ++ j )
          std::cout << "(" << j->x() << "," << j->y() << ")" << std::endl ;
      }
    }
  }

  return 0;
}
```

## 16.4   Straight Skeletons, Medial Axis and Voronoi Diagrams

The straight skeleton of a polygon is similar to the medial axis and the Voronoi diagram of a polygon in the way it partitions it; however, unlike the medial axis and Voronoi diagram, the bisectors are not equidistant to their defining edges but to the supporting lines of such edges. As a result, straight skeleton bisectors might not be located in the center of the polygon and so cannot be regarded as a proper medial axis in its geometrical meaning.
On the other hand, only reflex vertices (whose internal angle is greater than $\pi$) are the source of deviations of the bisectors from the center location. Therefore, for convex polygons, the straight skeleton, the medial axis and the Voronoi diagram are exactly equivalent, and, if a non-convex polygon contains only vertices of low reflexivity, the straight skeleton bisectors will be placed nearly equidistant to their defining edges, producing a straight skeleton pretty much like a proper medial axis.

## 16.5   Usages of the Straight Skeletons

The most natural usage of straight skeletons is offsetting: growing and shrinking polygons (provided by this CGAL package). Another usage, perhaps its very first, is roof design: the straight skeleton of a polygonal roof directly gives the layout of each tent. If each skeleton edge is lifted from the plane a height equal to its offset distance, the resulting roof is "correct" in that water will always fall down to the contour edges (roof border) regardless of where on the roof it falls. [LD03] gives an algorithm for roof design based on the straight skeleton.

Just like medial axes, 2D straight skeletons can also be used for 2D shape description and matching. Essentially, all the applications of image-based skeletonization (for which there is a vast literature) are also direct applications of the straight skeleton, especially since skeleton edges are simply straight line segments.

Consider the subgraph formed only by inner bisectors (that is, only the skeleton halfedges which are not incident upon a contour vertex). Call this subgraph a skeleton axis. Each node in the skeleton axis whose degree is $\geq 3$ roots more than one skeleton tree. Each skeleton tree roughly corresponds to a region in the input topologically equivalent to a rectangle; that is, without branches.
For example, a simple letter "H" would contain 2 higher-degree nodes separating the skeleton axis into 5 trees, while the letter "@" would contain just 1 higher-degree node separating the skeleton axis into 2 curly trees.

Since a skeleton edge is a 2D straight line, each branch in a skeleton tree is a polyline. Thus, the path-length of the tree can be directly computed. Furthermore, the polyline for a particular tree can be interpolated to obtain curve-related information. Pruning each skeleton tree by cutting off branches whose length is below some threshold, or smoothing a given branch, can be used to reconstruct the polygon without undesired details, or to fit it into a particular canonical shape.

Each skeleton edge in a skeleton branch is associated with 2 contour edges which are facing each other. If the polygon has a bottleneck (it almost touches itself), a search in the skeleton graph measuring the distance between each pair of contour edges will reveal the location of the bottleneck, allowing you to cut the shape in two. Likewise, if two shapes are too close to each other along some part of their boundaries (a near-contact zone), a similar search in an exterior skeleton of the two shapes at once would reveal the parts of near contact, allowing you to stitch the shapes. These cut and stitch operations can be directly executed in the straight skeleton itself instead of the input polygon (because the straight skeleton contains a graph of the connected contour edges).

## 16.6   Straight Skeleton of a General Figure in the Plane

A straight skeleton can also be defined for a general multiply-connected planar directed straight-line graph [AA95] by considering all the edges as embedded in an unbounded region. The only difference is that in this case some faces will be only partially bounded.
The current version of this CGAL package can only construct the straight skeleton in the interior of a simple polygon with holes; that is, it doesn't handle general polygonal figures in the plane.

CGAL Open Source Project. Release 3.2.1. 13 July 2006.
http://mathoverflow.net/questions/83529/flat-cover-by-a-locally-noetherian-scheme/83602
## Flat cover by a locally Noetherian scheme

Let $S$ be a scheme. Does there exist a faithfully flat morphism $T \to S$ with $T$ a locally Noetherian scheme?

- No, there is not necessarily such a $T$. For instance, let $S$ be $\text{Spec}(A)$ where $A=k[[x_1,x_2,...]]$. If there is such a faithfully flat $T$, then there is a point $t$ of $T$ which maps to the closed point of $S$. Let $B$ be the local ring of $T$ at $t$. Then there is a faithfully flat local homomorphism $A\to B$. In particular, the induced map $m_A/m_A^2 \to m_B/m_B^2$ is injective. Since $m_A/m_A^2$ is infinite dimensional, so is $m_B/m_B^2$, contradicting that $B$ is Noetherian. – Jason Starr Dec 15 2011 at 16:27
- @Jason: This is not a comment - it is an answer. – Martin Brandenburg Dec 15 2011 at 19:52
- @Jason: If $A \to B$ is a local extension of DVR's (automatically faithfully flat), then the map $m_A/m_A^2 \to m_B/m_B^2$ may be zero (e.g. if there is ramification). – Akhil Mathew Dec 16 2011 at 1:55
- @Akhil: You are right. That map may be zero. But the map from $m_A/m_A^2$ to $m_AB/m_A^2B$ is nonzero, which gives the same result. – Jason Starr Dec 17 2011 at 19:02
- @Akhil: I should say a bit more. Since $B$ is $A$-flat, the induced map $B\otimes_A (m_A/m_A^2) \to m_AB/m_A^2B$ is an isomorphism. In particular, $m_AB/m_A^2B$ is a free $B/m_AB$-module of infinite rank. But that implies that $m_AB$ cannot be a finitely generated ideal in $B$, thus $B$ is not Noetherian. – Jason Starr Dec 17 2011 at 19:41

## 1 Answer

Note that if $T\to S$ is also quasicompact, then $S$ must be locally noetherian: this boils down to the well-known fact that if $A\to B$ is a faithfully flat ring homomorphism and $B$ is noetherian, then so is $A$. This proves that Jason's example above is indeed a counterexample.
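A proof of that well-known descent fact, sketched here for completeness (this is a standard faithful-flatness argument, not part of the original answer):

```latex
\textbf{Claim.} If $A \to B$ is a faithfully flat ring homomorphism and $B$ is
noetherian, then $A$ is noetherian.

\textbf{Sketch.} Let $I_1 \subseteq I_2 \subseteq \cdots$ be an ascending chain
of ideals of $A$. Since $B$ is noetherian, the chain
$I_1 B \subseteq I_2 B \subseteq \cdots$ stabilizes. By faithful flatness,
$I_n B \cap A = I_n$ for every $n$: tensoring
$0 \to I_n \to A \to A/I_n \to 0$ with $B$ identifies $I_n \otimes_A B$ with
$I_n B$, and $M \to M \otimes_A B$ is injective for any $A$-module $M$ when
$B$ is faithfully flat. Hence the original chain stabilizes as well. \qed
```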
More generally, any non-noetherian local scheme $S$ is a counterexample: if $T\to S$ is faithfully flat, there is an open affine $U\subset T$ which covers $S$.

EDIT: In fact, here is a complete answer ($S$ is any given scheme):

(1) The following are equivalent:

(1a) There exists a locally noetherian scheme $T$ and a faithfully flat and quasicompact morphism $T\to S$.

(1b) $S$ is locally noetherian.

(2) The following are equivalent:

(2a) There exists a locally noetherian scheme $T$ and a faithfully flat morphism $T\to S$.

(2b) For each $s\in S$, the ring $\mathcal{O}_{S,s}$ is noetherian.

Proof: exercise. To show that (2b) implies (2a), take $T=\coprod_{s\in S}\mathrm{Spec}(\mathcal{O}_{S,s})$.
http://physics.stackexchange.com/questions/5994/missing-something-basic-about-simple-orbital-mechanics
# Missing something basic about simple orbital mechanics

I seem to be missing something basic. I've been trying to get a simple orbital simulation working, and my two objects are Earth around the Sun. My problem is this. I placed the Earth at 93M miles away from the Sun, or 155M km. As I understand it, the orbital velocity of something at 155M km from the Sun is: $$v = \sqrt{\frac{GM}{r}}$$ Plugging in the numbers for the Sun, I get a velocity of: $$29261 \frac{\mathrm{m}}{\mathrm{s}}$$ However, if I want to get the acceleration that the Sun has upon the Earth, I use: $$g = \frac{GM}{r^2}$$ For the Sun, and 155M km, I get an acceleration of: $$0.0055\frac{\mathrm{m}}{\mathrm{s}^2}$$ Now, I start with a simple body at the proper radius out along the X axis and give it a simple vector of 29261 m/s along the Y axis, then I start applying the 0.0055 m/s^2 acceleration to it. And the acceleration of the Sun is simply not enough to hold the Earth. If the Earth starts with a vector of (0, 29261 m/s), and after I add the acceleration vector of (-0.0055 m/s, 0) to it, you can see that after a single second, it doesn't move a whole lot. If I chunk things to days, 86400 seconds, then the acceleration vector is only, roughly, -477 m/day, but the velocity vector is: $$2,325,974,400 \frac{\mathrm{m}}{\mathrm{day}} = 29,261 \frac{\mathrm{m}}{\mathrm{s}} \times 86,400 \frac{\mathrm{s}}{\mathrm{day}}$$ As you can imagine, the -477 isn't going to move that much towards the Sun. I understand that better simulations use better techniques than simply adding basic vectors together, but that's not what this is. I seem to be missing something fundamental. I had assumed that given the correct velocity, the pull of the Sun should keep the Earth in orbit, but the "pull" that I'm using doesn't seem to be having the desired effect. So, I'm curious what basic "D'oh" thing I'm missing here.

Edit for Luboš Motl's answer. Perhaps there's something more fundamental I'm missing here.
I understand your point, but 0.0055 m/s * 86,400 is -477. I was doing that math fine. Simply, I have an object with a velocity vector. Then I apply an acceleration at a right angle. I do that for N seconds to come up with a new, right-angle velocity vector. I then add that to the original vector to come up with the object's new vector. I then take that vector, apply it to the current position of the object, and arrive at a new position. Clearly there is a granularity issue which makes some amount of seconds a better choice for a model than others, but this is high school level simple mechanics, so there's going to be some stepping. I chose one day so that my little dot of a planet on my screen would move. If I update every 1/10th of a second "real time", and each update is a day, then I should get a rough orbit that's really a 365ish polygon in a little over 30s real time. If I choose a step size of 1 second, then my acceleration (0.0055 m/s^2) * 1 s = a right-angle velocity vector that's -0.0055 in magnitude. That vector is added to the original vector of 29261 (at right angles), giving me a new vector of (-0.0055, 29261). That's after one second. That's not much of a bump. It's barely a blip. If I apply one day's worth of acceleration, "all at once", I am obligated to not only multiply the acceleration by 86,400, but also the original vector (since it's 29261 m/s, and we have 86,400 s), thus giving me, proportionally, the same vector, just longer. And it's still just a bump. So, I'm mis-applying something somewhere here, as I think the numbers are fine. I'm simply "doing it wrong". Trying to figure out what that wrong part is.

Edit 2, responding to Platypus Lover: Thank you very much for the simple code you posted. It showed me my error. My confusion was conflating the updating of the vector with the calculation of the velocity vector. I felt that I had to multiply both the original vector AND the acceleration amount by the time step, which would give me the silly results.
It was just confused in my head.

- You wrote "acceleration vector of (-0.0055 m/s, 0)" but that's actually a velocity vector, judging by the units. Was that a typo? – David Zaslavsky♦ Feb 27 '11 at 6:44
- 1) Unfortunately, you can't just "chunk things to days". 2) The acceleration vector changes so as to always be at right angles to the velocity vector: $g_x v_x + g_y v_y = 0$ – Eelvex Feb 27 '11 at 6:49
- @David: actually he treats this as "velocity vector per second" so it should be ok. – Eelvex Feb 27 '11 at 6:50
- @Will: I don't understand why you think something is wrong in your calculation. – timur Feb 27 '11 at 17:36
- @timur I thought something was wrong with my calculation when my simple animation showed "the Earth" flying off and away instead of in a circle like it should have. – Will Hartung Feb 27 '11 at 21:30

## 3 Answers

I suspect there is something wrong in the way you are adding the acceleration vector to the velocity vector after the first timestep. The simplest way to check you are doing things correctly is to write down your scheme in Cartesian coordinates. To point you in the right direction, I wrote a sample orbit integrator for you here: Simplest orbit integrator, which should be at a level appropriate for a high school student. This is probably the simplest integrator you can possibly write. With your code, you should get a nice circle for an x-y plot, and a nice sinusoid for the position and velocity components. Notice that it uses Euler's method: $\vec{v}(t+\Delta t) = \vec{v}(t) + \Delta t \ \vec{a}(t)$ $\vec{x}(t+\Delta t) = \vec{x}(t) + \Delta t \ \vec{v}(t)$ which is probably what you have been doing so far without realizing. This is the most inaccurate method to use when integrating, and once you introduce elliptical orbits, this method will give you wrong results after a few orbits.
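For concreteness, those two Euler updates can be written out in a few lines. The sketch below is not the integrator linked above; it just applies the same scheme to the numbers from the question. The solar gravitational parameter GM = 1.327e20 m^3/s^2 and the 60 s step size are added, illustrative values.

```cpp
#include <cmath>

// Forward-Euler orbit stepper, exactly the scheme quoted above:
//   v(t+dt) = v(t) + dt * a(t)
//   x(t+dt) = x(t) + dt * v(t)
// The acceleration is recomputed from the current position every step,
// so it always points at the Sun; that is what keeps the orbit closed.

struct State { double x, y, vx, vy; };

inline State euler_orbit(State s, double GM, double dt, long nsteps) {
    for (long i = 0; i < nsteps; ++i) {
        double r  = std::sqrt(s.x * s.x + s.y * s.y);
        double ax = -GM * s.x / (r * r * r);  // a(t), evaluated at x(t)
        double ay = -GM * s.y / (r * r * r);
        s.x  += s.vx * dt;  s.y  += s.vy * dt;  // uses v(t)
        s.vx += ax * dt;    s.vy += ay * dt;    // uses a(t)
    }
    return s;
}
```

Started at (1.55e11 m, 0) with speed sqrt(GM/r) ≈ 29,260 m/s along y (the question's numbers, and the source of its 0.0055 m/s^2 acceleration), the radius stays within a fraction of a percent of its initial value over tens of days with a 60 s step; the slow outward drift is the Euler inaccuracy mentioned above.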
There are many simple recommendations I can give you on how to further improve your code once it is fixed (normalizing units, a better integration scheme, etc.).

- I added to the original post what my issue was, thank you. I'd be very interested in your recommendations. I know that this technique is fraught with problems, but the basic premise was to test that my simple gravity model and animation was being applied properly, which would demonstrate itself if the Earth ran in a circle. – Will Hartung Feb 27 '11 at 21:34

Dear Will, your error is that you think that the acceleration $a$ adds distances $s$ linearly in time $t$. In reality, it adds them quadratically via the formula $$s = \frac{1}{2} at^2$$ Many kids know this formula: it's the time $t$ multiplied by the average speed during the interval, which is $(v_{initial}+v_{final})/2=at/2$. The graph of the motion is a parabola, a good approximation whenever the acceleration is approximately constant during the short enough time interval. In other words, you have increased the velocity during the first second, but you forgot to increase the velocity by the same amount during the remaining $86,399$ seconds of the first day. After the first second, the velocity changes by $0.0055 m/s$ and the total distance the Earth travels during the first second is $0.0055/2 = 0.00275 m$. However, if you study how this distance changes if you increase 1 second to 86,400 seconds, the total distance doesn't jump 86,400 times. Instead, it jumps $(86,400)^2$ times, to $$0.00275 \times 86,400^2 \approx 20,528,000 m.$$ You may check that 20,000 kilometers is approximately - within the errors that you introduced - the right amount to keep the Earth on a quasi-circular orbit because $$\frac{155\times 10^9 m}{2}\left(\frac{2\pi}{365.25}\right)^2 = 20,000,000 m$$ or so.
In the equation above, I calculated the angle of the rotation around the Sun, squared it, divided by two (that's the approximation of $1-\cos\phi$), and multiplied by the radius. After 86,400 seconds, the final speed in the transverse direction is not $0.0055 m/s$ as you assumed but $86,400\times 0.0055 m/s$. - Your units are wrong. Your acceleration of 0.0055 is m/s^2, and when you multiply by 86400 s/day, you get m/s/day, and not m/day^2, which is what you think you're getting. That's what confused you. -
http://math.stackexchange.com/questions/6746/boundedness-of-continuous-bilinear-operators
# Boundedness of Continuous Bilinear Operators

Let $T:X \times X \to \mathbb{R}$ be a continuous bilinear operator defined on a normed linear space $X$ (so that $T(\alpha x + \beta y,z) = \alpha T(x,z) + \beta T(y,z)$ and $T(x,y) = T(y,x)$). Does there exist a constant $C$ s.t. $||T(x,y)|| \leq C\, ||x||\, ||y||$ for all $x,y$? I know that the result is true if $X$ is a complete space, by using the uniform boundedness principle on $T$ as a continuous function of $x$ for fixed $y$ (and/or the other way around). However, I'm not sure if completeness is necessary, since it is true that a continuous linear operator $T: X \to \mathbb{R}$ has the property $||T(x)|| \leq C ||x||$ for all $x$ on any normed linear space $X$ (although linear and bilinear operators are not exactly the same).

-

## 1 Answer

I think you can show this using the same argument as for continuous linear operators. Since $T$ is continuous, $U=T^{-1}( (-1,1) )$ is open and contains $(0,0)$. Find a $c>0$ small enough such that if $|x|,|y|\leq c$ then $(x,y)\in U$, and then for a general point $(x,y)$ you have $|T(x,y)| = |T(\frac{c x}{|x|} \frac {|x|}{c}, \frac{c y}{|y|} \frac {|y|}{c} )| = \frac {|x||y|}{c^2} |T(\frac{c x}{|x|} , \frac{c y}{|y|})| \leq \frac {|x||y|}{c^2}$. If $x=0$ or $y=0$ then $T(x,y)=0$, so you can use the argument above for $x,y \neq 0$.

- Thanks! That looks good to me. – user1736 Oct 14 '10 at 6:36
http://mathhelpforum.com/advanced-algebra/175083-what-does-colinear-mean.html
# Thread: 1. ## What does colinear mean? I'm trying to see if I understand the concept of colinearity. Is the following definition correct? Two vectors are colinear if and only if they both form segments of the same ray. Or can two vectors be colinear if they are parallel, because vectors don't really have a "position"? Thanks for the help! 2. Originally Posted by divinelogos I'm trying to see if I understand the concept of colinearity. Is the following definition correct? Two vectors are colinear if and only if they both form segments of the same ray. Or can two vectors be colinear if they are parallel, because vectors don't really have a "position"? Thanks for the help! Collinear if they lie along the same line or are parallel. 3. Originally Posted by divinelogos I'm trying to see if I understand the concept of colinearity. Is the following definition correct? Two vectors are colinear if and only if they both form segments of the same ray. On one level this is a totally meaningless question. Consider the points $A:(1,3),~B:(2,4),~C:(1,2),~\&~D:(2,3)$. If one plots those four points it is transparently clear that those four points are not collinear. BUT $\overrightarrow {AB} = \overrightarrow {CD}$! Under any common understanding of the language, don’t you think that a vector ought to be collinear with itself? Do those two vectors form segments of the same ray? NO! This is just a problematic question. The author may not fully understand the mathematical status of vectors. A vector is an equivalence class of objects having the same direction and same length. Thus two non-zero vectors are collinear if and only if they are multiples of each other. 4. Originally Posted by Plato On one level this is a totally meaningless question. Consider the points $A:(1,3),~B:(2,4),~C:(1,2),~\&~D:(2,3)$. If one plots those four points it is transparently clear that those four points are not collinear.
BUT $\overrightarrow {AB} = \overrightarrow {CD}$! Under any common understanding of the language, don’t you think that a vector ought to be collinear with itself? Do those two vectors form segments of the same ray? NO! This is just a problematic question. The author may not fully understand the mathematical status of vectors. A vector is an equivalence class of objects having the same direction and same length. Thus two non-zero vectors are collinear if and only if they are multiples of each other. "Two vectors are colinear if and only if they form segments of the same ray" was to distinguish between two ways of defining the collinearity of two vectors. That is, are two vectors colinear if they "lie on the same line" (as dwsmith said), or can they be parallel as well? Your answer seems to imply, Plato, that two vectors are colinear if and only if the simultaneous "extending" of both vectors creates a line. Thus, if two vectors are parallel, they are not colinear. Is this correct?
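The "multiples of each other" criterion is easy to state computationally: two nonzero 2D vectors are scalar multiples exactly when their cross product vanishes. A sketch of that check (the floating-point tolerance is an illustrative choice, not part of the thread):

```cpp
#include <cmath>

// Two nonzero 2D vectors (ux,uy) and (vx,vy) are scalar multiples of each
// other exactly when their cross product ux*vy - uy*vx is zero; the
// tolerance guards against floating-point round-off.
inline bool collinear(double ux, double uy, double vx, double vy,
                      double tol = 1e-12) {
    return std::fabs(ux * vy - uy * vx) <= tol;
}
```

With Plato's points, AB = (1,1) and CD = (1,1), so collinear(1,1,1,1) holds even though the four points A, B, C, D do not lie on one line, which is exactly the distinction the thread turns on.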
http://physics.stackexchange.com/questions/34892/how-the-spectrum-of-the-hydrogen-atom-is-checked-detected-experimentally?answertab=oldest
# How the spectrum of the hydrogen atom is checked/detected experimentally?

When solving the hydrogen atom as a two-body problem, we have the motion of the center of mass and the motion relative to the center of mass. The well known energy spectrum, $E_n$, that goes like $1/n^2$ is the one resulting from studying the motion relative to the center of mass. Now the energy of the center of mass of the atom, which is purely kinetic, should be added to $E_n$ to get the full energy. So unless the atom is at rest, there will be an extra (constant) term added to $E_n$.

My questions are, in practice when people study the hydrogen spectrum experimentally:

1. Do people look at the spectrum from one atom only? If this is the case, then how can they guarantee that it is not moving (to properly identify the $1/n^2$ behavior)? (Very cold atom technology was not known back in the day!)
2. If they are studying a gas of hydrogen, how come the detected spectrum is not washed out by the transitions coming from so many atoms in all directions?
3. Again, for hydrogen gas, how can the spectrum be studied, even to see if it goes like $1/n^2$, if the atoms are moving in all directions like crazy?

-

## 2 Answers

People usually do not measure the energy levels directly; they measure the difference by measuring the frequency of light emitted during a transition. The extra kinetic energy term cancels out in the difference. However, the moving atoms will Doppler shift the frequency of the emitted light, part of the reason why every line in the spectrum has a finite width. The effect is small because thermal motion is much, much slower than the speed of light.

-
To get a quantitative answer, consider that a photon can only give about $\hbar k$ of momentum to the atom, and this will be weighted down by the total mass ($\approx$ the proton mass), which is far bigger than the reduced mass ($\approx$ the electron mass) which governs the spectrum. The total kinetic energy difference will therefore be of the order of $$\frac{(\hbar k)^2}{2m_p}=\frac{\hbar^2\omega^2}{2m_pc^2}=\frac{(\Delta E)^2}{2m_pc^2}$$ and is therefore smaller than $\Delta E$ by a factor of $\Delta E/2m_pc^2\lesssim7\times10^{-9}$. This effect will therefore cause a broadening of the transition lines by that amount. This is of course independent of how many atoms you're addressing. However, the translational motion does affect the energies you observe by Doppler shifting them, so that if you're observing a single atom the line will move to higher or lower frequency by a factor of $\sim v/c$. If you have many atoms moving in many different directions at different velocities, then each will radiate at its own natural frequency but you will observe a bunch of different Doppler shifts, and therefore the line will be broadened by a factor of $\sim v_\textrm{th}/c$ where $v_\textrm{th}$ is the thermal velocity given by $m v_\textrm{th}^2\approx k_B T$. This is Dopppler broadening and it is dominant if you do nothing to bring it down. It will certainly be dominant in a high-temperature environment like an arc lamp. -
http://physics.stackexchange.com/questions/31750/why-does-a-semiconductor-hole-have-a-mass/31775
# Why does a semiconductor hole have a mass?

I have read that holes in a semiconductor are nothing but vacancies created by electrons. But how can this vacancy, i.e. a hole, have a mass?

-

## 3 Answers

While the answers above are correct in essence (the inertia of the hole is due to the electrons that must flow to fill up the hole), there is a simple way to understand what the mass of a hole is in a tight-binding model. If you have an electron in a lattice, and it can hop from one site to another with amplitude $a$, then the time evolution of the wavefunction is given by $${\partial_t \psi(x)} = a\sum_{\langle y,x\rangle} \psi(y)$$ where the sum is over nearest neighbors of $x$ (not including $x$). If you redefine the phase of the wavefunction in a time-dependent way, $\psi(x)\rightarrow e^{iNat}\psi(x)$ where $N$ is the number of nearest neighbors, you subtract a term from the right hand side which leads to a standard form of the lattice Laplacian, so you have a discretized Schrodinger equation. The continuum limit of this is the normal Schrodinger equation: $${\partial_t \psi} = a\epsilon^2 \nabla^2 \psi$$ where $\epsilon$ is the lattice spacing. From this, you can read off the effective mass $m={1\over 2a}$.

The point is that the mass of a quantum excitation is just the inverse of the hopping amplitude, so it is universal to describe any localized excitation that moves coherently with an effective mass. The harder it is to hop, the more massive the object is. The result is in principle independent of the actual mass of the fundamental particles: if you make an optical lattice and place a metal BEC in each optical trap, the electrons have to tunnel from one trap location to the next in order to flow, and the effective mass can be as large as you like. In real materials, the effective mass is usually close to the electron mass, but sometimes can be hundreds of times bigger.
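The inverse relation between hopping and mass can be checked from the band curvature: for a 1D tight-binding chain with hopping amplitude a, the dispersion is E(k) = -2a·cos(k·eps) up to a constant shift, and with hbar = 1 the effective mass is 1/(d²E/dk²) at the band bottom. The snippet below is an illustrative numerical sketch of that statement, not code from the answer:

```cpp
#include <cmath>

// Effective mass at the bottom of the 1D tight-binding band
// E(k) = -2*a*cos(k*eps), computed as 1 / (d^2 E / dk^2) at k = 0
// with hbar = 1. 'a' is the hopping amplitude, 'eps' the lattice spacing.
inline double effective_mass(double a, double eps) {
    const double h = 1e-4;  // finite-difference step in k
    auto E = [&](double k) { return -2.0 * a * std::cos(k * eps); };
    double curvature = (E(h) - 2.0 * E(0.0) + E(-h)) / (h * h);
    return 1.0 / curvature;  // smaller hopping amplitude => heavier excitation
}
```

With eps = 1 this returns 1/(2a) to numerical precision: halving the hopping amplitude doubles the mass, which is the "harder to hop, more massive" statement made above.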
If you fill up all the electron states, and consider one hole, there is an amplitude for the hole to hop to a neighboring location. This gives an effective mass as above, proportional to the inverse hopping amplitude. Unlike in classical systems, you don't have to consider dissipation: the effective mass description is exact. Further, the hopping amplitude for a hole is fundamentally the same as the hopping amplitude for an electron to fill the hole from the neighbors.

-

A quick answer: Imagine an array of billiard balls with one missing in the middle of the array; there is a "hole" where the billiard ball is missing. For this hole to "move", a billiard ball must move into that position, leaving a hole at the ball's previous position. Since, in fact, the hole movement is entirely equivalent to the billiard ball movement, we can speak of the mass of the hole even though it is the billiard ball's mass that is physical. Now, in the case of electrons moving through the unfilled valence band, the effective mass is greater than the mass of a mobile electron, so hole mass is typically greater than mobile electron mass.

-

For a very common, practically important classical mechanical equivalent, consider a bubble of air in water. The apparent inertia of the bubble is on the order of the mass of water displaced by the bubble, as the water around the bubble has to move for the bubble to move. This is an important consideration for ships, submarines, torpedoes, fish, etc., which end up having considerably more inertia than their own mass, and end up accelerating slower with the engine on. One time I did the mathematics for a realistic barrel falling into water in a computer game; the apparent 'increase' of the mass, in combination with the conservation of momentum, rather accurately describes a large part of the deceleration of the object as it enters the water.

-
http://physics.stackexchange.com/questions/11007/can-the-effects-of-gravity-be-broken-by-jumping
# Can the effects of gravity be broken by jumping? I was having a debate the other day with a work colleague where I explained that gravity is a weak force because it is easily broken. Then I remembered a lecture by someone, I forget who, which explained that gravity is very weak because you can break its influence just by jumping or lifting a pencil, etc. He countered with something along the lines of 'even though the pencil or your body is being moved away from the source of gravity, it is still affected by gravity and thus it has weight'. Is jumping a good example of gravity being a weak force? P.S. You can probably tell, my colleague and I are not physicists, but we enjoy our little debates; we just need to get our facts straight. - 3 Your colleague is correct that the pencil or your body (or any object) is still affected by gravity even though it may be moving away from the source of gravity. But that doesn't mean that gravity isn't a weak force. – David Zaslavsky♦ Jun 11 '11 at 19:49 Aye, but I was wondering if jumping, etc. is a good demonstration of its weakness? – Gary Willoughby Jun 11 '11 at 23:35 ## 4 Answers I think what you heard in that lecture is this argument: Gravitation is by far the weakest of the four interactions. Hence it is always ignored when doing particle physics. The weakness of gravity can easily be demonstrated by suspending a pin using a simple magnet (such as a refrigerator magnet). The magnet is able to hold the pin against the gravitational pull of the entire Earth. Yet gravitation is very important for macroscopic objects and over macroscopic distances, for the following reasons. Gravitation: • is the only interaction that acts on all particles having mass; • has an infinite range, like electromagnetism but unlike the strong and weak interactions; • cannot be absorbed, transformed, or shielded against; • always attracts and never repels.
Jumping, or lifting a pencil, is in your example "breaking" the influence of gravity because the electromagnetic interactions between your feet and the ground are able to counteract the gravitational force of the entire planet, thus demonstrating that gravity is a weak force, so I'd say yes, it's a good example. Source: http://en.wikipedia.org/wiki/Fundamental_interaction#Gravitation - Well, keep in mind gravity isn't being "broken" when you jump; it is still exerting a force of approximately F_g = G * m_1 * m_2 / r^2, where G is the gravitational constant (6.674 * 10^(-11) N m^2/kg^2), m_1 and m_2 are the two masses, and r is the distance between them. The idea of gravity being a weak force relative to the electromagnetic force is shown by jumping, as the electromagnetic forces moving your muscles are able to overcome (a better word than "broken") the gravity of the entire earth for a period of time, so F_g < F_jump. Since you cannot jump again midair very effectively, it is temporary, as F_g is still present in the air. But assuming you had lots of energy and a space suit, you could climb a very long ladder out to where gravity's effects would be minimal. Another good example that I like of electromagnetic force > gravity is the fact that when you jump out of a window (first story please!) and land on the pavement, the electromagnetic interactions between your electrons and the pavement's electrons are stopping you from going all the way to the center of the earth. -
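The relative weakness can be made concrete by comparing the two inverse-square forces between a pair of protons; since both scale as 1/r^2, the separation drops out of the ratio. A quick check with standard (rounded) constants, purely for illustration:

```python
# Ratio of the electrostatic to the gravitational force between two protons.
# Both forces fall off as 1/r^2, so the distance cancels out of the ratio.
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
k = 8.988e9          # Coulomb constant, N m^2 / C^2
e = 1.602e-19        # elementary charge, C
m_p = 1.673e-27      # proton mass, kg

ratio = (k * e ** 2) / (G * m_p ** 2)
print(f"{ratio:.2e}")   # ~1.2e36: electrostatics wins by ~36 orders of magnitude
```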
In fact, two balls containing exactly $1 C$ (Coulomb) of charge placed $1 m$ apart will repel each other with $9 GN$ of force, which is something like a million tons of force. BUT, a ball with a $1 C$ charge on it has an imbalance of electrons versus protons of about $10^{18}/10^{23} \approx 0.00001$ times the total, or about 0.001%. That is some mighty force created by an unthinkably small amount of matter. The fact of life, however, is that we don't experience significant bulk forces from the electromagnetic or nuclear forces. I should clarify, however, that those forces give the form to everything around you on small scales. In fact, before humans started making magnets and electric machines, there was very little bulk E&M force in nature that could exert significant forces on macroscopic objects, even though those forces are so powerful! Why is this? There is a very profound distinction between gravity and other forces. Regarding electrostatic forces, there are 2 kinds of charges, and matter will tend to try to balance those (and does a superb job of it, actually). For magnetism, charges will ultimately move as a result of magnetic fields in a way that decreases the strength of the magnetic field. Not so for gravity. Gravity clumps matter together in a way that doesn't degrade its gravitational strength on large scales, making it truly cumulative. This is why over large time scales gravity "wins": the galaxies, planets, and stars are a result of gravitational clumping. The gravity of $1 g$ you experience around you is the resultant force from every single atom in all of the planet. Just a tiny tiny tiny fraction of all those atoms exerting a different force like E&M or nuclear would be able to easily counter the $1 g$, but the difference is that those other forces are balanced perfectly and don't give a resultant force that can operate on you. - Balanced to first order, yes, the argument holds, but fortunately not perfectly.
The electrostatic force is what is holding our bodies together through the molecular forces, and keeping us from falling through to the center of the earth, etc. The nuclear force is more esoteric, but it is the higher-order QCD forces that keep the nucleons together, and without nucleons atoms would not exist, and without atoms we would not exist. – anna v Jun 11 '11 at 17:03 sorry, that should be "the electromagnetic force is what is holding...", of course – anna v Jun 11 '11 at 17:39 @anna you know, I've always had a discomfort with people saying that E&M explains atoms. Maxwell's equations don't predict atoms at all; Maxwell's equations applied to a quantum wave function do. However, I'm stumped there if asked for an alternative to QM. Say we have 2 particles that can't energetically interact to form a new particle but still attract. What should that lead to? In the case of gravity, absent balance from another force, they continue to attract until they form a singularity. Perhaps we are lucky QM doesn't allow this for electrostatic attraction. – AlanSE Jun 11 '11 at 19:08 Your discomfort seems to come from the difference between necessary conditions and sufficient conditions. In order to have atomic and molecular forces it is necessary to have EM, though it is not a sufficient description of the universe. Nevertheless, the necessity is a definite answer to a claim of "a perfect balance and no forces". There exist the Van der Waals molecular forces, for which EM is a necessary ingredient. It takes many necessary ingredients, a main one being QM, to have a complete theory of the universe, but we are far away from that. – anna v Jun 12 '11 at 4:08 More simply, I think you just need to understand what you meant by "the effects of gravity broken by jumping". By jumping, that is, by using your muscles (which work thanks to electric forces), you can compete with the gravitational effects of the entire Earth.
But you are not going to escape from them; gravity is still acting on you. Gravity is actually the only force that non-scientific people think of; they are rarely aware that electric forces even exist and are much stronger than gravity, and for this reason it needs to be explained to them why electric forces can so often be neglected. -
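The 9 GN figure quoted in one of the answers above (two 1 C charges held 1 m apart), and the "million tons" comparison, are easy to verify directly:

```python
# Coulomb force between two 1 C charges held 1 m apart.
k = 8.988e9                     # Coulomb constant, N m^2 / C^2
F = k * 1.0 * 1.0 / 1.0 ** 2    # ~9e9 N, i.e. ~9 GN

tonnes_force = F / (9.81 * 1000.0)   # newtons -> metric tonnes-force
print(F, tonnes_force)               # ~9.0e9 N, roughly a million tonnes-force
```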
http://stats.stackexchange.com/questions/35479/about-identification-in-a-3-equation-sem
# About Identification in a 3 equation SEM I got this example and I was wondering about a certain statement: $$\begin{aligned} y_1 &= \alpha_{12}y_2 + \alpha_{13}y_3 + \beta_{11}z_1 + u_1 \\ y_2 &= \alpha_{21}y_1 + \beta_{21}z_1 + \beta_{22}z_2 + \beta_{23}z_3 + u_2 \\ y_3 &= \alpha_{32}y_2 + \beta_{31}z_1 + \beta_{32}z_2 + \beta_{33}z_3 + \beta_{34}z_4 + u_3 \end{aligned}$$ It is written that in the first equation of the model we could use all the excluded exogenous variables, i.e. $z_2, z_3, z_4$, as instruments for the two endogenous regressors $y_2,y_3$. But as I recall, I can't use $z_2$ or $z_3$ for $y_2$ because these variables already appear in equation 2. In the same sense I cannot use any of those z's for $y_3$. In my understanding I could use $z_2, z_3, z_4$ for $y_1$ but not for $y_2, y_3$, right? -
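For concreteness, here is a small simulated sketch of the textbook procedure (2SLS for the first equation, instrumenting $y_2$ and $y_3$ with all the excluded exogenous variables $z_2, z_3, z_4$). The data-generating process and all coefficient values below are invented purely for illustration, not taken from the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Exogenous variables and structural errors (a made-up illustrative DGP).
z1, z2, z3, z4 = rng.standard_normal((4, n))
u1, u2, u3 = rng.standard_normal((3, n))

# y2 and y3 are endogenous in equation 1: their errors share the u1
# component, so OLS on equation 1 would be biased.
y2 = 1.0 * z1 + 0.5 * z2 + 0.5 * z3 + u2 + 0.5 * u1
y3 = 1.0 * z1 + 0.3 * z2 + 0.3 * z3 + 0.8 * z4 + u3 + 0.5 * u1

# Structural equation 1: true alpha12 = 0.7, alpha13 = -0.4, beta11 = 0.2.
y1 = 0.7 * y2 - 0.4 * y3 + 0.2 * z1 + u1

X = np.column_stack([y2, y3, z1, np.ones(n)])       # regressors of equation 1
Z = np.column_stack([z1, z2, z3, z4, np.ones(n)])   # all exogenous variables

# 2SLS: project X onto the instrument space, then regress y1 on the fits.
X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)       # first-stage fitted values
beta = np.linalg.solve(X_hat.T @ X, X_hat.T @ y1)
print(beta[:3])   # close to the true values [0.7, -0.4, 0.2]
```

With 2 endogenous regressors and 3 excluded exogenous variables the order condition holds, and the first stage here has rank 2, so both coefficients are recovered.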
http://www.physicsforums.com/showthread.php?t=586539
Physics Forums "Weirdness" of polynomial long division algorithm Hello. So, I just started to learn about polynomial long division. As an introductory example, the book presents the long division of natural numbers, claiming that it's basically the same thing. The example: 8096:23 Solution:
8096:23=352 (23 into 80 goes 3 times)
-69 (3 times 23 is 69, so we subtract)
119 (23 into 119 goes 5 times)
-115 (23 times 5 is 115, subtract)
46 (23 goes into 46 2 times)
-46 (23 times 2 is 46, subtract)
0 (end)
I understand very well (or so I think) the long division of natural numbers, but this is where it gets awkward: Same problem, different approach: 8096:(20+3)=352 (here I wrote the divisor as a sum of two arbitrary numbers, just to show what is bugging me) or, if we expand, it's like: (8*10³+9*10¹+6):(2*10¹+3)=3*10²+5*10¹+2 Now, as I see it, dividing this by the polynomial long division algorithm is pretty different (at least to me) from the standard one that we learned before, where the divisor was always a monomial (be that a concrete number like 34, or an algebraic expression, like xy³). It was NEVER a sum of terms. And it made perfect sense to me dividing that way. But now, in this problem, for example, I'm told that I can: (8000+90+6):(20+3)=300+50+2 1) first divide 20 into 8000 to obtain 400, then multiply 400 by (20+3) and subtract that from 8000+90+6 2) and so the procedure repeats So I find this very confusing, although I can see that it produces the same result as the above-mentioned "standard" approach. I cannot see why we are allowed to divide (8000) by only one member of the sum (20), then multiply that quotient (400) by both summands (20+3) and then subtract it again from the whole dividend. I've tried it on several examples with concrete numbers, and I know it works. But it seems like magic, an arbitrary rule that just happens to work.
And the chain of reasoning behind it is not familiar to me, unlike when the divisor is a single term; then I have very clear reasons to justify the process. So, I think that if I could justify this method on division of natural numbers, then it shouldn't be too much of a problem to generalize the procedure to any polynomials. Am I right? I'm a bit confused. What you show is exactly the way most people learn "long division" of numbers. You say "a concrete number like 34" but 34 has two parts: 3x10+4. That's not a "single term". To divide 34 into, say, 7854, you would note that 3 divides into 7 twice. Multiplying 2 times 30+4 gives 60+8 and 78-68=10. Bringing the next term, 5, down, we need to divide 34 into 105. 3 divides into 10 three times. 3 times 30+4 is 90+12=102. 105-102=3, so, bringing down the 4, we have 34. Of course, 34 divides into 34 once, so 34 divides into 7854 $2\times 10^2+ 3\times 10+ 1= 231$ times. It really is exactly the same thing. You said "most". That might be true, but not here. I have right in front of me a book which has introduced the division of natural numbers (and that's how I learned it). Not in a single example was there a divisor written as a sum of two or more numbers. I know that 34 can be written as a sum, but it's a pretty different calculation when we do not do that. Isn't it? What I really wanted to point out is that I can justify the reasons behind long division when we treat the divisor, say 34, as a single number.
And I'll do that with your example to show you how I understand it (or I don't), why it is logical to me, and why this other method, which you say is predominant even in learning long division of natural numbers (therefore not only of polynomials), is not logical at all to me. Here's how my silly brain sees it: we divide 7854 by 34. First I notice that 7854 is actually 7000+800+50+4, then I see that 34 goes into 7000 at least 200 times (it goes in more, but not enough to change the first numeral of the quotient, which is now 2, to, say, 3 or more), so I can write my quotient as (200+something). Now I multiply 34 by 200, obtaining 6800, and subtract that from 7000 to obtain 200. Now I add 800 to it and get 1000. Next, 34 into 1000 goes 29 times, so my quotient becomes (200+29+something). Again I multiply 34 by 29, obtaining 986. I subtract that from 1000 and get 14. Now I add 14 and 54, getting 68. 68 divided by 34 is 2, so my quotient becomes (200+29+2)=231. And now, what I really like, I can say to myself: "Hey, this works because (200+29+2)*34=6800+986+68=7854." So, this way I can see how it is connected to multiplication and the distributive property. I see it almost as an "unpacking" of a sort, if you know what I mean. I hope that you can see what is going on in my mind. I love the fact that I can logically explain it. But that is not the case when the divisor is written as a sum. I can do the division even then, but I cannot see or explain the logic behind it. I cannot explain why it is logical (and not just a "random" rule) to divide, say, 7000 by only 30 (of our 30+4=34), and then multiply the resulting partial quotient by both (30+4) and subtract that product from 7854. It seems so random, and yet it works. Can you see my trouble? And if so, could you give me a valid, logical explanation, or at least correct my corrupted reasoning, if corrupted it is?
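The "quotient as a running sum" view described above can be written as a tiny algorithm: at each step take the largest digit times a power of ten such that that multiple of the divisor still fits, record it, subtract, and repeat. This is exactly long division with the bookkeeping made explicit (the code is just an illustration of the reasoning in this thread):

```python
def division_as_sum(dividend, divisor):
    """Return the list of partial quotients whose sum is dividend // divisor,
    plus the final remainder."""
    parts = []
    remainder = dividend
    while remainder >= divisor:
        # Largest power of ten such that divisor * 10^k still fits.
        power = 1
        while divisor * power * 10 <= remainder:
            power *= 10
        # Largest digit d with d * divisor * power <= remainder.
        d = remainder // (divisor * power)
        parts.append(d * power)
        remainder -= d * power * divisor
    return parts, remainder

print(division_as_sum(8096, 23))   # ([300, 50, 2], 0): 8096 = (300+50+2)*23
print(division_as_sum(7854, 34))   # ([200, 30, 1], 0): 7854 = (200+30+1)*34
```

Note that the algorithm always picks a single digit per place (so 7854/34 comes out as 200+30+1 rather than 200+29+2), but the sum of the parts is the same quotient either way, which is the "unpacking" via the distributive property.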
http://mathoverflow.net/questions/52656?sort=newest
## Can a homotopy inverse of the map from a Lie group to loops on its classifying space be given by holonomy? Let $G$ be the compact Lie group $SO(n)$. There are some classical constructions of the classifying bundle of $G$ based on direct limits of Grassmann and Stiefel manifolds: $$BG \simeq \underset{m \to \infty}{\lim} SO(m+n)/(SO(m) \times SO(n))$$ $$EG \simeq \underset{m \to \infty}{\lim} SO(m+n)/SO(m)$$ with the evident $G$-bundle projection $EG \to BG$. One may give an explicit map $i: G \to \Omega BG$ which is a weak homotopy equivalence, by comparison of long exact homotopy sequences. Since $G$ and $\Omega BG$ have the homotopy types of CW complexes, the map $i$ is in fact a homotopy equivalence. I am interested in whether an explicit homotopy inverse $h: \Omega BG \to G$ to $i$ can be given by taking holonomy of loops with respect to some suitably chosen connection on the universal bundle. Naturally the "manifold" $BG$ is infinite-dimensional, but I'm thinking it would suffice to work with the finite-dimensional $G$-bundles $V_m \to G_m$ (Stiefel to Grassmann) which approximate the direct limits above, provided that the connections on each of the approximating bundles are compatible with respect to inclusion into the next, "compatible" here having an obvious sense in terms of $G$-valued holonomy. Has anyone seen this idea worked out? Naturally I'm curious also about whether a similar idea works out for a general compact Lie group $G$. - $\Omega BG$ being $A_\infty$ homotopy equivalent to $G$ (or to $G(F)$) is indeed 'ancient' - see my birth certificate!
though it's more subtle than for smooth bundles with connection; see my now updated version of 'parallel transport revisited' at the nLab. So what is it you'd like to do with it? – jim stasheff Jan 24 2011 at 1:44 Jim, the question was not whether $\Omega BG$ is ($A_\infty$) homotopy equivalent to $G$ -- that much I indicated I already knew. It's a question of giving an explicit pair of maps which exhibits the equivalence. When I took algebraic topology as a graduate student, much of this type of thing was left in a black box: one can easily describe an appropriate map $i: G \to \Omega BG$, and give an abstract argument for why this is a homotopy equivalence, but as I've gotten older I like to get more concrete, giving a homotopy inverse explicitly. Thanks for the reference! I'll have a look. – Todd Trimble Jan 24 2011 at 15:04 ## 3 Answers Yes, certainly. The model for BG as you described it has a canonical, universal connection for its $SO(n)$ bundle: just the Riemannian connection induced from Euclidean space. As you move an $n$-dimensional plane in $\mathbb E^{n+m}$, the induced connection is the limit of compositions of orthogonal projections between nearby planes. In the limit, these become isometries. It doesn't matter what the dimension of the ambient space is, as long as the projection is defined. So, a loop in the Grassmannian gives an element of $G$. One example: if you hold a bicycle wheel by its axle and move it around in a loop, when it comes back, it has rotated by some angle. That's the map. It's a homotopy inverse of the map going the other way, namely, the classifying map for the bundle obtained by the suspension of an element of $G$. The suspension of a homomorphism has a canonical connection; its holonomy is $G$, so one composition of these two maps equals the identity.
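As a sanity check, this "limit of compositions of orthogonal projections" recipe can be discretized in the simplest case: the tangent planes of the unit sphere along a circle of latitude trace a loop of 2-planes in $\mathbb{R}^3$, and transporting a vector by repeated projection should reproduce the classical holonomy angle $2\pi(1-\cos\theta)$. The code below is only an illustrative numerical sketch of that statement:

```python
import math

# Transport a tangent vector of the unit sphere around the latitude circle
# at polar angle theta by repeatedly projecting onto the next tangent plane.
# The resulting rotation should approach the holonomy 2*pi*(1 - cos(theta)).

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

theta = math.pi / 3          # cos(theta) = 1/2, so the holonomy angle is pi
N = 40000                    # number of projection steps around the loop

def point(phi):              # point on the latitude circle
    return [math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta)]

p0 = point(0.0)
v = [math.cos(theta), 0.0, -math.sin(theta)]   # unit tangent vector at p0
v0 = v[:]

for k in range(1, N + 1):
    p = point(2.0 * math.pi * k / N)
    dot = sum(a * b for a, b in zip(v, p))
    v = normalize([a - dot * b for a, b in zip(v, p)])  # project onto new plane

# v is back in the tangent plane at p0; measure the rotation relative to v0.
cos_ang = sum(a * b for a, b in zip(v, v0))
angle = math.acos(max(-1.0, min(1.0, cos_ang)))
print(angle, 2 * math.pi * (1 - math.cos(theta)))      # both ~ pi
```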
The other composition is homotopic to the identity, basically because the space of connections is contractible, and the bundle $EG$ over this model of $BG$ has a universal connection: every connection on every $SO(n)$ bundle over a CW complex $X$ is induced from a map $X \rightarrow BG$ (and this is also true in a relative form). This is a standard fact; I don't have a handy reference, but the proof is "soft". However, to get a more immediate classifying space for connections that works for all Lie groups, just make a simplicial complex whose simplices are connections for $G$-bundles over the simplex. The $G$-bundle is specified in terms of a trivialization associated with each vertex; the data needed is the chart-transition cocycle. In addition, give a connection; this amounts to specifying a connection form. Glue these simplices with $G$-bundles and connections together to make a model for $BG$. Since the space of connections is contractible, this has the same homotopy type as the more usual model for $BG$ where just the cocycle is specified. The same construction works to give an explicit homotopy inverse in a much more general context, e.g. the group of diffeomorphisms of a manifold. - 1 Thank you, Bill, for walking me through that! Your visual, concrete description in the case SO(n) is quite helpful, and your general description is easier than I was expecting it to be -- I really appreciate it. – Todd Trimble Jan 20 2011 at 22:07 2 @Todd: I've "wasted" a lot of time muddling through formal descriptions of things and trying to come to terms with them intuitively, so I like passing them on if it might help save someone from either muddle or time or both. I'm curious now in what generality the notion of "connection" can work. I think it should work for locally contractible topological groups, but I wonder about topological groups that are not locally contractible. Maybe it works anyway, when the base is a CW complex.
– Bill Thurston Jan 20 2011 at 22:55 1 May I remind you, Bill, of Jim Stasheff's "Parallel" transport in fibre spaces, Bol. Soc. Mat. Mexicana (2), 11:68–84, 1966, and Parallel transport and classification of fibrations in Springer's LNM 428. – David Roberts Jan 21 2011 at 1:03 @David: yeah. Stasheff's approach is more-or-less what I write about below. – John Klein Jan 21 2011 at 12:52 Let $\pi:P \to M$ be a smooth principal $G$-bundle on a Hilbert manifold. There exist smooth connections in this situation, so pick one. Pick a point $p \in P$, $x:=\pi(p)$. Let $\Omega_{\infty} M$ be the space of smooth loops based at $x$. Consider the lifting problem $$\xymatrix{ \Omega_{\infty} M \times \{0\} \ar[d] \ar[r] & P \ar[d] \\ \Omega_{\infty} M \times [0,1] \ar[r] \ar[ur]^{l} & M }$$ The bottom map is the evaluation, the top map is the constant map $p$. Now parallel transport along curves defines the lift $l$. Restriction of $l$ to $\Omega_{\infty} M \times \{1\}$ defines the holonomy $hol:\Omega_{\infty} M \to \pi^{-1}(x) = G$, the last identification depending on the point $p$. Now specialize to a compact Lie group $G$. Take the Grassmann manifold $Gr_n$ of $n$-dimensional subspaces of the Hilbert space $\ell^2$. This is a model for $BO(n)$ as a Hilbert manifold; the corresponding model for $EO(n)$ is the Stiefel manifold $V_n$ of orthonormal $n$-frames in $\ell^2$, which is a Hilbert manifold as well. If $G$ is compact, we can embed $G$ into $O(n)$ for some $n$ (Peter-Weyl theorem). Then $V_n \to V_n/G=BG$ is a model for the universal $G$-bundle, in the context of Hilbert manifolds. Now I claim that $hol: \Omega_{\infty} BG \to G$ is a weak homotopy equivalence. Let $f:E \to B$ be a Hurewicz fibration with fibre $F=f^{-1}(x)$.
There is the fibre transport map $T:\Omega B \to F$ obtained by lifting the paths. It is not hard to see that $\pi_{n+1}(B) \cong \pi_n (\Omega B) \stackrel{T}{\to} \pi_n (F)$ is the same as the connecting homomorphism in the long exact homotopy sequence of $F \to E \to B$. If the fibration is $EG \to BG$, you get a weak homotopy equivalence $\Omega BG \to G$. The fibre transport is defined only up to homotopy, but above we have constructed one using a connection. Thus the holonomy is homotopic to the fibre transport and hence a weak homotopy equivalence. I am running out of steam, but I think it can be shown along these lines that $hol$ is also a homotopy inverse to the natural map $G \to \Omega_{\infty} BG$ (which does not look so natural in this setting). - So your use of sub-infty indicates smooth? Suggest sub-smooth if you need a decoration, as opposed to just saying that's what your Omega means. – jim stasheff Jan 24 2011 at 1:38 The use of the sub-infty was only an ad-hoc notation. – Johannes Ebert Jan 24 2011 at 10:03 I'm going to change my original answer, since I interpreted the question wrongly. (I hope that's alright.) Suppose $p: E\to B$ is a Hurewicz fibration, where $F = p^{-1}(\ast)$ is the fiber over the basepoint and $B$ is connected. Then one can cook up a map $\Omega B \times F \to F$ which might be called a "holonomy" in the algebraic topology sense. The idea is this: Let $$\Lambda_p = E\times_B B^I$$ be the space of path lifting problems for $p$ (this is the space of pairs $(e,\lambda)$ where $e\in E$ and $\lambda$ is a path starting at $p(e)$). There is a map $$q: E^I \to \Lambda_p$$ by sending a path $\lambda$ in $E$ to $(\lambda(0), p\circ \lambda)$. Then the condition that $p$ be a Hurewicz fibration is tantamount to saying that $q$ has a section. A choice of section might be regarded as parallel transport along a path in the algebraic-topological sense. Choose such a section.
This gives a way of associating to each path in $B$, starting at $x$ and ending at $y$, a map $E_x \to E_y$, where $E_x$ is the fiber at $x$. This map is a homotopy equivalence. (When $p$ is a fiber bundle, one can choose the section in such a way that each parallel transport is a homeomorphism of fibers.) Evaluating the section when $x=y$ is the basepoint gives the holonomy operation $\Omega B \times F \to F$, or adjointly $\Omega B \to G(F)$, where $G(F)$ is the topological monoid of self homotopy equivalences of $F$. If $p$ is a fiber bundle with structure group $G$, then the transport operation described above can be factored as $$\Omega B \to G\to G(F) .$$ If we choose a basepoint in $F$, then the value of the operation on the basepoint gives a map $$\Omega B \to F .$$ This map is well-known: it's the map sitting in the homotopy fiber sequence $$\Omega B \to F \to E .$$ (This should be in any reasonable text on the subject.) So, in the particular case when $p: EG \to BG$ and $F = G$, the map $\Omega BG \to G$ will be a homotopy equivalence, using the above homotopy fiber sequence, since $E = EG$ is contractible. We have also seen this map described by the orbit of a point in $G$ under the holonomy operation $\Omega BG \times G \to G$ as given above. - 1 Thank you, John. I follow you fine except for the point where you say that the transport operation can be factored through G, which I don't see, since the preceding only used the fact that the bundle is a Hurewicz fibration. (Perhaps that particular point doesn't matter if all one is setting out to do is get the homotopy inverse I inquired about, but classical holonomy of a connection would give the extra information that the map $\Omega BG \to G$ takes path composition to group multiplication.)
– Todd Trimble Jan 21 2011 at 16:36 If the fibration is actually a fiber bundle with structure group G, then the section of $E^I \to \Lambda_p$ can be chosen so that the parallel transport along the path is a homeomorphism and is given by the action of $G$. How does one choose it? By the classical holonomy, of course. In any case, the space of sections of the lifting problem is contractible, so whatever section you chose is homotopic to mine, and so classical holonomy $\Omega B \to G$ followed by the map into $G(F)$ is homotopic to mine. Lastly, the map I constructed is a homomorphism in the $A_\infty$ sense. – John Klein Jan 21 2011 at 20:19 Actually, I'm not sure the map I constructed is a priori an $A_\infty$ homomorphism. I will think about how to rectify that. – John Klein Jan 21 2011 at 22:27 Yes, sure. I was simply saying that "the transport operation described above factors through G" does not hold for every choice of section (as "described above"). Otherwise I'm fine with what you had written (and thanks again; it was a good answer). – Todd Trimble Jan 21 2011 at 23:31 @Todd, regarding my last comment: the issue of why the map $\Omega B \to G(F)$ is multiplicative is discussed in Whitehead's book. – John Klein Jan 23 2011 at 2:26
http://mathhelpforum.com/advanced-applied-math/177921-integral-equation-solvable-if-so-how.html
# Thread: 1. ## Is this integral equation solvable? If so, how? Hello, I'm new to the forums (actually, just found you via Google) and I have 2 questions (the other I'll post in a separate thread a little later)! My question is this: can the following equation be solved for : with t sufficiently large that transient/initial conditions don't matter? We also have charge conservation: I tried approaching this by using a Fourier-transform-type solution, but when I pursue this I seem to get multiple exponential integrals and things don't seem to simplify. Any ideas? Are there additional boundary conditions I need to impose? I have almost no experience with integral equations, so all suggestions are welcome! 2. No takers? One can also try this problem by defining a new variable q(x,t) such that and choosing q(0)=q(a)=0. Then the equation above can be written as Then perhaps we can assume a solution of the form and so the equation becomes Of course, I'm not sure how to evaluate Edit: Mathematica gives that But I don't see how I can get from that... as the expression becomes the incredibly ugly sum: Somehow I would suspect that the terms with x all have to go away (or my guess of solution is incorrect, but exponentials should form a complete set and the boundary conditions are well defined, so it should be a sum). Another thing is to try an expansion in terms of Bessel functions instead of exponentials... Also, perhaps the summation has to run not from n=1 but from n=-infinity... but q_0 = 0, for certain (no constant component). 3. It looks like an integration by parts problem to me for some reason. 4. Originally Posted by Bwts It looks like an integration by parts problem to me for some reason. Wait, what? The exponential integral cannot be simplified....
Also, by assuming simple exponential time dependence (which I'm pretty sure is correct, since our driving function is a simple oscillatory exponential) we can make this into $i \omega \hat{q} \left(x\right) = \sigma \left( E_0 +\displaystyle {{\int_{0}}^a}\frac{d\hat{q} \left(x'\right)/dx'}{\left(x-x' \right)^{2} }dx' \right)$ Integrating this by parts (is that what you meant?) will give you $i \omega \hat{q} \left(x\right) = \sigma \left( E_0 -2\displaystyle {{\int_{0}}^a}\frac{\hat{q} \left(x'\right)}{\left(x-x' \right)^{3} }dx' \right)$ Which I don't think is particularly better... but if someone wants to give it a try from this spot... maybe this is the most compact formulation of the problem... that's true... Edit: We can find an approximate solution by assuming that q doesn't vary rapidly with x. Then all of the q contribution will come from the pole at x'=x. We have $\displaystyle {{\int_{0}}^a}\frac{\hat{q} \left(x'\right)}{\left(x-x' \right)^{3} }dx' \approx \hat{q} \left(x\right)\displaystyle {{\int_{0}}^a}\frac{1}{\left(x-x' \right)^{3} }dx' = \hat{q} \left(x\right) \frac{1}{2} \left(\frac{1}{\left(x-a \right)^{2}}-\frac{1}{x^{2}}\right)$ Which means that $i \omega \hat{q} \left(x\right) = \sigma \left( E_0 -2\hat{q} \left(x\right) \frac{1}{2} \left(\frac{1}{\left(x-a \right)^{2}}-\frac{1}{x^{2}}\right) \right)$ Simplifying we get $i \omega \hat{q} \left(x\right) / \sigma+2\hat{q} \left(x\right) \frac{1}{2} \left(\frac{1}{\left(x-a \right)^{2}}-\frac{1}{x^{2}}\right)= E_0$ or $\hat{q} \left(x\right) \approx \frac{ E_0}{i \omega/ \sigma+\left(\frac{1}{\left(x-a \right)^{2}}-\frac{1}{x^{2}}\right)}$
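As a numerical sanity check on the antiderivative used in the pole-evaluation step above (a sketch not from the thread; it only applies for x outside [0, a], since for x inside (0, a) the integral is singular and needs a principal-value treatment):

```python
# Check that \int_0^a (x - x')^{-3} dx' = (1/2)(1/(x-a)^2 - 1/x^2)
# for x outside [0, a], where the integrand has no pole.
def integral_numeric(x, a, n=200_000):
    h = a / n
    total = 0.0
    for k in range(n):
        xp = (k + 0.5) * h          # midpoint rule
        total += (x - xp) ** -3
    return total * h

x, a = 3.0, 1.0
closed_form = 0.5 * (1.0 / (x - a) ** 2 - 1.0 / x ** 2)
numeric = integral_numeric(x, a)
```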
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9558295607566833, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/76030?sort=oldest
## Estimating a sum of gauss sums Hey guys, I'm concerned with bounding the following sum of gauss sums from above $$\sum_{p\leq x}~{\frac{1}{(p-1)^2}}\sum_{m=1}^{p-1}~\sum_{\chi~(p)}~\sum_{a=1}^{p-1}{~\chi^m(a)e\left(\frac{a}{p}\right)},$$ where $p$ runs through the primes $\leq x$, $\chi$ runs through the multiplicative characters modulo $p$ and $e\left(\frac{a}{p}\right)=\exp\left(\frac{2\pi ia}{p}\right)$. By using orthogonality relations of characters one gets $$\sum_{m=1}^{p-1}~\sum_{\chi~(p)}~\sum_{a=1}^{p-1}{~\chi^m(a)e\left(\frac{a}{p}\right)}=(p-1)\sum_{a=1}^{p-1}~{e\left(\frac{a}{p}\right)\frac{p-1}{ord_pa}},$$ where $ord_pa$ denotes the multiplicative order of $a$ modulo $p$. The right side can be bounded trivially by $$(p-1)\sum_{a=1}^{p-1}~{\frac{p-1}{ord_pa}}=(p-1)^2\sum_{d\mid p-1}{\frac{\varphi(d)}{d}},$$ $\varphi(d)$ denoting Euler's totient function. Using $\varphi(n)\leq n$ one gets the estimate $$\left|\sum_{p\leq x}~{\frac{1}{(p-1)^2}}\sum_{m=1}^{p-1}~\sum_{\chi~(p)}~\sum_{a=1}^{p-1}{~\chi^m(a)e\left(\frac{a}{p}\right)}\right|\leq\sum_{p\leq x}{\tau(p-1)},$$ where $\tau(n)$ is the number of divisors of $n$. The latter sum can be shown to be asymptotically equivalent to a positive constant times $x$. I would like to know if there is a way to show that the sum is $o(x)$. - ## 2 Answers There should be a bunch of cancellation. Here is an idea. You need to relate your sums $\sum_a e(a/p)/ord_p a$ to the sums $\sum_{ord_p a | m} e(a/p)/m$. Now, if $mr = p-1$, $\sum_{ord_p a | m} e(a/p) = (1/r)\sum_{n=1}^{p-1} e(n^r/p) = O(p^{1/2})$ by the Weil bound. This will deal with the elements of large order, I believe. There is work to do, but this should get you going. -
This is a comment rather than an answer, but it is too long. Let $g$ be some generator of the multiplicative group. Then $$\sum_{a=1}^{p-1}e\left(\frac{a}{p}\right)\frac{p-1}{ord_{p}a}=\sum_{k=1}^{p-1}e\left(\frac{g^{k}}{p}\right)\gcd\left(p-1,k\right).$$ Rearranging yields $$\sum_{d|p-1} \phi(d) \sum_{k\leq\frac{p-1}{d}}e\left(\frac{g^{dk}}{p}\right)$$ so that the entire sum is $$\sum_{p\leq x}\frac{1}{p-1}\sum_{d|p-1}\phi(d)\sum_{k\leq\frac{p-1}{d}}e\left(\frac{g^{dk}}{p}\right).$$ My hope in posting this is that there are existing bounds on sums of the form $\sum_{k\leq\frac{p-1}{d}}e\left(\frac{g^{dk}}{p}\right)$. It might be strange to deal with, as it is a sum over elements chosen for their multiplicative properties. Essentially, we would need a theorem regarding how these multiplicative elements are distributed among the residue classes, and that it cannot be "too far from uniform". - You should look at Bourgain et al's recent work on exponential sums over small multiplicative subgroups. One exposition I found is math.kth.se/~kurlberg/eprints/short_expsum . – Greg Martin Sep 21 2011 at 22:01
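The first identity above — rewriting the sum over a as a sum over powers of a generator g — can be checked numerically for a small prime; a quick sketch, not part of the original thread:

```python
import cmath
import math

def e(x):
    """e(x) = exp(2*pi*i*x)."""
    return cmath.exp(2j * math.pi * x)

def mult_order(a, p):
    """Multiplicative order of a modulo the prime p."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

p = 11
g = next(g for g in range(2, p) if mult_order(g, p) == p - 1)  # a primitive root mod p

lhs = sum(e(a / p) * (p - 1) / mult_order(a, p) for a in range(1, p))
rhs = sum(e(pow(g, k, p) / p) * math.gcd(p - 1, k) for k in range(1, p))
# The sums agree because a = g^k is a bijection and ord(g^k) = (p-1)/gcd(p-1, k).
```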
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9223575592041016, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/12400/operatornamestabx-1-in-textaut-fx-1-ldots-x-n
# $\operatorname{Stab}(x_1)$ in $\text{Aut}( F(x_1,\ldots,x_n ) )$ Let $F_n$ be an $n$-generator free group with a free basis $x_1,\ldots,x_n.$ Is it true that the stabilizer of $x_1$ in $\mathrm{Aut}(F_n)$ is generated by all left and right Nielsen moves $\lambda_{ij}$ and $\rho_{ij}$ such that $i \ne 1$ and by the element of order two $\epsilon_n$ such that $\epsilon_n(x_n)=x_n^{-1}$ while other elements of the basis remain fixed? Let $i \ne j$ and $1 \le i,j \le n.$ The left Nielsen move $\lambda_{ij}$ takes $x_i$ to $x_j x_i$ and the right Nielsen move $\rho_{ij}$ takes $x_i$ to $x_i x_j;$ both $\lambda_{ij}$ and $\rho_{ij}$ fix all $x_k$ with $k \ne i.$ - You are ignoring some Nielsen moves, no? For example, sending x_i to its inverse, or swapping two elements. – Steve D Nov 29 '10 at 22:58 You are right. The subgroup generated by all Nielsen moves is of index two in Aut(F_n). Thus I need also an element of order two, say $\epsilon_n$ which inverts $x_n$ and fixes other $x_k.$ I am editing the question accordingly, thanks. – krof Nov 29 '10 at 23:09 aren't you also missing x1 -> x1x2 -> x1x2^-1 -> x1 (last one is right multiplication by x2)? – Alon Amit Nov 30 '10 at 5:10 ## 1 Answer Curiously, this question is a generalisation of this question which I asked a wee while ago (but long after you asked this one). I was asking about the 2-generator case, you are wanting to know about the $n$-generator case. So, for the $2$-generator case, F(a, b), the stabiliser of $a$ consists of the automorphisms $\phi: a\mapsto a, b\mapsto a^iba^j$, and a proof (or two, or maybe three...) can be found in the question I linked to. This translates as, $\phi\in\operatorname{Stab}(a)$ if and only if $\phi\in \langle\beta, \gamma, \beta^{\alpha}, \gamma^{\alpha}\rangle$, where $\gamma$ is the Dehn twist, $$\gamma: a\mapsto a, b\mapsto ba$$ $$\alpha: a\mapsto a^{-1}, b\mapsto b$$ $$\beta: a\mapsto a, b\mapsto ab$$ and $g^h=h^{-1}gh$ (but here $\alpha^{-1}=\alpha$).
For the general case stuff gets more complicated so I'm just not sure. Sorry! (I suspect your idea is correct though.) -
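The conjugates appearing above can be checked mechanically with freely reduced words; a small sketch (not from the original answer), with capital letters standing for inverse generators:

```python
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduce_word(w):
    """Freely reduce a word over {a, A, b, B}."""
    out = []
    for c in w:
        if out and out[-1] == INV[c]:
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def apply_auto(phi, w):
    """Apply an automorphism (given by the images of 'a' and 'b') to a word."""
    images = dict(phi)
    images['A'] = ''.join(INV[c] for c in reversed(phi['a']))
    images['B'] = ''.join(INV[c] for c in reversed(phi['b']))
    return reduce_word(''.join(images[c] for c in w))

def compose(f, g):
    """(f o g)(x) = f(g(x))."""
    return {'a': apply_auto(f, g['a']), 'b': apply_auto(f, g['b'])}

gamma = {'a': 'a', 'b': 'ba'}   # Dehn twist: b -> ba
alpha = {'a': 'A', 'b': 'b'}    # a -> a^{-1}, an involution

# Conjugating the Dehn twist by alpha gives b -> b a^{-1}, the inverse twist
gamma_alpha = compose(alpha, compose(gamma, alpha))
```

Composing `gamma_alpha` with `gamma` gives back the identity on both generators, which is exactly why conjugating by α supplies the missing inverse moves.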
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.936407744884491, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/48030/what-is-the-value-of-a-quantum-field?answertab=votes
# What is the value of a quantum field? As far as I'm aware (please correct me if I'm wrong) quantum fields are simply operators, constructed from a linear combination of creation and annihilation operators, which are defined at every point in space. These then act on the vacuum or existing states. We have one of these fields for every type of particle we know. E.g. we have one electron field for all electrons. So what does it mean to say that a quantum field is real or complex valued? What do we have that takes a real or complex value? Is it the operator itself or the eigenvalue given back after it acts on a state? Similarly when we have fermion fields that are Grassmann valued what is it that we get that takes the form of a Grassmann number? The original reason I considered this is that I read boson fields take real or complex values whilst fermionic fields take Grassmann variables as their values. But I was confused by what these values actually tell us. - ## 1 Answer When physicists say that a quantum field $\phi(x)$ is real-valued, they are usually referring to Feynman's path integral formulation of quantum field theory, which is equivalent to Schwinger's operator formulation. The values of a field $\phi(x)$ in the path integral formulations are numbers. E.g.: • If the numbers are real, we say that the field $\phi(x)$ is real-valued. (Such a field $\phi(x)$ typically corresponds to a Hermitian field operator $\hat{\phi}(x)$ in the operator formalism.) • If the numbers are complex, we say that the field $\phi(x)$ is complex-valued. • If the numbers are Grassmann-odd, we say that the field $\phi(x)$ is Grassmann-odd. (The numbers in this case are so-called supernumbers. See also this Phys.SE post.) - Thank you very much :). – Siraj R Khan Dec 31 '12 at 21:28 Incidentally, was my initial interpretation of a quantum field (as a linear combination of creation/annihilation operators at every point) correct as well, or are there subtleties that I'm missing?
– Siraj R Khan Jan 1 at 3:01 You are essentially correct, though there are difficulties due to interactions in perturbation theory. Typically the fields in the interacting case have a nonzero amplitude to produce multi-particle states from the vacuum. You could think of it like this: the bare electron field operator creates a bare electron, which is a superposition of a real "dressed" electron and a bunch of other states. Renormalisation rescales the field so that the single-particle amplitude is simple, and you do extra things (LSZ reduction) to remove the multi-particle contributions from physical amplitudes. – Michael Brown Jan 1 at 3:13
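The pairing the answer mentions — a real-valued field corresponding to a Hermitian field operator — can be illustrated for a single mode in a truncated oscillator basis (a toy sketch, not from the original thread):

```python
# Single bosonic mode truncated to n_max levels: <n-1| a |n> = sqrt(n)
n_max = 6
a = [[0.0] * n_max for _ in range(n_max)]
for n in range(1, n_max):
    a[n - 1][n] = n ** 0.5

# Adjoint of a real matrix is its transpose
adag = [[a[j][i] for j in range(n_max)] for i in range(n_max)]

# Schematic "field" for one mode: phi ~ (a + a^dagger) / sqrt(2)
phi = [[(a[i][j] + adag[i][j]) / 2 ** 0.5 for j in range(n_max)] for i in range(n_max)]
is_hermitian = all(
    abs(phi[i][j] - phi[j][i]) < 1e-12 for i in range(n_max) for j in range(n_max)
)
```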
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9247723817825317, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/electron?sort=frequent&pagesize=15
# Tagged Questions Negatively charged particle with spin 1/2. A component of mundane terrestrial matter, and part of all neutral atoms and molecules. It has a mass about 1/1800 that of a proton. Its antiparticle is the positron. 2answers 1k views ### How does electricity propagate in a conductor? On a systems level, I understand that as electrons are pushed into a wire, there is a net field and a net electron velocity. And I've read that the net electron drift is slow. But electricity ... 2answers 392 views ### What is the mass density distribution of an electron? I am wondering if the mass density profile $\rho(\vec{r})$ has been characterized for atomic particles such as quarks and electrons. I am currently taking an intro class in quantum mechanics, and I ... 4answers 2k views ### Why do electron and proton have the same but opposite electric charge? What is the explanation between equality of proton and electron charges (up to a sign)? This is connected to the gauge invariance and renormalization of charge is connected to the renormalization of ... 1answer 330 views ### Why is the value of spin +/- 1/2? I understand how spin is defined in analogy with orbital angular momentum. But why must electron spin have magnetic quantum numbers $m_s=\pm \frac{1}{2}$ ? Sure, it has to have two values in ... 4answers 2k views ### Why is the charge naming convention wrong? I recently came to know about the Conventional Current vs. Electron Flow issue. Doing some search I found that the reason for this is that Benjamin Franklin made a mistake when naming positive and ... 3answers 301 views ### Current in a simple circuit I was going over my notes for an introductory course to electricity and magnetism and was intrigued by something I don't have an answer to. I remember my professor mentioning, to the best I can ... 4answers 1k views ### Spontaneous pair production? So I've been looking into particle-antiparticle pair production from a gamma ray and don't understand one thing. 
Let's say I have a 1.1 MeV photon and it hits a nucleus - electron-positron pair with ... 1answer 606 views ### Why doesn't orbital electron fall into the nucleus of Rb85, but falls into the nucleus of Rb83? Rb83 is unstable and decays to Kr-83. Mode of the decay is electron capture. Rb85 is stable. The nuclei Rb83 and Rb85 have the same charge. Rb85 is heavier than Rb83, but gravitation is too weak to ... 3answers 449 views ### What was missing in Dirac's argument to come up with the modern interpretation of the positron? When Dirac found his equation for the electron $(-i\gamma^\mu\partial_\mu+m)\psi=0$ he famously discovered that it had negative energy solutions. In order to solve the problem of the stability of the ... 2answers 2k views ### How does electron move around nucleus? I need to get a nice picture about how electron moves around nucleus? I find concept of probability and orbitals quite difficult to understand? 2answers 298 views ### Why do electrons around nucleus radiate light according to classical physics As I navigate through physics stackexchange, I noticed Electron model under Maxwell's theory. Electrons radiate light when revolving around nucleus? Why is it so obvious? Note that I do not know ... 4answers 672 views ### Why photons transfer to electrons perpendicular momentum? Linear antenna directed along z, photons (EM waves) propagate along x. Momentum of photons have only x component. Why electrons in antenna have z component of momentum? 2answers 324 views ### Is energy exchange quantized? In the photoelectric effect there is a threshold frequency that must be exceeded, to observe any electron emission, I have two questions about this. I) Lower than threshold: What happen with lesser ... 1answer 2k views ### Are all electrons identical? Why should two sub-atomic (or elementary particle) - say electrons need to have identical static properties - identical mass, identical charge? Why can't they differ between each other by a very ...
4answers 3k views ### How fast do electrons travel in an atomic orbital? I am wondering how fast electrons travel inside of atomic electron orbitals. Surely there is a range of speeds? Is there a minimum speed? I am not asking about electron movement through a conductor. 3answers 136 views ### Do electrons in multi-electron atoms really have definite angular momenta? Since the mutual repulsion term between electrons orbiting the same nucleus does not commute with either electron's angular momentum operator (but only with their sum), I'd assume that the electrons ... 1answer 150 views ### Stability of a rotating ring of multiple electrons at relativistic speeds There was a time when physicists were concerned about electron internal structure. The rotating ring model was one of the proposals to explain how a charge density could become stable against ... 3answers 951 views ### What is the difference between a neutron and hydrogen? Differences? They are both an electron and a proton, since the neutron decays to a proton and an electron, what's the difference between a neutron and proton + electron? so is it just a higher binding ... 4answers 957 views ### Bohr's model of an atom doesn't seem to have overcome the drawback of Rutherford's model We, as high school students have been taught that-because Bohr's model of an atom assigns specific orbits for electrons-that it is better than Rutherford's model. But what Rutherford failed to explain ... 3answers 1k views ### Electron Positron annihilation Feynman Diagram I am having some trouble understanding this Feynman diagram, it seems to indicate that the electron produces the positron, as the arrow of the positron is pointing from the electron. Additionally ... 1answer 147 views ### Positive test charge Protons have positive charge on them. Protons aren't mobile. So how can a positive test charge move from the negative terminal of a cell to the positive terminal and gain electric potential energy? ...
2answers 655 views ### Electron behavior changes when observed? I saw this video of the double slit experiment by Dr. Quantum on youtube. Later in the video he says, the behavior of the electrons changes to produce double bars effect as if it knows that it is ... 2answers 642 views ### Active gravitational mass of the electron In PSE here electrons are added to a sphere and gravitational modifications are expected. My question is: Is there any experiment that show that a negatively charged object is source of a stronger ... 1answer 293 views ### Measuring the magnitude of the magnetic field of a single electron due to its spin Is it possible to measure the magnitude of the magnetic field of a single electron due to its spin? The electron's intrinsic magnetic field is not dependent upon the amount of energy it has does it? ... 2answers 71 views ### Transfer of electron energy to atoms (heating up of matter by absorption of photons) If an electron absorbs a photon to get exited to a higher energy level, it should either come back to same state or any other lower state by emitting the required photon. How then can there be a net ... 2answers 339 views ### Electron model under Maxwell's theory I was not able to recall my memories, so: What is the formula that states the frequency of electrons revolving around nucleus is equal to the frequency of light (or photon) emitted (or radiated)? (I ... 1answer 976 views ### Which derivation of drift velocity is correct? In the derivation of drift velocity I have seen two variations and want to know which one's correct. $s=ut+\frac{at^2}{2}$ Assume that the drift velocity of any electron in any conductor is : ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400366544723511, "perplexity_flag": "middle"}
http://travelinglandsbeyond.com/tag/statistics/
# Traveling Lands Beyond "Beyond what?" thought Milo as he continued to read. ## Weak assumptions This post may be easier to read if you have some comfort with financial mathematics. Thousands of people across the history of finance have dutifully memorized one of the most famous results in financial mathematics, the Black-Scholes formula for pricing a European option. For the sake of completeness (skip ahead if you like), here is the formula for pricing a European call (C) or put (P) on a non-dividend-paying asset, which you can also find in countless textbooks and on countless websites: $C = SN(d_1) - Ke^{-rt}N(d_2)$ $P = N(-d_2)Ke^{-rt} - SN(-d_1)$ where $d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)t}{\sigma \sqrt{t}}$ $d_2 = \frac{\ln(S/K) + (r - \sigma^2/2)t}{\sigma \sqrt{t}}$ $(d_2 = d_1 - \sigma \sqrt{t})$ and S is the underlying asset price, K is the strike price of the option, t is the time to option expiry, r is the interest rate out to time t, σ is the volatility of the underlying asset, and N() represents the cdf of a standard normal distribution. It is important to remember that while this is a ubiquitous formula used to price options, so much so that option prices are thought of by many traders in terms of their Black-Scholes volatility rather than their dollar price, it is only a mathematical model and is only correct insofar as its assumptions are met. And as with all models, real life matches the model assumptions imperfectly. You could come up with another option pricing model based off of different assumptions and in some sense it would be no more "right" or "wrong" than Black-Scholes; the area of debate would be how well those assumptions fit reality. For example, let's say that you had an option on a small pharmaceutical company that was awaiting FDA approval on its only product, a drug upon which the entire firm's fortunes rested. If the FDA approved, the stock would go to \$100, and if not, the stock would go to \$0.
In this case Black-Scholes’s assumptions about the dynamics of the stock price are very poorly met, and it would not be a great model to use. Some financiers who are particularly dutiful have also memorized formulas for the basic Black-Scholes greeks. For example, the deltas (sensitivities to underlying asset price) of a call and a put are $\frac{\partial C}{\partial S} = N(d_1)$ $\frac{\partial P}{\partial S} = N(d_1) - 1$ The relationship between the delta of a call and a put of the same strike and expiry is therefore: call delta – put delta = 1. The formulas for the deltas are strictly Black-Scholes; you can get them by taking the derivative of the Black-Scholes pricing formula, and they might not be accurate under a different option pricing model. But the relationship between the two is not, depending solely on put-call parity. Put-call parity states that the price of a call minus the price of a put equals the discounted present value of the asset price minus the strike price. It is a much weaker assumption than those that underlie Black-Scholes. You don’t need to say anything about volatility, or Brownian motion, or continuous-time hedging. Not only that, it’s very intuitive and logical: if you have the right to buy a stock above \$100 at some point in the future, and someone has the right to sell a stock to you below \$100 at that same point in time, you essentially have a forward agreement to buy the stock at \$100, which at that point in time will be worth the expected value of the stock less \$100, and which today will be worth the stock price less the discounted value of \$100 at expiry. It’s much harder to imagine scenarios in which put-call parity would be violated than in which Black-Scholes assumptions are violated (in fact Black-Scholes assumptions imply put-call parity). 
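Both relationships — put-call parity and the unit gap between call and put deltas — are easy to confirm numerically from the formulas above; a minimal sketch (parameter values arbitrary):

```python
import math
from statistics import NormalDist

N = NormalDist().cdf  # standard normal cdf

def bs_call_put(S, K, r, sigma, t):
    """Black-Scholes European call and put on a non-dividend-paying asset."""
    d1 = (math.log(S / K) + (r + sigma ** 2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    call = S * N(d1) - K * math.exp(-r * t) * N(d2)
    put = N(-d2) * K * math.exp(-r * t) - N(-d1) * S
    return call, put

S, K, r, sigma, t = 100.0, 95.0, 0.03, 0.25, 0.75
call, put = bs_call_put(S, K, r, sigma, t)

# Put-call parity: C - P = S - K * exp(-r t), with no reference to volatility
parity_gap = (call - put) - (S - K * math.exp(-r * t))

# Finite-difference deltas: call delta minus put delta should be 1
h = 1e-5
c_up, p_up = bs_call_put(S + h, K, r, sigma, t)
c_dn, p_dn = bs_call_put(S - h, K, r, sigma, t)
delta_gap = (c_up - c_dn) / (2 * h) - (p_up - p_dn) / (2 * h)
```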
What this means is that any options model that accepts the weak and almost always realistic assumption of put-call parity must have the same relationship between call delta and put delta. Let's look at another slightly trickier example, regarding vega (sensitivity to volatility) and theta (sensitivity to the passage of time). The Black-Scholes formulas for vega and the time-to-expiry derivative of a call are: $\frac{\partial C}{\partial \sigma} = SN'(d_1) \sqrt{t}$ $\frac{\partial C}{\partial t} = SN'(d_1) \frac{\sigma}{2 \sqrt{t}} + rKe^{-rt}N(d_2)$ (Theta is the negative of this time derivative, because I have represented t as time to expiry, and theta is typically thought of as how value changes as time moves forward, in which case t would be decreasing.) Let's further assume that the interest rate is zero, so that this simplifies to: $\frac{\partial C}{\partial t} = SN'(d_1) \frac{\sigma}{2 \sqrt{t}}$ In this case, the relationship between vega and theta is: $\frac{\partial C}{\partial \sigma} \frac{\sigma}{2t} = \frac{\partial C}{\partial t}$ (vega times σ/2t equals the negative of theta). This relationship, though under the further assumption of a zero interest rate, holds under a weaker assumption than Black-Scholes: it requires that your volatility parameter (however you define that) and your time to expiry are used in the price solely in the form of an intermediate parameter σ * sqrt(t). To see this mathematically, let's write the call price as some unspecified function of this intermediate parameter: $C = f(\sigma \sqrt{t})$ Then if we take derivatives with the chain rule: $\frac{\partial C}{\partial \sigma} = f'(\sigma \sqrt{t}) \sqrt{t}$ $\frac{\partial C}{\partial t} = f'(\sigma \sqrt{t}) \frac{\sigma}{2 \sqrt{t}}$ and you can see that the relationship holds. If interest rates are zero, Black-Scholes does satisfy this weaker assumption; if we define V = σ * sqrt(t), the d1 and d2 terms can be rewritten as: $d_1 = \frac{\ln(S/K) + V^2/2}{V}$ $d_2 = d_1 - V$ We might call V "total" volatility.
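The zero-rate relationship can be verified by finite differences — any pricer that consumes σ and t only through V = σ√t has to satisfy it; a sketch with arbitrary parameters:

```python
import math
from statistics import NormalDist

N = NormalDist().cdf

def bs_call_zero_rate(S, K, sigma, t):
    """Zero-rate Black-Scholes call; sigma and t enter only through V = sigma*sqrt(t)."""
    V = sigma * math.sqrt(t)
    d1 = (math.log(S / K) + V ** 2 / 2) / V
    return S * N(d1) - K * N(d1 - V)

S, K, sigma, t, h = 100.0, 105.0, 0.25, 1.0, 1e-5

# Finite-difference sensitivities to sigma and to time-to-expiry t
vega = (bs_call_zero_rate(S, K, sigma + h, t) - bs_call_zero_rate(S, K, sigma - h, t)) / (2 * h)
dC_dt = (bs_call_zero_rate(S, K, sigma, t + h) - bs_call_zero_rate(S, K, sigma, t - h)) / (2 * h)

# vega * sigma / (2t) matches the sensitivity to time to expiry
# (i.e. the negative of calendar-time theta)
gap = vega * sigma / (2 * t) - dC_dt
```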
The intuition behind tying σ and t together is that an option price depends on the probability distribution of the asset out to time t, which in turn depends on a) the value of t and b) how "innately" volatile the asset is, represented by σ. A high-volatility asset will have a wider distribution than a low-volatility asset over the same time frame, but the low-volatility asset will have a wider distribution at some point if you examine it over a sufficiently longer time frame than the high-volatility asset. Combining the two parameters as V = σ * sqrt(t) is to say that you've defined your σ as a per-root-time measure of volatility, or, more simply, you've defined σ² as a per-time measure of volatility. For those who have taken some stochastic math, you'll know that this is indeed true of standard Brownian motion: variance at time t is σ²t. Why might you be interested in this (which otherwise seems like a small mathematical exercise to kick at financial interview candidates)? Of course, the fewer assumptions your models need, the better, and we can more broadly and confidently apply any aspects of our modeling framework that depend on only a subset of the full assumptions. It's not simply that we need to worry that much less about matching assumptions and reality, but also that these aspects of the model will be robust to changes in a real-world environment. In times of financial crisis, certain assumptions that were a very strong fit to reality for a long time may suddenly fall apart. Rather than either relying on violable assumptions or throwing out a model that does actually work most of the time, we can assess what aspects of our models rely on exactly what assumptions and be aware of what will and will not hold up in a changing environment.
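That per-time variance scaling is easy to see in simulation — accumulating Gaussian increments with per-root-time volatility σ gives a terminal variance close to σ²t (a sketch, not from the post):

```python
import math
import random
import statistics

random.seed(0)
sigma, t, n_steps, n_paths = 0.4, 2.0, 50, 20_000
dt = t / n_steps

finals = []
for _ in range(n_paths):
    x = 0.0
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma * math.sqrt(dt))  # increment ~ N(0, sigma^2 * dt)
    finals.append(x)

sample_var = statistics.pvariance(finals)  # should be near sigma**2 * t = 0.32
```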
Written by Andy Fri 24 Aug 2012 at 1:27 am Posted in Uncategorized Tagged with black-scholes, finance, modeling, normal distribution, statistics ## The Heritage Health Prize I started work on a second data competition, the Heritage Health Prize, which is well-known in the community as it has a very large purse, \$3 million to the winning team. The objective of this competition is to predict hospitalizations for patients, given health insurance claims data for those patients in previous years. It is a tremendous application of data analysis, as I think healthcare is extremely fertile ground for increasing efficiency by being smarter about care and prescription and procedure. I may be off-and-on with this one, working on it for a while and then letting it sit for a while; as before, my objective is to learn as much as I can, not realistically to win, and if I feel like I’m spinning my wheels I’ll drop it for a while. What I particularly like about this competition is the “Milestone Prizes” that the organizers also award. The competition will last for two years, and every 6 months the top 2 entrants win a much smaller but not insubstantial prize, in the five-digit dollar range. In order to claim the Milestones, the winning teams must submit a write-up of their methodology, to the organizers’ satisfaction. Here are links to the Milestone 1 and Milestone 2 papers. (You can only read those PDFs if you are in the competition, unfortunately, and I don’t intend to re-share them if the organizers don’t want them to be shared.) Two Milestones have passed, with the third coming up in a few weeks. 
The papers have been tremendously helpful in getting started; my initial approach has been a highly simplified version of their procedures, and it’s good enough to get to 211th place out of 1268 (though only 818 entries right now clear a naïve-ish benchmark where every entry is predicted at an optimized constant value, and I say “naïve-ish” because the method for deducing that optimized constant is thoughtful). Unfortunately my efforts at sophisticating my models along the lines of the papers have not yielded much improvement beyond my initial go, but hopefully I’ll figure something out. Although two Milestones have passed, it is helpful to read the first Milestone papers first, because the later ones build on/make reference to the previous ones. I was surprised by the similarity of the papers’ structure, despite being written independently: Features: from the raw data supplied by the competition, what variables became the input into your prediction algorithms? In some cases, there is no transformation; you feed the competition data right through. In other cases, the papers calculated per patient averages, minimums and maximums, etc. and fed those through. Algorithms: in general, strong entries use more than one (see “Ensembling” below). This is where some of the ornery mathematics comes into play, and to really do a good job here you need to read some academic papers. But many of the established statistical models have already been implemented in languages such as R, so if you simply want to get an entry on the leaderboard you actually don’t need to know too much about the models; download them and run them as a black box. (I’m still learning about these models and yet I’ve managed to write implementations that use them.) R in particular has strong community development of these statistical models and is what I’ve been using. The algorithms that are new to me that I’ve been trying to learn so far are called gradient boosting and random forests. 
**Feature selection:** a model is a combination of an algorithm and a subset of the available features. You might run the same algorithm on two different subsets of the features and call those two separate models. Models sharing an algorithm may benefit less from the ensembling step (see below), because they may perform similarly well or similarly poorly on a given data point, but the papers both seem to employ this strategy to generate better predictions.

**Ensembling:** it seems the established way to get a strong overall model is to harness many different prediction models and ensemble them with a top-level algorithm that weights the models accordingly. The idea is that different models may perform well on different subsets of the data (for whatever reason; the "why" may not be well understood), so if you can combine them in a manner that uses the best-suited model for each data point, you'll have a very strong predictor. I actually find the papers to be a little sparse on some details here (maybe because I'm inexperienced), but I think the procedure followed by the Milestone winners is to run what's called a ridge regression to calculate a weighting for each model, with the final prediction being a linear combination of the models.

**Miscellany:** One of the Milestone papers interestingly pointed out that the distribution of one feature changed sharply in the last year of available data. In finance we'd call this a "regime change." The authors decided to toss that feature entirely as a result. They illustrated what clearly does appear to be a change in the nature of the feature's statistical distribution, but did not provide a concrete quantitative test for it, and my own efforts to write such a screen haven't been successful so far.
The issue is that you may not worry about a change in the mean or variance or even a few higher-order moments of a feature's statistical distribution, but you should worry if the variable's family of distributions changed; if something used to be normally distributed and suddenly becomes uniformly distributed, that's a real problem.

There was little attempt to impose a real-world interpretation on the raw data. The winners generally didn't try to say something about why their models do what they do with drug prescriptions, hospitalization locations, etc. With minor exceptions, they focused on getting good data and good data mining algorithms. To some degree the selection of features induces some kind of interpretation (why did you calculate this feature? why are you picking this subset of features?), but that is not explained in much depth, and I interpret that to mean that it was not done on the basis of heavy thinking about the real-world meaning of the data.

Having already been through a rookie stumbling phase with Amazon EC2, I am pleased to say I'm using it a bit more efficiently now. I've already got a "base" snapshot of a Linux install (Ubuntu) sitting around, and I've done all my work on a separate EC2 volume. If I ever want to cease work for a while, I can just detach the drive and stop the instance, and only pay for storage. If I want a lot of computing power or I want to try more than one thing in parallel, I can duplicate the volume, create some new higher-powered instances, attach the volumes to the new instances, and go. It's pleasantly easy at this point to get started.

Written by Andy Tue 21 Aug 2012 at 9:20 pm Posted in Uncategorized Tagged with amazon ec2, data, healthcare, heritage health prize, kaggle, machine learning, prediction, statistics

## Big data autodidacticism

The aforementioned Facebook data mining contest ends today.
The contest was, given a directed graph with missing edges and a list of nodes, to predict up to 10 new edges for each node in the list to point to. This is the first time I've tried a Kaggle competition. I picked it up as a way to teach myself about machine learning and data analysis techniques. I've also done a bit of reading from Toby Segaran's Programming Collective Intelligence (I also have Drew Conway and John Myles White's Machine Learning for Hackers but haven't really gone beyond the intros yet). And I've also been trying out a machine learning course from Coursera, given by Stanford professor Andrew Ng, which is just finishing up as well.

On Kaggle I'm somewhere around the 75th to 80th percentile, although I'm afraid to say my solution is essentially the same as one posted (possibly against the rules?) in the discussion forums, so not really an original idea on my part. For an early description of my attempts, see the previous post. As it turns out, those attempts all fared worse than a PageRank-like algorithm that operated as follows, given a node for which you want to predict outgoing edges:

1. Every other node is initially scored zero.
2. Send a value of 1/(# of edges) out along each edge to each neighbor, on both outgoing and incoming edges. Both the nodes that point to this node and the nodes it points to will receive this value, and a neighbor that both points to and is pointed to by the node in question will receive 2x this value.
3. Add the value received by each neighboring node to its score.
4. Repeat steps 2 and 3 recursively twice, going out to the neighbors' neighbors and the neighbors' neighbors' neighbors; in these deeper rounds, if sending a value across an incoming edge (in the reverse direction that the edge points), do not add the value received by the neighbor to its score.

Note that this is not a probability distribution across nodes.
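Read literally, those steps can be sketched in Python over a toy adjacency structure (this is one way to interpret the description above, not my competition code, and the graph encoding is hypothetical):

```python
from collections import defaultdict

def neighbor_scores(out_edges, in_edges, start, max_depth=3):
    """Score candidate nodes for `start`: spread weight along every edge
    in both directions, but on the deeper hops only credit weight that
    arrives along an outgoing (forward) edge."""
    scores = defaultdict(float)

    def spread(node, value, depth):
        if depth > max_depth:
            return
        # Each incident edge, tagged with whether it is outgoing.
        edges = [(nb, True) for nb in out_edges[node]]
        edges += [(nb, False) for nb in in_edges[node]]
        if not edges:
            return
        share = value / len(edges)  # the 1/(# of edges) split
        for nb, outgoing in edges:
            if depth == 1 or outgoing:
                scores[nb] += share
            spread(nb, share, depth + 1)

    spread(start, 1.0, 1)
    scores.pop(start, None)  # never recommend a self-edge
    return dict(scores)
```

A neighbor connected in both directions appears twice in `edges` and so receives double weight, matching step 2; the recommendations would then be the highest-scoring nodes the start node does not already point to.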
I avoided looking at the forum-posted solution and implementation for a while, then finally, when I thought I was kind of spinning my wheels, I read it through and punted around a few random improvements, but none of them really worked. (I did re-implement the solution in my own code framework, of course.)

Prior to starting on Kaggle, I had been sort of following along and plugging away at the examples in Segaran's book, reproducing the code, running the examples myself, etc. I was learning, but I think it really helps to have some kind of project or target to go after. It's the difference between, say, learning music by listening to lots of songs and reading scores and charts and theory, and learning by actually picking up an instrument and playing. (During this time I actually picked up guitar as well – it's not a bad change of pace when you need one, and it's nice to fiddle around with one while a slow-moving program is running.) I do still plan to return to his book and continue along with more examples, hopefully with a better appreciation and a faster learning rate now that I've tried a project.

Participating in the competition was definitely educational, but as mentioned, it does lend itself to some wheel spinning. When submitting predictions, the competition does compute your overall score (using a metric publicly defined in the rules), but gives no details about what you did right and what you did wrong, as you might actually have in a real-life situation. Obviously they have to do this so that people don't just submit a solution that is overfit to the test data. But this does mean, I think, that you're going to learn at that much of a slower pace.

Kaggle did give me the chance to use Amazon EC2 for what is ostensibly its "real" purpose, which is to purchase computing power by the hour.
The algorithm described above is slow (at least my implementation of it was slow; maybe someone out there has a smarter and speedier version), and would take hours and possibly days to run on my laptop (a MacBook Air). Once my algorithms started taking this long, I took it to the cloud, spinning up a high-powered Linux instance, uploading the code, and running it there. It still would take a few hours by the end, but that's a bearable runtime. To take full advantage of the multiple cores on the high-end EC2 instances I had to rewrite the code to support multithreading, which was something I hadn't done before, and which was in my opinion generally a frustrating experience, lending itself to unpredictable crashes and more challenging debugging.

A word or two about Coursera, whose machine learning course I'm finishing up now: I liked it enough to try some more courses, but at times it felt like I was just going through the motions. To extend my music analogies, it felt like I was indeed actually playing guitar, but someone was sitting behind me holding my hands, making me strum and finger all the chords. I'm not positive how much I will retain and how much will slide out my ears within the coming weeks. The slides and the presentations are good reads, but the programming exercises aren't all that. The benefit you get from taking an in-person, structured class is that you also have close contact and cooperation with classmates; maybe you realistically can't do Courseras unless they're coupled with Meetups.

Written by Andy Tue 10 Jul 2012 at 2:17 pm Posted in Uncategorized Tagged with data, facebook, graph theory, kaggle, machine learning, statistics

## Edge prediction

Recently I've been working on a Kaggle competition sponsored by Facebook. Kaggle is a website to which firms and organizations can upload their own data mining competitions, open to the public.
They will provide some sort of input/output data set, called a training set in the lingo of machine learning, which competitors use to create their predictive algorithms. They will also provide a test set of inputs and some metric for scoring predicted output versus true output. Competitors submit their predictions, and Kaggle scores them against the true outputs and ranks the leaders.

I don't have a realistic hope of winning this competition – it's my first time trying and there are pro data scientists working on this stuff – but it has been a good way to learn about the design of machine learning algorithms. Additionally, while it's not a truly Big Data set (the uncompressed training data set is 142 MB), it's big enough that you can't go with brute-force methods; you need to be thoughtful about what you do and do not spend time computing.

The Facebook competition is an edge prediction problem. Facebook provides a data file describing some kind of social network (it isn't the Facebook graph, and obviously it's anonymous, with graph nodes represented by numbers; someone in the forums put up a decent guess that it's Instagram) that has had some of its edges deleted. The graph is directed, meaning that every connection is from one node and to another node; A can connect to B independently of B connecting to A. Facebook provides a list of nodes and asks you to make 0 to 10 ranked recommendations as to what other nodes each should follow, or in other words, what missing edges you would recommend drawing from that node to the rest of the graph.

The training set consists of about 1.86 million nodes connected by about 9.44 million edges. There are no self-connections; an edge always connects two distinct nodes. Theoretically you want to be able to assign a score to every pair of nodes (from-node, to-node) and grab the top several pairs for which edges do not already exist.
However, this requires a couple trillion score calculations, which for any computationally costly score calculation becomes infeasible, and which in any case would often produce a poor score that is subsequently discarded. So you have to cut your scope down; for each node, you might consider only nodes within a certain number of connections. (In fact my highest-ranking effort as of this writing only attempts to connect node A to node B if node B is already connected to node A; perhaps this implies that my more sophisticated attempts are super lame, but hey, it's currently 64th percentile, so you could do worse.)

There are two papers I've found informative for the same reason, namely that they provide a broad overview of edge prediction methodology. Liben-Nowell and Kleinberg's "The Link Prediction Problem for Social Networks" I found to be more readable. Cukierski, Hamner, and Yang's "Graph-based Features for Supervised Link Prediction" I found to be drier, but it specifically addresses directed graphs and it directly recounts the authors' successful entry into a similar competition for Flickr (in fact Hamner now works for Kaggle).

My first attempts (before really reading the above papers) were based on a simple tip in the Kaggle forums. Its author proposed simply suggesting every connection A -> B for which B -> A (if A is not already -> B). This alone would already get you to the 30th percentile as of this writing, though that figure will of course drop over time. My best result is still a refined version of this approach, which simply ranks these predictions in a more intelligent fashion. Subsequent attempts at something "smarter" have not yielded improvements in score.

The general approach I'm taking is to define a relevant neighborhood for each from-node in the test set and then assign a score to each potential edge (from-node, to-node).
In the brute-force case each node's relevant neighborhood would be the entire graph; in the aforementioned strategy of completing bilateral connections, the relevant neighborhood would be the parent nodes of from-node. If you're just computing one feature, you can rank the nodes by score, optionally truncate the list based on some kind of cutoff, and return the top 10 nodes as your recommendations (or fewer if there are fewer than 10). The determination of the cutoff is a problem with an unclear answer; I think you have to do some kind of analysis on the distribution of scores, but even then you're ultimately drawing a line in the sand.

Alternatively, particularly if you'd like to combine more than one feature into your analysis, you could run a logistic regression, which is what I've been doing. Briefly, a linear regression attempts to fit a linear equation to a set of input variables to predict the value of an outcome variable; this can give distorted results if the outcome variables all fall within a band, such as when you're trying to predict a 0-or-1 outcome. A logistic regression transforms the outcome variable from the range [0,1] to the full number line using the logit function, the inverse of the logistic function; predictions are then mapped back to the range [0,1] with the logistic function to get a meaningful number. In our case we can say we're trying to predict the probability that an edge has been deleted between two nodes, and score node pairs based on this predicted probability. If you are only using one feature to predict, the logistic regression will be trivial, since the logistic function is monotonic; if one pair scores higher than another, it will still score higher after being passed through the logistic function.
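As an aside, the probability-to-number-line map is conventionally called the logit, and the logistic function is its inverse; a quick Python sketch of the pair:

```python
import math

def logistic(z):
    """Map the full number line to the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    """Map (0, 1) to the full number line; the inverse of logistic()."""
    return math.log(p / (1.0 - p))

# Because logistic() is monotonic, passing scores through it never
# reorders them, which is why a one-feature logistic regression ranks
# node pairs the same way the raw feature would.
p = logistic(logit(0.85))  # round-trips back to 0.85
```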
But you can run a regression on multiple features, such as if you wanted to use both two nodes' common neighbors and their combined number of neighbors, and you can also add square and cube terms and cross terms and all the usual jazz that people do with regressions. Viewing the ranking score as a probability also gives you some intuition as to where you might set a cutoff. The highest score I've gotten so far involved plugging the nodes into a regression based on the numbers of child and parent connections on both the from-node and the to-node. There are a bunch of other methodologies in the above papers that I'd like to try; I'm currently working on a PageRank-based calculation, PageRank being the algorithm underlying how Google ranks web query relevancy.

Written by Andy Wed 20 Jun 2012 at 3:44 pm Posted in Uncategorized Tagged with data, facebook, graph theory, kaggle, machine learning, prediction, statistics

## Statistically translating phrases with unusual translations

Google Translate, according to Wikipedia and my own empirical observations, is based on the statistical machine translation paradigm. Rather than constructing its translations by learning dictionaries and rules of grammar, a statistical machine translator will analyze texts for which it has known good translations in multiple languages and will learn how to translate new phrases from them. Statistical translation is descriptivist, reflecting how people actually write and speak rather than how rules of syntax dictate they should write and speak. (To the extent that the source texts themselves reflect how people actually write and speak, of course.) Consequently, phrases whose in-practice translations differ strongly from their literal translations are translated as done in practice, not in the literal sense.
Idioms are certainly one type of phrase that matches this criterion:

- L'habit ne fait pas le moine (French) translates to The clothes do not make the man (English), although the literal translation is The robe does not make the monk. (What is quite interesting is that if you start with a lower case l'habit ne fait pas le moine, you get a non-idiomatic translation, appearances can be deceiving.)
- I'm pulling your leg (English) translates to Yo estoy tomando el pelo (Spanish), with pulling your leg translated to a phrase that in Spanish literally means pulling your hair (but has the same meaning as the English idiom).
- I guess either Google did not source the news about the Costa Concordia disaster or faced too much diversity of translation when sourcing it, because the infamous phrase Vada a bordo, cazzo (Italian) is translated as Go on board, fucking (English), which is sort of broken and clearly was not parsed as a single phrase. This phrase was shouted by an Italian Coast Guard officer at the boat's captain when the captain proved unwilling to go back and help the rescue; from what I've read it seems the right translation might be Get on board, dammit (what the press said) or Get the fuck on board (I suspect this is more unbowdlerizedly accurate; it sounds like the kind of thing a seafaring officer would have said in the stress of that situation if he were speaking English) or Get on board, you dick (apparently more literal, but I think the second phrase sounds slightly more natural).

Another kind of phrase falling into this category is titles. 千と千尋の神隠し (Japanese), a beautiful 2001 animated movie directed by Hayao Miyazaki, translates to Spirited Away (English), which was how the studios translated its title when releasing it to English-speaking countries.
The same title also translates to Voyage de Chihiro (French), which was its title in French-speaking countries (almost; it was more precisely Le voyage de Chihiro, and I wonder if there’s a non-statistical rule at play on Google’s side that made it drop the article?). I don’t speak a word of Japanese, but I found this article regarding the translation of the title, which more directly translates it into English as “Sen and Chihiro’s (experience of) being spirited away.” (In the film Chihiro is at one point renamed Sen, which has significance in her need to hold on to her identity.) Hence both of the above “official” translations differ from the literal translation, and in different ways; the English translation drops most of the title but retains the “spiriting away,” and the French translation drops Sen and converts the “spiriting away” into “the voyage.” This poses a bit of a problem when you actually want a literal translation. On my tumblr I recently referenced the fact that the Chinese title of Infernal Affairs, the Hong Kong movie upon which The Departed is based, apparently more directly translates to “the non-stop path,” a reference to Buddhist/Chinese hell. But when you feed 無間道 into Google Translate, you get The Departed in English (Infernal Affairs is actually listed as an alternate translation and not the first choice! Interesting that that happened). To underscore Google’s proper-noun interpretation of this phrase, French and Spanish also translate this to The Departed, which I guess means that most of Google’s source text in these languages reused the untranslated English title. (Translating to Portuguese, on the other hand, produces Os Infiltrados, which according to IMDB was the title under which the film was released in Brazil.) In any case you can’t get any other English translation from Google on this count. 
I do greatly prefer the data-driven, descriptivist approach of statistical translation over a rules-based approach (and the success of Google Translate is a testament to the validity of the statistics); this is a small but interesting area where it falls a little short. You're only as good as your data.

Written by Andy Thu 7 Jun 2012 at 7:02 pm Posted in Uncategorized Tagged with data, statistics, translation

## Correlated cancer treatments

I've been reading a really great book, Siddhartha Mukherjee's The Emperor of All Maladies, which is a history (or as the subtitle calls it, a "biography") of cancer. I'm far from the first person to praise it; in fact, I started reading it on the basis of a strong recommendation from Marginal Revolution. I will say that Mukherjee is particularly good at distilling the history of oncology into meaningful themes: the ancient theory of humors, aggressive amputation as treatment, rivaling schools of thought on carcinogenesis, and so on. Bad history writing becomes a list of then-this, then-this, while good history writing finds cohesive narrative threads; this is good history writing.

At one point the book brings up a point I hadn't thought about: since cancer cells divide far more rapidly than normal cells, they will also evolve far more rapidly in response to the selective pressure of medication. As bacteria evolve resistance to antibiotics, so too can cancer cells evolve resistance to chemotherapy. I'm no doctor, and so I cannot comment on how important a consideration this is in cancer treatment (perhaps it is actually quite minor?), but I do find it an interesting though unfortunate example of evolution at a sub-organism level.

The book also discusses the approach developed in the 1960s of treating cancer with an aggressive combination of chemotherapeutic drugs.
One chapter describes oncologists trying first two, then three, then four cytotoxic drugs at once, seriously endangering patients' lives in the hopes of eliminating every last trace of their cancers. (Many cancer treatments are harmful to healthy human cells in addition to cancerous ones, making treatment potentially lethal; at the same time, there had been cases of cancer returning after having been reduced to undetectable levels, encouraging doctors to pursue forceful medication even after outward signs of the disease had disappeared.) Mukherjee describes a synergistic effect of combinative treatment: "Since different drugs elicited different resistance mechanisms, and produced different toxicities in cancer cells, using drugs in concert dramatically lowered the chance of resistance and increased cell killing." (p. 141)

An important consideration in combinative treatments, then, is the correlation between the probabilities of cells evolving resistance against them. A really ideal pair of treatments would be one where a resistance mutation against one treatment necessarily produced non-resistance against the other. For example, if one treatment's chemical pathway relied on the presence of a certain protein and the other relied on its absence, the two in combination would be immune to a mutation that toggled production of that protein. Correspondingly, I would guess that it would be easier for cancers to evolve resistance against two chemotherapies with similar chemical pathways.

Written by Andy Thu 5 Apr 2012 at 11:01 pm Posted in Uncategorized Tagged with cancer, statistics

## Knowing versus understanding

I was watching a little bit of spring training baseball, and at one point the announcer mentioned that the infield fly rule was in effect. For whatever reason, the infield fly rule is sometimes held up as a baseball obscurity known only by devoted fans. It's actually extremely logical and easy to remember.
The problem is in how you remember it: if you simply remember the rule word by word, it will seem like a piece of arcana, but if you understand why it's in place it's quite simple. The infield fly rule states that if there are fewer than two outs and runners on first and second (a runner may or may not be on third as well), any easy pop fly to the infield is an automatic out. The reason it exists is to prevent cheap double plays. If such a ball is not an automatic out, then the fielders can wait under the ball and watch the runners. If either runner strays from his base, the fielders can catch the ball and throw to the vacated base for an easy two outs. But if both runners stay near their bases, the fielders can intentionally drop the ball, quickly pick it up, and get easy force outs at third and second. To remember the infield fly rule, just remember that it applies any time there might be a cheap double play off of a pop fly.

A good mathematical example of this sort of knowing versus understanding is the normal distribution. The formula for its probability distribution function φ(x) is as follows:

$\phi (x) = (\sigma \sqrt{2\pi})^{-1} e^{-(x-\mu)^2/(2\sigma^2)}$

where (μ, σ) are the mean and standard deviation of the distribution, respectively. To someone unfamiliar with statistics, this seems like a painful thing to memorize. But it's much easier if you break it into comprehensible pieces. The formula, currently with respect to x, can be re-expressed in terms of a "standardized" x, where you subtract the mean and then divide by the standard deviation. This variable will have a mean of 0 and a standard deviation of 1, since means are additive (if you add n to a random number, its mean will increase by n) and standard deviations scale multiplicatively (if you multiply a random number by n, its standard deviation will also scale up by n).
So if we denote our standardized variable as x with a bar over it (a mathematical convention), we get:

$\bar{x} = (x-\mu)/\sigma$

$\phi (x) = (\sigma \sqrt{2\pi})^{-1} e^{-\bar{x}^2/2}$

Now what about that term on the left? Well, the integral of a probability distribution function from -∞ to +∞ must have a value of 1, so that the total probability of all its possible outcomes sums to 1. The purpose of that term is simply to normalize the curve so that this condition is met. As it turns out:

$\int_{-\infty}^\infty e^{-x^2/2} dx = \sqrt{2\pi}$

This is perhaps something that you do need to memorize straight out unless you want to solve this integral every time you write down the normal distribution. But it's at least a pretty cool relationship to memorize, relating two major mathematical constants with a bit of calculus. If we take this formula and use the change of variables $\bar{x} = (x-\mu)/\sigma$, so that $dx = \sigma \, d\bar{x}$, we get:

$\int_{-\infty}^\infty e^{-\bar{x}^2/2} dx = \sigma\sqrt{2\pi}$

That's why the normalizing factor on the left is what it is: it is the reciprocal of this integral, making the total area come out to 1. Let's call it A; since we know that it exists for the single semantic purpose of normalizing the integral, we should be comfortable shoehorning it into a variable this way. We can now express the normal distribution as follows:

$\phi (x) = Ae^{-\bar{x}^2/2}$

This is the normal probability distribution function down to its bare bones. It's simply the curve e^(-x^2/2), adjusted to the desired mean and standard deviation, and then normalized by a factor so that the total area under the curve is 1. If you understand this, not only will it be a lot easier for you to remember the formula, but you'll have a much better comfort level with the function and will be able to apply it more readily elsewhere.
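As a coda, the bare-bones form is easy to sanity-check numerically (a Python sketch; the crude trapezoidal rule and the wide, arbitrary integration bounds are just for illustration):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    xbar = (x - mu) / sigma                     # the standardized x
    A = 1.0 / (sigma * math.sqrt(2 * math.pi))  # the normalizing factor
    return A * math.exp(-xbar ** 2 / 2)

def integrate(f, lo, hi, n=200_000):
    """Plain trapezoidal rule; crude but fine for a sanity check."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * h)
    return total * h

# The area under the curve comes out to 1 for any mean and standard
# deviation, because A is the reciprocal of the integral above.
area = integrate(lambda x: normal_pdf(x, mu=3.0, sigma=2.0), -25.0, 31.0)
```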
Written by Andy Tue 20 Mar 2012 at 2:37 pm Posted in Uncategorized Tagged with baseball, normal distribution, statistics

## Layman's explanation of PCA

(I started writing a post related to principal components analysis, and tried to write a brief layman's explanation of it at its start. But I wasn't able to come up with something short that was still adequate for the purposes of understanding the post. So I expanded my layman's explanation to a full post, and will write my originally intended post next.)

Principal components analysis (PCA) is a statistical method in which you re-express a set of random data points in terms of basic components that explain the most variance in the data. For the layman, I think it is easiest to understand with an example data set. Below is some basic World Bank 2009 data for the G20 countries (19 data points, since one of the G20 "countries" is the EU):

| Country | GDP per capita ($) | Life expectancy (years) | Forested land area (%) |
|----------------|--------------------|-------------------------|------------------------|
| Argentina | 7,665 | 75 | 10.7 |
| Australia | 42,131 | 82 | 19.4 |
| Brazil | 8,251 | 73 | 61.4 |
| Canada | 39,644 | 81 | 34.1 |
| China | 3,749 | 73 | 22.2 |
| France | 40,663 | 81 | 29.1 |
| Germany | 40,275 | 80 | 31.8 |
| India | 1,192 | 65 | 23.0 |
| Indonesia | 2,272 | 68 | 52.1 |
| Italy | 35,073 | 81 | 31.1 |
| Japan | 39,456 | 83 | 68.5 |
| Mexico | 7,852 | 76 | 33.3 |
| Russia | 8,615 | 69 | 49.4 |
| Saudi Arabia | 13,901 | 74 | 0.5 |
| South Africa | 5,733 | 52 | 4.7 |
| South Korea | 17,110 | 80 | 64.1 |
| Turkey | 8,554 | 73 | 14.7 |
| United Kingdom | 35,163 | 80 | 11.9 |
| United States | 45,758 | 78 | 33.2 |

Each data point (GDP per capita, life expectancy, forested land area) can be expressed in terms of a linear combination of vectors (1,0,0), (0,1,0) and (0,0,1), which I'll refer to as components.
For example, Argentina’s data can be represented as 7665 * (1,0,0) + 75 * (0,1,0) + 10.7 * (0,0,1). Using these components as our “basis” is very straightforward, since the coefficients simply correspond to the values of the data points. However, it is an algebraic fact that we could have used any three linearly independent vectors as our components (“linearly independent” vectors cannot be expressed as a sum of multiples of each other). For example, if our vectors had been (1,1,0), (1,0,1), and (0,1,1), then we could also have represented Argentina as 3864.65 * (1,1,0) + 3800.35 * (1,0,1) – 3789.65 * (0,1,1). These coefficients are not especially intuitive, but the components do work; we could re-express all of the countries’ data points in terms of this basis instead. PCA provides us with a way of finding basis vectors that explain the largest amount of variance in the data. For example, as you might expect, GDP per capita and life expectancy are correlated. Therefore a basis vector like (10000,4,0) would be useful because variation in its coefficient would explain a lot of the variation in the overall data. PCA produces a set of component vectors where the first vector is the one that explains the most variance possible, the second vector explains the most variance after accounting for the variance explained by the first vector, and so on. We often standardize the data by its standard deviation first, to avoid overweighting numerically larger data points; for example, we wouldn’t want to give undue weight to GDP per capita over life expectancy just because GDP figures are in the thousands and life expectancy figures are all below 100. (This gives us vectors whose lengths are all equal to 1.) 
Running a standardized PCA in R on the data above (using the function `prcomp()`) yields the following three component vectors:

| component | PC1 | PC2 | PC3 |
|-------------------------|-----------|-------------|------------|
| GDP per capita ($) | 0.6539131 | -0.35020818 | -0.6706355 |
| Life expectancy (years) | 0.6925541 | -0.07977085 | 0.7169418 |
| Forested land area (%) | 0.3045760 | 0.93326980 | -0.1903749 |

Variation in the coefficients of the first vector explains 60.3% of the variance of the data; when you add the second vector you can explain an additional 31.5%, and when you add the third you explain the remaining 8.2%. (Since, as we discussed, the data can be fully re-expressed with three vectors, the variance should be fully explained by the time we include the third vector.)

This analysis tells us that the most important explanatory axis is that of GDP per capita and life expectancy, although forested land area is also correlated with these two to a weaker extent. You can see this from the fact that the first principal component has positive numbers for all three but very similar numbers for GDP per capita and life expectancy. If we had to simplify our data down to one single number per country while losing the least amount of information, the coefficient of the first principal component would be it.

The second principal component tells us that the variation that remains after the first component can best be explained by variation in forested land area, with some negative weight given to GDP per capita. This is as we might expect; once variation along the GDP-life expectancy axis is accounted for, the remaining variation is mostly in forested land area. (I included it specifically to be poorly correlated with the other two.) The fact that GDP per capita has a negative value on the second component suggests that it is less correlated with forested land area than the first component alone would suggest.
This is indeed true; forested land area in our data set has a 28% correlation with life expectancy but only an 8% correlation with GDP per capita.

The third component shows that the remaining variance mostly reflects how life expectancy and GDP per capita differ beyond what is predicted by variation in the first two components. Keep in mind, though, that by this point we have already explained 91.8% of the data variance; it is less valuable to read into the meaning of the least significant principal components.

Written by Andy, Mon 6 Feb 2012 at 2:00 am.
http://mathhelpforum.com/pre-calculus/105116-solved-using-graphing-calculator-solve-equation.html
# Thread:

1. ## [SOLVED] Using a graphing calculator to solve the equation

I have a TI-89 Titanium. We have been using the calculator to solve equations by setting everything equal to 0 and putting the expression in the Y= screen. We then graph, set the value to 0, and find where the curve intersects the x-axis for the solution. On the 89 you can also use solve(), which brings you to the same answer. However, in the problem ln (2x) - -x+3 I do not see an intersection with the x-axis when graphing this, but when using solve() I get x=0.2429601. Any ideas whether the answer is actually no solution, or do I go with the 0.2429601?

2. "the problem ln (2x) - -x+3", I don't understand this . . . please clarify

3. Originally Posted by spoken428

in the problem ln (2x) - -x+3 I do not see an intersection of the x axis when graphing this, but when using solve() i get x=0.2429601

When simplified, your expression is actually $\ln(2x)+x+3$. When you plug 0.2429601 into that expression the result is 2.5212492, which is not zero.
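If the intended expression really is $\ln(2x)+x+3$, its graph does cross the x-axis, just very close to the origin, which is easy to miss in a default graphing window. A quick bisection sketch (in Python rather than on the TI-89) finds the root near x ≈ 0.02429601; since that is the reported 0.2429601 with the decimal point shifted one place, a misread display may be the source of the confusion:

```python
import math

# f(x) = ln(2x) + x + 3, the simplified form of "ln (2x) - -x+3"
# discussed in the thread.
def f(x):
    return math.log(2 * x) + x + 3

# Bisection on [0.01, 0.1]: f(0.01) < 0 and f(0.1) > 0, so a root lies between.
lo, hi = 0.01, 0.1
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
print(root)          # approximately 0.02429601
print(f(0.2429601))  # approximately 2.5212491, matching the check in the reply
```

So the graph does intersect the x-axis; the intersection simply sits almost on top of the y-axis.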