http://mathhelpforum.com/algebra/116583-combination-problem.html
# Thread:

1. ## Combination problem

Ok, so I'm doing a project and I've run into a problem. I have 5 rows of items:

- 3 items in the first row (1.1, 1.2, 1.3)
- 4 items in the second row (2.1, 2.2, 2.3, etc.)
- 5 items in the third row (3.1, 3.2, etc.)
- 4 items in the fourth row
- 5 items in the fifth row

If I were allowed to pick one item from each row, how many possible combinations of items would there be? (eta: Yeah, I know I could count, but I bet there's an easier way to do it.) More importantly, what is this kind of problem called? I'm planning to do a bit more with this and I'd love to know if there's an easy way to figure out what each of those combinations would be.

2. Originally Posted by quixotecoyote

You simply multiply the numbers of items in the rows by each other. This is an application of combinations: take $_{n}C_{r}$ for each individual row, where n is the number of items and r is the number you are choosing. Since r = 1 for every row, each factor is just the number of elements in that row. When you want the number of ways for independent choices to occur together, you multiply them.

3. Ah. Thank you. Somehow I didn't think it would be that easy.
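This kind of count is usually called the rule of product, and the combinations themselves form a Cartesian product of the rows. A minimal Python sketch, using the thread's row sizes and its "row.item" labels (the variable names are ours):

```python
from itertools import product
from math import prod

# Row sizes from the thread: 3, 4, 5, 4, 5 items.
row_sizes = [3, 4, 5, 4, 5]

# Rule of product: one independent choice per row.
print(prod(row_sizes))  # 3 * 4 * 5 * 4 * 5 = 1200

# Enumerating every combination explicitly, with "row.item" labels:
rows = [[f"{r}.{i}" for i in range(1, n + 1)]
        for r, n in enumerate(row_sizes, start=1)]
combos = list(product(*rows))
assert len(combos) == prod(row_sizes)
print(combos[0])  # ('1.1', '2.1', '3.1', '4.1', '5.1')
```

Swapping the real items in for the labels gives the full listing of combinations the poster wanted.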
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9534397125244141, "perplexity_flag": "head"}
http://cs.stackexchange.com/questions/tagged/longest-common-substring
# Tagged Questions

The longest-common-substring tag has no wiki summary.

After I learned how to build a suffix array in $O(N)$ complexity, I became interested in discovering the applications of suffix arrays. One of these is finding the longest common substring between two ...
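Since the tag page only hints at the application, here is a rough Python sketch of the suffix-array route to the longest common substring of two strings. It deliberately uses a naive sorted-suffix construction and a direct LCP scan instead of the $O(N)$ construction the question mentions, so it illustrates the idea rather than the fast algorithm; the function name is our own.

```python
def longest_common_substring(a: str, b: str) -> str:
    # Join with a sentinel assumed to occur in neither string.
    s = a + "\x00" + b
    n = len(s)
    # Naive suffix array: sort the suffixes directly. Fine as a sketch;
    # the O(N) constructions (SA-IS, DC3) build the same array faster.
    sa = sorted(range(n), key=lambda i: s[i:])
    best = ""
    for x, y in zip(sa, sa[1:]):
        # Only adjacent suffixes starting in *different* strings count.
        if (x < len(a)) == (y < len(a)):
            continue
        # Longest common prefix of the two suffixes; the sentinel
        # guarantees it never crosses the boundary between a and b.
        k = 0
        while x + k < n and y + k < n and s[x + k] == s[y + k]:
            k += 1
        if k > len(best):
            best = s[x:x + k]
    return best

print(longest_common_substring("suffixarray", "arrays"))  # "array"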
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9132282137870789, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/207790-relationships-form-y-kx-n.html
# Thread:

1. ## relationships of the form y=kx^n

I have a set of data:

| x | 1 | 2 | 3 | 5 | 10 |
|---|---|---|---|---|----|
| y | 0.02 | 0.32 | 1.62 | 12.53 | 199.80 |

And I am told that the suspected relationship between x and y is of the form y = kx^n. I need to explain how a graph of log(x) against log(y) tells me whether this is a good model for the relationship. I know that taking logs of x and y and plotting them on a graph will give me a straight line, but how does this tell me that the model (y=kx^n) is appropriate for the relationship?

P.S. What does the straight line of log x and log y show? Is it like a line of best fit? I would be grateful for any relevant explanation or a hint; I have been trying to figure this out for the past 2 hours. Thanks.

2. ## Re: relationships of the form y=kx^n

$\log{y} = \log(kx^n)$
$\log{y} = \log{k} + \log{x^n}$
$\log{y} = \log{k} + n\log{x}$

Now compare each term above to the slope-intercept form of a linear equation, $y = b + mx$: the y-intercept is $\log{k}$ and the slope is $n$, the exponent of your power function. Using your data, I get a logarithmic linear regression of about (coefficients rounded) $y = -1.7 + 4x$, which translates to $y = 0.02 \cdot x^4$.

3. ## Re: relationships of the form y=kx^n

Originally Posted by skeeter
> Using your data, I get a logarithmic linear regression of about (coefficients rounded) $y = -1.7 + 4x$, which translates to $y = 0.02 \cdot x^4$.

That is the part where I don't understand: what did you mean by $y = -1.7 + 4x$ translates to $y = 0.02x^4$?

4. ## Re: relationships of the form y=kx^n

Originally Posted by LoneWolf
> That is the part where I don't understand ...

$y = kx^n$, with $k = 10^{-1.7} \approx 0.02$ and $n = 4$.
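As an illustration of skeeter's regression step, a short Python/numpy sketch (variable names ours) fits a straight line to the logged data and recovers the slope $n$ and intercept $\log k$:

```python
import numpy as np

x = np.array([1, 2, 3, 5, 10], dtype=float)
y = np.array([0.02, 0.32, 1.62, 12.53, 199.80])

# Fit log y = log k + n log x as a straight line in log-log coordinates.
n, logk = np.polyfit(np.log10(x), np.log10(y), 1)
print(f"slope n ~ {n:.2f}, intercept log k ~ {logk:.2f}, k ~ {10**logk:.3f}")

# A correlation coefficient close to 1 for the *logged* data is what
# justifies the power-law model y = k x^n in the first place.
r = np.corrcoef(np.log10(x), np.log10(y))[0, 1]
print(f"r ~ {r:.5f}")
```

For this data the fit comes out near $n = 4$ and $\log k = -1.7$, matching the thread.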
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9189291596412659, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/inequality?page=1&sort=votes&pagesize=30
Tagged Questions

Questions on proving and manipulating inequalities.

- **3 answers, 601 views.** Prove $(\frac{2}{5})^{\frac{2}{5}}<\ln{2}$: Inadvertently, I found this interesting inequality. Does this problem have a nice solution? Prove that $$\ln{2}>\left(\frac{2}{5}\right)^{\frac{2}{5}}.$$ Thank you. ...
- **7 answers, 1k views.** Inequality: $(a^3+a+1)(b^3+b+1)(c^3+c+1) \leq 27$: Let $a,b,c \geq 0$ be such that $a^2+b^2+c^2=3$. Prove that $$(a^3+a+1)(b^3+b+1)(c^3+c+1) \leq 27.$$ I try to apply $GM \leq AM$ for $x=a^3+a+1$, $y=b^3+b+1$, $z=c^3+c+1$ and ...
- **4 answers, 771 views.** AM-GM-HM triplets: I want to understand what values can be simultaneously attained as the arithmetic (AM), geometric (GM), and harmonic (HM) means of finite sequences of positive real numbers. Precisely, for what points ...
- **3 answers, 1k views.** Inequality for expected value: A colleague popped into my office this afternoon and asked me the following question. He told me there is a clever proof when $n=2$. I couldn't do anything with it, so I thought I'd post it here and ...
- **6 answers, 835 views.** A sub-additivity inequality: In trying to understand a result of D. Rider (Trans. AMS, 1973) I've got stuck on a lemma that he uses. At one point he makes a step without comment or explanation, but I can't see why it works. Here ...
- **6 answers, 2k views.** What is the larger of the two numbers, $\sqrt{2}^{\sqrt{3}}$ or $\sqrt{3}^{\sqrt{2}}$? I solved this, and I think that it is an interesting elementary problem. I want different points ...
- **3 answers, 843 views.** A combinatorial proof of $n^n(n+2)^{n+1}>(n+1)^{2n+1}$? The statement is, of course, simply that the sequence $\left(1+\frac{1}{n}\right)^n$ is increasing. Since the numbers $n^m$ have quite natural combinatorial interpretations, it makes me wonder if a ...
- **3 answers, 835 views.** Random variable inequality: Doing a little reading over the break (The Probabilistic Method by Alon and Spencer); I can't come up with the solution for this seemingly simple (and perhaps even a little surprising?) result: (A-S ...
- **4 answers, 1k views.** Which is bigger? In my classes I sometimes have a contest concerning who can write the largest number in ten symbols. It almost never comes up, but I'm torn between two "best" answers: a stack of ten 9's (exponents) ...
- **6 answers, 1k views.** Proving $\bigl(1+\frac{1}{n+1}\bigr)^{n+1} \gt \bigl(1+\frac{1}{n}\bigr)^{n}$: How could we prove that this inequality holds for $n \in \mathbb{N}$? I think we could use the AM-GM inequality ...
- **2 answers, 378 views.** An "AGM-GAM" inequality: For positive real numbers $x_1,x_2,\ldots,x_n$ and any $1\leq r\leq n$, let $A_r$ and $G_r$ be, respectively, the arithmetic mean and geometric mean of $x_1,x_2,\ldots,x_r$. Is it true that the ...
- **2 answers, 972 views.** I'm not sure about this inequality (how to prove or disprove it?): For $a_1,\ldots,a_n,b_1,\ldots,b_n>0$, define $a:=\sum a_i$, $b:=\sum b_i$, $s:=\sum \sqrt{a_ib_i}$. Is the following inequality true: $\frac{\bigl(\prod a_i^{a_i}\bigr)^{1/a}}{a} \cdot \ldots$
- **1 answer, 396 views.** Computing the best constant in classical Hardy's inequality (cf. Hardy-Littlewood-Pólya, Inequalities, Theorem 327): If $p>1$, $f(x) \ge 0$ and $F(x)=\int_0^x f(y)\, dy$, then $\int_0^\infty \ldots$
- **2 answers, 359 views.** Trigonometric inequality $\sin{1}+\sin{2}+\ldots+\sin{n} <2$: How can I prove the trigonometric inequality $$\sin 1+\sin 2 +\ldots+\sin n <2$$ with $n \in \mathbb{N}^{*}$? The problem is that I don't know how to start this problem. I try to ...
- **26 answers, 3k views.** How can I prove that $xy\leq x^2+y^2$?
- **8 answers, 718 views.** Comparing $2013!$ and $1007^{2013}$: I have to compare the two numbers $2013!$ and $1007^{2013}$, where $n! = 1 \times 2 \times \cdots \times (n-1) \times n$. I tried in different ways to group the $1 \times 2 \ldots$
- **6 answers, 828 views.** $m!n! < (m+n)!$ proof? Prove that if $m$ and $n$ are positive integers, then $m!n! < (m+n)!$. Given hint: $m!= 1\times 2\times 3\times\cdots\times m$ and $1<m+1$, $2<m+2$, $\ldots$, $n<m+n$. It looks simple but ...
- **3 answers, 396 views.** An inequality involving Bell numbers, $B_n^2 \leq B_{n-1}B_{n+1}$ for $n \geq \ldots$: this came up while trying to resolve a conjecture about a certain class of partitions (the context is not particularly enlightening).
- **2 answers, 677 views.** Geometric proof for inequality: While on AoPS, I saw this interesting problem. I was wondering how many different approaches could be used to tackle the problem. In other words, I am looking for interesting and unique ways to solve ...
- **1 answer, 587 views.** Do inequalities that hold for infinite sums hold for integrals too? Let $\mathbb{R}_{\geq0}$ denote the set of non-negative reals together with $+\infty$, and $\mathbb{Z}^+$ the set of positive integers. I will also let $\lambda$ denote the Lebesgue measure on ...
- **6 answers, 427 views.** $\log_9 71$ or $\log_8 61$: I am trying to know which one is bigger. How can I tell without using a calculator?
- **2 answers, 419 views.** Inequality on the side lengths of a triangle: $\left|\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - \frac{a}{c} - \frac{b}{a} - \frac{c}{b}\right| < 1$. This problem is taken from the Kosovo Mathematical Olympiad for Grade-10 students. Let $a$, $b$ and $c$ be the lengths of the edges of a given triangle. How can one prove the following ...
- **6 answers, 831 views.** Simple proof that $8\left(\frac{9}{10}\right)^8 > 1$: This question is motivated by a step in a proof: $8^{n+1}-1\gt 8(8^n-1)\gt 8n^8 =(n+1)^8\left(8\left(\frac{n}{n+1}\right)^8\right) \geq \ldots$
- **4 answers, 255 views.** Prove $\sqrt{a} + \sqrt{b} + \sqrt{c} \ge ab + bc + ca$: Let $a,b,c$ be non-negative numbers such that $a+b+c = 3$. Here's my idea: ...
- **1 answer, 198 views.** $x^3-3x-3=0$, prove that $10^x<127$: $x$ is the real root of the equation $3x^3-5x+8=0$; prove that $e^x>\frac{40}{237}$. I found this inequality in a very accidental way. I think it's very difficult, because the actual value ...
- **4 answers, 605 views.** Purely "algebraic" proof of Young's inequality: Young's inequality states that if $a, b \geq 0$, $p, q > 0$, and $\frac{1}{p} + \frac{1}{q} = 1$, then $$ab\leq \frac{a^p}{p} + \frac{b^q}{q}$$ (with equality only when $a^p = b^q$). Back when I ...
- **2 answers, 409 views.** How to show that $\frac{\pi}{5}\leq\int_0^1 x^x\,dx\leq\frac{\pi}{4}$: All I've got so far is that the minimum of $x^x$ is $e^{-1/e}$. At this point I could compare $\pi/5$ to $e^{-1/e}$ but I'm ...
- **4 answers, 219 views.** CSB inequality: is $\|x\|^2\|y\|^2 - \langle x,y \rangle^2$ a square in any obvious way? Suppose $x=(x_1,x_2)$, $y = (y_1,y_2) \in \mathbb{R}^2$. I noticed that $\|x\|^2 \|y\|^2 - \langle x,y \rangle^2 = x_1^2y_1^2 + x_1^2 y_2^2 + x_2^2 y_1^2 + x_2^2 y_2^2 - (x_1^2 y_1^2 \ldots$
- **1 answer, 179 views.** Prove $\frac{1}{2\sqrt{2}+1}+\frac{1}{3\sqrt{3}+2\sqrt{2}}+\cdots+\frac{1}{100\sqrt{100}+99\sqrt{99}}<\frac{9}{10}$: What would you suggest for this inequality? Thanks in advance! Sis. EDIT: ...
- **2 answers, 600 views.** Sum inequality $\sum_{k=1}^n \frac{\sin k}{k} \le \pi-1$: I'm interested in finding an elementary proof. If this inequality is easy to prove, then one may easily prove that the sum ...
- **4 answers, 847 views.** An inequality relating $\prod^n(1+a_i^2)$ and $\sum^n a_i$: Let $(a_1, a_2, \ldots, a_n) \in \mathbb{R}^n_+$ be such that $\prod^n_{i=1}a_i = 1$. Prove that $\prod^n_{i=1} (1+a_i^2) \le \ldots$
- **1 answer, 293 views.** How to prove this inequality in Euclidean space $\mathbb{R}^n$: $$|a+b||a+c|+|a+b||b+c|+|a+c||b+c| \leq (|a|+|b|+|c|) \cdot |a+b+c|+|a||b|+|a||c|+|b||c|.$$ I have been ...
- **1 answer, 155 views.** How to prove $\frac{\pi^2}{6}\le \int_0^{\infty} \sin(x^{\log x})\,\mathrm{d}x$? There are some obstacles I face: the indefinite integral cannot be expressed in terms of ...
- **1 answer, 243 views.** Trace inequality for real matrices: Is there any general result characterizing real matrices $A$ such that $[\mathrm{tr}(A)]^2\leq n\,\mathrm{tr}(A^2)$? I can see that the inequality holds if all eigenvalues of $A$ are real (by the ...
- **3 answers, 345 views.** Inequality for cosines: Is the following inequality in a triangle known? $$4(\cos A + \cos B + \cos C) \le 3 + \cos\left(\frac{B-C}{2}\right) + \cos\left(\frac{C-A}{2}\right) + \cos\left(\frac{A-B}{2}\right)$$ It looks ...
- **3 answers, 778 views.** Stuck trying to prove an inequality: I have been trying to prove (the left half of) $$\sum_i \sum_j |x_i| \le \sum_i \sum_j |x_i + x_j| \le 2 \sum_i \sum_j |x_i|$$ (all $x_i$ are ...)
- **4 answers, 438 views.** Showing that $|\cos x|+|\cos 2x|+\cdots+|\cos 2^nx|\geq \frac{n}{2\sqrt{2}}$: For every nonnegative integer $n$ and every real number $x$, prove the inequality $$\sum_{k=0}^n|\cos(2^kx)| \geq \frac{n}{2\sqrt{2}}.$$
- **1 answer, 281 views.** On the equality case of the Hölder and Minkowski inequalities: I'm following the book Measure and Integral of Richard L. Wheeden and Antoni Zygmund; this is Problem 4 of Chapter 8. Consider $E\subseteq \mathbb{R}^n$ a measurable set. In the following all the ...
- **3 answers, 411 views.** Proving that $\sum_{i=0}^{n}\binom{n}{i}i^{n-i}(n-i)^{i}\le\frac{1}{2}n^n$, where $\binom{n}{i}=\frac{n!}{i!(n-i)!}$. This inequality is very interesting. I ...
- **1 answer, 366 views.** Proving a complicated inequality involving integers: Let $a,b,c,d$ be integers such that $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \equiv \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \pmod 2$ and $ad-bc =1$ ...
- **6 answers, 808 views.** Proving the inequality $e^{-2x}\leq 1-x$: How do I prove the inequality $e^{-2x}\leq1-x$ for $0\leq x\leq1/2$?
- **5 answers, 2k views.** Inequality: $(x + y + z)^3 \geq 27 xyz$ for positive reals $x,y,z$: Since $(a-b)^2 \geq 0$, $a^2 + b^2 \geq 2ab$; similarly, $a^2 + c^2 \geq 2ac$ and $b^2 + c^2 \geq 2bc$. ...
- **3 answers, 592 views.** Motivation for the triangle inequality: The triangle inequality is used in one context or another throughout analysis, for example $$\|x+y\| \leq \|x\| + \|y\|,\qquad d(x,y) \leq d(x,z) + d(z,y),\qquad \mu(A \cup B) \leq \mu(A) + \mu(B).$$ What ...
- **2 answers, 417 views.** Complex-number inequality $|z_1 z_2 \ldots z_m - 1| \leq e^{|z_1 - 1| + \ldots + |z_m - 1|} - 1$: Let $z_1, z_2, \ldots, z_m$ be complex numbers, $m \in \mathbb{N}$. Can anybody tell me how to prove this inequality? ...
- **2 answers, 527 views.** The least value of $\frac{a}{b^3+54}+\frac{b}{c^3+54}+\frac{c}{a^3+54}$: For all non-negative real numbers $a,b,c$ such that $a+b+c=1$, how does one find the least value of this expression?
- **1 answer, 245 views.** Chebyshev: proof of $\prod_{p \leq 2k} p > 2^k$ with $p \in \mathbb{P}$: I tried induction, but I didn't know how to go on because I don't have a handle on all the numbers. ...
- **2 answers, 464 views.** Inequality $\sqrt{\frac{11a}{5a+6b}}+\sqrt{\frac{11b}{5b+6c}}+\sqrt{\frac{11c}{5c+6a}} \leq 3$: Let $a,b,c$ be positive numbers. What I tried: I used ...
- **2 answers, 118 views.** Proving the inequality $\tan(1)\le\sum_{k=1}^{\infty} \frac{\sin(1/k^2)}{\cos^2(1/(k+1))}$: How am I supposed to prove this? Jordan's inequality might be an option but led me ...
- **1 answer, 489 views.** Combinatorial proof of the arithmetic-geometric mean inequality: It is a well known fact that for positive reals $x_1, x_2, \dots, x_n$, their arithmetic mean is no less than their geometric mean: $$\frac{x_1 + x_2 + \dots + x_n}{n} \ge \sqrt[n]{x_1 x_2 \dots x_n}.$$
- **1 answer, 234 views.** Proving $\pi\left(\frac1A+\frac1B+\frac1C\right)\ge\left(\sin\frac A2+\sin\frac B2+\sin\frac C2\right)\left(\frac 1{\sin\frac A2}+\frac 1{\sin\frac B2}+\frac 1{\sin\frac C2}\right)$ for a triangle $\Delta ABC$ ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 117, "mathjax_display_tex": 31, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9187210202217102, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/8841/how-can-i-plot-the-direction-field-for-a-differential-equation/8848
How can I plot the direction field for a differential equation?

I'd like to plot the graph of the direction field for a differential equation, to get a feel for it. I'm a novice, right now, when it comes to plotting in Mathematica, so I'm hoping that someone can provide a fairly easy to understand and thorough explanation. My hope is that I will become fairly proficient at understanding plotting in Mathematica, as well as differential equations. I'm a little more familiar with differential equations, but very far from what I'd consider to be an expert.

I do have an equation in mind, taken from this question from Math.SE: $$y'=\dfrac{y+e^x}{x+e^y}$$ I ran `DSolve` on it, and after a minute it was unable to evaluate the function. So perhaps this could make for an interesting exploration for others as well. I'm wondering what experience in Mathematica has taught others about what can be done in Mathematica; I'm hoping someone can offer some useful tips and demonstrations. I'm really interested in learning about what can be done with differential equations, so other equations will suffice if they serve as a better example.

4 Answers

For a first sketch of the direction field you might use `StreamPlot`:

````
f[x_, y_] = (y + E^x)/(x + E^y)
StreamPlot[{1, f[x, y]}, {x, 1, 6}, {y, -20, 5}, Frame -> False, Axes -> True, AspectRatio -> 1/GoldenRatio]
````

If you wish to explore the solutions to an equation, I'd suggest the `EquationTrekker` package. Have a look at the documentation.

````
Needs["EquationTrekker`"]
EquationTrekker[y'[x] == (y[x] + Exp[x])/(x + Exp[y[x]]), y, {x, -5, 5}]
````

Here is something you can do quickly that gives a nice understanding of the behavior for positive and negative initial conditions:

````
s[r_?NumericQ] := NDSolve[{D[y[x], x] == (y[x] + E^x)/(x + E^y[x]), y[0] == r}, y, {x, 0, 5}]
Plot[Evaluate[ y[x] /. s[#] & /@ Union[Range[-2, 0, .1], Range[-.10, 10, 1]]], {x, 0, 15}, PlotRange -> Full]
````

There is no need to solve the differential equation to generate a direction field. According to the Wikipedia article on slope fields, you can plot the vector `{1, (y + Exp[x])/(x + Exp[y])}`:

````
VectorPlot[{1, (y + Exp[x])/(x + Exp[y])}, {x, 0, 2}, {y, 0, 2}]
````

or perhaps you could use a stream plot:

````
StreamPlot[{1, (y + Exp[x])/(x + Exp[y])}, {x, 0, 2}, {y, 0, 2}]
````
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9540092349052429, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?s=24da7ff5f02d21fb2311c006d026b091&p=3941228
Physics Forums

Probability density versus radial distribution function

Okay, this is a really basic question. I'm just learning the basics of QM now. I can't wrap my head around the idea that the radial distribution function goes to zero as r → 0 but that the probability density is at a maximum as r → 0. How can this be? Thanks!

What are you referring to exactly? The hydrogen ground state wave function? As you say, the probability density is a maximum at r = 0. Perhaps the point you've missed is that the volume element $r^2 \sin\theta \, dr \, d\theta \, d\phi$ goes to zero at r = 0?

Ah, the 1s orbital. It's a common source of confusion, that. Well, the radial wavefunction has a maximum at r = 0. This is the point, i.e., the infinitesimal volume element, that has the highest probability. It's the volume of space the electron is most likely to be in. The radial distribution function, on the other hand, is the sum of probabilities over a given r. It's not the probability of just one spot, but of every spot on the surface of an infinitesimally thin sphere of that radius. Since a sphere with zero radius has zero area, the radial distribution is 0 at r = 0 no matter what the wave function's value is there. An analogy I once thought up: imagine you paint a bunch of different-sized spheres, where the thickness of the paint is $e^{-r}$. The sphere with the thickest coat of paint is not the same thing as the sphere with the most paint on it!

Thank you. Yes, I'm referring to the 1s H-atom. Mathematically it makes perfect sense: if r = 0 then the radial distribution function is zero. So let's say we take a concentric shell with the inner surface being the surface of the nucleus, and the outer surface at a distance dr from the surface of the nucleus. Saying that the radial distribution function is zero means that electrons will never be found within that volume. On the other hand, the probability density ($\psi^2$) tells us that there is some finite probability that an electron will be found in that very same region of space. Doesn't it?

It's only right at the origin that the radial distribution function goes to zero. Since the 1s wave function is spherically symmetric, the probability of finding the electron between r and r + dr is given by the expression $4\pi r^2 \psi(r) \psi^*(r)\, dr$. This will be nonzero (but very small) when r = r(nucleus), as you asked. The reason it will be very small is that the volume of space at r = r(nucleus) is very small. Try plotting the functions $4\pi r^2 \psi(r) \psi^*(r)$ and $\psi(r) \psi^*(r)$, and I think you will see why it is this way.

So r = 0 is defined as the center of the nucleus?

Yes. Where else would it be?

Okay, but then how can there be any electron density inside the nucleus?

Why can't the electron be inside the nucleus? Elementary particles appear to be pointlike as far down as we can measure, so even inside the nucleus it appears to be mainly empty space between the quarks and gluons that make it up. We know the electron definitely penetrates inside the nucleus because there are types of radioactive decay (called electron capture) where the nucleus captures one of the atomic electrons and transmutes into a nucleus with atomic number reduced by one.

Quote by rlduncan
> Electron capture occurs in large nuclei; however, the proton of a hydrogen atom cannot capture an electron yielding a free neutron.

No, of course not. I didn't mean to say that it could. I was just saying that this is evidence that the electron wave function does penetrate the nucleus. Do you think the electron wave function does not penetrate the nucleus of a hydrogen atom?

Quote by Scott Gray
> If the radial wave function represents the probability that an electron will be contained within an infinitesimal volume at some specified radius, how can it be that some plots of the radial wave functions have negative values? Would this not imply a negative probability at these distances? For example, what does the radial wave function tell us at the distances where minima occur in the 2s, 3s and 3p plots? Thanks.

It's not the wave function itself that represents the probability of finding the electron at a given location. It is the square of the wave function, $\psi\psi^*$. This quantity is always non-negative, although it can be zero.
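To see the distinction, one can follow the suggestion above and plot both curves for the 1s state. A minimal Python/matplotlib sketch in atomic units, assuming the standard normalization $\psi_{1s}(r) = e^{-r}/\sqrt{\pi}$:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hydrogen 1s in atomic units (Bohr radius a0 = 1):
# psi(r) = exp(-r)/sqrt(pi), so |psi|^2 = exp(-2r)/pi.
r = np.linspace(0, 5, 500)
density = np.exp(-2 * r) / np.pi        # probability density |psi|^2
radial = 4 * np.pi * r**2 * density     # radial distribution 4*pi*r^2*|psi|^2

plt.plot(r, density, label=r"$|\psi(r)|^2$ (max at $r=0$)")
plt.plot(r, radial, label=r"$4\pi r^2|\psi(r)|^2$ (zero at $r=0$)")
plt.axvline(1.0, ls="--", c="gray")     # radial distribution peaks at r = a0
plt.xlabel(r"$r$ (Bohr radii)")
plt.legend()
plt.show()
```

The density is largest at $r = 0$, while the radial distribution vanishes there and peaks at one Bohr radius, which is exactly the paint analogy in graphical form.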
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.917493462562561, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/13237/examples-where-an-ill-behaved-function-leads-to-surprising-results/13241
Examples where an ill-behaved function leads to surprising results?

In mathematical derivations of physical identities, it is often more or less implicitly assumed that functions are well behaved. One example is the Maxwell relations in thermodynamics, which assume that the order of partial derivatives of the thermodynamic potentials is irrelevant, so one can write $\partial_x \partial_y F = \partial_y \partial_x F$. Also, it is often assumed that all interesting functions can be expanded in a Taylor series, which is important when one wants to define a function of an operator, for example $$e^{\hat A} = \sum_{n=0}^\infty \frac{\hat A^n}{n!}.$$ Are there some prominent examples where such assumptions of mathematically good behavior lead to wrong and surprising results? Such as an operator $f(\hat A)$ where $f$ cannot be expanded in a power series?

- I'm wikifying this since it's a list question without a single correct answer. – David Zaslavsky♦ Aug 6 '11 at 16:08

5 Answers

I think the most transparent example is a phase transition: by definition, it is when some thermodynamic quantity does not behave well. AFAIK, when Fourier showed that a non-continuous function may be presented as an infinite sum of continuous ones, he had a hard time convincing people around him that he was not crazy. That story might partially answer your question: as long as any not-so-well-behaved function may be presented as a sum of smooth ones, there is not much difference, as long as well-formulated laws are linear. Functions which are really badly behaved usually do not appear in real problems. If they do, there is some significant physics behind it (as with phase transitions, shock waves, etc.) and one cannot miss it. For an operator, it is better (for a physicist) to think of a function of an operator as a function acting on its eigenvalues (if it is not diagonalizable, in physics that is bad behaviour). This is equivalent to the power-series definition, but works for any function.

I have had a surprising result due to the wave function having different left and right derivatives at a point (see Chapter 2.1 and Appendix 3). Generally this article contains more surprising results just due to implicit assumptions being wrong.

- Well, I know that when one solves the 1D Schrödinger equation for a potential $-\gamma \delta(r-a)$, the left- and right-derivatives of the wavefunction $\Psi(r)$ differ by $\gamma \Psi(a)$ at that point. Is that what you're referring to? – Lagerbaer Aug 7 '11 at 15:10
- @Lagerbaer Yes, to some extent. My perturbation is like $\delta(z-z_1) \frac{d}{dz}$. – Vladimir Kalitvianski Aug 8 '11 at 13:11

Well, I don't know if you want to count this, but QFT is full of functions that have poles, which I'd call not well behaved, and they do have lots of physical effects. If you're talking about observables only, you can approximate any discontinuous function to arbitrary precision with a continuous function, and you can push the difference below measurement precision. The reason one sometimes uses 'ill-behaved' functions (delta, Heaviside, etc.) is that they're easier to deal with.

- But the poles in QFT are probably something people are fully aware of? I am thinking more about identities where in the proof an assumption about well-behaved functions is made that then gets overlooked when one just plugs some function into it – Lagerbaer Aug 8 '11 at 15:34

This principle fails in the most startling way in second-order phase transitions.
This is a particularly clean example, because Landau predicted the critical exponents of second-order phase transitions using only the principle that the thermodynamic functions are analytic. His argument is as follows: given a magnet going through the Curie point, where it loses its magnetization smoothly, the equilibrium magnetization should be the solution of some thermodynamic equation, where the derivative of some thermodynamic potential is set to zero:

$F(T,m)=0$

At temperatures lower than $T_c$ the magnetization is nonzero, at temperatures higher than $T_c$ the magnetization is 0, and it goes to 0 in a continuous way. How does it go to zero? Note that the magnetizations $m$ and $-m$ are related by rotational symmetry. Shifting $T_c$ to 0 by translating, $f(t,m)= F(T_c - t,m)$, you get a new thermodynamic function with the property that $f$ has only the trivial solution $m=0$ for negative $t$, and has two small nontrivial solutions in $m$ for positive $t$. Because $m=0$ is a solution at $t=0$, the function $f$ has no constant term in a Taylor expansion. By the symmetry $m\rightarrow -m$, only even powers of $m$ contribute to its Taylor series:

$f(t,m) = At + Bm^2 + Ct^2 + Dt^3 + E t m^2 + \ldots$

Assuming that $f(t,m)$ is generic, $A$ and $B$ are not exactly zero. So for small enough $t$, for temperatures close enough to the critical point, you get

$m \propto \sqrt{t}$

Further, this scaling only fails if one of the coefficients is zero. If $A=0$,

$m \propto |t|$

but then $m$ is nonzero on both sides of the transition. If $B=0$, you get

$m \propto t^{1/4}$

and if $A$, $B$, $C$ are all zero, you get

$m \propto |t|^{3/4}$

And each of these cases requires fine-tuning of parameters. So Landau predicted that the critical behavior of the magnetization will be as the square root of the temperature at the critical point, and that this behavior will be universal: it won't depend on the system, just on the existence of the phase transition. The Ising model should have the same critical exponent as the physical magnet, a square-root dependence of the magnetization on the temperature, and the liquid-gas transition should also have a bend in the curve of density vs. temperature at the critical pressure which goes as the square root. The exponent turned out to be universal: it was equal for the gas, the liquid, and the Ising model. But it wasn't 1/2; it was more like .308 in three dimensions and .125 in two dimensions. It only turned into Landau's 1/2 in 4 dimensions or higher. This means that Landau's argument fails, and that the thermodynamic function is conspiring to be non-analytic at exactly the place where Landau was expanding. Understanding why it is non-analytic exactly at the phase transition led to modern renormalization theory.

In mathematics, René Thom proposed that a version of Landau's argument is a complete theory of the types of allowed phase transitions in nature. He called the phase transitions "catastrophes", because they showed a sudden change in behavior, and he predicted, based on catastrophe theory, all sorts of scaling laws for natural transitions. This was the most ambitious attempt to exploit the observation that naturally occurring functions are nice. This fails for the same reason as Landau's argument: functions describing changes in the critical behavior of interesting systems at a transition point are rarely analytic at this point.
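As a quick symbolic check of the generic case in Landau's argument, here is a small sympy sketch; truncating the expansion to $At - Bm^2$ with $A, B > 0$ (so the nontrivial root lies on the $t > 0$ side) is our simplifying assumption:

```python
import sympy as sp

# Positive symbols, so sympy keeps only the physical branch.
t, m, A, B = sp.symbols("t m A B", positive=True)

# Generic leading terms of f(t, m): no constant term, only even
# powers of m, and A, B both nonzero.
root = sp.solve(sp.Eq(A * t - B * m**2, 0), m)
print(root)  # the single positive root is proportional to sqrt(t): Landau's exponent 1/2
```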
A nice example arises for the "rigorous coupled wave analysis" (RCWA) method (also called the Fourier modal method), which is used as a Maxwell solver for diffraction gratings. The normal component of the electric field is discontinuous in the normal direction at a material interface. This leads to convergence problems of the RCWA method for TM polarization, because the discontinuous electric field component is expanded into a Fourier series and multiplied by another discontinuous function representing the grating geometry. Many modifications of the RCWA method to overcome this convergence problem were proposed, but the "correct" modification was only discovered in 1996 (by P. Lalanne and M. Morris?). Even though Lifeng Li didn't discover that "correct" modification himself, he wrote the famous paper "Use of Fourier series in the analysis of discontinuous periodic structures" (also in 1996), which analyzed mathematically what goes wrong (multiplication of "approximations of" discontinuous functions is dangerous) and why the latest proposed modification to the RCWA method finally solved the convergence problem. Today, Fourier modal methods are among the most efficient and accurate for many types of grating problems.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9401377439498901, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/21756/irreducible-representation-in-physics
# Irreducible representation in physics

Group theory books written for physicists say that any reducible representation can be decomposed in terms of irreducible representations (so correct me if I am wrong: to me, irreducible representations are like the unit vectors i, j, k in terms of which any 3D vector can be expanded, or like the sines and cosines in terms of which any periodic function can be Fourier expanded). At the same time, they say that any bigger representation of a group can be built out of irreducible ones. What is unclear to me is the physical motivation for each direction. Of course those books contain physical applications, but the big picture is never obvious (they get to applications after 200-300 pages of abstract details). If someone could answer the following questions, I would really appreciate it:

1. What is the physical motivation to write a representation in terms of irreducible ones?
2. What is the physical motivation to build bigger representations using irreducible ones?

## 2 Answers

I guess you mean unitary representations, as for nonunitary reps your introductory statement is wrong. Arbitrary representations exist in scores, while irreducible representations are fairly few. For many groups one knows and understands the irreducible representations in detail. If one knows the decomposition of a representation under study into irreps, one can usually answer questions about this rep by using the knowledge about the irreps. For example, if a Hamiltonian is invariant under a symmetry group, spectral calculations simplify by looking at the (often easy to determine) spectrum of the irreducible parts. If a symmetry group acts on a space (a very typical situation), one knows that it is some rep of the group, but to know which one, it must be built from the irreducible ones. Building reps from smaller reps can be done in a number of ways. For example, by taking tensor products of irreps one gets reducible reps, and their irreducible parts may be new irreps. In the simplest case, U(2), all irreps can be obtained from the defining 2D rep by taking tensor products and then splitting these into irreducible reps. Thus building up and splitting are complementary, and benefit each other. In the applications, one needs to understand arbitrary reps. To understand them, one breaks them down into irreducible ones and studies these first (like the factorization of integers reduces integers to primes). After having answered, in the irreducible case, the questions that usually arise, one can go back to the general situation and see how much information one can lift from the irreps to the general case (usually everything of importance). Thus the natural emphasis is on studying the properties of irreps first, and then looking at what this implies for the remaining reps.

- The distinguishing feature of true, deep knowledge is the ability to cast itself in simple terms, without losing the insight of its real complex facets. I wish there were more people with your command of math and language on this site, and more importantly, with your desire to share that knowledge with us mere mortals. Fantastic, crisp answer. +1, and welcome to the site! – lurscher Mar 3 '12 at 3:05

The physical motivation is pretty simple. In quantum mechanics this means that if the Hamiltonian is invariant under all $g\in G$, you may use the fact that the solutions of this Hamiltonian form a (reducible) representation of this group (as does any other complete system of states). Here comes representation theory.
It is pretty easy to show that the energies of the states corresponding to the same irreducible representation are equal. Furthermore, there is the Wigner-Eckart theorem, which allows one to reduce the number of values that describe the system by using its symmetry properties. Thus, knowing the irreducible representations of $G$, one may say something about level degeneracies, selection rules, and so on. As a result, it turns out to be constructive to classify the states of the system in accordance with the irreducible representations of the system's symmetry group. This is exactly how energy levels in atoms are classified. The idea may be easily generalized from $SO(3)$ to any other symmetry group.

For the second question, I do not really get it. Usually one is given the states which form a basis of the reducible representation, e.g., by the number of particles in the system, so it is not very natural to build up reducible representations on purpose.
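As a concrete illustration of the first answer's point about decomposing into irreps, here is a short Python sketch using character theory for $S_3$ (the character table is standard; the variable names are ours). It verifies that the 3-dimensional permutation representation splits into the trivial plus the standard irrep:

```python
import numpy as np

# Conjugacy classes of S3: identity, the 3 transpositions, the 2 three-cycles.
class_sizes = np.array([1, 3, 2])
order = class_sizes.sum()                 # |S3| = 6

# Rows of the character table of S3 (one per irrep):
chi_trivial  = np.array([1,  1,  1])
chi_sign     = np.array([1, -1,  1])
chi_standard = np.array([2,  0, -1])

# Character of the 3-dimensional permutation representation:
# the number of fixed points of a representative of each class.
chi_perm = np.array([3, 1, 0])

# Multiplicity of each irrep = (1/|G|) * sum over classes of
# |class| * chi_perm * chi_irrep (characters here are real).
for name, chi in [("trivial", chi_trivial),
                  ("sign", chi_sign),
                  ("standard", chi_standard)]:
    mult = (class_sizes * chi_perm * chi).sum() / order
    print(name, int(mult))
# -> trivial 1, sign 0, standard 1: perm = trivial + standard
```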
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935666024684906, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/20165/how-to-calculate-the-electric-field-at-a-point-in-space?answertab=active
# How to calculate the electric field at a point in space

Let's say I have a uniformly-charged wire bent into a semicircle around the origin. How can I find the electric field (magnitude and direction)? I'm not even sure if I should use Coulomb's or Gauss's law. I'm kind of stuck setting up a solution and starting somewhere.

- Is this point generic? Or, for example, on the same plane as the wire? – Rik Poggi Jan 29 '12 at 9:45
- Did you mean that you have to find the field at the origin? – Debangshu Jan 29 '12 at 10:12
- A wire can only stay uniformly charged if bent into a full circle. – leftaroundabout Jan 29 '12 at 13:32
- @Debangshu Currently looking for the field at the origin, but I would like to know how to do it for any arbitrary point as well. – MaxMackie Jan 29 '12 at 18:11

## 2 Answers

Let us say you have a charged unit semicircle ($\rho=1$, $-\frac{\pi}{2}\leq\varphi\leq\frac{\pi}{2}$) and are trying to calculate the potential at a point with coordinates $(x,y,z)$. You get (I neglect all kinds of constant factors):

$$V(x,y,z)=\int^{\frac{\pi}{2}}_{-\frac{\pi}{2}}\frac{d\varphi}{\sqrt{(x-\cos\varphi)^2+(y-\sin\varphi)^2+z^2}}=\int^{\frac{\pi}{2}}_{-\frac{\pi}{2}}\frac{d\varphi}{\sqrt{x^2+y^2+z^2+1-2x\cos\varphi-2y\sin\varphi}}.$$

I guess this may be an elliptic integral. To get the field, you need to take the gradient of this integral.

EDIT: To answer MaxMackie's question in the comment: a point charge at $\overrightarrow{r'}=(x',y',z')$ creates a field described by the potential $V(\overrightarrow{r'},\overrightarrow{r})=\frac{1}{|\overrightarrow{r}-\overrightarrow{r'}|}$, where $\overrightarrow{r}=(x,y,z)$ is the field point (again, constant factors are neglected). If you have a charge distribution $\rho(\overrightarrow{r'})$, the new potential is $V_\rho(\overrightarrow{r})=\int d\overrightarrow{r'}\,\rho(\overrightarrow{r'})\,V(\overrightarrow{r'},\overrightarrow{r})$. To get the electric field, take the gradient of the potential with respect to $\overrightarrow{r}$ (with a minus sign).

- What's the general rule for something like this, though? Like the formula to use in this situation that can be applied to any problem. – MaxMackie Jan 29 '12 at 18:10
- See the addition in my answer – akhmeteli Jan 29 '12 at 18:41

If you are just looking for the field at the origin, it does have a closed form. The situation is somewhat as follows: we choose a differential element which subtends an angle $d\theta$ at the origin. Correspondingly, we choose an element in the diametrically opposite quadrant, i.e., an element which subtends angle $d\theta$ at the origin but whose differential angle is $-\theta$ away from the negative $x$-axis. So the components of the forces along the $x$-axis (the cosine components) cancel each other out, and the sine components remain. That is, if each differential element produces a field $dE$ at the origin, then the pair of opposite elements at angle $\theta$ from the positive and negative $x$-axis gives a field of $2\,dE\sin\theta$ along the negative $y$-axis. If the charge density is $\rho$, then

\begin{equation} dE = \frac{1}{4\pi \epsilon_{0}} \frac{\rho R\, d\theta}{R^2}. \end{equation}

Hence the net electric field is along the negative $y$-axis and the magnitude is

\begin{equation} E= \frac{1}{4\pi \epsilon_{0}} \frac{2\rho}{R} \int_{0}^{\pi/2}\sin\theta\, d\theta =\frac{\rho}{2\pi \epsilon_{0}R}. \end{equation}

Now, finding the field at an arbitrary point may not be that easy. In fact, akhmeteli has given an answer, but it seems that the integral is elliptic in nature.
Nor could I see any obvious symmetry in the problem (spherical, azimuthal, etc.). There are standard methods to deal with difficult situations, like the method of images, solving the Poisson equation, etc. These are discussed at length in J.D. Jackson's book. I hope you can find something there regarding your problem.
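For an arbitrary field point one can simply evaluate akhmeteli's setup numerically: taking the gradient under the integral gives the Coulomb field kernel $(\vec r - \vec r\,')/|\vec r - \vec r\,'|^3$, which we integrate over the arc. A rough Python/scipy sketch with $R = \lambda = \frac{1}{4\pi\epsilon_0} = 1$ (the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

# Unit semicircle of charge, parametrized as in akhmeteli's answer:
# source points (cos phi, sin phi, 0) for -pi/2 <= phi <= pi/2.
def E_field(px, py, pz):
    def integrand(phi, comp):
        src = np.array([np.cos(phi), np.sin(phi), 0.0])
        d = np.array([px, py, pz]) - src
        return d[comp] / np.linalg.norm(d) ** 3   # Coulomb field kernel
    return np.array([quad(integrand, -np.pi / 2, np.pi / 2, args=(c,))[0]
                     for c in range(3)])

# At the origin, the closed form above gives |E| = 2 in these units
# (i.e. lambda/(2 pi eps0 R)), pointing away from the arc, along -x here:
print(E_field(0.0, 0.0, 0.0))    # ~ [-2, 0, 0]
print(E_field(0.5, 1.0, 0.25))   # an arbitrary off-axis point
```

The origin result reproduces Debangshu's closed form, and the same routine handles any point off the wire.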
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9114143252372742, "perplexity_flag": "head"}
http://theoryofcomputing.org/articles/v007a006/
Theory of Computing, Volume 7 (2011), Article 6, pp. 75-99

Testing Linear-Invariant Non-Linear Properties

Published: March 29, 2011

We consider the task of testing properties of Boolean functions that are invariant under linear transformations of the Boolean cube. Previous work in property testing, including the linearity test and the test for Reed-Muller codes, has mostly focused on such tasks for linear properties. The one exception is a test due to Green for "triangle freeness": a function $f:\mathbb{F}_{2}^{n}\to \{0,1\}$ has this property if $f(x)$, $f(y)$, $f(x+y)$ do not all equal $1$ for any pair $x,y\in \mathbb{F}_{2}^{n}$. Here we extend this test to a more systematic study of testing for linear-invariant non-linear properties. We consider properties that are described by a single forbidden pattern (and its linear transformations); i.e., a property is given by $k$ points $v_{1},\ldots,v_{k}\in\mathbb{F}_{2}^{k}$, and $f:\mathbb{F}_{2}^{n}\to \{0,1\}$ has the property if for all linear maps $L:\mathbb{F}_{2}^{k}\to\mathbb{F}_{2}^{n}$ it is the case that $f(L(v_{1})),\ldots,f(L(v_{k}))$ do not all equal $1$. We show that this property is testable if the underlying matroid specified by $v_{1},\ldots,v_{k}$ is a graphic matroid. This extends Green's result to an infinite class of new properties. Part of our main results was obtained independently by Král', Serra, and Vena [Journal of Combinatorial Theory Series A, 116 (2009), pp. 971-978]. Our techniques extend those of Green; in particular, we establish a link between the notion of "$1$-complexity linear systems" of Green and Tao and graphic matroids to derive the results.
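To make the forbidden pattern concrete, here is a small brute-force Python sketch of Green's triangle-freeness property itself (this is just an exhaustive checker over $\mathbb{F}_2^n$, not the paper's tester; degenerate pairs with $x = y$ are included):

```python
from itertools import product

def is_triangle_free(f, n):
    """Check Green's property: no pair x, y in F_2^n with
    f(x) = f(y) = f(x+y) = 1. Points are n-bit integers, so
    addition over F_2 is XOR. Brute force over 2^(2n) pairs."""
    points = range(2 ** n)
    return not any(f(x) and f(y) and f(x ^ y)
                   for x, y in product(points, points))

# Indicator of odd-parity vectors: if x and y both have odd parity,
# x ^ y has even parity, so the pattern f(x)=f(y)=f(x+y)=1 never completes.
odd_parity = lambda x: bin(x).count("1") % 2
print(is_triangle_free(odd_parity, 4))   # True

# The all-ones function obviously contains triangles.
print(is_triangle_free(lambda x: 1, 4))  # False
```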
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8945890069007874, "perplexity_flag": "head"}
http://mathoverflow.net/questions/111367?sort=oldest
Maximal spectrum of a complex, unital and commutative Banach algebra

Let $A$ be a complex, unital and commutative Banach algebra.

Question: Is the maximal spectrum $Max(A)$ of $A$, endowed with the topology induced by the prime spectrum $Spec(A)$ of $A$, Hausdorff?

Background: By Gel'fand-Mazur, there is a continuous bijection from the spectrum $Sp(A)$ of $A$ (defined as the set of characters of $A$ endowed with the weak*-topology) onto $Max(A)$. The map sends a character to its kernel. I would like to know whether this bijection is a homeomorphism, and this is equivalent to asking whether $Max(A)$ is Hausdorff. If $A$ is a $C^*$-algebra, then the answer is 'yes', because then $A$ is the ring of complex continuous functions on $Sp(A)$ (by Gel'fand-Naimark) and one can use partitions of unity to get the Hausdorff property of $Max(A)$.

- Could you remind us of the definition of the topology on Spec(A)? – Yemon Choi Nov 3 at 17:35
- Is it the same as what analysts call the hull-kernel topology? If so, then the answer to your question is negative in general – Yemon Choi Nov 3 at 17:37
- @Yemon The topology on $Spec(A)$ has the sets $D(f)=\{p \mid f\notin p\}$, where $f\in A$, as an open (sub-)basis. I think it is indeed called the hull-kernel topology in functional analysis. If you have an example or a reference for the negative answer, could you post it as an answer? Thanks. – Marcus Nov 3 at 18:46

1 Answer

Well, it's the Zariski topology, so why should it be Hausdorff in general? For a specific example, take the Banach algebra $H(D)$ of functions that are holomorphic inside the unit disk and continuous on its closure. Its maximal spectrum is the closed disc, and the closed sets are locally finite in its interior, so it's certainly not Hausdorff.

- @Alexander Thanks for the hint. I found a reference with all details: T. Palmer, Banach algebras and the general theory of *-algebras, Vol. I, page 332 – Marcus Nov 4 at 20:49
- Small point for other readers: this algebra is also often denoted by A(D) – Yemon Choi Nov 4 at 21:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293122887611389, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/191692-how-many-ways-can-type-12-card-hand-chosen.html
Thread: 1. How many ways can this type of 12 card hand be chosen?

How many ways can 12 cards be chosen from a standard deck of 52 cards such that the same number of cards from each suit are in the 12?

The way I saw it was that initially you had to choose 1 of the 4 suits, i.e., $\binom{4}{1}$. Then within the chosen suit you needed to choose 3 cards, i.e., $\binom{13}{3}$, then repeat the process with the remaining 3 suits, so my solution was:

$\binom{4}{1}\binom{13}{3}\binom{3}{1}\binom{13}{3}\binom{2}{1}\binom{13}{3}\binom{1}{1}\binom{13}{3}$

The answer provided says:

$\binom{13}{3}\binom{13}{3}\binom{13}{3}\binom{13}{3}$

Why is the choice of suit not included?

2. Re: How many ways can this type of 12 card hand be chosen?

Originally Posted by terrorsquid
> Why is the choice of suit not included?

Each $\binom{13}{3}$ factor already indicates that you are picking 3 cards from one particular suit, and you are doing that 4 times, i.e., all 4 suits are being considered. Since every such hand must contain all four suits anyway, there is no suit left to choose; the extra factors $\binom{4}{1}\binom{3}{1}\binom{2}{1}\binom{1}{1} = 4! = 24$ only count the orders in which the suits could be processed, so they would overcount each hand 24 times.

3. Re: How many ways can this type of 12 card hand be chosen?

I realize this is a few days old and dwsmith covered it quite succinctly. However, just in case: you are mixing up probability with choices. There is no probability in this question at all. It's 100% certain you are going to get 12 cards in which each suit has 3 cards. Imagine the whole deck is laid out in front of you face up. You can pick any 3 hearts you want, any 3 spades you want, etc. How many different ways can you do that? dwsmith has the answer right above.
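A quick Python check of both counts with `math.comb`; the factor-of-24 comparison is ours:

```python
from math import comb

# Pick 3 of the 13 cards in each of the four suits.
hands = comb(13, 3) ** 4
print(hands)                  # 286**4 = 6690585616

# The asker's extra factors just relabel the order in which the
# suits are processed, overcounting every hand 4! = 24 times:
overcounted = (comb(4, 1) * comb(13, 3) * comb(3, 1) * comb(13, 3)
               * comb(2, 1) * comb(13, 3) * comb(1, 1) * comb(13, 3))
print(overcounted // hands)   # 24
```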
http://math.stackexchange.com/questions/282918/how-to-find-an-analytic-solution-to-lim-x-to-infty-fracx-sinx2-sq?answertab=oldest
# How to find an analytic solution to $\lim_{x \to +\infty} \frac{x+\sin(x^2)}{\sqrt{x^2+1}}$

To solve this limit: $$\lim_{x \to +\infty} \space \frac{x+\sin(x^2)}{\sqrt{x^2+1}}$$

At the beginning I didn't know how to start. Then I thought: no matter the value that $x$ takes, $\sin(x^2)$ will always be between $-1$ and $1$. So for large values of $x$, $\sin(x^2)$ is insignificant. One can rewrite: $$\lim_{x \to +\infty} \space \frac{x}{\sqrt{x^2+1}}$$

And now it's easy to find the limit: $$\lim_{x \to +\infty} \space \frac{x}{\sqrt{x^2+1}} = \lim_{x \to +\infty} \space \frac{\sqrt{x^2}}{\sqrt{x^2+1}} = \lim_{x \to +\infty} \space \sqrt{\frac{x^2}{x^2+1}} = \lim_{x \to +\infty} \space \sqrt{\frac{x^2}{x^2}}=1$$

But I know that the justification that allowed me to find the limit this way is not an analytic justification. How can I find this limit on an analytic basis? Thanks

- Your title and problem do not match, is $n$ supposed to be $x$? Regards – Amzoti Jan 20 at 18:57
- Yes! My mistake. Thanks for the edit – João Jan 20 at 18:59
- The only justification you need is to say "By the Squeeze Theorem". That is what allows you to use the bound $-1\leq\sin(x^2)\leq1$. – Clayton Jan 20 at 18:59

## 3 Answers

Write, for $x>0$, $${x+\sin(x^2)\over\sqrt{x^2+1}}= {{1\over x}\cdot(x+\sin(x^2))\over{1\over x}\sqrt{x^2+1}}= {{1+{\sin(x^2)\over x} } \over\sqrt{1+{1\over x^2}}}.$$

If $h(x)\leq f(x)\leq g(x)\ \forall x\in \Bbb R$, then $\lim_{x\to\infty}h(x)\leq \lim_{x\to\infty}f(x)\leq \lim_{x\to\infty}g(x)$. Here, $$h(x)=\frac{x-1}{\sqrt{x^2+1}},\quad f(x)=\frac{x+\sin(x^2)}{\sqrt{x^2+1}},\quad g(x)=\frac{x+1}{\sqrt{x^2+1}}.$$

You know that $\displaystyle \lim_{x \to +\infty} \space \frac{x+\sin(x^2)}{\sqrt{x^2+1}}= \lim_{x\to +\infty}\Bigl(\frac{x}{\sqrt{x^2+1}}+\frac{\sin(x^2)}{\sqrt{x^2+1}}\Bigr)$; now if both $\displaystyle \lim_{x\to +\infty}\Bigl(\frac{x}{\sqrt{x^2+1}}\Bigr )$ and $\displaystyle \lim_{x\to +\infty}\Bigl(\frac{\sin(x^2)}{\sqrt{x^2+1}}\Bigr)$ exist, then $$\lim_{x\to +\infty}\Bigl(\frac{x}{\sqrt{x^2+1}}+\frac{\sin(x^2)}{\sqrt{x^2+1}}\Bigr)=\lim_{x\to +\infty}\Bigl(\frac{x}{\sqrt{x^2+1}}\Bigr )+ \lim_{x\to +\infty}\Bigl(\frac{\sin(x^2)}{\sqrt{x^2+1}}\Bigr).$$ You've proved that the first one on the RHS exists and equals $1$. To show that the second one exists and equals $0$ it suffices to prove that $\displaystyle \lim_{x\to +\infty}\biggl | \frac{\sin(x^2)}{\sqrt{x^2+1}} \biggr |=0$, and that's a consequence of the squeeze theorem because for all $x\in \mathbb{R}$ $$0\leq \biggl | \frac{\sin(x^2)}{\sqrt{x^2+1}} \biggr |\leq \biggl | \frac{1}{\sqrt{x^2+1}} \biggr |$$ and $\displaystyle \lim_{x\to +\infty}\biggl | \frac{1}{\sqrt{x^2+1}} \biggr |=0$. Therefore $$\lim_{x\to +\infty}\Bigl(\frac{x}{\sqrt{x^2+1}}+\frac{\sin(x^2)}{\sqrt{x^2+1}}\Bigr)=1+0.$$
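A numeric look at the limit (not from the thread, just a sanity check of the squeeze bound): the ratio settles near $1$ while the $\sin(x^2)/x$ ripple dies off.

```python
import math

def f(x):
    # The ratio from the question: (x + sin(x^2)) / sqrt(x^2 + 1).
    return (x + math.sin(x**2)) / math.sqrt(x**2 + 1)

# Values approach 1 as x grows, consistent with the squeeze argument.
for x in (10.0, 1e2, 1e4, 1e6):
    print(x, f(x))
```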
http://mathoverflow.net/questions/63724?sort=votes
## Does this norm inequality hold for projections onto the range of a sum of matrices?

Although it's simply stated, this is neither a homework problem nor trivial (I think, but I'd be happy to be proven wrong :) ). Let $A,B$ be matrices and $x$ be a vector. Is it true that $$\|P_{A+B} x\| \geq \|P_A x\| - \|P_B x\|,$$ where $P_A$ is the projection onto the range space of $A$? (Or is it true if you square the norms?)

I'm having difficulty even figuring out how to attack this: every attempt I've made falters on the facts that the range space of $A + B$ is not simply related to those of $A$ and $B$ and that the projection is nonlinear. Random instances haven't yet provided counterexamples to the inequality.

- If I understand your notation, the matrices are square, or might as well be... in that case, if all three of $A,B,A+B$ are full rank, then you are just comparing versions of the norm of $x$ itself, and anything you can think of is trivially true. In the answers below, not full rank. – Will Jagy May 2 2011 at 19:59

## 2 Answers

Let $$A=\pmatrix{1&0\cr 0&0}, \quad B=\pmatrix{0&0\cr 1&0}, \quad x=(x_1,x_2).$$ Then $P_Ax=(x_1,0)$, $P_Bx=(0,x_2)$, $P_{A+B}x=((x_1+x_2)/2,(x_1+x_2)/2)$. Thus you are asking if $$|(x_1+x_2)/\sqrt{2}|\ge |x_1|-|x_2|.$$ Clearly, this is false in general.

By substituting $A$ with $A+B$ and $B$ with $-B$ (note that $P_{-B}=P_B$), your condition will be equivalent to the following: $$\|P_{A+B}x\|\leq \|P_Ax\|+\|P_Bx\|.$$ The last one is not true in general (for non-positive matrices); for example it does not hold for $A=((0,1),(0,0))$, $B=((1,1),(1,1))$ and $x=\frac{1}{\sqrt{2}}(1,-1)$.
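A numeric confirmation of the first counterexample (my addition): $M M^{+}$ is the orthogonal projector onto the range of $M$, and the vector $x=(1,-0.9)$ violates the inequality.

```python
import numpy as np

def proj(M):
    # Orthogonal projector onto the column space of M: P = M M^+.
    return M @ np.linalg.pinv(M)

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
x = np.array([1.0, -0.9])   # any x with |x1| - |x2| > |x1 + x2| / sqrt(2)

lhs = np.linalg.norm(proj(A + B) @ x)
rhs = np.linalg.norm(proj(A) @ x) - np.linalg.norm(proj(B) @ x)
print(lhs, rhs, lhs >= rhs)  # ~0.0707, 0.1, False -> inequality fails
```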
http://mathhelpforum.com/algebra/23806-proportions-print.html
# Proportions

• November 29th 2007, 05:57 PM OzzMan

Proportions

1. 3/x = x/?
2. 3/? = ?/10
3. 13/z = z/?
4. ?/y = y/10
5. 10/? = ?/13
6. 13/? = ?/3

These proportions are very strange to me, but it's the way the book has it laid out. The directions say to use theorems 7.2 and 7.3, which make no sense to me, but here they are:

7.2 In a right triangle, the length of the altitude to the hypotenuse is the geometric mean between the lengths of the two segments on the hypotenuse.

7.3 In a right triangle with an altitude to the hypotenuse, each leg is the geometric mean between the length of the hypotenuse and the length of the segment of the hypotenuse adjacent to that leg.

• November 29th 2007, 06:37 PM OzzMan

I guess my question is how do you find the ? for a problem like 3/x = x/? Once I know how to find the ? I can take it from there.

• November 29th 2007, 07:15 PM topsquark

Quote: Originally Posted by OzzMan

I would replace the ? with a variable, say y: $\frac{3}{x} = \frac{x}{y}$

Multiply both sides by xy: $3y = x^2$ <-- otherwise known as "cross-multiplication."

$y = \frac{x^2}{3}$

You can do this kind of thing with all of them. -Dan

• November 29th 2007, 09:56 PM OzzMan

Wow another easy problem. Thanks for the help. The book was dumb imo to have ? marks instead of lettered variables.

• November 29th 2007, 11:54 PM OzzMan

nvm, these problems aren't done by cross-multiplication. I checked the answer in the back of the book and they got non-variable answers for both variables. So I really have no idea now.

• November 29th 2007, 11:59 PM janvdl

What answer does the book give?

• November 30th 2007, 08:08 PM OzzMan

The book's answer for the first one is 10, x = square root of 30, which I'm assuming the 10 is for the ?. But how are these answers correct? I see no logical way of them being correct. Anyone know?

• November 30th 2007, 11:46 PM janvdl

No, I cannot see why it would be those answers specifically either.

• December 1st 2007, 02:05 PM OzzMan

Yes, the biggest question is how they got 10 for the ?. Makes no sense.

• December 1st 2007, 06:58 PM OzzMan

Sorry to post again on this post, but if anyone knows how to do this, could you help me out? I'm really lost with this. Mostly how they solve for the ? in the first one.

• December 2nd 2007, 02:18 AM janvdl

You'll get an infraction for bumping :eek:

No, honestly I cannot see how they got those values. These equations can be satisfied with just about any value. (Although it seems it has to be only positive values.)

• December 2nd 2007, 06:34 PM OzzMan

Well this is my brother's homework and it's from his geometry book. I'm just curious.

• December 3rd 2007, 06:19 AM topsquark

Quote: Originally Posted by OzzMan
If the solution for #1 does not contain an x then there is information missing. These cannot be done as stated. (As several people have now informed you.) -Dan
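A note for later readers (an inference from the answer key, not stated anywhere in the thread): the numbers 3, 10 and 13 = 3 + 10, together with theorems 7.2 and 7.3, strongly suggest the book had a figure of a right triangle whose altitude to the hypotenuse (length 13) splits it into segments of lengths 3 and 10. With that figure, problem 1 reads 3/x = x/10, giving x = √30 and ? = 10, exactly the book's answers. A quick check:

```python
import math

# Presumed figure (not shown in the thread): right triangle, hypotenuse 13,
# altitude to the hypotenuse splitting it into segments 3 and 10.
seg1, seg2 = 3, 10
hyp = seg1 + seg2                   # 13

altitude = math.sqrt(seg1 * seg2)   # theorem 7.2: sqrt(30), the book's x
leg_a = math.sqrt(seg1 * hyp)       # theorem 7.3: sqrt(39)  (problem 6)
leg_b = math.sqrt(seg2 * hyp)       # theorem 7.3: sqrt(130) (problem 5)

# Problem 1 then reads 3/x = x/10 with ? = 10; both ratios agree:
print(altitude, 3 / altitude, altitude / 10)
```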
http://mathoverflow.net/questions/102508?sort=votes
## Are there formulas for the derivatives $\zeta_{F}^{(n)}(0)$ of Dedekind zeta functions?

Let $F/\mathbb{Q}$ be a number field. I'm interested in knowing if there are formulas for the values of the derivatives $\zeta_{F}^{(n)}(0)$ of the Dedekind zeta function of $F$ at zero. Maybe if in the general case for an arbitrary number field there are no results, are there any results for particular types of number fields, like quadratic number fields or cyclotomic fields? I would also appreciate any references you can provide. Thank you for any help.

PS: I would also be interested if anything is known only for the first values, say for $n = 1, 2, 3$ or so.

- As you might know, the order of vanishing at $s=0$ is $r_1+r_2-1$ and the special value is related to the class number and regulator (see, for instance, Neukirch's "Algebraic Number Theory", VII.5.11). Also, for abelian fields, you can factor the $\zeta$ function as a product of $L$ functions, so for instance for real quadratic fields you have $\zeta_F(s)=\zeta(s)\sum_{m\geq 1}\big(\frac{d}{m}\big)m^{-s}$; see Heilbronn's paper in Cassels and Fröhlich's book "Algebraic Number Theory", end of Section 2. – Filippo Alberto Edoardo Jul 18 at 3:09

## 2 Answers

The value at $s=0$ or $s=1$ is given by the class number formula. Here are some answers for the first derivative: see http://mathoverflow.net/questions/87873/dedekind-zeta-function-behaviour-at-1 for the value at $s=1$. Since for abelian $L$ functions the functional equation is very well known, it doesn't matter whether you look at $s=0$ or $s=1$. After studying the references, I was ready to believe that not much is known, at least for the first derivative, beyond estimates.

- Thanks for the links Mrc. – Adrián Barquero Jul 18 at 16:17

$\zeta'(0)=-\frac12\log 2\pi$

- Is there a motivic regulator razzle-dazzle "reason" for this yet? – David Hansen Jul 18 at 15:30
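A numeric check of the value quoted in the second answer (my addition, via mpmath's derivative option for the zeta function):

```python
from mpmath import mp, zeta, log, pi

mp.dps = 30
print(zeta(0, derivative=1))   # -0.918938533204672741780329736406...
print(-0.5 * log(2 * pi))      # same value: zeta'(0) = -(1/2) log(2*pi)
```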
http://mathoverflow.net/questions/67483/is-there-ramsey-theorem-for-infinitary-tuples
## Is there a Ramsey theorem for infinitary tuples?

I'm wondering if there's any sort of Ramsey relation that allows for the tuples to be of arbitrary infinite size $\mu$? This $\mu$ is below some strongly compact cardinal, so I'm not worried about large cardinal hypotheses.

- There are some generalizations of the Galvin-Prikry Ramsey theorem to higher cardinals. The experts here can no doubt point to them. One source I found with a quick google search is R.J. Watro's paper, "On partitioning the infinite subsets of large cardinals", J. Symbolic Logic 49 no. 2, June, 1984. – Bill Johnson Jun 11 2011 at 4:57

## 2 Answers

Infinite exponent partition relations are inconsistent with the axiom of choice, so in ZFC this phenomenon does not exist, but nevertheless, in the context of $ZF+\neg AC$ there is a robust theory. See for example Andres Caicedo's discussion, this Kleinberg article, and the items in this Google search.

As emphasized in Joel Hamkins' answer, the generalization of Ramsey's theorem for infinite (unordered) tuples contradicts the axiom of choice [Erdős-Hajnal, 1966], and is a line of investigation that has close ties to large cardinals. The classical Erdős-Hajnal proof uses the axiom of choice - in the guise of a well-ordering of the power set of $\Bbb {N}$ - to construct a "wild" coloring $C$ of the infinite subsets $[\Bbb{N}]^\omega$ of $\Bbb{N}$ into two colors such that there is no infinite monochromatic set for $C$.

In contrast, Galvin and Prikry showed that for Borel colorings $C$ of $[\Bbb{N}]^\omega$, an infinite monochromatic subset for $C$ always exists. Silver then extended this result to analytic colorings. Note that $[\Bbb{N}]^\omega$ inherits a natural topology from $P(\Bbb{N})$, which is itself topologized via an identification with the product space $2^\Bbb{N}$. The Galvin-Prikry paper appeared in 1973, but that of Silver appeared in 1970 (this is not a typo!). This work was simplified and extended by Ellentuck in 1974. The metamathematics of Ramsey theory, including Galvin-Prikry type theorems, has been vigorously investigated in reverse mathematics.
http://physics.stackexchange.com/questions/15793/non-unitarity-of-wave-function-collapse/15796
# Non-unitarity of wave function collapse

Why does the wave function collapse correspond to a non-unitary quantum operation?

## 2 Answers

Unitary operators are operators that satisfy some conditions. Among other things, they have to be linear: http://en.wikipedia.org/wiki/Unitary_operator

The operation (or "an operation") that maps any $\psi(x)$ to $\delta(x-x_0)$, where $x_0$ is the random position resulting from a measurement, can't be associated with any linear operator. It's easy to see why. Take two functions $\psi_1(x)$ and $\psi_2(x)$ that have different supports: for example, the first one is localized in the vicinity of Boston while the other sits near New York. Linearity of the collapse operator $C$ requires $$C (\psi_1 + \psi_2) = C(\psi_1) + C(\psi_2).$$ However, the first term of the right hand side is a delta-function localized somewhere near Boston while the second term of the right hand side is a delta-function localized near New York. Their sum therefore can't be a multiple of a single delta-function, so the left hand side can't be a "collapsed wave function", proving that an operator that maps anything to a single delta-function can't be linear. There are of course other ways to prove that it can't be a unitary operator – which is a very strong condition.

Of course, the right resolution of this non-unitarity problem is that there's nothing such as the collapse of a wave function. The wave function is not a real wave: it's a set of complex amplitudes whose squared absolute values don't describe "the reality" but rather just the probabilities of "different realities". The probability distributions mean that you always get just one outcome and the values of the probability distribution just tell you what the probabilities of different outcomes are. Nothing has to "collapse" because the wave wasn't a "real observable wave" to start with. The idea that the wave function has to "collapse" and one has to look for a "mechanism" how it collapses is an artifact of a misinterpretation of the wave function.

- Hi Lubos, I don't believe in the validity of this interpretation of measurement as a merely notational artifact; intermediate measurements make probabilities behave differently. Collapse, be it an effective result of complex phenomena or something fundamental, is a real process. In any case, even if measurements are not linear and hence non-unitary by the book definition, they still conserve the norm of eigenstates – lurscher Oct 16 '11 at 20:51

- Dear @lurscher, the fact that the wave function is just a set of numbers to calculate probabilities from, and not a real observable, is an experimentally proven fact, not a matter for beliefs or disbeliefs. By the way, your second statement is also impossible. One can't define a prescription for such a collapse so that it would conserve the norm if we also require the physics to be local: one has to artificially "renormalize" the wave function after the collapse and it's clearly a non-local procedure because it depends on the magnitude of the wave function at other "places". – Luboš Motl Oct 17 '11 at 18:58

- Dear @ANKU, I don't know how you proved that the inner product of $C\psi_i$ and $C\psi_j$ is the particular Kronecker-delta. I think that there's no operator that would satisfy your condition for every $\psi_i$, $\psi_j$. Moreover, I don't understand what's the difference between $\psi_j$ and $j$. What you wrote is just very confusing.
– Luboš Motl Oct 17 '11 at 19:01

- @Lurscher, let me also mention that if you want the post-collapse wave function to be proportional to a delta-function in the position representation, such an outcome would 1) have a very sick normalization because you need $\psi(x)=\sqrt{\delta(x-x_0)}$ for the squared wave function to have the right integral; the square root of a delta function isn't really an element of the Hilbert space; 2) if the wave function is strictly proportional to the delta-function, it carries an infinite average kinetic energy: the momentum is totally undetermined and the expectation value of $p^2$ diverges. – Luboš Motl Oct 17 '11 at 19:03

- @lubos, regarding non-locality: measurements in the most basic formulation given in QM courses always assume that you are able to measure the state in a non-local way. To be more physically realistic, you need to understand that your measurement actually doesn't discriminate between far-away states, it just gives you a range of projection operators for local positions near the range of the measurement apparatus. In all these cases, the "renormalize" step is just taking the norm of the part of the state function that is in the range. – lurscher Oct 17 '11 at 19:25

I'm not sure what would be an answer to the why question. I can only comment on the known structure of the measurement as an operator.

Operators of a Hilbert space take vectors into vectors; they are an endomorphism of the Hilbert space. When you take a single measurement of an observable on an arbitrary vector, you obtain a random eigenvector of the observable. This is, conventionally speaking, a mapping between a vector and a vector. However the mapping is not deterministic, so there is not a single operator. In fact, there is one such operator for each eigenstate. They are better known as projection operators: $$M_{i} = | \psi_{i} \rangle \langle \psi_{i} |$$

When a measurement "happens" we can a posteriori state what specific projection operator took place on the system. We cannot use the operator formalism to describe the overall process in general. Well, not quite. We have a formalism to describe classical probability distributions; it's called mixed states. Mixed states are anything that is not a pure state; a pure state can always be described with a density matrix whose entries have the following form: $$\rho_{ij} = \psi_{i} \psi^{*}_{j}$$

As a density matrix, a classical probability distribution is seen as a purely diagonal matrix whose entries are the probabilities of each eigenstate, so a measurement can be seen as the following map: $$\psi_{i} \psi_{j}^{*} \longrightarrow \psi_{i} \psi_{j}^{*} \delta_{ij}$$

So what the measurement does in general is kill all off-diagonal components of the density matrix, and only leave the diagonal, which represents classical, actual probabilities. Even if this map is not linear (as clearly stated by @Lubos), it preserves the trace of the density matrix. In other words, even if it's not unitary, it is still an isometric transformation.

- 1 "So what the measurement does in general is kill all off-diagonal components of the density matrix": you wanted to say "So what decoherence does...", right? Your description of what measurement does isn't valid. The measurement, as understood in the incorrect interpretation of QM that you're promoting here, is not only bringing the density matrix to a diagonal form: it also sets to zero all the diagonal entries except for the chosen one.
– Luboš Motl Oct 17 '11 at 19:09

- Because the text of your answer makes it clear that you actually don't want to pick the "measured outcome" or explain how it is done - you're really trying to explain decoherence and not collapse (which is why your answer has no relevance to the original question which was about the collapse, but let's discuss your answer anyway) - you're talking about decoherence. But decoherence produces a map on the space of density matrices, not on the Hilbert space only. – Luboš Motl Oct 17 '11 at 19:12

- So it makes no sense to ask whether this operation (elimination of off-diag. entries) is an isometry on the Hilbert space itself: it's not a map on the Hilbert space at all, it's a map on the space of density matrices (roughly speaking the tensor product of the Hilbert space and its conjugate copy) only. And on this space, the elimination of the off-diagonal elements is clearly not an isometry, either. So whatever way you look at your statements about the collapse's being an "isometry", they're invalid. – Luboš Motl Oct 17 '11 at 19:14

- To show that the "decoherence map" (elimination of off-diagonal entries) isn't an isometry on the space of matrices, just consider what this map does with matrices $((a,b),(b,c))$ for different values of $b$. These matrices are clearly very far from each other in the natural metric on the space of matrices, especially if you pick a large $b$. But all these matrices get mapped to $((a,0),(0,c))$ so the distance of the values of the map is zero. ;-) – Luboš Motl Oct 17 '11 at 19:17

- The question is about measurements on the global state vector of a system; the outcome is by definition isometric (the final state has the same norm as the original state vector) – lurscher Oct 17 '11 at 19:33
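A small numeric illustration of the point argued in these comments (my addition): the "kill the off-diagonals" map preserves the trace but is not an isometry on the space of matrices, exactly as in the $((a,b),(b,c))$ example.

```python
import numpy as np

def decohere(rho):
    # Kill the off-diagonal entries (the map discussed in the comments).
    return np.diag(np.diag(rho))

# Matrices ((a, b), (b, c)) with different b are far apart,
# but they all map to the same diagonal matrix.
a, c = 0.5, 0.5
rho1 = np.array([[a, 0.4], [0.4, c]])
rho2 = np.array([[a, 0.0], [0.0, c]])

before = np.linalg.norm(rho1 - rho2)                      # 0.4 * sqrt(2)
after = np.linalg.norm(decohere(rho1) - decohere(rho2))   # 0.0
print(before, after)   # distances are not preserved -> not an isometry

# The trace is preserved, though, as the answer notes:
print(np.trace(rho1), np.trace(decohere(rho1)))
```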
http://math.stackexchange.com/questions/271508/revision-tracking-graph
# Revision Tracking Graph

Define the Revision Tracking Graph (RTG), which is an oriented graph (without cycles) where each node x has a set C(x) associated with it, containing all edges that lie on paths from a node 0 (the node with the empty set) into x. Each edge can occur in a set exactly once!

You can also describe this data structure by the rules for its growth:

1. Start with node 0 with associated empty set `C(0) = {}`
2. For any node x create a new node y where (x,y) is an oriented edge from x to y and `C(y) = C(x) union { (x,y) }`
3. For any nodes f and t, create node r, where (f,r) and (t,r) are oriented edges and `C(r) = C(f) union C(t) union { (f,r), (t,r) }`

These rules can be described in words as 1) creating a new object, 2) branching and versioning, 3) merging (a minimal code sketch of the rules appears after the comments below). You can see that the only difference between branching and versioning is whether there already is an edge leading from a node or not. You can further differentiate the graph by naming branches (paths in the graph).

A base node is defined as a node b such that `C(b) = C(f) intersect C(t)`. For short: `Result = (From - Base) + To`. In other words, if you remove all edges in C(b) from C(f) and add the edges from C(t), the resulting set would have every edge exactly once and all edges from C(f) and C(t) would be present in C(r).

I actually have several questions pertaining to this data structure:

1. Prove there are no cycles in the graph.
2. Prove that for certain graphs there are nodes F, T for which a base cannot be found as specified in the simple equation.
3. Prove that for each graph and each pair of nodes F and T, there is a set of n pairs of nodes Fi, Bi where Fn = F and `R = T + Sum(i=1..n) of (Fi - Bi)`.
4. Create an algorithm to find B in the simple case where n = 1.
5. Create an algorithm to find Fi, Bi for i = 1..n where n > 1.

I know the answers to 1, 2 and 4, but I put them here to get you in the mood of working with this data structure. Have fun with it, pose additional problems and find answers. Any new answers could significantly advance the theory behind revision control systems.

- "Have fun with it" is not really a question... are you posing these questions as a puzzle/research problem? – Douglas S. Stones Jan 6 at 12:54
- Explanation: Edges represent changes, nodes represent versions and C(x) is the set of all changes contained in version x. (2) proves that 3-way merge is not always sufficient and a simple base sometimes cannot be found in a complex RTG. Finding a base to solve the equation makes it possible for a 3-way merge tool to merge a change. – Jiri Klouda Jan 6 at 12:59
- Well, there are 5 questions in there. I'm mainly putting it here so I don't have to repeat all the definitions when talking with various mathematician friends and to have a single place for discussing the problem, and in hopes someone on this site who I don't know can come up with the answers to questions 3 and 5. – Jiri Klouda Jan 6 at 13:02
- Definition of base is not clear. What are $f$, $t$ and $r$ there? Are $+$ and $-$ the same as the set operations union and difference? – polkjh Jan 6 at 16:13
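As referenced above, here is a minimal Python sketch of the three growth rules (the class and method names are mine, not from the question):

```python
# Each node stores C(x), the set of edges lying on paths from node 0
# into x; edges are (parent_id, child_id) pairs.

class RTG:
    def __init__(self):
        self.edges = {0: frozenset()}   # rule 1: C(0) = {}
        self._next = 1

    def _new_node(self):
        node = self._next
        self._next += 1
        return node

    def version(self, x):
        # Rule 2 (branching/versioning): C(y) = C(x) union {(x, y)}.
        y = self._new_node()
        self.edges[y] = self.edges[x] | {(x, y)}
        return y

    def merge(self, f, t):
        # Rule 3 (merging): C(r) = C(f) union C(t) union {(f, r), (t, r)}.
        r = self._new_node()
        self.edges[r] = self.edges[f] | self.edges[t] | {(f, r), (t, r)}
        return r

    def simple_base(self, f, t):
        # Question 4: a node b with C(b) = C(f) intersect C(t), if any.
        target = self.edges[f] & self.edges[t]
        return next((n for n, c in self.edges.items() if c == target), None)

g = RTG()
a = g.version(0)              # branch off the root
b = g.version(0)              # a second branch
m = g.merge(a, b)             # merge them
print(g.simple_base(a, b))    # 0: the root is the base here
```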
http://mathhelpforum.com/algebra/185423-proving-one-g-x-f-1-x-ending-up-lxl-both-compositions.html
# Thread: proving G(x) = F^-1(x), ending up with |x| for both compositions

1. ## proving G(x) = F^-1(x), ending up with |x| for both compositions

I just got through a lesson at Purplemath about functions. The last example, in the proving that two functions are inverses of each other, gives a specific example where F(G(x)) would equal |x|, except that a domain restriction forces the answer to be greater than or equal to zero, so the absolute value can be removed. And G(F(x)) = |x|. So F(G(x)) = x and G(F(x)) = |x|, and they are therefore not inverses of each other.

My question is: what if the domain restriction forcing F(G(x)) to be greater than or equal to 0 had not existed? You would end up with |x| for both compositions. They would still not be inverses of each other, right?

2. ## Re: proving G(x) = F^-1(x), ending up with |x| for both compositions

The condition that a function is the inverse of itself is equivalent to writing...

$y^{'}= \frac{1}{y^{'}} \implies y^{'\ 2}=1$ (1)

The (1) is a [very simple...] differential equation, the solutions of which, with the condition $y(0)=0$, are $y=x$ and $y=-x$. Both these solutions are defined on the whole domain of x and no more functions exist that are inverses of themselves...

Kind regards

$\chi$ $\sigma$

3. ## Re: proving G(x) = F^-1(x), ending up with |x| for both compositions

Yes, that is correct. In order that F and G be inverses, we must have F(G(x)) = x and G(F(x)) = x, not |x|. For example, take $F(x)= x^2$ and $G(x)= \sqrt{x}$ if $x\ge 0$, $-\sqrt{-x}$ if $x< 0$. F and G are both defined for all x. Now, for all real numbers x, $F(x)\ge 0$ so $G(F(x))= \sqrt{x^2}= |x|$. If $x\ge 0$, $F(G(x))= (\sqrt{x})^2= x= |x|$ and if $x< 0$, $F(G(x))= (-\sqrt{-x})^2= -x= |x|$. But G and F are not inverse functions since $F(G(-3))= (-\sqrt{3})^2= 3$, not $-3$.

(What chisigma says is true but I don't see how it is related to this question.)
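A quick numeric check of HallsofIvy's example (my addition): both compositions return |x|, yet F(G(-3)) = 3 ≠ -3, so F and G are not inverses.

```python
import math

def F(x):
    return x ** 2

def G(x):
    # HallsofIvy's G: sqrt(x) for x >= 0, -sqrt(-x) for x < 0.
    return math.sqrt(x) if x >= 0 else -math.sqrt(-x)

for x in (3.0, -3.0, 0.5, -0.5):
    print(x, G(F(x)), F(G(x)))   # both compositions give |x|

print(F(G(-3.0)))   # 3.0, not -3.0 -> F and G are not inverses
```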
http://motls.blogspot.com/2012/06/why-feynmans-path-integral-doesnt.html?m=1
# The Reference Frame

Our stringy Universe from a conservative viewpoint

## Thursday, June 21, 2012

### Why Feynman's path integral doesn't contradict the uncertainty principle

I just decided to write a short (but not too short) text elaborating on the April 2011 Physics Stack Exchange question by Jane.

Are the Lagrangians in the Feynman path integrals operators? And if they're not, why doesn't it conflict with the basic fact of quantum mechanics that observables are represented by noncommuting objects, or $$q$$-numbers, to use Dirac's terminology, rather than $$c$$-numbers? These questions and several related ones are what I want to be clarified by this text although no one can guarantee that it will.

The answer to the first question is that all the variables we integrate over are $$c$$-numbers: observables are represented by numbers, the same ones as the numbers in classical physics. And the Lagrangian that appears in the exponent is a $$c$$-number-valued function of these $$c$$-number variables; it is not a genuine operator here, despite its similarities with the Hamiltonian.

This fact is OK because we don't really use these quantities to directly make statements about the world, at least not about the final state. Feynman's approach to quantum mechanics never says that $$x$$ in the final state is equal to a particular value. Instead, $$x(t)$$ for all intermediate values between the initial ($$i$$) and final ($$f$$) moments are dummy variables we integrate over. We shut up and integrate over them in order to calculate different quantities, the probability amplitudes, which may be easily converted to probabilities that the final state will have various properties.

So if we consider such a probability amplitude\[ {\mathcal A}_{i\to f} = \int {\mathcal D} x(t)\cdot \exp(iS[x(t)]/\hbar) \] where the integral goes over all continuous (but not necessarily smooth – this will be our big point later) curves $$x(t)$$ connecting $$x(t_i)=x_i$$ and $$x(t_f)=x_f$$, we are probing all conceivable histories but the result of the calculation is $${\mathcal A}_{i\to f}$$ rather than $$x(t_f)$$.

I decided to preserve the undergraduate reduced Planck constant $$\hbar=h/2\pi$$ in this entry; all the comments may be translated to the $$\hbar=1$$ units of mature physicists by simply erasing $$\hbar$$ everywhere.

It may be proved that the complex probability amplitudes that result from Feynman's path integral are the same ones as those you may obtain by solving Schrödinger's equation with the initial wave function at $$t=t_i$$\[ \psi_i(x) = \delta(x - x_i). \] The proof is usually explained in the introductions to the path integral. Feynman's integral formula for the evolution in a finite period of time may be separated into smaller moments; the integral respects the "transitivity" of the evolution operator. So it's enough to prove the equivalence for the evolution over very short intervals of time. And for the infinitesimal intervals, the equivalence may be proved rather easily. This point is presented in the basic texts so I won't dedicate too much time to it here.

However, what is not often explained is why this path integral approach doesn't contradict the commutator behind the Heisenberg uncertainty principle,\[ x(t)p(t)-p(t)x(t)=i\hbar. \] Imagine that $$x(t),\,p(t)$$ are the values of the position and momentum somewhere in the middle of the interval whose evolution we study.
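As an operator-side sanity check of this commutator, independent of the path integral (a minimal sketch, not part of the original post; it uses a harmonic-oscillator basis with $$\hbar=m=\omega=1$$), one may build truncated matrices for $$x$$ and $$p$$ and inspect $$[x,p]$$ directly:

```python
import numpy as np

N = 8
n = np.sqrt(np.arange(1, N))
a = np.diag(n, k=1)                  # annihilation operator, truncated
x = (a + a.T) / np.sqrt(2.0)
p = 1j * (a.T - a) / np.sqrt(2.0)

comm = x @ p - p @ x
print(np.round(np.diag(comm).imag, 6))
# [1, 1, ..., 1, -(N-1)]: i*hbar on every level except the last,
# where the truncation of the infinite-dimensional algebra shows up.
```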
Because both $$x(t)$$ and $$p(t)$$ are represented by ordinary $$c$$-numbers in Feynman's path integral expression, isn't it inevitable that they commute with one another, i.e. $$xp=px$$? This would contradict the uncertainty principle.

The answer is that it is not guaranteed, but the way in which Feynman's path integral formula avoids the trap is kind of clever and subtle; it has something to do with the short-time (and short-distance, in quantum field theory) behavior of the trajectories that contribute to the path integral. The fact that the short-time behavior is "unusual" and the fact that it may lead to conclusions we wouldn't expect in the classical reasoning may be viewed as a "demo" of tons of similar short-distance subtleties that quantum field theory, a more realistic subset of quantum theories, is literally full of.

OK, so why does Feynman's expression agree with the usual Heisenberg's nonzero commutator? There are two types of Feynman's path integral for mechanics. One of them integrates over the phase space; the other integrates over the configuration space. In plain mathematical English, the former is $$\int{\mathcal D}x\,{\mathcal D}p$$ while the latter is just $$\int{\mathcal D}x$$. The relationship between them is pretty much straightforward: we may simply integrate over $$p(t)$$ first and eliminate it: such a step transforms any phase-space path integral to a configuration-space one.

When we discretize the time to "visualize" the functional integral as a finite-dimensional one (and I invite you to interpret this procedure merely as a way to present the smooth objects in Nature – who has no problems with the functional integrals etc. – to us humans who prefer finite-dimensional integrals because we're just stupid animals, especially some of us, and because many of us are lazy and want to use computers that really don't want to compute infinite-dimensional integrals "mechanically"), we reduce the trajectory $$x(t)$$, defined as a function of a continuous $$t$$, to a finite collection of numbers\[ x(t_i), \, x(t_i+\epsilon), \, x(t_i+2\epsilon),\,\dots\,, x(t_f). \] We divided the interval from $$t_i$$ to $$t_f$$ into many short intervals whose duration is $$\epsilon$$.

Let me just remind you that in this treatment, the natural phase-space path integral uses momenta in the middle of the short intervals,\[ p(t_i+\frac\epsilon 2), \, p(t_i+\frac{3\epsilon}{2}), \, \dots ,\, p(t_f-\frac{\epsilon}{2}). \] This pattern is natural because the momentum $$p(t_i+\epsilon/2)$$ is related to the velocity $$v(t_i+\epsilon/2)$$ which may be approximated by $$(x(t_i+\epsilon)-x(t_i))/\epsilon$$, and the latter expression uses the values of $$x(t)$$ at two nearby moments: one of them may be naturally chosen to come "before" the argument of $$v(t_i+\epsilon/2)$$, the other one is symmetrically "after" that moment.

Although the phase-space path integral could be employed to produce a more general path-integral proof of $$xp-px=i\hbar$$, one that would work for complicated Lagrangians mixing positions and velocities in pretty general ways, it is probably more pedagogical to explain where the nonzero commutator comes from in the configuration-space path integral. After all, the configuration-space path integral – where we only integrate over the coordinates but not the momenta – is the version of the path integral that becomes natural and preferred in quantum field theory, the framework where Feynman's approach becomes really useful.
A reason to prefer the configuration-space path integral is that the set of counterparts of $$x(t)$$, namely $$\phi(t,x,y,z)$$, if I mention a Klein-Gordon field, are relativistically covariant (Lorentz transformations only transform the arguments $$(t,x,y,z)$$) while the momenta or velocity $$\partial\phi/\partial t$$ would have to pick a preferred reference frame and its preferred coordinate $$t$$, thus making the Lorentz invariance less obvious. In other words, the phase-space path integrals are close to the Hamiltonian formalism which is less natural for Lorentz-invariant theories in which the Hamiltonian (energy) is just one component or projection of the energy-momentum vector.

Getting started with the calculation

Because of these disclaimers, we finally want to solve the following problem: Calculate the nonzero Heisenberg commutator from the configuration-space Feynman path integral.

I will restrict the proof to the "ordinary" actions having the form\[ S=\int\dd t\,L(t),\quad L(t)=\frac{mv(t)^2}{2} - V[x(t)] \] where the energy is separated into the kinetic and potential energy, the kinetic energy has the usual form, and the potential energy only depends on $$x$$.

What we want to compute is the counterpart of the operator $$x(t)p(t)-p(t)x(t)$$ for some moment $$t$$ inside the interval whose evolution operator was considered. The ordering matters for operators but it seems not to matter for the Feynman integration variables. So shouldn't the commutator be zero? No. We must appreciate what it means for the operator $$p(t)$$ to be on the right side of $$x(t)$$ in the product $$x(t)p(t)$$, for example. In the operator language, it means that it acts on the ket vector $$\ket\psi$$ before $$x(t)$$ does. Let's only consider ket vectors.

Off-topic, soccer: Czechia is playing the quarterfinals of the Euro against Portugal tonight. We must show the door to Ronaldo and similar boys. Of course, we can do that. The video above is from the (similar?) Czech-Portuguese quarterfinals in 1996: we won 1-to-0 by Karel Poborský's astronautic goal (see above). At the end, we played the finals where we lost to Germany. A New York Times blog previews the match tonight; the author is so impressed by our guys that he completely forgets about the Portuguese! :-) Between 1930 and 1989, Czechoslovakia won 3 times, lost 3 times, and tied 3 times with Portugal.

This word "before" may look like a mathematical abstraction that has nothing to do with the "before" that encodes the ordering in the actual time $$t$$. The ordering of some mathematician's steps in time – the mathematician is computing a matrix product – may seem to be completely independent of the chronology of events and positions $$x(t)$$ that define a trajectory. However, this ordering is actually the very same one. An operator acting on the ket vector is a sort of an event in quantum mechanics.

(This is one of the ways to see that a logical arrow of time is intrinsically imprinted into the basic framework of quantum mechanics. The past and the future don't play the same role in a quantum theory for the same reason why it matters whether one operator/event is closer to a ket vector or further from it. Microscopic laws and/or Schrödinger's equation may have a time-reversal or CPT symmetry but this fact doesn't imply that the actual events and the physical relationships between them will respect the symmetry. They don't, as thermodynamics makes flagrantly obvious.)
So the commutator $$x(t)p(t)-p(t)x(t)$$ in quantum mechanics must really be replaced by\[ x(t+\frac\epsilon 2) p(t) - p(t) x(t-\frac\epsilon 2) = \dots \] where the magnitude of the infinitesimal $$\epsilon$$ shouldn't matter but it should be a positive infinitesimal number. Conveniently enough, we may choose the size of this $$\epsilon$$ to exactly agree with the spacing between the moments into which we discretized the trajectory. In this way, the calculation will only involve a one-dimensional integral but believe me that if you tried to compute it with a different $$\epsilon$$, you would obtain the same resulting commutator. The nonzero commutator only boils down to a single discontinuity. Also, I chose the (infinitesimal and therefore irrelevant) shifts of the arguments (time) in such a way that both terms of the commutator included the same $$p(t)$$.

The most recent displayed formula may obviously be written as\[ \dots = p(t)\cdot \left[ x(t+\frac\epsilon 2) - x(t-\frac\epsilon 2)\right] \approx p(t) \cdot \epsilon v(t) = \dots \] where I approximated the difference of the two values of $$x$$ by the derivative (the velocity) multiplied by the separation of the times. At any rate, if we use the configuration-space variables, the expression for $$[x,p]$$ is simply\[ \dots = \epsilon\cdot mv(t)^2. \] We want to calculate what the value of this expression "is" for an infinitesimal positive $$\epsilon$$ according to the Feynman path integral.

The way the path integral answers such questions is that it tells us to add this expression as a factor to the path integral. If the path integral itself jumps by a certain resulting multiplicative factor for every history (one that doesn't try to squeeze additional factors/events/measurements into the moment $$t$$), it means that the inserted factor should be said to be equal to the resulting factor. Note that naively, $$m,v$$ are finite so when we multiply them by $$\epsilon$$ and send the latter to zero, we should get zero. But we won't because Nature isn't naive.

Fine. Our next step has to be as follows: instead of considering the original path integral (which may already have some insertions, hopefully at mostly different moments), we consider a path integral with an extra insertion\[ \int{\mathcal D}x(t) \,\exp\left[\frac{i}{\hbar}\int\dd t\left(\frac{mv^2}{2}-V(x)\right)\right]\,\cdots\, \epsilon mv^2. \] In the discretized trajectories, we may consider the integral over $$x(t-\epsilon/2)$$ and $$x(t+\epsilon/2)$$: assume that these are among the allowed "lattice sites" in the discretized time. The integral over the "earlier" $$x(t-\epsilon/2)$$ may be kept and the integral over the later one may be replaced by $$\int \dd v(t)$$ with the right factors (which cancel and may therefore be ignored because the commutator we want to get is the ratio of the path integral with an extra insertion and without an extra insertion).

What we're inserting only depends on $$v(t)$$ so the path-integral version of the value of $$x(t)p(t)-p(t)x(t)$$ which was converted to $$\epsilon mv(t)^2$$ is simply\[ "[x,p]" = \frac{\int \dd v(t) \exp(\frac{i\epsilon mv^2}{2\hbar}) \cdot \epsilon mv(t)^2} {\int \dd v(t) \exp(\frac{i\epsilon mv^2}{2\hbar})}=\dots \] The Gaussian is the exponential of $$i\epsilon mv^2/2\hbar$$. Complex calculus allows you not to worry about the fact that it is actually a phase. You may see that if the $$i$$ were absent, the width of the Gaussian would be, because $$\epsilon mv^2/\hbar\sim 1$$, of order $$\Delta v\sim \sqrt{\hbar/(m\epsilon)}$$.
It's important to realize that if $$\epsilon\to 0$$, the width goes like $$\epsilon^{-1/2}$$ and diverges in the limit. At any rate, you see that up to factors such as $$2,i,\hbar$$, the insertion is exactly what appears as the exponent in the exponential we inherited from the path integral i.e. from the action. With the simple substitution $$V=\sqrt{m\epsilon/\hbar}\cdot v$$, the integral above becomes\[ \frac{\int \dd V \,\exp(iV^2/2)\cdot \hbar V^2}{\int \dd V\, \exp (iV^2/2)}. \] Well, it's annoying that the integrand has a constant absolute value – it is a phase – but you may define it by another natural limit which adds a modest true Gaussian suppression at infinity. I don't want to justify all these things because their legitimacy and "rightness" really boil down to one's proper intuition about how physics works – it works in such a way that many things are analytic and if they seem ill-defined and may be fixed by an analytic continuation, one should definitely do it – and if you don't see this point, you're just missing some innate aptitudes for theoretical physics and I won't be able to convince you of them, anyway. Instead, let me assume that the reader has no problem with such continuations etc.

The right way to compute the ratio – which is mathematically the same task as a task with ordinary real Gaussians – is to define another variable $$W$$ such that $$iV^2 = -W^2$$ i.e. $$V^2 = iW^2$$ so that we get a nice Gaussian. The ratio $$\dd W/\dd V$$ from the substitution cancels in the ratio we calculate so the $$xp-px$$ commutator we wanted to compute finally boils down to\[ \frac{\int \dd W \,\exp(-W^2/2)\cdot i\hbar \cdot W^2}{\int \dd W\, \exp (-W^2/2)}. \] The integrals go from $$-\infty$$ to $$+\infty$$. But up to the universal factor $$i\hbar$$, this is nothing else than the expectation value of $$W^2$$ for a normal distribution with a unit standard deviation: this is nothing else than the $$\exp(-W^2/2)$$ distribution. But this expectation value is one – feel free to calculate the integral by your favorite tools. So we have just calculated in the Feynmanian way that\[ xp-px = i\hbar. \]

It may be useful to return a little bit and see how it was possible for us to obtain such a "paradoxical" result. The reason was that despite the overall factor of $$\epsilon$$ in the formula $$\Delta x(t) \sim \epsilon v(t)$$, we got a nonzero result because the commutator boiled down to the expectation value of $$v(t)^2$$ and $$v(t)$$ was normal-distributed with a width that became infinite as the spacing of time, $$\epsilon$$, was sent to zero.

Paths must be unsmooth

This "infinite width" of the distribution for the velocity (as it implicitly appears in the path integral) is really the point. This is the place that stores the weapons that make the surprising result (the nonzero commutator even though we superficially deal with $$c$$-numbers all the time) possible and true. I must repeat this sentence once again because it's important:

The reason why Feynman's path integral agrees with the nonzero commutators i.e. with the uncertainty principle is that the standard deviation of the velocity goes to infinity as we approach the continuum limit $$\epsilon=\Delta t \to 0$$.

If we were thinking that we're only integrating over differentiable, smooth trajectories, we would be able to derive that the commutator has to vanish. It is extremely important for the path integral to get a contribution from non-differentiable trajectories in which $$v(t)$$ is effectively divergent.
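The Gaussian-moment computation above can be checked symbolically (a minimal sketch, my addition; the analytic continuation that the text advocates is carried out by an explicit substitution):

```python
import sympy as sp

v, a, eps, m, hbar = sp.symbols('v a epsilon m hbar', positive=True)

# Gaussian moment ratio with a real, positive width parameter a:
num = sp.integrate(v**2 * sp.exp(-a * v**2), (v, -sp.oo, sp.oo))
den = sp.integrate(sp.exp(-a * v**2), (v, -sp.oo, sp.oo))
ratio = sp.simplify(num / den)      # 1/(2*a)

# Analytic continuation a -> -i*eps*m/(2*hbar) turns exp(-a*v^2)
# into the path-integral phase exp(i*eps*m*v^2 / (2*hbar)).
commutator = sp.simplify(eps * m * ratio.subs(a, -sp.I * eps * m / (2 * hbar)))
print(commutator)                   # I*hbar
```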
In fact, "almost all" trajectories that contribute to the path integral are non-differentiable in this sense: the differentiable trajectories form a "subset of measure zero" and may actually be ignored! To deny that the non-differentiable trajectories (or, in quantum field theory, spacetime configuration/histories of the fields) are paramount contributors to the path integral of any consistent quantum theory means to deny the uncertainty principle! Although quantum mechanics has superficially nothing to do with the requirement that important trajectories in the path integral description must be non-differentiable, the uncertainty principle is actually the same thing! How much non-differentiable the trajectories are? We've seen that the typical value of $$v(t)^2$$ was scaling like $$1/\epsilon$$ i.e. $$v(t)$$ was proportional to $$1/\sqrt{\epsilon}$$. You may translate it to $$\Delta x$$, the change of $$x$$ over the unit of time into which we divide the trajectory. We get $$\Delta x\sim \epsilon v\sim \epsilon / \sqrt{\epsilon}\sim \sqrt{\epsilon}$$. What does it mean? Well, if the typical distance you move after time $$\epsilon$$ scales like $$\sqrt{t}$$, it's nothing else than the Brownian motion! So when it comes to the power law that determines the dependence on the velocities (and position changes) on the period of time, the typical trajectories contributing to the Feynman path integral resemble the Brownian motion. They look like random walks! This shouldn't be shocking even at the "linguistic level" because the Feynman path integral does integrate over random walks because quantum mechanics says that particles walk in random ways. The unsmoothness of these random walks is actually another way to formulate the uncertainty principle. The precise commutators of the observables are encoded in the precise shape of the "infinitely wide" distributions for the velocities etc. I must mention that in quantum field theory in $$d$$ dimensions – we have done quantum mechanics so far which is quantum field theory in $$d=1$$ (time is the only spacetime variable on which the degrees of freedom depend) – the power laws will be different. If we still use $$\epsilon$$ for the lattice spacing, the action will be discretized to boxes of volume $$\epsilon^d$$. This tiny factor will multiply the Lagrangian density at each lattice site so the velocities (derivatives of fields...) of typical trajectories will scale like $$1/\sqrt{\epsilon^d}$$. Note that for $$\epsilon\to 0$$, these velocities diverge even more quickly than they did in $$d=1$$. The short distance fluctuations of the histories in $$d\gt 1$$ quantum field theories are not just those of the random walk we found in quantum mechanics; they are even more violently oscillating. This may be heuristically interpreted as the reason why quantum field theories in ever higher numbers of spacetime dimensions suffer from increasingly severe short-distance problems. You may say that it is one of the ways to see why these theories ultimately become non-renormalizable and ill-defined in the ultraviolet. Not only old Englishment could have built Stonehenge. Škoda has built this Citihenge, named after Citigo, our version of Volkswaven Up!, out of old cars. The vicinity of the Tower Bridge is immediately prettier than before. 
Implications for spin foams and discreteness of time

We have emphasized – or at least I have emphasized – that the divergent values of derivatives in the typical histories were needed for the path integral to agree with the nonzero commutators. This is actually a simple way to see that all would-be path integral theories that want to make the time discrete – e.g. they want to have a built-in $$\epsilon=t_{\rm Planck}$$ which is constant in the quantum gravity realm, which means that it cannot be sent to zero – inevitably violate the uncertainty principle, a basic postulate of quantum mechanics.

If the degrees of freedom were discrete in this way, e.g. if the time were divided into Planckian intervals, all observables would have finite-width distributions and you couldn't get the finite, nonzero commutators. In such a theory, there would be no observables that are linked to functions of the dummy variables (we path-integrate over) and that refuse to commute with each other. This is another simple way to exclude all theories of the "spin foam" kind (a path-integral incarnation of loop quantum gravity although the equivalence obviously can't hold because LQG still tries to pretend that some commutators are nonzero). The people who study this garbage don't understand the basic stuff about path integrals because they would otherwise know that divergent standard deviations of the velocities are needed to get nonzero commutators from the path integral.

Again, don't mess with the path integral.

And that's the memo.

Bonus: why it's OK that these paths have an infinite action

Jan Reimers made a good point in the comments. Textbooks (correctly!) say that the action computed from a particular random-walk-like trajectory mentioned above is infinite. It is indeed infinite. If $$v^2$$ has an expectation value going like $$1/ \epsilon\to\infty$$, the integral of such a kinetic term $$mv^2/2$$ over time is bound to diverge, too. But that's how the things are. These paths dominate the path integral, anyway. It's because there are many of them. If you consider differentiable trajectories, you may get a smaller action, namely a finite one, but you will integrate over a smaller volume of trajectories in the infinite-dimensional space of paths, and this suppression by the "excessively small volume" in the space of trajectories is at least as important as the exponential suppression due to the divergent action.

One may imagine that all the relevant un-smooth paths are fluctuations away from a classical, smooth one whose action is finite. Quantum mechanics allows one to deviate from such smooth paths and it actually allows enough so that the typical "allowed" paths are non-differentiable.

It's useful to do some maths. Expand the path $$x(t)$$ into some standing waves (Fourier modes), with terms $$a_k\cdot \sin (\pi k t/\Delta t)$$ plus some linear term to obey the right conditions at the initial and final moments. Now, how do the coefficients $$a_k$$ of the typical allowed trajectories scale with $$k$$? If you rewrite the kinetic part of the action, which is proportional to $$\int v^2\dd t$$, you will get terms such as $$k^2|a_k|^2$$. The extra factor of $$k^2$$ came from the need to differentiate $$x(t)$$ to obtain $$v(t)$$; and this got squared because we had $$v^2$$. The path integral contains the factor of $$\exp(-S_E/\hbar)$$: let us switch to the Euclidean space so that I don't have to apologize for the imaginary unit again.
Because $$S_E$$ is a sum over $$k$$, essentially, we get factors in the path integral of the form \[ \exp(-C\cdot k^2 |a_k|^2). \] You may see that the distribution of each coefficient $$a_k$$ is essentially independent of others and $$k^2|a_k|^2$$ is of order one, independently of $$k$$, which means that $$|a_k|$$ scales like $$1/k$$ for large $$k$$. If you have a function with Fourier coefficients scaling in this way and translate it to $$x(t)$$, a function of a continuous time $$t$$, you will get a continuous but non-differentiable function of the same random-walk type discussed above.

On the other hand, the action for such a typical trajectory is infinite because it's the sum over $$k$$ of $$k^2|a_k|^2$$, up to some overall constants and other details, and because each term is of order one, independently of $$k$$, and because you have infinitely many terms of this kind (infinitely many Fourier modes), the action becomes infinite. But that's not a problem. Most of the infinity comes from "very large" or "infinite" values of $$k$$, i.e. very quickly oscillating Fourier modes, and those have a very small impact on the low-frequency observations that can be made with large and clumsy "classical" probes. If you only have a classical probe, you're back to the classical intuition because $$a_k$$ modes with too high values of $$k$$ become invisible while their contribution to the action becomes "universal": every smooth classical path allows pretty much the same un-smooth deviations from it, so having the family of nearby un-smooth paths essentially adds a universal factor to the path integral only (if you only compute low-frequency processes etc.). Of course, whenever the quantum fluctuations become so large and important that you can't consistently separate them from the "classical smooth parts" of $$x(t)$$, the classical limit and the classical intuition become invalid with all the implications.

#### 19 comments:

1. Synchronize: I'm glad you finally started to use 'Don't mess with' instead of 'Don't mess up with'. The former just sounds much more natural to me.

2. Dilaton: Wow, I like this very nice step-by-step (and at my level) proof that the uncertainty principle holds for the path integrals :-). And it was interesting (and somewhat shocking) to see explicitly how badly "messing with the short scale behaviour" spoils even this important and basic pillar of "ordinary" quantum mechanics ...

3. "Don't mess up" is just a sign of my unauthentic, non-native English, let's not pretend it's anything else. ;-) I know that the idiom has no "up" in it. To mess something up is a slightly different verb.

4. Jan Reimers: Fascinating!!! For some reason I had never before heard that non-differentiable paths were included in the integral. I always thought that the Laplacian term in the Lagrangian would give these paths infinite action, thereby excluding them. In fact I think this is even stated in some textbooks. I guess those textbooks are either wrong, or I am misremembering what they said. Thanks for the enlightenment Lubos, well done.

5. An excellent point, Jan! It is indeed the case that the action evaluated out of a "typical" trajectory contributing to the path integral - a non-differentiable trajectory - is infinite. But it's in no contradiction with the fact that these trajectories contribute essentially all of the value of the path integral. It's somewhat unintuitive to explain why it's so. One may "localize" the path integral near classical trajectories for which the action is finite.
However, quantum mechanics allows some fluctuations away from this classical trajectory and the degree of tolerance to such fluctuations is enough to make most of the allowed trajectories non-differentiable.

6. lukelea: I thought y'all looked good against Portugal, for 80 minutes. Portugal may go all the way.

7. :-) I liked the Czech game at the beginning but for much less than 80 minutes. Maybe 40: one could watch the flagrant nerves of Ronaldo, Nani, the coach, and the remaining players, too. Gradually the game changed. Our guys simply couldn't compete when it came to energy, accuracy, and other things.

8. ai: Very interesting. One question though, why is it ok for velocities to diverge? Shouldn't they be limited by the speed of light?

9. Good and important question and no, the speed limit of special relativity doesn't apply and can't apply to these velocities that appear in paths summed in the path integral. It only applies to actual objects or information, but the probability amplitudes aren't actual objects, so they may propagate superluminally. So if you make quantum mechanics relativistic, it surely doesn't mean that one is restricted to histories that respect the speed limit one-by-one. In fact, the propagators in quantum field theory, which is fully relativistic, may still be obtained as sums over particles' paths - paths that are mostly non-differentiable and thus "superluminal" almost everywhere. One may still prove that causality holds. A way to do so is to see a cancellation between particles and antiparticles in their contributions to all quantities that are actually measurable. So quantum mechanics really allows "extreme things" including apparent violations of relativity to happen in between the measurements - those things and extreme wiggles are really essential for the theory to be quantum mechanical and respect the uncertainty principle - but this intermediate engine is still 100% compatible with the fact that the observable phenomena 100% respect the speed limits and other principles. This is a part of the "anti-quantum zeal" discussions to which dozens of posts on this blog have been dedicated. Some people incorrectly imagine that the wave function is real and that what happens with it in between the measurements consists of real, objective processes. This incorrect assumption leads one to believe that those should obey the relativistic speed limits and other things. But this ain't the case. The intermediate stories of the probability amplitudes are just "subjective calculational tricks" and they aren't constrained by constraints that only apply to facts and real events. If a velocity v(t) in a history that contributes to the path integral exceeds c, it is not a "fact" in the physical sense. It's just a property of an abstract term in an abstract calculation that is only turned into facts after many extra steps.

10. Trimok: An intriguing feature is that uncharged bosonic fields (scalar particle, photon) behave, in some sense, like physical observables. More precisely, these are real fields, and two fields at points separated by a space-like interval commute. So they are not conceptually physical observables, but technically they are very close.

11. Dear Trimok, it's true for all fields, including the charged ones. After all, a charged bosonic field may still be written in terms of its real/Hermitian and imaginary/antihermitian parts. Fields in quantum field theory *are* observables. They are given by operators so they're observables. They have the same conceptual properties as all other observables in QM theories.
One may measure them and QM predicts the probabilities of different values resulting from the measurement. But what are *not* observables are the intermediate steps and entities that appear in the calculation of Green's functions or propagators - the wave functions in the middle. And if one describes fields literally as "second quantization" and interprets the classical fields as single-particle wave functions, these wave functions aren't observables. None of these comments changes anything about any point we have made above and if you wanted to suggest it does, you just added confusion and fog to this thread.

12. Trimok: I probably misspoke. I was thinking of physical observables which represent information, such as energy density or charge current, and this type of physical observable must respect the principle of causality (commutation at space-like intervals). If fields in quantum theory are "observable", they do not represent information, even if they can be measured. Therefore a bosonic field is still a bit strange, because it behaves as an observable related to information, while it is not...

13. Dear Trimok, again, assuming that a new try will make the key point clearer here: physical observables only represent information if this information is actually measured. Without an actual measurement, there is no information. The paths we're summing over in the path integral aren't being measured - they correspond to imagined intermediate non-events in between several measurements - so even if these paths contribute to the expectation value of products of quantum fields (correlation functions) which may be measured, the properties of the individual paths aren't "real" and they aren't "information". For this reason, these intermediate mathematical objects don't have to obey - and they don't obey - the relativistic speed limit even though the ultimate results do.

14. Gordon Wilson: R.F. Streater, an emeritus prof at King's College, has a web page called "Lost Causes in Physics", including many I agree with and a few I don't. He includes "Rigorous Feynman path integrals". He admits that they have "heuristic value", but I think in including them in his list, he is using a mathematician's brain and not a physicist's, in that he is looking for rigorous proofs rather than elegance and utility. But maybe not---he did write a QFT text. Personally, I love the elegance and originality of Feynman's approach. Since I am just coming back to this stuff after many years, and learning some of it for the first time, I am not looking for rigorous proofs :)

15. Dear Gordon, if you click on the gear in the right upper corner, you may Edit settings, including your displayed name, I think. Your attitude is sensible. I personally tend to agree that trying to make Feynman's path integral rigorous - in a sense similar to the Lebesgue measure - is a lost cause, a wrong goal. They're templates that should be followed analogously with finite-dimensional integrals but whenever we encounter subtleties, we should be ready to study new details and learn new things.

16. Gordon: OK finally I think I have Disqus solved :)

17. Decoherence: "if you try to measure the timing of an event with the precision better than the Planck time, you will fail." http://physics.stackexchange.com/questions/3098/can-decoherence-time-be-shorter-than-planck-time Here is something that needs better clarification within this context.
We can imagine that the trajectories of particles are observables and are thus a type of eigenvalue in some matrix representation. So from a multitude of possible trajectories, when an experimentalist looks at his photographs for the first time, he knows that one path will be seen from many possible paths that could have been observed. We see this as a classical path, but know it is in fact a quantum path that has decohered into a preferred path (via the Mott problem resolution) or rather some "pointer" path. Although we know that decoherence is instantaneous, as discussed above, the precision of our measurement of decoherence must necessarily be imprecise. We also know that if we measure the exact location of the trajectory, our measurement must necessarily be imprecise. It seems intuitive to think of the situation as one where, although the center of mass trajectory can take on real non-integer coordinate values, there is necessarily error around those values.

"The reason why Feynman's path integral agrees with the nonzero commutators i.e. with the uncertainty principle is that the standard deviation of the velocity goes to infinity as we approach the continuum limit ϵ=Δt→0"

It is clear from this comment and others above that the uncertainty principle, when applied to QFT, is related now to the differentiability of paths, and not to precise spatial measurements of observables. String theory posits a modification of the uncertainty principle to a more generalized form (http://arxiv.org/pdf/hep-th/0608016v2.pdf), where the additional terms are related to properties of the strings themselves (length and tension). From a naive perspective, it seems that this is directly related to the uncertainty associated with precise spacetime resolution of the path and is independent of the uncertainty associated with the differentiability of center of mass trajectories. This appears to be confirmed because in the modified string uncertainty principle, string terms are additive to the traditional non-string terms. It would be interesting to see an explanation as to why it seems that imprecision in path measurement is what allows for string theory to have any possibility for existence at all (or if that is an incorrect understanding of the situation).

18. Apologies, I am not bold enough to try to reply to this comment. Paths are not observables but even if you had some sense in which they are, this has nothing to do with string theory, and so on. If you asked about this: string theory regulates the short-distance problems and divergences because the objects are extended - and the histories are smooth world sheets without singular vertices. The uncertainty or chaotic motion of the center-of-mass (or position of point-like particles) isn't enough to smoothen the short-distance problems. In fact, as I sketched above, the huge short-distance variations of the velocity in field theory are the very reason why these theories have short-distance problems - problems that are increasingly severe for increasing spacetime dimension (because the velocities of typical paths are more infinite in higher dimensions, a higher power law).

19. Decoherence: Well, to clarify the sense in which "observable" is used, the Mott problem asked the question of how classical paths can be "observed" when the wave function describing the propagation of a particle is spherical.
http://rspa.royalsocietypublishing.org/content/126/800/79 So approaching the problem from that direction, Mott showed that only particles connected in a straight line have any probability of ionizing (in his specific alpha particle example); however, the first particle to be ionized would clearly provide the "observed" vector from the origin. In any case, the joint distribution of particles is considered as a system, and as such a particle path can be well defined and a probability amplitude can be assigned to it. So in this sense a path is an observable (a correlation of events from a multitude of similar correlations). Certainly pure uncertainty in the center of mass position is insufficient; this is guaranteed because of similar issues seen with hidden variable models. However, it seems that if one were to consider each potential path within the uncertainty bounds of the center of mass as independent and representative of a particular Fourier mode, then the string might be viewed as a spatial representation of the distribution of mass under an appropriate transformation.
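As a numerical appendix to the "Bonus" section of the post above (a sketch I am adding, under the same Euclidean free-particle assumptions with $$m=\hbar=1$$ and $$\Delta t = 1$$): sampling the Gaussian weight mode by mode gives $$a_k$$ a variance of $$2\Delta t/(\pi^2 k^2)$$, so $$|a_k|\sim 1/k$$ and each mode contributes an average of $$1/2$$ to the action, which therefore grows linearly with the mode cutoff.

```python
# Mode-by-mode sampling of exp(-S_E) for x(t) = sum_k a_k sin(pi k t / T):
# S_E = sum_k (pi^2 k^2 / (4T)) a_k^2, so a_k is Gaussian with variance
# 2T/(pi^2 k^2). Then k*|a_k| is roughly constant (|a_k| ~ 1/k) and the
# partial action grows like cutoff/2, i.e. it diverges in the continuum.
import numpy as np

rng = np.random.default_rng(1)
T, K = 1.0, 100_000
k = np.arange(1, K + 1)
a = rng.normal(0.0, np.sqrt(2 * T / (np.pi**2 * k**2)))   # one sampled path
S_per_mode = (np.pi**2 * k**2 / (4 * T)) * a**2

print("typical k*|a_k| at large k:", np.mean(np.abs(a[-1000:] * k[-1000:])))
for cutoff in [100, 1_000, 10_000, 100_000]:
    print(f"action up to k={cutoff}: {S_per_mode[:cutoff].sum():.1f} (~ cutoff/2)")
```

The high-$$k$$ modes that carry the divergent action have tiny amplitudes, which matches the claim above that the divergence is invisible to low-frequency, "classical" probes.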
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 145, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935421347618103, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/123557-show-x-2-e-x-2-has-two-solutions.html
# Thread:

1. ## Show that x^2*e^|x|=2 has two solutions

How would I do this? I know from the graph that the roots are approximately -0.9 and 0.9, but how do I show this??? Any help much appreciated. Thank you

2. If you may use that $f(x) = x^2e^{|x|}-2$ is continuous, it shouldn't be that hard: a continuous $f:[a,b]\to \mathbb{R}$ must reach all values in $[f(a), f(b)]$. (Look up the theorem of Bolzano; I believe there are other names for it as well.) Given that $f(1) = e-2 > 0$ and $f(1/2) < 0$, it follows that there exists an $x_0$ with $1/2 < x_0 < 1$ such that $f(x_0) = 0$. You can use this argument twice to show the existence of 2 roots (without necessarily finding them); in fact, since $f$ is an even function, the second root $-x_0$ follows immediately from the first.

3. I meant the Intermediate Value Theorem. It says: If $f$ is continuous on a closed interval, $f:[a,b]\to \mathbb{R}$, then for any $c\in [f(a),f(b)]$ there exists an $x_0\in [a,b]$ such that $f(x_0)=c$.
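A quick numerical check of the argument (my sketch, not part of the original thread): bisection on the interval $[1/2, 1]$, where the sign change was established, locates the positive root, and evenness gives the negative one.

```python
# Bisection on f(x) = x^2 * exp(|x|) - 2 over [1/2, 1]; f(1/2) < 0 < f(1)
# as computed in the thread. Since f(-x) = f(x), -root is also a solution.
import math

def f(x):
    return x * x * math.exp(abs(x)) - 2.0

lo, hi = 0.5, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(lo)   # ~0.9012, so the two roots are approximately +-0.901
```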
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9487242102622986, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/12488/decay-of-massless-particles?answertab=oldest
# Decay of massless particles

We don't normally consider the possibility that massless particles could undergo radioactive decay. There are elementary arguments that make it sound implausible. (A bunch of the following is summarized from Fiore 1996. Most of the rest, except as noted, are my own ideas, many of which are probably wrong.)

• 1) Normally we state the lifetime of a particle in its rest frame, but a massless particle doesn't have a rest frame. However, it is possible for the lifetime $\tau$ to be proportional to energy $E$ while preserving Lorentz invariance (basically because time and mass-energy are both timelike components of four-vectors).

• 2) The constant of proportionality between $\tau$ and $E$ has units of mass$^{-2}$. It's strange to have such a dimensionful constant popping up out of nowhere, but it's not impossible.

• 3) We would typically like the observables of a theory to be continuous functions of its input parameters. If $X$ is a particle of mass $m$, then a decay like $X\rightarrow 3X$ is forbidden by conservation of mass-energy for $m>0$, but not for $m=0$. This discontinuity is ugly, but QFT has other cases where such a discontinuity occurs. E.g., historically, massive bosons were not trivial to incorporate into QFT.

• 4) In a decay like $X\rightarrow3X$, the products all have to be collinear. This is a little odd, since it doesn't allow the clear distinction one normally assumes in a Feynman diagram between interior and exterior lines. It also means that subsequent "un-decay" can occur. Strange but not impossible.

So what about less elementary arguments? My background in QFT is pretty weak (the standard graduate course, over 20 years ago, barely remembered).

• 5) The collinearity of the decay products makes the phase-space volume vanish, but amplitudes can diverge to make up for this.

• 6) If $X$ is coupled to some fermion $Y$, then one would expect that decay would correspond to a Feynman diagram with a box made out of $Y$'s and four legs made out of $X$'s. If $Y$ is a massive particle like an electron, P. Allen on physicsforums argues that when the energy of the initial $X$ approaches zero, the $X$ shouldn't be able to "see" the high-energy field $Y$, so the probability of decay should go to zero, and the lifetime $\tau$ should go to infinity, which contradicts the requirement of $\tau\propto E$ from Lorentz invariance. This seems to rule out the case where $Y$ is massive, but not the case where it's massless.

• 7) If $X$ is a photon, then decay is forbidden by arguments that to me seem technical. But this doesn't forbid decays when $X$ is any massless particle whatsoever.

• 8) There are some strange thermodynamic things going on. Consider a one-dimensional particle in a box of length $L$. If one $X$ is initially introduced into the box with energy $E=nE_o$, where $E_o$ is the ground-state energy, then it undergoes decays and "undecays," and if I've got my back-of-the-envelope estimate with Stirling's formula right, I think it ends up maximizing its entropy by decaying into about $\sqrt{n}$ daughters at a temperature $\sim \sqrt{hE/L}$. If you then let it out of the box so that it undergoes free expansion, it acts differently from a normal gas. Its temperature approaches zero rather than staying constant, and its entropy approaches infinity. I may be missing something technical about thermo, but this seems to violate the third law.

So my question is this: Is there any fundamental (and preferably simple) argument that makes decay of massless particles implausible?
I don't think it can be proved completely impossible, because Fiore offers field theories that are counterexamples, such as quantum gravity with a positive cosmological constant.

References:

1. Fiore and Modanese, "General properties of the decay amplitudes for massless particles," 1996, http://arxiv.org/abs/hep-th/9508018.

- Could you be a little more clear about what's from Fiore and what you came up with? – Dan Jul 20 '11 at 2:48
- @Dan: I went back and added numbers to the paragraphs for reference. Fiore is the source for some ideas and assertions in: 1, 4, 5, 7. My own ideas occur in: 2, 3, 4, 8. P. Allen came up with 6. – Ben Crowell Jul 20 '11 at 3:55
- 2 I don't understand the question. What do you mean by 'radioactive decay' (rather than just decay)? Do you mean a specific sort of coupling with that? Or do you just mean whether they are stable? Or if they can be composite? – WIMP Jul 20 '11 at 10:11
- 2 I agree that the word "radioactive" is superfluous in the question's title. For clarity I think a better title would be 'Can massless particles decay?' but of course the choice is yours. – qftme Jul 20 '11 at 16:55
- From a more formal point of view, the question would be why can't the pole of a propagator be located on the imaginary axis? What happens? I don't know the answer. – user1631 Jul 20 '11 at 18:08

## 3 Answers

"Is there any fundamental (and preferably simple) argument that makes decay of massless particles implausible?"

For decay into massive particles: I don't think you need any more than Special Relativity to answer this: Massless particles necessarily travel at the speed of light. Therefore, even if they were unstable, they cannot decay in any reference frame. This is ultimately due to the nature of time dilation in Special Relativity. In a sense, one could say massless particles (which must travel at $c$) do not experience time and therefore cannot decay.

In more QFT-type language: For simplicity, consider a tree-level diagram of a hypothetical photon decay: $\gamma\rightarrow e^+e^-$. The translational-invariance of the vacuum means that momentum is conserved, and momentum conservation disallows this transition: the three-momentum of the virtual pair must be zero in the pair's centre-of-mass frame, whilst that of the photon cannot be zero in any frame (since it travels at $c$ in all reference frames). Note however that in Feynman diagrams such as for: $e^+e^-\rightarrow e^+e^-$ via a virtual photon, $\gamma^*$, exchange the situation is somewhat different. A virtual photon has a finite mass, and thus a rest frame, so the three-momentum is conserved at each vertex. This, however, should not be referred to as the decay of a photon and does not mean that a real photon can decay.

For decay into massless particles: ?

I realise this only addresses part of the question but if no one else contributes a more all-encompassing answer I will endeavour to expand on it once I've done a bit more research.

- 4 Kinematical considerations only forbid the decay of a massless particle into massive products. The question is whether one can have processes of the form $\gamma \rightarrow \gamma +\gamma +\gamma$ or something similar in gravity. I believe one has processes like this in non-linear optics - the question would be whether Lorentz invariance necessarily removes them. – BebopButUnsteady Jul 20 '11 at 17:18
- @Bebop: Good point, not sure why I only considered decay to massive particles. I'll edit the answer accordingly.
– qftme Jul 20 '11 at 17:37
- If they are collinear, spin should not allow gamma->gamma gamma – anna v Jul 20 '11 at 18:45
- @anna v: As it says in the linked paper, in actual QED all processes with a photon decaying to any number of photons are not allowed. I gather that the question is how generic this behavior is, and in particular whether it applies to gravity. – BebopButUnsteady Jul 20 '11 at 19:07
- – anna v Jul 21 '11 at 3:43

I agree with qftme's answer for the case of massive decay products. By energy conservation alone, $\gamma \rightarrow e^+e^-$ should be allowed, but momentum conservation forbids it (as well as the opposite case, $e^+e^-$ annihilation). It is only allowed if you have some other particle involved to take care of the photon momentum.

In the case of a massless particle decaying into other massless particles, this only works if your particles have a self-interaction. As far as I know, you can only have this for bosons. QED (photons) has no self-interaction, and the weak bosons (W, Z) are massive, but this is well known to occur in QCD in the form of $g \rightarrow gg$. In fact, gluons are more likely to interact at lower energies than at the highest energies, where QCD is asymptotically free - that means the interaction is well-behaved and weak, and we can use perturbation theory. At low energies, gluons just keep splitting and producing even lower energy quarks or gluons, until there are in a sense "infinitely many" of "infinitely low" energy. This is what is called infrared divergence. It may sound strange, but in practice all observable quantities remain finite, so there is nothing to worry about. Your point number 5 also applies here. The decay products are not only very low-energy, but also tend to be very collinear (collinear divergence). This is the source of the famous "jets" that appear in high energy collider experiments.

- Great post, thanks! Since gluons are confined, point 8 about violation of the third law of thermo seems to be resolved in the case of gluons. What about point 2? If gluons split at low energies with a lifetime proportional to energy, what fixes the constant of proportionality, which has units of mass$^{-2}$? It still seems disappointing to me that there is no overarching solution to these problems, just special cases that act differently: the photon can't decay for technical reasons, the gluon doesn't violate the third law because it's confined, ... and gravitons can get away with murder? – Ben Crowell Jul 20 '11 at 23:43
- @Ben Crowell gravitons are bosons too. – anna v Jul 21 '11 at 3:47
- I'm not sure I understand point no. 8. If we are talking about bosons, they should not behave like a classical gas, because they obey, well, boson statistics. I'm also not sure how you reach your formula for the temperature, that would be interesting to see. – jdm Jul 21 '11 at 6:41
- @Ben: morally, gravity is just a (lot more) complicated brother of QCD. It also has an asymptotically free range where we can treat it perturbatively (this is quantized linearized gravity on some background) while it (in a sense) confines at lower energies -> space-time itself is made of confined gravitons. Of course, take everything above with a grain of salt since we don't yet have a full theory of quantum gravity (though you must've known what you're walking into with this question...). – Marek Jul 21 '11 at 7:41
- @jdm: The equation for the temperature is just a rough estimate.
The possible states of the particle in the box are labeled by partitions of the integer n=E/Eo into k integers, ignoring order. The entropy is the log of the number of such partitions, which is maximized when k~sqrt(n). When we have sqrt(n) particles, the average energy per degree of freedom is E/sqrt(n)=E/sqrt(E/Eo)=sqrt(E*Eo)=sqrt(hE/L). – Ben Crowell Jul 21 '11 at 16:18

Spontaneous parametric down-conversion has the required properties of a decay $\gamma\to\gamma\gamma$: ... to split photons into pairs of photons that, in accordance with the law of conservation of energy, have combined energies and momenta equal to the energy and momentum of the original photon... The process was independently discovered by two pairs of researchers in the late 1980s: Yanhua Shih and Carroll Alley, and Rupamanjari Ghosh and Leonard Mandel. R. Ghosh and L. Mandel, "Observation of Nonclassical Effects in the Interference of Two Photons", Phys. Rev. Lett. 59, 1903 (1987)

IMO, these can also be qualified as decays, and I am aware that there are other interpretations; the $\gamma\to e^+e^-$ process with the help of a nucleus/neutron (a catalyst), and $\gamma\gamma\to e^+e^-$ in head-on collisions, are both reported. The gamma ray then collides with four or more laser photons to produce an electron-positron pair.

- Happens in an external field, no? Then it is not the process under discussion here, as there are many ways for a photon to split in the presence of a third body to take up the non-conserved quantities. The question even alludes to this. – dmckee♦ Feb 9 at 16:44
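A small numerical aside on points 4 and 5 of the question (my sketch, not from the thread): for massless daughters, $E=\sum_i|\vec p_i|$ while the parent has $|\vec P|=E$, and the triangle inequality $|\sum_i \vec p_i|\le\sum_i|\vec p_i|$ is an equality only for parallel momenta, so exact collinearity is forced.

```python
# For massless daughters, |sum(p_i)| <= sum(|p_i|) with equality only when
# all momenta are parallel, so a massless parent (|P| = E) forces its
# massless decay products to be collinear, as argued in points 4 and 5.
import numpy as np

rng = np.random.default_rng(2)

def momentum_deficit(p):
    """E_total - |P_total| for massless daughters with momenta p (rows)."""
    return np.linalg.norm(p, axis=1).sum() - np.linalg.norm(p.sum(axis=0))

p_random = rng.normal(size=(3, 3))                    # random directions
p_collinear = np.outer([0.2, 0.5, 1.3], [1.0, 0, 0])  # all along +x

print(momentum_deficit(p_random))     # > 0: four-momentum cannot balance
print(momentum_deficit(p_collinear))  # ~0: the only kinematically allowed case
```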
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9369725584983826, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/332960/closed-form-for-sum-k-0n-k-binomnk-log-binomnk
# Closed form for $\sum_{k=0}^{n} k\binom{n}{k}\log\binom{n}{k}$

Is it possible to write this in closed form: $$\sum_{k=0}^{n} k\binom{n}{k}\log\binom{n}{k}$$ Can you get something like $$n2^{n-1}\log(2^{n-1})$$

- Originally, I was trying to use the fact that $$\log(n!) = \Theta(n\log n)$$ and the definition of $$\binom{n}{k}$$ to prove a lower bound of this form. It got a bit messy. – rhl Mar 18 at 1:59
- 2 Are you after a closed form (probably non-existent) or a lower bound or an upper bound? – Did Mar 19 at 6:24

## 3 Answers

Warning! I couldn't find a closed form. An approximation is described below.

You may start by symmetrizing the summand to get $$\sum_{k=0}^{n} k\binom{n}{k}\log\binom{n}{k}={n\over 2}\sum_{k=0}^{n} \binom{n}{k}\log\binom{n}{k}.\tag1$$ The terms in the sum on the right hand side of (1) are symmetric around $n/2$ and concentrated near $k\approx n/2$, so replacing $\log{n\choose k}$ with $\log{n\choose n/2}$ gives a reasonable approximation, and an upper bound. That is, $${n\over 2}\sum_{k=0}^{n} \binom{n}{k}\log\binom{n}{k}\approx {n\over 2}\,2^n\log{n\choose n/2}.$$ Using Stirling's formula gives another approximation (and upper bound) $${n\over 2} \sum_{k=0}^{n} \binom{n}{k}\log\binom{n}{k}\approx {n\over 2}\,2^n [(n+1/2)\log(2)-\log(n\pi)/2].$$

Added: A better approximation results by replacing $\log{n\choose k}$ with $\log{n\choose n/2}-{2\over n}(k-n/2)^2$. With a little work you can get $${n\over 2}\,\sum_{k=0}^{n} \binom{n}{k}\log\binom{n}{k}={n\over 2}\,2^n \left[\log{n\choose n/2}-{1\over 2}+o(1)\right].$$

- This agrees to $O(2^n n)$ with the result I got by a similar method, $\sum_{k=0}^{n} k\binom{n}{k}\log\binom{n}{k}\sim 2^{n-1} n \log \left(2^n \sqrt{\frac{2}{n \pi e}}\right)$. (+1) – oen Mar 25 at 3:22
- @oen Thanks. I think my argument could be made more formal by using Laplace's method for sums, but I was too lazy to chase down all the details. – Byron Schmuland Mar 25 at 3:29
- The question asks for a closed form, though, no? – Mariano Suárez-Alvarez♦ Mar 25 at 18:14
- Mariano Quite right! I couldn't find a closed form, so I decided to look for an approximation. Maybe this will be useful to someone.... – Byron Schmuland Mar 25 at 18:28

If $f(n)$ is your sum, then $e^{f(n)}$ becomes an integer product, say $p(n)$, formed by multiplying each binomial coefficient $\binom{n}{k}$ raised to the power $k \cdot \binom{n}{k}.$ That is, $$p(n)=e^{f(n)}=\prod_{k=0}^n \binom{n}{k}^{k \binom{n}{k}}.$$ The first few terms are $$p(1)=1,\ p(2)=2^2,\ p(3)=3^9,\ p(4)=2^{44}3^{12},\ p(5)=2^{50}5^{75}.$$ When I put the first three into o.e.i.s there was a hit, but it wasn't this sequence, as discovered when I tried the first four terms. (This is no argument that there is not a closed form, of course.) One thing that initially seems to go against a closed form is that the primes entering into the log terms in $f(n)$ are the set of primes dividing binomial coefficients in row $n$ of the binomial triangle, and such primes don't seem to appear in any regular way from row to row, and it seems such lists become arbitrarily long as $n$ increases; at least one can say that in row $n=p$ the prime $p$ will appear.
-

THIS IS PART OF THE ANSWER: Ok, since $$\binom{n}{k} = \binom{n}{n-k}$$, we can pair large-index terms with small-index terms, so: $$\sum_{k=0}^{n} k\binom{n}{k}\log{\binom{n}{k}} = \sum_{k=0}^{n/2}(k \binom{n}{k}\log{\binom{n}{k}} + (n-k)\binom{n}{n-k}\log{\binom{n}{n-k}}) = \sum_{k=0}^{n/2}n\binom{n}{k}\log{\binom{n}{k}} = n\sum_{k=0}^{n/2}\binom{n}{k}\log{\binom{n}{k}}$$ (when $n$ is even, the $k=n/2$ term should be counted only once in the pairing). Now we just need to show the rest is bounded by $$2^{n-1}\log(2^{n-1})$$

-
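As a sanity check of the accepted approximation (my addition, not from the thread; it assumes even $n$ so that $\binom{n}{n/2}$ makes sense), one can compare the exact sum against Byron's asymptotic form:

```python
# Compare f(n) = sum_k k*C(n,k)*log(C(n,k)) with the approximation
# (n/2) * 2^n * [log(C(n, n/2)) - 1/2] from the first answer (even n only).
from math import comb, log

def f(n):
    return sum(k * comb(n, k) * log(comb(n, k)) for k in range(n + 1))

for n in [10, 20, 40, 80]:
    approx = (n / 2) * 2**n * (log(comb(n, n // 2)) - 0.5)
    print(n, f(n) / approx)   # the ratio drifts toward 1 as n grows
```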
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9641841053962708, "perplexity_flag": "head"}
http://mathhelpforum.com/math-topics/20458-force-problem.html
Thread:

1. Force problem.

Force questions always confuse me because of the N. (newtons?) Two forces, 1 and 2, act on the 5.00 kg block shown in the drawing. The magnitudes of the forces are F1 = 55.5 N and F2 = 42.5 N. What is the horizontal acceleration (magnitude and direction) of the block? http://img171.imageshack.us/img171/3181/0411vf7.gif

2. Originally Posted by rcmango

You want to start with a Free-Body Diagram. I'm defining a +x axis to the right and a +y axis upward. You have two given forces, $F_1$ and $F_2$. There are two implied forces: the weight w acting downward, and the normal force N acting upward. So take components of each force in the x and y directions and use Newton's 2nd Law in each component direction. (We automatically know that there's no acceleration in the y direction, right? So all the acceleration must be in the x direction.) -Dan

3. Okay, so you already have the x component of force 2, right? It's the same as the force. Also, using trig (with the angle of force 1 from the drawing), I believe the x component of force 1 is 23.5, and I know acceleration is net force / mass, so couldn't I just add the x components and divide by the mass? I'm confused here.

4. Originally Posted by rcmango

You aren't confused, you have it right! (Just make sure, in my coordinate system at least, that you have the x component of $F_2$ as negative, since $F_2$ is pointing in the negative x direction.) -Dan

Edit: You should probably be carrying more digits in your intermediate answers, though, in case you aren't. For example $F_{1x} \approx 23.4553~N$, not 23.5. Using more decimals gives you a slightly better answer at the end, where you can then round off properly.

5. I think I'm close but not correct just yet. Okay, so the x component of force 2 is: -42.5 and the x component of force 1 is: 23.4553, so if I add these together I get: -19.0447, so then I divided -19.0447 by the mass... never mind, I just realised that I can't have a negative acceleration. It's right.

6. Originally Posted by rcmango

A detail that many people tend to forget is that these are vector equations. What we are doing is taking components along one direction or another. For simplicity we define a positive component to be in one direction and a negative component to be in the other. Your first answer is correct: the acceleration comes out to be $-3.81~m/s^2$. What does this mean? It simply means that the acceleration is in the -x direction, or as we defined it, off to the left of the diagram. -Dan
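A short numeric transcript of the thread's arithmetic (my sketch; the linked drawing is no longer available, so the 65-degree angle between $F_1$ and the x axis is inferred from $F_{1x}\approx 23.4553$ N):

```python
# Reproduces the thread's numbers. The angle of F1 is an assumption inferred
# from F1x = 55.5 * cos(theta) = 23.4553 N, which gives theta = 65 degrees.
import math

m = 5.00                                  # kg
F1, theta1 = 55.5, math.radians(65.0)     # N, inferred angle from x axis
F2 = 42.5                                 # N, pointing in the -x direction

F1x = F1 * math.cos(theta1)               # ~ 23.4553 N
net_x = F1x - F2                          # ~ -19.0447 N
a = net_x / m                             # ~ -3.81 m/s^2, i.e. to the left

print(F1x, net_x, a)
```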
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922869861125946, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/rotation?page=2&sort=newest&pagesize=15
# Tagged Questions The rotation tag has no wiki summary. 3answers 334 views ### Does being suspended in air allow you to not be affected by Earth's rotation? Let's assume that there was some mechanism by which we could remain suspended in air. By this I mean that our feet is not in contact with the ground. One possible way of doing this would be by means ... 1answer 84 views ### Synchronising the Earth's rotation via mass redistribution How much material would have to be moved per year from mountain-tops to valleys in order to keep the Earth's rotation synchronised with UTC, thus removing the need for leap seconds to be periodically ... 2answers 57 views ### Can a minimum ultimate tensile strength (UTS) for an asteroid be established based on its diameter and rotation? I found it fascinating that many asteroids rotate with a period of just seconds. For this fast of a rotation on this size of object, I thought that would actually cause significant acceleration on the ... 2answers 685 views ### Will a boiled egg or a raw egg stop rolling first? If we roll a normal egg and a boiled egg at the same time on a floor 1) with friction 2) without friction which one will come to stop first (if they will stop at all) and why? Can anyone tell ... 4answers 909 views ### Two axes for rotational motion I understand that angular momentum is a vector, etc.. But, what really happens when some object, say a ball for example, is set to rotate along two axes? What would the resulting motion look like? 1answer 180 views ### What “I” should use in Rotational Energy formula $(I \omega^2)/2$ $\text{Rotational Energy} = \frac{1}{2} I \omega^2$. What $I$ should be used? $I$ as a inertia tensor matrix = stepRotation * inverse moment of inertia * inverse stepRotation; Or I as moment of ... 6answers 545 views ### Why does the earth rotate? [duplicate] Possible Duplicate: Why does every thing spin? So why would the earth, or any planet for that matter, rotate along an axis? I know of no force which could come into play here, so i assume ... 1answer 160 views ### Finding stopping time when only given initial angular velocity and an expression for angular acceleration? Question: A wheel starts is spinning at $27\text{ rad/s}$ but is slowing with an angular acceleration that has a magnitude given by $\alpha(t) = (3.0\;\mathrm{rad/s^4})t^2$. It stops in a time ... 0answers 133 views ### Error caused by pulley eccentricity [closed] Not sure if this is perfectly a physics question: A rope is wrapped around a cylinder of diameter D. The cylinder is slightly eccentric so that the distance between the axis of the cylinder and the ... 4answers 8k views ### Does the rotation of the earth dramatically affect airplane flight time? Say I'm flying from Sydney, to Los Angeles (S2LA), back to Sydney (LA2S). During S2LA, travelling with the rotation of the earth, would the flight time be longer than LA2S on account of Los Angeles ... 4answers 395 views ### Why does a ping pong ball change direction when I spin it on a table? When I spin a ping pong ball on the table, it rolls forward in the opposite direction of the spin, and then eventually changes direction and rolls backward. Here's a video demonstrating the effect. ... 1answer 619 views ### Which way do spiral galaxies rotate? Is it known whether spiral galaxies typically (or exclusively?) rotate with the arms trailing or facing? Intuitively it feels weird to think of the arms as facing the direction of rotation, but ... 
4answers 339 views ### why does what get pushed away when centripetal acceleration is towards the center If centripetal acceleration is towards the center, then why - when you spin a bucket of water (a classic demonstration) - does the water not get pushed out but rather stays in the bucket without ... 4answers 647 views ### Video of Earth spinning? If the Earth is spinning or rotating at a really fast speed, why haven't we seen any videos from space of it spinning when we get a lot of photos of it? 4answers 3k views ### Are there planets that do not rotate on their axis? I was reading a thread about how a pendulum would be affected if the Earth did not rotate and Larian's answer made me wonder if all planets rotate necessarily due to physics. So that's the question: ... 1answer 276 views ### If the Earth didn't rotate, how would a Foucault pendulum work? How does the Foucault pendulum work exactly, and would it work at all, if the Earth didn't rotate? 2answers 506 views ### Formula for Rotation curves of Galaxies To ask a more specific one for the rotation curves of elliptical galaxies, and hope from there to later understand the dynamics of spiral galaxies. Treating the galaxy as an isothermal ... 3answers 1k views ### MEMS gyroscope and orientation I'm trying to understand how a MEMS gyroscope can give you orientation. The question may not make any sense, I may have some misunderstandings about the different topics... From the wikipedia page ... 1answer 153 views ### What are the known relationships between rotation of planets/moons and their distance to Sun? What are the known relationships between rotation of planets/moons and their distance to Sun? Or any other known attributes? For example, the sidereal year for planets is directly related to their ... 1answer 67 views ### What exactly is the definition of motion and its relation to Mach's conjecture? The notion of "movement" seems to be well understood in physics. In fact, I don't recall any physics text-book defining motion. Special relativity theory says that there is no absolute frame of ... 1answer 412 views ### Help understanding a Magnetic Levitation “Physics Toy” I was shown a toy, yesterday, which I would like help understanding qualitatively. A fellow engineer showed me a kit which included three main parts: 1.) A base (black box), approximately 4 ... 2answers 218 views ### Doubt concerning centripetal acceleration What is the centripetal acceleration and angular velocity of a child located 8.2 m the center of a carousel? The speed (size of the tangential velocity) of the child is 2.1 m / s A train moves in a ... 3answers 84 views ### What frame(s) of reference are used to measure the rotation of the Sun around the galaxy ? I can find various speeds and estimated durations listed at numerous places but none specifically describe the frame of reference. Possible options as example of kind of answer I expect. Local ... 1answer 124 views ### How can the Earth's day increase and its rotation slow down at the same time? [closed] I heard that the Earth rotation is slowing down, but I also heard the Earth's length of day is increasing. Does the two theories go together or conflict with each other? 3answers 539 views ### Why does each celestial object spin on its own axis? AFAIK all the celestial objects have a spin motion around its axis. What is the reason for this? If it must rotate by some theory, what decides it's direction and speed of rotation? Is there any ... 
3answers 832 views ### How is it that angular velocities are vectors, while rotations aren't? Does anyone have an intuitive explanation of why this is the case?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9409757852554321, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/21676/in-the-analytic-category-finite-morphisms-are-open-maps/27862
## in the analytic category, finite morphisms are open maps?

Work locally: suppose $X$ and $Y$ are open subsets of $\mathbb{C}^n$, where $\mathbb{C}$ is the complex number field. Suppose $f: X\to Y$ is a map given by $n$ polynomials. If $f$ is quasi-finite (i.e. each fiber is a finite set) and surjective, then is $f$ an open map?

Another question is about finite morphisms in the analytic category. Let $X$ and $Y$ be complex manifolds and $f:X\to Y$ an analytic (i.e. holomorphic) map. I guess there is a notion of $f$ being finite, and a definition is that the induced maps between the local rings of germs of analytic functions are finite ring homomorphisms. Are there equivalent and more transparent characterizations of $f$ being finite?

- 3 For finiteness, your proposed criterion only implies the "right" notion locally on source and target (think of an open embedding). The very self-contained book "Coherent Analytic Sheaves" answers these (in the affirmative) and many other related questions. (A more transparent definition of finiteness in the connected manifold setting is being either proper with finite fibers or being "classified" by a coherent sheaf of $O_Y$-algebras that is locally free of finite rank; equivalences among possible definitions of analytic finiteness are much deeper than in the algebraic theory.) – BCnrd Apr 17 2010 at 17:47

## 2 Answers

Open Mapping Theorem [Grauert-Remmert: Coherent analytic sheaves, p.107]: Let $X,Y$ be pure $d$-dimensional complex spaces and assume that $Y$ is locally irreducible. Then any holomorphic map $f:X\to Y$ with discrete fibers is open. In particular, open subsets of $\mathbb{C}^n$ are pure $n$-dimensional and locally irreducible, so this answers the first question affirmatively.

- cf. same book p.69: Criterion of Openness. – Sándor Kovács Oct 11 2010 at 1:03

A finite morphism $f:X\to S$ between complex manifolds is open. Because, in the general case, if the target is pure dimensional, the base locally irreducible, and the fibers have the same dimension, then the map is open (Remmert's open mapping theorem in Fischer's book). If the base is not locally irreducible but we can endow the fibers with appropriate multiplicities so that they become an analytic family of cycles, then the map is necessarily open. In the finite case, it means that we have a trace map compatible with the cycle structure, $f_{*}O_X\to O_S$, or a holomorphic map $S\to {\rm Sym}^{k}(X)$, where $k$ is the generic degree of the branched covering defined by $f$.

- The (strong) normalization is finite but not open. The weak normalization is open..... – kaddar Jun 11 2010 at 20:07
- Read "source" instead of "target" below! – kaddar Jun 11 2010 at 20:28
- 3 In the first statement you may want to add $\dim(X)=\dim(S)$, as closed immersions are finite but rarely open. – Qing Liu Jun 14 2010 at 21:20
- Excuse me. Yes, of course, $\dim(X)=\dim(S)$. – kaddar Jun 18 2010 at 9:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9103557467460632, "perplexity_flag": "head"}
http://math.stackexchange.com/users/52429/stephen?tab=activity&sort=all
# stephen

Unregistered · 106 reputation · 6 badges · member for 5 months · seen Jan 11 at 21:34 · profile views 16

# 38 Actions

| date | action | detail |
|-------|----------|---------|
| Dec23 | awarded | Tumbleweed |
| Dec16 | accepted | Absolutely continuous functions with derivatives in $L^p$ |
| Dec16 | asked | On a question about the Haar measure |
| Dec15 | awarded | Teacher |
| Dec15 | comment | Absolutely continuous functions with derivatives in $L^p$ · @Jonas. Yes you are totally right. It is integrable on bounded intervals, and this is what I use for the proof. Sorry for my confusion! |
| Dec15 | revised | Absolutely continuous functions with derivatives in $L^p$ · added 8 characters in body |
| Dec15 | comment | Absolutely continuous functions with derivatives in $L^p$ · @Jonas. Yes thank you! |
| Dec15 | answered | Absolutely continuous functions with derivatives in $L^p$ |
| Dec15 | comment | Absolutely continuous functions with derivatives in $L^p$ · ...is integrable, and we have $F(b)-F(a)= \int_a^b F'(x)dx$. I will post an answer to my question so that you will understand the discussion between Giuseppe and I. |
| Dec15 | comment | Absolutely continuous functions with derivatives in $L^p$ · @Jonas: The problem comes from a problems set that our professor gave us to practise for the final exam. I am not sure where she took them from. The definition of absolutely continuous function that I use is: For every $\epsilon>0$, there exists $\delta > 0$ such that for any finite collection of disjoint intervals $(a_i, b_i)_{i=1}^n$, $\sum_i |F(b_i)-F(a_i)|<\epsilon$ whenever $\sum_i (b_i-a_i) < \delta$. This is the definition from Folland's real analysis. The Fundamental theorem of calculus states that $F$ is absolutely continuous if and only if $F'$ exists almost everywhere and... |
| Dec14 | awarded | Commentator |
| Dec14 | comment | Absolutely continuous functions with derivatives in $L^p$ · @Jonas. Is it important in this case? |
| Dec14 | comment | Absolutely continuous functions with derivatives in $L^p$ · @Jonas. It is not directly part of my definition, but it follows from the fundamental theorem of calculus for Lebesgue integration. |
| Dec13 | accepted | “Commutativity” of integrals |
| Dec13 | asked | “Commutativity” of integrals |
| Dec12 | comment | Absolutely continuous functions with derivatives in $L^p$ · Oh if you take $L = \int_{\mathbb R} f'(t) dt$, then I think it works applying Holder. |
| Dec12 | comment | Absolutely continuous functions with derivatives in $L^p$ · I thought about that. Then doesn't the constant $L$ that you get depend on $x$ and $y$? |
| Dec12 | revised | Absolutely continuous functions with derivatives in $L^p$ · added 15 characters in body |
| Dec12 | asked | Absolutely continuous functions with derivatives in $L^p$ |
| Dec11 | accepted | How to know a function is in $L^p$. |
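The recurring question in this activity log (absolute continuity with $F'\in L^p$) appears to turn on one standard estimate. A sketch of the step the Dec12 comments gesture at (my reconstruction, not from the log; it assumes $p>1$ and uses Hölder's inequality with the conjugate exponent):

```latex
% Absolute continuity gives F(y) - F(x) = \int_x^y F'(t)\,dt, and Holder's
% inequality with exponents p and q = p/(p-1) bounds the increment:
\[
  |F(y)-F(x)| = \left|\int_x^y F'(t)\,dt\right|
  \le \left(\int_x^y |F'(t)|^p\,dt\right)^{1/p} |y-x|^{1-1/p}
  \le \|F'\|_{L^p}\, |y-x|^{1-1/p},
\]
% so F is Holder continuous with exponent 1 - 1/p.
```

This would also answer the Dec12 exchange above: the constant does not depend on $x$ and $y$ once the local $L^p$ norm is replaced by the global one.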
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313102960586548, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/78103?sort=newest
## Weighted area of a Voronoi cell

Let $X = \{x_1,\dots,x_n\}$ denote a set of $n$ points in the unit square $S = [0,1]\times[0,1]$, and let $w = \{w_1,\dots,w_n\}$ denote a set of weights corresponding to the $n$ points in $X$. Define the "power diagram" of $X$ in $S$ to be a partition of $S$ into at most $n$ pieces $V_i$, where $V_i = \{x\in S: \|x - x_i\|^2 + w_i \leq \|x - x_j\|^2 + w_j \ \forall j \neq i \}$, i.e. a "weighted Voronoi diagram". Now let's consider varying the weight $w_1$ while fixing the other weights; specifically, consider the function $f(w_1) = w_1\cdot \text{Area}(V_1)$. Clearly as $w_1 \rightarrow 0$ we have $f(w_1) \rightarrow 0$, and as $w_1 \rightarrow \infty$ we have $f(w_1) \rightarrow 0$ as well.

My question: is $f(w_1)$ unimodal? Convex? Is the answer different if I only have $n=2$ points? What if I define my cells slightly differently, such as $V_i = \{x\in S: \|x - x_i\| + w_i \leq \|x - x_j\| + w_j \ \forall j \neq i \}$?

- Good question. The second definition is quite different from the first, and much worse understood. – Igor Rivin Oct 14 2011 at 10:24

## 1 Answer

This is not an answer, just a way to empirically explore your question. There is publicly available code for computing the weighted Voronoi diagram. For example, this Matlab code written by Andrew Kwok, which produced the image below (left), or this Java and VB code by Takashi Ohyama, or this applet by Oliver Münch, which produced the image below (right). Using such code, it would not be too difficult to gather data to plot $f(w_1)$ in a random diagram and see if it is unimodal or convex.

-
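In the spirit of that suggestion, here is a grid-sampling sketch (my own, not the linked code) that estimates $\text{Area}(V_1)$ under the power-diagram definition and tabulates $f(w_1)$ for eyeballing unimodality or convexity:

```python
# Grid-sampled estimate of f(w1) = w1 * Area(V1) for the power diagram
# with cells argmin_i |x - x_i|^2 + w_i on the unit square. A rough sketch,
# not a substitute for an exact cell-area computation.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.random((5, 2))                  # n = 5 random sites in [0,1]^2
w = np.zeros(5)                           # the other weights are fixed at 0

g = np.linspace(0.0, 1.0, 300)
X, Y = np.meshgrid(g, g)
grid = np.stack([X.ravel(), Y.ravel()], axis=1)
d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)  # squared dists

for w1 in [0.01, 0.05, 0.1, 0.2, 0.4, 0.8]:
    w[0] = w1
    area = np.mean(np.argmin(d2 + w, axis=1) == 0)  # fraction of pixels in V1
    print(f"w1={w1:.2f}  Area={area:.3f}  f={w1 * area:.4f}")
```

Since a larger $w_1$ only shrinks $V_1$, the table typically rises and then falls, consistent with $f\to 0$ at both ends of the range.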
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9333946704864502, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2007/03/11/orders/?like=1&source=post_flair&_wpnonce=671450efa2
# The Unapologetic Mathematician

## Orders

As a bonus today, I want to define a few more kinds of relations. A preorder is a relation on a set which is reflexive and transitive. We often write a general preorder as $x\preceq y$ and say that $x$ precedes $y$ or that $y$ succeeds $x$. A set equipped with a preorder is called a preordered set. If we also have that for any two elements $x$ and $y$ there is some element $z$ (possibly the same as $x$ or $y$) that succeeds both of them we call the structure a directed set. A partial order is a preorder which is also antisymmetric: the only way to have both $x\preceq y$ and $y\preceq x$ is for $x$ and $y$ to be the same element. We call a set with a partial order a partially-ordered set or a "poset". Any set gives a partial order on its set of subsets, given by inclusion: if $A$ and $B$ are subsets of a set $X$, then $A$ precedes $B$ if $A$ is contained in $B$. This has the further nice property that it has a top element, $X$ itself, that succeeds every element. It also has a bottom element, the empty subset, that precedes everything. The same sort of construction applies to give the poset of subgroups of any given group. These kinds of partially-ordered sets are very important in logic and set theory, and they'll come up in more detail later. Finally, a partial order where for any two elements $x$ and $y$ we either have $x\preceq y$ or $y\preceq x$ is called a total order. Total orders show up over and over, and they're nice things to have around. I must admit, though, that as far as I'm concerned they're pretty boring in and of themselves.

Posted by John Armstrong | Fundamentals, Orders
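To make the axioms concrete (a sketch of my own, not from the post), here is a brute-force check that subset inclusion is reflexive, antisymmetric, and transitive (hence a partial order) on the power set of $\{1,2,3\}$, and that it is not total:

```python
from itertools import combinations, product

def power_set(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

P = power_set({1, 2, 3})
leq = lambda a, b: a <= b  # subset inclusion as the order relation

reflexive = all(leq(a, a) for a in P)
antisymmetric = all(a == b for a, b in product(P, P) if leq(a, b) and leq(b, a))
transitive = all(leq(a, c) for a, b, c in product(P, P, P)
                 if leq(a, b) and leq(b, c))
total = all(leq(a, b) or leq(b, a) for a, b in product(P, P))

print(reflexive, antisymmetric, transitive, total)  # True True True False
```

The final `False` records exactly the failure of totality: incomparable subsets such as $\{1\}$ and $\{2\}$ exist, which is why inclusion is a partial but not a total order.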
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 26, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462995529174805, "perplexity_flag": "head"}
http://mathoverflow.net/questions/106848?sort=votes
## At what times were people interested in prime numbers

While prime numbers are central objects in mathematics, it looks as though they were ignored and forgotten for long periods of time. I am interested in getting some facts and insights about this matter, in particular:

1) Were prime numbers studied in ancient times only by the ancient Greeks? At what periods were they studied by the ancient Greeks themselves?

2) Is it the case that people largely or even entirely lost their interest in the prime numbers for about fifteen centuries until Fermat? What are the facts of the matter, and what are the reasons that may explain these facts?

(motivated by conversations with Ron Livne.)

- 1 Isn't that "people largely or even entirely lost their interest in Mathematics for about fifteen centuries", after the end of the Hellenistic era? – Pietro Majer Sep 10 at 22:13
4 No no no, not at all; also in the Hellenistic era itself people continued their interest in mathematics, even in number theory, but lost interest in prime numbers. – Gil Kalai Sep 10 at 22:31
The Ishango bone is pretty old and curiously has some suspicious prime numbers on it. I'm adding this as a comment for lack of reasons to consider it relevant, but I could not resist. P. – Pasten Sep 10 at 23:44
2 I don't understand the question, or why the answers given so far are not what you want. In which respect does your question differ from others where "prime number" is replaced by just about any problem in mathematics covered in the Elements and other works of the Greeks? And as for 1), a quick glance at some history book will reveal that we know next to nothing about when the Greeks studied what - we have Euclid, Nicomachus, and Diophantus, everything else is extrapolation. – Franz Lemmermeyer Oct 14 at 16:22
1 Dear Franz, the answers give some nice information about question 1 and I will welcome more information and details. I am specifically asking about prime numbers and I would like to know to what extent they are present in the works of Nicomachus and Diophantus, and about studying them in other cultures/times. Also I am curious about question 2: what can be the explanation for the loss of interest in prime numbers for many centuries. – Gil Kalai Oct 14 at 21:18

## 7 Answers

The Liber Abaci (1202) of Fibonacci contains a chapter on perfect numbers and Mersenne primes (of course Mersenne came much later; he was born slightly before Fermat and is essentially his contemporary). I do not know if there are any new results, but at least it seems he was interested in them. I am not sure if this counts as interest in prime numbers, but it is certainly number theory and involves primes very directly: the Chinese Remainder Theorem developed from about the 3rd to the 13th century in China (no surprise here), but also in the 6th and 7th centuries in India. A non-example would be the Chinese Hypothesis that used to be believed to originate in ancient China but did not.

- +1 for debunking the so-called Chinese Hypothesis. – Charles Sep 10 at 22:32

In response to question (1), an authoritative source is Peter Rudman in "How Mathematics Happened: The First 50,000 Years".
Some relevant quotes: On the Ishango bone (20,000 BCE): The concept of division, which must precede the concept of prime number, probably did not evolve until after 10,000 BCE and the emergence of herder-farmer cultures. The concept of prime numbers was probably only really understood after about 500 BCE by Greek mathematicians. On the Babylonian clay tablet Plimpton 322 (1800 BCE): This clay tablet shows that Babylonian scribes understood Pythagorean triples and perhaps the Pythagorean theorem. It also hints at some understanding of number concepts: prime numbers, composite numbers, regular numbers, rational numbers, and reduced fractions. On the Sieve of Eratosthenes (250 BCE): It is easy to apply and to understand. Babylonian scribes could have invented it more than one thousand years earlier --- but they apparently did not. Its invention was only possible after Pythagoras (500 BCE) and Euclid (300 BCE) had made the study of properties of numbers a subject worthy of the attention of Greek philosophers. In response to question number 2, as described by O'Connor & Robertson, see also the Wikipedia entry, Islamic mathematicians were the heirs of the Greeks throughout the Middle Ages, motivated in part by their interest in practical applications of geometry and number theory to architecture and decoration. (Similarly, the Islamic law of inheritance served as a drive for the development of algebra.) The translation by Islamic scholars of the mathematical works of Greek mathematicians was the principal route of transmission of these texts to the Middle Ages. For example, Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912), while the Latin translation had to wait until Xylander (1575). Some notable Islamic heroes of prime numbers: As noted by Stopple, the 9th century astronomer Thabit ibn Qurra studied prime numbers of the form $3\cdot 2^n-1$ (now called Thabit numbers). Ibn Al-Haytham (born 965) seems to have been the first to attempt to classify all even perfect numbers (numbers equal to the sum of their proper divisors) as those of the form $2^{k-1}(2^k - 1)$ where $2^k - 1$ is prime. As noted by John Stillwell, Al-Haytham is also the first person that we know to state the theorem that if $p$ is prime then $1+(p-1)!$ is divisible by $p$ (only proven 750 years later by Lagrange). Al-Farisi (born 1260) stated and attempted to prove the fundamental theorem of arithmetic, on the unique factorization of an integer into prime numbers. Finally, the "why" question: There are no comparable heroes in Mediaeval Europe. My surmise is that this is because Christianity, with its figurative art, did not stimulate the interest in geometric and numerical patterns to the same extent as Islam did. -
For example, Thabit ibn Qurra (836-901) showed that if $$p=3\cdot 2^{k-1}-1, q=3\cdot 2^k-1, r=9\cdot 2^{2k-1}-1$$ are all primes, then $$m=p\cdot q\cdot 2^k, n=r\cdot 2^k$$ form an amicable pair: $s(m)=n$ and $s(n)=m$, where $s(k)$ is the sum of the proper divisors of $k$. - 4 The error with 511 is slightly shocking. – quid Sep 10 at 22:50 2 Not as shocking as the Grothendieck 57 (or 27) story... Not nearly as shocking, in fact, considering the dates (we've learned so early that $m|n \Rightarrow 2^m-1 | 2^n-1$ that $2^3-1 | 2^9-1$ feels obvious to us, but that's anachronistic). – Noam D. Elkies Oct 15 at 0:55 2 The Grothendieck story is so shocking that it sounds apocryphal. I don't think the shock of the error here is related to the approach you've cited; rather, I would have thought that anyone working with primes (even back then?) would note 511 is not even and not divisible by 3 or 5. So any real attempt to check divisibility would start at 7... – Benjamin Dickman Oct 16 at 5:26 As far as I know, in the Grothendieck story, the number 57 was a spontaneous reaction -- Grothendieck wouldn't have made this mistake in an article. – Lennart Meier Oct 19 at 11:45 My 'shock' regarding the 511 was due to reason Benjamin Dickman gives. Regarding Gr. I am with Lennart Meier. I never found this story (as I read/understood it) shocking at all, not even that surprising. My understanding of the story: He was sort-of pressured by somebody (in a conversation) to discuss something with an explicit example of a prime, instead of in general. He found this misguided/annoying . And thus responded something like: well, whatever, so take fifty-seven. (So while he likely intended to say a prime, there was no relevance to it at all.) – quid Oct 19 at 12:12 In recent times it has been claimed that Bhaskara I (around 700) and more definitely Ibn al-Haytham (965 - 1040) were aware of Wilson's theorem. This is much earlier than Wilson's theorem was previously supposed to be known, so perhaps there is more to be discovered about early work on prime numbers. - Who has claimed this? And based on what text of Bhaskara I? – Marty Sep 10 at 23:46 2 @Marty: for Bhaskara I the only source I've seen is Wikipedia. However, for al-Haytham there is more scholarly support: Rashed, Roshdi Ibn al-Haytham et le théorème de Wilson. Arch. Hist. Exact Sci. 22 (1980), no. 4, 305–321. – John Stillwell Sep 11 at 0:18 Thanks for the reference - I'll check it out. And I'll look through Bhaskara I for clues about primes as well. – Marty Sep 11 at 0:44 3 Ibn al-Haytham: "This being shown we say that this is a necessary property for any prime number, that is to say that for any prime number - which is the number that is a multiple only of the unit - if you multiply the numbers that precede each other in the way that we have introduced, and if we add one to the product, and if we divide the sum by each of the numbers before the prime number, there is one, and if it is divided by the prime number, nothing is left." – Marty Sep 11 at 7:19 4 Regarding Bhaskara I, I am just about ready to call it bunk. It is all over the internet, so it is popular bunk. I think that the bunk comes from misinterpreting the following: In Vol. 2, Ch II, p.59 of his "History of the Theory of Numbers", Dickson explains how both "Ibn al-Haitam (about 1000)" and "Bhascara (born, 1114 A.D.)" treated similar problems about finding a number which has given remainders when divided by 2,3,4,5,6. From the date, this would appear to be Bhaskara II. 
Al-Haitam's problem leads him to "Wilson's Theorem" context, but Wilson's theorem is absent from Bhaskara II. – Marty Sep 11 at 7:26

According to the book of David Wells on prime numbers (see page 43), critics think that Diophantus (b. between A.D. 200 and 214, d. between 284 and 298 at age 84) knew (empirically, presumably) that every prime of the form $4n+1$ is a sum of two squares. - Dear Richard, That's interesting. Other than that, isn't it true that, while he was a number theorist, primes are not present in Diophantus' work? – Gil Kalai Oct 14 at 21:11 Dear Gil, From what I read on the internet, it seems to be true that primes are not present in Diophantus' work, but I am not an expert. – Richard Stanley Oct 14 at 23:42 It's true that Diophantus does not mention the concept of prime number, but he seldom mentioned any general concept, and was content to illustrate general ideas by examples. This was enough for Fermat, who became interested in primes of the form $x^2+y^2$ after reading the remark (in Diophantus Book III, Problem 19) that 65 is a sum of two squares "due to the fact that 65 is the product of 13 and 5, each of which is the sum of two squares." – John Stillwell Oct 15 at 22:05

Gil Kalai writes: 2) Is it the case that people largely or even entirely lost their interest in the prime numbers for about fifteen centuries until Fermat? What are the facts of the matter and what are the reasons that may explain these facts. It depends on who the people here are! (a) In the Arabic-speaking world, where mathematics was alive and well, prime numbers did not lose their interest; in fact, as John Stillwell said above, the statement "Wilson's theorem" dates from that period. (b) In most of Europe, there was essentially no pure mathematics of interest throughout the Middle Ages. (About the only exception is Fibonacci, who of course got at least part of his mathematical education outside Europe.) Still, it would not surprise me if prime numbers turned out to be one of the few things in what we call number theory that was ever discussed in Western Europe during the Middle Ages. Reason: the popularity of Nicomachus's Arithmetic, translated (freely) by Boethius. Boethius' Latin version was destined to exert a great influence on subsequent encyclopedic authors of the sixth and seventh centuries and throughout the Middle Ages up to the sixteenth century. From the sixth to the twelfth century, when Greek geometry had almost vanished and science was at its lowest ebb, Boethius's Arithmetic, for all its faults, preserved the ideal of a theoretical science. Not until the thirteenth century, when Jordanus de Nemore's Arithmetic appeared in ten books, do we have a theoretical arithmetic on the Euclidean model, complete with proofs. E. Grant, A source book in medieval science, Harvard U Press, 1974. From a quick look at Nicomachus's original, it seems to be almost entirely about properties of integers, which are sometimes given a mystical or moral significance. Primality appears as one noteworthy property among several, side by side with being odd, even, triangular, pentagonal, heptagonal, perfect, superparticular, heteromecic, etc. (Nothing or almost nothing non-trivial seems to be shown about any of these.) As for Diophantus's Arithmetic, (a) it could not have an influence in Western Europe during the Middle Ages, as it was unknown there, (b) at any rate, it is largely about what we now would call the (highly ingenious!)
construction of rational maps from n-dimensional affine space to varieties. There's very little in Diophantus about integers, and that as auxiliary material. Hence the fact that he does not really discuss prime numbers as such does not tell us much. - Bhaskaracharya in his Lilavati ( a compendium of math puzzles for his daugther) has several examples that include prime numbers -
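Thabit's amicable-pair rule quoted in one of the answers above is easy to check by machine (a sketch of my own, not from the thread); for $k=2$ it yields the classical pair $(220, 284)$, and for $k=4$ and $k=7$ the pairs later attributed to Fermat and Descartes:

```python
def s(n):
    """Sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for k in range(2, 8):
    p = 3 * 2**(k - 1) - 1
    q = 3 * 2**k - 1
    r = 9 * 2**(2 * k - 1) - 1
    if is_prime(p) and is_prime(q) and is_prime(r):
        m, n = p * q * 2**k, r * 2**k
        print(k, m, n, s(m) == n and s(n) == m)  # k=2 gives 220, 284, True
```

Only $k = 2, 4, 7$ pass the primality test in this range, each printing `True` for the amicability check $s(m)=n$, $s(n)=m$.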
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9630985856056213, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/12050/need-to-refine-results-of-logarithmic-regression
# Need to refine results of logarithmic regression Using a logarithmic regression tool found at xuru.org ( http://www.xuru.org/rt/LnR.asp#CopyPaste ) and the data from below, the curve of the graph for this data is roughly described by y = 31.78303295ln(x) - 36.17569359, which has an RSS value of 10877.59526. My goal is to find an equation that describes the data accurately enough so that I can then multiply it by some smallish constant and the resulting y value will always be above, yet still reasonably close to, the expected y value. As it is, the errors for the calculated y values stray (+ or -) from the actual value in a roughly sinusoidal manner. The errors for the data below are as follows: ```expected Calculated Error 0 -37.0769275 37.0769275 1 -14.88358987 15.88358987 7 -1.901319596 8.901319596 8 20.29201803 12.29201803 16 25.22764812 9.227648123 19 33.27428831 14.27428831 20 55.46762593 35.46762593 23 65.98574081 42.98574081 111 68.44989621 42.55010379 112 90.64323384 21.35676616 113 100.2959388 14.70406121 118 109.3971665 8.602833487 121 118.5256062 2.474393843 124 127.5499778 3.549977828 127 137.1795899 10.17958994 130 146.9062597 16.9062597 143 148.3072802 5.307280243 144 170.2548897 26.25488974 170 172.8139194 2.813919413 178 179.674946 1.674946009 181 188.876822 7.876821965 182 209.6750859 27.67508586 208 212.9576731 4.957673121 216 218.3963188 2.396318798 237 226.0826215 10.91737846 261 242.3657911 18.63420891 267 260.7888986 6.211101379 275 266.8441652 8.155834795 278 276.0074897 1.992510289 281 285.2181035 4.218103464 307 289.1736918 17.8263082 310 297.2291502 12.77084978 ``` Bearing in mind that I know very little about statistics, my questions are: Why is it that when I add more data points the RSS value becomes worse? e.g. the next data point following "34239 310" is "35655 323". When added to the set below and regression is done on the updated set, I get y = 32.38336295 ln(x) - 38.48210346 with RSS=11417.26182. As the value of x increases, the results become increasingly inaccurate (namely, y consistently falls well below the target value). How should I interpret this? Given that the errors seem to fluctuate in a sine-like manner, is there some way to use this knowledge to improve the results of the function? ```data set: 1,0 2,1 3,7 6,8 7,16 9,19 18,20 25,23 27,111 54,112 73,115 97,118 129,121 171,124 231,127 313,130 327,143 649,144 703,170 871,178 1161,181 2223,182 2463,208 2919,216 3711,237 6171,261 10971,267 13255,275 17647,278 23529,281 26623,307 34239,310 ``` Edit by @PeterEllis - addition of illustrative plot showing the original fit - Maybe some plots instead of raw data? – mbq♦ Jun 17 '11 at 22:45 "Why is it that when I add more data points the RSS value becomes worse? " I think because the logarithm regression is not a good fit. Imagine adding points from a quadratic to a linear regression. Also, I think you meant "namely, y consistently falls well below the target value" – Theta30 Jun 18 '11 at 4:13 @Bogdan yes, I meant y not x. Too bad "y" isn't a six-letter word -- then I could edit the post. :) – user5076 Jun 18 '11 at 5:22 1 @jnthn It would be great if you could register your account here and on maths -- you'll gain full rights to edit your posts, post comments in your threads and claim reputation. You can do this here and here. – mbq♦ Jun 18 '11 at 7:15 @jnthn (1) Plot your (x,y) data first. The break will clearly show that no simple equation will do a decent job. (2) Your calculation of errors is misleading: those are absolute errors. 
The errors themselves are not sinusoidal. (3) Use a better fitting tool. E.g., check out Eureqa. – whuber♦ Jun 18 '11 at 21:33

## 1 Answer

Why does RSS increase the more data I add? RSS is the sum of the squared residuals. If you add another data point and that point is not perfectly on the line, RSS will go up by roughly the square of the distance of your new datum to the regression line. Don't use absolute RSS; try $R^2$ instead.

Given that the errors seem to fluctuate in a sine-like manner, is there some way to use this knowledge to improve the results of the function? The website you mentioned fits $Y = \alpha \log(x) + \epsilon$. If you think (and it almost appears so) that the non-squared errors have a sinusoidal structure, you might want to try to fit $Y = \alpha \log(x) + \beta \sin(x) + \epsilon$. See if that improves anything. Unfortunately that can't be done from that website, so you should try to get decent software for it. -
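For completeness, here is a minimal sketch (numpy; not tied to the xuru.org tool) that fits both the plain logarithmic model and the log-plus-sine variant by linear least squares and compares RSS on the question's data; the first RSS should come out near the quoted 10878, since least squares attains the minimal RSS for that model:

```python
import numpy as np

data = np.array([
    [1, 0], [2, 1], [3, 7], [6, 8], [7, 16], [9, 19], [18, 20], [25, 23],
    [27, 111], [54, 112], [73, 115], [97, 118], [129, 121], [171, 124],
    [231, 127], [313, 130], [327, 143], [649, 144], [703, 170], [871, 178],
    [1161, 181], [2223, 182], [2463, 208], [2919, 216], [3711, 237],
    [6171, 261], [10971, 267], [13255, 275], [17647, 278], [23529, 281],
    [26623, 307], [34239, 310],
])
x, y = data[:, 0].astype(float), data[:, 1].astype(float)

def fit_rss(design):
    """Least-squares fit y ~ design @ coef; return coefficients and RSS."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    return coef, float(resid @ resid)

ones = np.ones_like(x)
_, rss_log = fit_rss(np.column_stack([np.log(x), ones]))
_, rss_logsin = fit_rss(np.column_stack([np.log(x), np.sin(x), ones]))
print(f"RSS, a*ln(x)+b:          {rss_log:.1f}")
print(f"RSS, a*ln(x)+c*sin(x)+b: {rss_logsin:.1f}")
```

Comparing the two printed values shows directly whether the sine term buys anything for this data set.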
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8964828252792358, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/17095-counting-permutation-combinations.html
# Thread:

1. ## Counting, Permutation, and Combinations

Guys, I have several questions that I would like to ask about:

1. In how many ways can a photographer at a wedding arrange 6 people in a row from a group of 10 people, if the bride and the groom are among these 10 people, if: a. the bride must be in the picture b. exactly one of the bride and the groom is in the picture
2. How many ways are there for 10 women and six men to stand in a line so that no two men stand next to each other?
3. A professor writes 40 discrete mathematics true/false questions. Of these statements in these questions, 17 are true. If the questions can be positioned in any order, how many different answer keys are possible?

My answers so far: 1a. Because the bride is already included among the 6 people, there are 5 more people that we can choose and arrange from, so it's 5!. I am not sure if this is right. 1b. I don't quite understand the question. 2. P(10,10) x P(11,6) 3. C(40,17) I am not quite sure of all these answers, but suggestions/help with my questions are highly appreciated.

2. a. the bride must be in the picture Let us divide this problem into cases. Code: ```BXXXXX XBXXXX XXBXXX XXXBXX XXXXBX XXXXXB``` Where "X" stands for any other person, and "B" stands for the bride. We will find the number of such cases and then add all of them up together. In the first case there are $(1)(9)(8)(7)(6)(5) = 15120$. Similarly in each case we have the same number of arrangements. So in total we have $6\cdot 15120 = 90720$. b. exactly one of the bride and the groom is in the picture Again divide this into cases. Case 1 is that the bride is in the picture and the groom is not. Case 2 is that the bride is not in the picture and the groom is. Then add those results together. Each subcase is similar to the problem we just did above.

3. Originally Posted by ThePerfectHacker Again divide this into cases. Case 1 is that the bride is in the picture and the groom is not. Case 2 is that the bride is not in the picture and the groom is. Then add those results together. As for case 1: the bride is in the picture but the groom is not, 1 x 8 x 7 x 6 x 5 x 4 = 6,720. 6,720 x 6 = 40,320. I start with 8 because, with the groom excluded, there are only 9 people instead of 10, and the bride already occupies a position, leaving 8 candidates. And case 2 (the bride is not in the picture but the groom is) has the same number of possibilities as case 1, right? Then we just add case 1 and case 2?

4. Originally Posted by TheRekz As for case 1: 1 x 8 x 7 x 6 x 5 x 4 = 6,720, and 6,720 x 6 = 40,320; case 2 has the same number of possibilities, and we just add case 1 and case 2? I agree with you.

5. What if now the question is that both the bride and the groom must be in the picture: (1 x 1 x 8 x 7 x 6 x 5) = 1,680 x 6 = 10,080. Is this right?

6. Hello, TheRekz! 1.
In how many ways can a photographer at a wedding arrange 6 people in a row from a group of 10 people, if the bride and the groom are among these 10 people, and: a. the bride must be in the picture b. exactly one of the bride and the groom is in the picture

a) The bride is in the picture. Select five more from the other nine people: $C(9,5)$ ways. These six people can be arranged in $6!$ ways. Therefore, there are $C(9,5) \times 6!$ ways.

b) Either the bride or the groom is in the picture (but not both). There are 2 choices for the newlywed to be in the picture. The other 5 people are chosen from the remaining 8 people (not the other newlywed). There are $C(8,5)$ ways. The six people can be arranged in $6!$ ways. Therefore, there are $2 \times C(8,5) \times 6!$ ways.

2. How many ways are there for 10 women and six men to stand in a line so that no two men stand next to each other?

Your answer is correct! Here's my reasoning. Arrange the ten women in a row. There are $P(10,10) = 10!$ possible arrangements. Leave spaces between the women: $\_\;W\;\_\;W\;\_\;W\;\_\;W\;\_\;W\;\_\;W\;\_\;W\;\_\;W\;\_\;W\;\_\;W\;\_$ Arrange the six men in any six of the eleven spaces. There are $P(11,6)$ ways. Therefore, there are $P(10,10) \times P(11,6)$ ways.

3. A professor writes 40 discrete mathematics true/false questions. Of these statements in these questions, 17 are true. If the questions can be positioned in any order, how many different answer keys are possible?

Your answer is correct: $C(40,17)$.

7. Originally Posted by TheRekz What if now the question is that both the bride and the groom must be in the picture: (1 x 1 x 8 x 7 x 6 x 5) = 1,680 x 6 = 10,080. Is this right?

If both the bride and groom are in the picture then: fix the bride in the first position: BGXXXX = 1*1*8*7*6*5 BXGXXX = 1*8*1*7*6*5 . . . BXXXXG = 1*8*7*6*5*1 Then there are 5*(1*1*8*7*6*5) ways to arrange them with the bride in the first position. Now repeat with the bride in the second, third, ..., sixth position and you get the same result each time. Therefore, if you add these all up you get (5*(1*1*8*7*6*5))*6 = 50,400
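The closed-form counts above are easy to sanity-check by brute force; here is a small sketch (my own, not from the thread) enumerating all ordered rows of 6 drawn from the 10 people:

```python
from itertools import permutations

people = range(10)        # label the bride 0 and the groom 1
bride, groom = 0, 1

# All ordered arrangements of 6 of the 10 people: P(10,6) = 151200 of them.
arrangements = list(permutations(people, 6))

with_bride  = sum(bride in a for a in arrangements)
exactly_one = sum((bride in a) != (groom in a) for a in arrangements)
both        = sum(bride in a and groom in a for a in arrangements)

print(with_bride)   # 90720 = C(9,5) * 6!
print(exactly_one)  # 80640 = 2 * C(8,5) * 6!
print(both)         # 50400 = 6 * 5 * P(8,4)
```

Note that the corrected per-case count 40,320 from post 3 doubles to the 80,640 the brute force reports for "exactly one", matching Soroban's $2 \times C(8,5) \times 6!$.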
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9351323246955872, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/33911/entanglement-is-it-possible-to-prepare-and-reset-probabilities-to-send-informat
# Entanglement: Is it possible to prepare and reset probabilities to send information?

I'm pretty certain that the answer to the question in the title is a no, but I don't understand why. I have some basic misunderstanding of quantum processes that I'd like clarified in the form of asking questions about a hypothetical scenario. So, let's say we have a pair of entangled photons… 1) Is it possible to prepare an entangled pair such that it can be known that, for example, the photon going down a certain path will be measured 25% of the time to be horizontally polarized and 75% of the time to be vertically polarized? I guess this would be some sort of quantum decoherence? Partial collapse? 2) If that is possible, then would it also be possible to measure that photon's diagonal polarization in order to "reset" the horizontal-vertical probability back to 50-50? I've seen a trick done where you take some polarizing filters and stack them horizontal on top of vertical, so that no light gets through. But when you slide a diagonally oriented filter between them, some light does get through, because the horizontal/vertical polarization information gets "reset" when measured against the diagonal filter. 3) If these two things are possible, then wouldn't it be possible to send information along the entangled pair? If you sent bursts of 1000 photons and were able to measure them all, and found that the first burst had ~750 vertically polarized photons but the second burst had ~500 vertically polarized photons, then wouldn't you know that the second burst had their diagonal measurements taken? Or would that not work because the extra measurements "break" the entanglement..? 4) Along with that, how many times can the photon pairs be measured before they are no longer entangled? Just once? Or maybe just once per state/qubit/aspect (such as polarization angle)? I understand that measuring the photon would necessarily change it (like with the polarization filter trick), but then how do scientists measure the polarization angles of two photons like in experiments on Bell's inequality? I know that somewhere along the line, what I've described cannot be possible. If it were, then you could easily send information faster than light. E.g., say you had a photon-entanglement-generator-and-probability-fixer station in between two planets, 1 light year from the planet Foo and .99 light years from the planet Bar. After a year's time, after the photon stream has propagated to both planets, scientists on Bar could measure the diagonal polarization of 1000 photons to encode a 1, or not measure to encode a 0. Then scientists on the planet Foo could measure vertical polarization and decode the 1s and 0s based on the statistical readings. The Bar scientists could encode such a message and the Foo scientists would "instantly" receive the message. Except it would actually be from the past, since Foo is 1.99 years outside the Bar light cone. So since this is impossible, what am I misunderstanding? Thanks for answering, and thanks for putting up with yet another clueless but curious layman.

- my guess is that you can do 1), but once you do 2) to one of the photons in the pair, it becomes disentangled from its remote cousin, which will stay in a 25%-75% distribution – lurscher Aug 10 '12 at 16:02

## 3 Answers

• 1.
Without remarks as to how you would best prepare this state experimentally with photons, a pure entangled state of the form $$|\phi\rangle \;=\; \tfrac{1}{2} \Bigl( |H\rangle_L |V\rangle_R \;-\; \sqrt3\, |V\rangle_L |H\rangle_R \Bigr)$$ captures the polarization situation you describe. (To consider this as an abstract two-qubit state, replace "H" and "V" with "0" and "1".) This is not a "maximally" entangled state, but as it doesn't factor into independent states of the two photons, it is nevertheless entangled. • 2 & 3. Yes, filtering the photon in one of the beams (e.g. the left-hand beam) gives rise to a photon in a uniform superposition of horizontal and vertical. However, there are no useful correlations between this "reset" photon and the photon in the right-hand beam. Let's consider a filter for the polarization along the angle +45°, which is a projective measurement onto the basis $|\mathbin\nearrow\rangle, |\mathbin\searrow\rangle$. We have $$|H\rangle = \tfrac{1}{\sqrt 2} \Bigl(|\mathbin\nearrow\rangle + |\mathbin\searrow\rangle\Bigr), \qquad |V\rangle = \tfrac{1}{\sqrt 2} \Bigl(|\mathbin\nearrow\rangle - |\mathbin\searrow\rangle\Bigr),$$ so that projecting the left-hand beam of the state $|\phi\rangle$ onto the state $|\mathbin\nearrow\rangle$ yields the (un-normalised) state $$\begin{align*} |\psi\rangle \;&=\; \tfrac{1}{2} \Bigl( \tfrac{1}{\sqrt 2}|\mathbin\nearrow\rangle_L |V\rangle_R \;-\; \tfrac{\sqrt3}{\sqrt 2}\, |\mathbin\nearrow\rangle_L |H\rangle_R \Bigr) \\&=\;|\mathbin\nearrow\rangle_L\;\otimes\;\tfrac{1}{2\sqrt 2}\Bigl(|V\rangle_R - \sqrt 3 |H\rangle_R\Bigr) \end{align*}$$ so that the left and right beams now share no entanglement whatsoever. Whatever further operations and observations you perform on the left, there will be no correlation that is observable on the right. The photons in the left and right beams have no enduring sort of connection or non-local interaction: their initial states were merely correlated — and as soon as you performed the measurement, you destroyed even that correlation which there was. Note that if you had instead filtered for the other polarization, you would instead obtain a state $$\begin{align*} |\psi'\rangle \;&=\; \tfrac{1}{2} \Bigl( \tfrac{1}{\sqrt 2}|\mathbin\searrow\rangle_L |V\rangle_R \;+\; \tfrac{\sqrt3}{\sqrt 2}\, |\mathbin\searrow\rangle_L |H\rangle_R \Bigr) \\&=\;|\mathbin\searrow\rangle_L\;\otimes\;\tfrac{1}{2\sqrt 2}\Bigl(|V\rangle_R + \sqrt 3 |H\rangle_R\Bigr) ; \end{align*}$$ notice that the state of the photon in the right-hand beam is different than if we selected $|\mathbin\nearrow\rangle$. That's because the entangled state $|\phi\rangle$ shows correlations not just in the H/V basis, but in every basis of measurement. While the person measuring the left-hand beam gets no information about what someone on the right might measure in the H/V basis — they know that it will be biased towards H, but their measurement outcome of $|\mathbin\nearrow\rangle$ or $|\mathbin\searrow\rangle$ will give them no further information — they do know what state the photon in the right-hand beam has collapsed to. That is in fact the defining characteristic of entanglement, as opposed to classical randomness: the correlation extends to multiple bases of measurement. • 4. Each independent degree of freedom can sustain correlations with other degrees of freedom, on the same particle or other particles, in principle.
In this case, all of the entanglement that the system had, aside from correlated uncertainties in their momenta (which you implicitly perform a weak measurement on, just by virtue of making a polarization measurement at a particular location after it was emitted), is in the polarizations. As the polarization of each beam is entirely involved in what entanglement there is (in a multi-particle scenario, you can consider entanglement spread across the system in a way that some particles are more involved in the entangled state than others), any single measurement will destroy all entanglement present. -

Niel's answer shows in particular that your scheme does not work. But in fact you can show that as long as you cannot control the outcome of the measurement on Alice's side, Bob cannot learn anything about Alice's choice of measurement. The only thing he can learn is what outcome Alice has obtained or will obtain, but that is random and outside her control. To see that what I am saying is correct, suppose we have some set of measurement operators $M_k \otimes 1$, where the first part of the Hilbert space is on Alice's side and the second part is on Bob's (and 1 is the identity operator on Bob's system). The measurement operators tell you what the state of the system is after doing the measurement. Now, provided measurement outcomes cannot be controlled, the average state after Alice has conducted her measurement is $$\sum_k (M_k \otimes 1) \rho (M_k \otimes 1)^\dagger.$$ Bob's part of the state then is $$\sum_k Tr_A\left[ (M_k^\dagger M_k \otimes 1) \rho\right]$$ due to the cyclic property of the trace, and since $\sum_k M_k^\dagger M_k = 1$ (probabilities sum to one), we have $$\sum_k Tr_A\left[ (M_k^\dagger M_k \otimes 1) \rho\right] = Tr_A\left[\rho\right]$$ So as far as Bob is concerned, the fact that Alice has conducted the measurement changed nothing at all about his measurement statistics. The situation would be different if Alice could control the outcome, as then the sum would not need to be taken and the operators would not sum to 1. Quantum mechanics predicts this is not possible and the current experimental evidence seems to support this view. But it could be that one day experimental evidence will falsify quantum mechanics and FTL communication will be possible. Until then, entanglement is not useful for FTL communication. -

1) Yes, with Quantum Non-Demolition Measurements. 4) The pair gets disentangled after measurement. Maybe these can help: "Partial recovery of entanglement in bipartite entanglement transformations", Somshubhro Bandyopadhyay; "Partial Recovery of Quantum Entanglement", Runyao Duan. -
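Niel's computation is easy to check numerically (a sketch of my own with numpy, using qubit vectors H = (1,0) and V = (0,1); it is not from the original answers): project the left photon onto the +45° state and verify that the post-measurement state factorizes, i.e., the right photon's reduced state is pure and carries no remaining correlation:

```python
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
diag_up = (H + V) / np.sqrt(2)                 # the +45° state

# |phi> = (1/2)(|H>|V> - sqrt(3)|V>|H>), as a 4-vector (left tensor right)
phi = 0.5 * (np.kron(H, V) - np.sqrt(3) * np.kron(V, H))

# Probability of measuring H on the left photon: the 25%/75% split
P_H_left = np.kron(np.outer(H, H), np.eye(2))
print(phi @ P_H_left @ phi)                    # 0.25

# Project the left photon onto the +45° state and renormalize
proj = np.kron(np.outer(diag_up, diag_up), np.eye(2))
psi = proj @ phi
psi /= np.linalg.norm(psi)

# Reduced state of the right photon; purity Tr(rho^2) = 1 means it is pure,
# so no entanglement survives the filtering
M = psi.reshape(2, 2)                          # amplitudes indexed (left, right)
rho_right = M.conj().T @ M
print(np.trace(rho_right @ rho_right))         # 1.0 (disentangled)
```

The first print confirms the 25% horizontal probability asked about in question 1; the second confirms the "measurement destroys the correlation" point of the answers.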
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950203537940979, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/category/algebra/ring-theory/structure-of-rings/
# The Unapologetic Mathematician

## Local Rings

Sorry for the break last Friday. As long as we're in the neighborhood — so to speak — we may as well define the concept of a "local ring". This is a commutative ring which contains a unique maximal ideal. Equivalently, it's one in which the sum of any two noninvertible elements is again noninvertible. Why are these conditions equivalent? Well, if we have noninvertible elements $r_1$ and $r_2$ with $r_1+r_2$ invertible, then these elements generate principal ideals $(r_1)$ and $(r_2)$. If we add these two ideals, we must get the whole ring, for the sum contains $r_1+r_2$, and so must contain $1$, and thus the whole ring. Thus $(r_1)$ and $(r_2)$ cannot both be contained within the same maximal ideal, and thus we would have to have two distinct maximal ideals. Conversely, if the sum of any two noninvertible elements is itself noninvertible, then the noninvertible elements form an ideal. And this ideal must be maximal, for if we throw in any other (invertible) element, it would suddenly contain the entire ring. Why do we care? Well, it turns out that for any manifold $M$ and point $p\in M$ the algebra $\mathcal{O}_p$ of germs of functions at $p$ is a local ring. And in fact this is pretty much the reason for the name "local" ring: it is a ring of functions that's completely localized to a single point. To see that this is true, let's consider which germs are invertible. I say that a germ represented by a function $f:U\to\mathbb{R}$ is invertible if and only if $f(p)\neq0$. Indeed, if $f(p)=0$, then $f$ is certainly not invertible. On the other hand, if $f(p)\neq0$, then continuity tells us that there is some neighborhood $V$ of $p$ on which $f$ is nowhere zero. Restricting $f$ to this neighborhood if necessary, we have a representative of the germ which never takes the value zero. And thus we can define a function $g(q)=\frac{1}{f(q)}$ for $q\in V$, which represents the multiplicative inverse to the germ of $f$. With this characterization of the invertible germs in hand, it should be clear that any two noninvertible germs represented by $f_1$ and $f_2$ must have $f_1(p)=f_2(p)=0$. Thus $f_1(p)+f_2(p)=0$, and the germ of $f_1+f_2$ is again noninvertible. Since the sum of any two noninvertible germs is itself noninvertible, the algebra $\mathcal{O}_p$ of germs is local, and its unique maximal ideal $\mathfrak{m}_p$ consists of those functions which vanish at $p$. Incidentally, we once characterized maximal ideals as those for which the quotient $R/I$ is a field. So which field is it in this case? It's not hard to see that $\mathcal{O}_p/\mathfrak{m}_p\cong\mathbb{R}$ — any germ is sent to its value at $p$, which is just a real number.

## Rings

Okay, I know I've been doing a lot more high-level stuff this week because of the $E_8$ thing, but it's getting about time to break some new ground. A ring is another very well-known kind of mathematical structure, and we're going to build it from parts we already know about. First we start with an abelian group, writing this group operation as $+$. Of course that means we have an identity element ${}0$, and inverses (negatives). To this base we're going to add a semigroup structure. That is, we can also "multiply" elements of the ring by using the semigroup structure, and I'll write this as we usually write multiplication in algebra. Often the semigroup will actually be a monoid — there will be an identity element $1$. We call this a "ring with unit" or a "unital ring".
Some authors only ever use rings with units, and there are good cases to be made on each side. Of course, it's one thing to just have these two structures floating around. It's another thing entirely to make them interact. So I'll add one more rule to make them play nicely together: $(a+b)(c+d) = ac+ad+bc+bd$ This is the familiar distributive law from high school algebra. Notice that I'm not assuming the multiplication in a ring to be invertible. In fact, a lot of interesting structure comes from elements that have no multiplicative inverse. I'm also not assuming that the multiplication is commutative. If it is, we say the ring is commutative. The fundamental example of a ring is the integers $\mathbb{Z}$. I'll soon show its ring structure in my thread of posts directly about them. Actually, the integers have a lot of special properties we'll talk about in more detail. The whole area of number theory basically grew out of studying this ring, and much of ring theory is an attempt to generalize those properties.
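A concrete way to see the "sum of noninvertibles is noninvertible" criterion at work (a sketch of my own; the rings $\mathbb{Z}/n$ are not discussed in the posts): in $\mathbb{Z}/n$ the noninvertible elements are exactly those sharing a factor with $n$, and checking closure under addition singles out the prime powers as the local rings in this family:

```python
from math import gcd

def is_local(n):
    """Check whether Z/n is a local ring: the noninvertible elements
    (those not coprime to n) must be closed under addition mod n."""
    nonunits = {a for a in range(n) if gcd(a, n) != 1}
    return all((a + b) % n in nonunits for a in nonunits for b in nonunits)

for n in range(2, 17):
    if is_local(n):
        print(n, end=" ")   # 2 3 4 5 7 8 9 11 13 16 -- exactly the prime powers
```

For instance $\mathbb{Z}/6$ fails because $2$ and $3$ are both noninvertible while $2+3=5$ is a unit, whereas in $\mathbb{Z}/4$ the noninvertible elements $\{0,2\}$ are closed under addition.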
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 42, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287198185920715, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/23335/list
## Return to Answer

3 added 404 characters in body

José is correct in his comment. Just to elaborate: in the linear case, one can easily study the equation using Fourier methods. Let $\tilde{u}$ denote the space-time Fourier transform and $\hat{u}$ denote the spatial Fourier transform; the equation with $p=1$ can be written as $$(\tau - \xi^3 + \xi)\tilde{u} = 0$$ or $$\partial_t\hat{u} = i(\xi^3 - \xi)\hat{u}$$ The first formulation tells you that the space-time Fourier transform of a solution is a measure supported on the curve $\tau = \xi^3 - \xi$ in frequency space. That the frequency support has curvature implies that the solution should decay in time in physical space, justifying José's comment. (Look up Fourier restriction theorems or Strichartz estimates in the literature for more details.) We can also see the temporal decay directly from the second formulation using oscillatory integral techniques. The solution can be written as $$\hat{u}(t,\xi) = e^{i(\xi^3 - \xi)t}\hat{u}_0(\xi)$$ It is then a standard exercise to show that, given initial data of sufficient decay in frequency space (say, the frequency has compact support), taking the inverse Fourier transform of the above solution gives you something with decay in $L^\infty$. If a solution decays in $L^\infty$, it cannot be a soliton. We can also see the lack of solitons by posing the traveling wave ansatz $u(t,x) = f(\omega t + x)$. A simple computation shows that the function $f$ must solve $$-(\omega + 1)f' = f''' \implies f = \exp [ i \sqrt{\omega + 1}(\omega t + x) ]$$ so that traveling waves cannot be spatially localized. In fact, the traveling wave ansatz gives us a dispersion relation for this equation: waves of spatial frequency $\beta$ travel with velocity $\omega = \beta^2 - 1$. The fact that different frequency components of the wave tend to travel at different velocities illustrates why, starting with a pulsed wave packet, the solution will become wider and wider while its height gets smaller and smaller. In short, I think the reason why this equation is not heavily studied in the literature is that, as a linear PDE in (1+1) dimensions, it is not really all that interesting to look at.
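The spreading described above is easy to see numerically; here is a sketch (my own, with arbitrary grid and initial data) that evolves a Gaussian pulse via the stated Fourier multiplier $\hat{u}(t,\xi) = e^{i(\xi^3 - \xi)t}\hat{u}_0(\xi)$ and watches the sup-norm decay:

```python
import numpy as np

N, L = 4096, 400.0                               # grid points, domain length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # spatial frequencies xi

u0 = np.exp(-x**2)                               # a localized pulse
u0_hat = np.fft.fft(u0)

for t in [0.0, 2.0, 5.0, 10.0]:
    # Exact evolution of the Fourier multiplier from the answer
    u_hat = np.exp(1j * (k**3 - k) * t) * u0_hat
    u = np.fft.ifft(u_hat)
    print(f"t = {t:5.1f}   max|u| = {np.abs(u).max():.4f}")
```

The printed maxima shrink as $t$ grows while the pulse spreads, which is the dispersive decay (and absence of solitons) the answer argues for.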
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271177053451538, "perplexity_flag": "head"}
http://alanrendall.wordpress.com/2010/01/28/invariant-manifolds/
# Hydrobates

A mathematician thinks aloud

## Invariant manifolds

Here I want to say something about invariant manifolds of flows and diffeomorphisms. There are close connections between the two. I feel a closer attachment to the first (continuous evolution) than the second (discrete evolution) and so I will tend to emphasize it. I have known various things about invariant manifolds and used them in my work for many years. Just recently I was able to add some small things to my knowledge on the subject, which has given me the feeling that I have a more global view. A treatment of this subject which I found very helpful is that given in lecture notes of Grant. I am interested here in the situation of a smooth dynamical system on $R^n$ with a stationary point. Let its linearization at that point be denoted by $A$. Let $a$ be a real number such that no eigenvalue of $A$ has real part $a$. Then the eigenvalues can be split into the sets with real parts less than and greater than $a$ respectively. The generalized eigenvectors corresponding to eigenvalues in the first set define a subspace $E^s_a$ called the pseudostable subspace. The pseudostable manifold theorem says that if the system is $C^k$ for some $k\ge 1$ there is a $C^k$ submanifold $V^s_a$ passing through the stationary solution which is invariant and whose tangent space at the stationary point is $E^s_a$. If $a=0$ the terminology is simplified by omitting the prefix 'pseudo' and this gives rise to the more widely known stable manifold theorem. By reversing the direction of time it is possible to get corresponding statements replacing 'stable' by 'unstable'. If $a>0$ and there are no eigenvalues whose real parts lie in the interval $(0,a)$ then $V^s_a$ is called a centre-stable manifold $V^{cs}$. The intersection of a centre-stable and a centre-unstable manifold is called a centre manifold. The pseudostable manifold is uniquely determined in a small neighbourhood of the stationary point if $a<0$. The other invariant manifolds are in general not unique. These results for continuous time dynamical systems have analogues for a diffeomorphism (which by iteration defines a discrete dynamical system). It is merely necessary to replace the additive inequalities on the real parts of the eigenvalues by multiplicative inequalities on the moduli of the eigenvalues. There are two common methods to prove the stable manifold theorem. The first is called the Lyapunov-Perron method and is analytical in flavour, while the second, called the graph transform and due to Hadamard, is more geometrical. The first method starts by writing down an integral equation. It is then proved that any solution of this integral equation is a solution of the dynamical system which lies on the stable manifold. The stable manifold is obtained as a union of solutions of this type. I found the proof of these statements rather easy to follow. What disturbed me was that I did not at all see where the integral equation comes from. Fortunately in his lecture notes Grant gives an elementary step by step description of how to get to that integral equation, starting from the solution formula for an inhomogeneous linear ODE (Duhamel's formula). The second method represents the stable manifold as a graph over the stable subspace. It defines an iteration for the function describing this graph. The map from one iterate to the next is given by the time-one flow of the system.
If you think about this by drawing a picture for a saddle point in the two-dimensional case it is very plausible that it works. The actual proof can be done by noting that the time-one map is a diffeomorphism whose stable manifold is identical with the manifold being sought. So this proof reduces the continuous time case to the discrete time case. Up to now I have been talking about a single dynamical system. There are useful extensions to systems which depend on a parameter $\lambda$. There is a trick which I had seen before but never really appreciated the importance of. Suppose we have a system $\dot x=f(x,\lambda)$ for $x\in R^n$. Augment it by the equation $\dot\lambda=0$. Then we have a dynamical system on $R^{n+1}$. If for $\lambda=0$ the system has a stationary point at $x_0$ then we can study invariant manifolds for the augmented system about the point $(x_0,0)$. Considering for instance the centre-stable manifold in a situation of this type can give valuable information about the way in which solutions near that point change when $\lambda$ passes through zero.
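As an aside, let me record the shape this integral equation usually takes, since it is hard to guess at first sight. This is only a sketch in notation of my own choosing, not taken from Grant's notes. Write the system near the stationary point as $\dot x=Ax+f(x)$, where $f$ collects the nonlinear terms, and let $P_s$ and $P_u$ denote the spectral projections onto the stable and unstable subspaces of $A$. The Lyapunov-Perron equation is then $$x(t)=e^{tA}P_s x(0)+\int_0^t e^{(t-s)A}P_s f(x(s))\,ds-\int_t^{\infty}e^{(t-s)A}P_u f(x(s))\,ds.$$ Comparing with Duhamel's formula shows that a solution of this equation solves the differential equation, and the decay of $e^{tA}$ on the stable subspace together with the backwards integral over the unstable part forces such solutions to stay bounded, which is why their union sweeps out the stable manifold.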
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 27, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946931779384613, "perplexity_flag": "head"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/G02/g02bwc.html
# NAG Library Function Document: nag_cov_to_corr (g02bwc)
## 1  Purpose
nag_cov_to_corr (g02bwc) calculates a matrix of Pearson product-moment correlation coefficients from sums of squares and cross-products of deviations about the mean.
## 2  Specification
#include <nag.h>
#include <nagg02.h>
void nag_cov_to_corr (Integer m, double r[], NagError *fail)
## 3  Description
nag_cov_to_corr (g02bwc) calculates a matrix of Pearson product-moment correlation coefficients from sums of squares and cross-products about the mean for observations on $m$ variables which can be computed by a single call to nag_sum_sqs (g02buc) or a series of calls to nag_sum_sqs_update (g02btc). The sums of squares and cross-products are stored in an array packed by column and are overwritten by the correlation coefficients. Let ${c}_{jk}$ be the cross-product of deviations from the mean, for $\mathit{j}=1,2,\dots ,m$ and $\mathit{k}=j,\dots ,m$; then the product-moment correlation coefficient ${r}_{jk}$ is given by $$r_{jk}=\frac{c_{jk}}{\sqrt{c_{jj}c_{kk}}}.$$
## 4  References
None.
## 5  Arguments
1: m – Integer. Input. On entry: $m$, the number of variables. Constraint: ${\mathbf{m}}\ge 1$.
2: r[$\left({\mathbf{m}}×{\mathbf{m}}+{\mathbf{m}}\right)/2$] – double. Input/Output. On entry: contains the upper triangular part of the sums of squares and cross-products matrix of deviations from the mean. These are stored packed by column, i.e., the cross-product between variable $j$ and $k$, $k\ge j$, is stored in ${\mathbf{r}}\left[\left(k×\left(k-1\right)/2+j\right)-1\right]$. On exit: Pearson product-moment correlation coefficients. These are stored packed by column corresponding to the input cross-products.
3: fail – NagError *. Input/Output. The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6  Error Indicators and Warnings
NE_BAD_PARAM: On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT: On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{m}}\ge 1$.
NE_INTERNAL_ERROR: An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_ZERO_VARIANCE: On entry, a variable has zero variance.
## 7  Accuracy
The accuracy of nag_cov_to_corr (g02bwc) is entirely dependent upon the accuracy of the elements of array r.
## 8  Further Comments
nag_cov_to_corr (g02bwc) may also be used to calculate the correlations between parameter estimates from the variance-covariance matrix of the parameter estimates as is given by several functions in this chapter.
## 9  Example
A program to calculate the correlation matrix from raw data. The sums of squares and cross-products about the mean are calculated from the raw data by a call to nag_sum_sqs (g02buc). The correlation matrix is then calculated from these values.
### 9.1  Program Text
Program Text (g02bwce.c)
### 9.2  Program Data
Program Data (g02bwce.d)
### 9.3  Program Results
Program Results (g02bwce.r)
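The packed storage scheme above is straightforward to reproduce outside the library. The following Python sketch (my own illustration, not NAG code; the function name is invented) applies the correlation formula in place to an upper triangle packed by column, using the documented index $(k(k-1)/2+j)-1$ for $1\le j\le k\le m$:

```python
import math

def cov_to_corr_packed(m, r):
    """Convert packed sums-of-squares/cross-products to correlations.

    r has length m*(m+1)//2 and holds the upper triangle packed by
    column: element (j, k) with 1 <= j <= k <= m sits at index
    k*(k-1)//2 + j - 1, matching the g02bwc documentation.
    Overwrites r in place, like the library routine.
    """
    def idx(j, k):  # 1-based indices, j <= k
        return k * (k - 1) // 2 + j - 1

    # Capture the diagonal c_jj first; zero variance is an error.
    diag = [r[idx(j, j)] for j in range(1, m + 1)]
    if any(d <= 0.0 for d in diag):
        raise ValueError("a variable has zero variance")

    for k in range(1, m + 1):
        for j in range(1, k + 1):
            r[idx(j, k)] /= math.sqrt(diag[j - 1] * diag[k - 1])
    return r

# Example: 2 variables with c11 = 4, c12 = 2, c22 = 9 -> r12 = 2/6.
print(cov_to_corr_packed(2, [4.0, 2.0, 9.0]))  # [1.0, 0.333..., 1.0]
```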
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6624400615692139, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/206008-similar-triangles-question.html
# Thread: 1. ## Similar Triangles question Hello! I am writing a program that requires some geometry and nothing I have found online gives me a clear answer... I have a point, let's say at (3,2). I am animating moving that point to the origin at (0,0), and then back to (3,2). I understand that the side lengths will remain proportional to each other, but I can't figure out how I might be able to create a sort of rate function, I guess? To be able to calculate the X and Y coordinates all along the path. I need to be able to manipulate each one of them somehow... the animation is continuously updating so I need to be able to decrement and increment the X and Y accordingly... any ideas? Thanks guys! 2. ## Re: Similar Triangles question The distance from the origin to a point at (x,y) is $r= \sqrt {x^2 + y^2}$. Given that the point (3,2) is on the line that connects (3,2) to the origin, the equation of that line is y = (2/3)x. So the distance from the origin to any point on the line is $r = \sqrt{x^2 + \frac 4 9 x^2} = \sqrt{\frac {13} 9} x =\frac { \sqrt{13} x} 3$. The rate at which 'r' changes is therefore equal to $\frac {\sqrt {13}} 3$ times the rate at which x changes. Is that what you're looking for?
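For the animation itself, the usual trick is to parametrize the segment rather than update x and y independently: every point on the path from the origin to (3, 2) is $t\cdot(3,2)$ for $t\in[0,1]$, so one parameter drives both coordinates and they stay proportional automatically. A minimal Python sketch (function and variable names are mine, not from the thread):

```python
def point_on_path(t):
    """Position along the segment from (0, 0) to (3, 2); t in [0, 1]."""
    return 3.0 * t, 2.0 * t

# Animate out and back: sweep t from 1 down to 0, then 0 back up to 1.
steps = 10
for i in range(steps, -1, -1):   # (3, 2) -> origin
    print(point_on_path(i / steps))
for i in range(1, steps + 1):    # origin -> (3, 2)
    print(point_on_path(i / steps))
```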
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439066052436829, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/30110/injective-r-module-homomorphism-vs-injective-ring-homomorphism
# injective $R$-module homomorphism vs. injective ring homomorphism The following question has been lingering in my mind for months. Let $R$ be a non-zero commutative ring with $1$. Consider $\phi : R^n \rightarrow R^m$, 1) as an injective $R$-module homomorphism. 2) as an injective ring homomorphism. (by definition $\phi(1)=1$.) In which of the above cases can we deduce that $n \leq m$, and why? - Do you have a particular $\phi$ in mind? If so, which one? – Pete L. Clark Apr 2 '11 at 18:38 No, I don't. I'm just trying to find the complete solution for the first case. – Ehsan M. Kermani Apr 3 '11 at 18:07 ## 2 Answers Let $R = \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \times \ldots \times \mathbb{Z}/2\mathbb{Z}$ (infinitely many times). Then as a ring $R^m = R^n = R$ $\forall m, n \in \mathbb{N}$. So the answer is negative in case 2. But in the first case the answer is YES, although the proof for a general commutative ring is complicated. Here I am giving an easy proof assuming $R$ is a commutative noetherian ring. After localizing at a minimal prime ideal we may assume that $R$ is a zero-dimensional local ring, i.e., an artinian ring, and $\phi : R^n \rightarrow R^m$ is an injective $R$-module homomorphism. Now the length of $R$ as an $R$-module is finite and equal to $l$ (say). Then, since length is additive over direct sums and a submodule has length at most that of the ambient module, comparing the lengths of both sides we have $ln \leq lm$. This means that $n \leq m$. - Interesting. I always assumed you needed $R$ to be an integral domain or something for case 1. – Matt Mar 31 '11 at 16:53 No. I do not need $R$ to be an integral domain. Localization preserves exactness. So first I have simplified the problem. Then the comparison of lengths has done the job. – Anjan Gupta Mar 31 '11 at 17:08 Thanks. I appreciate it. It is a nice and important special case, but do you know the outline of the proof for the general case? – Ehsan M. Kermani Apr 2 '11 at 8:19 Give me some time. I have to search my old notes for the solution. – Anjan Gupta Apr 2 '11 at 18:25 Let $S$ be your favorite non-zero commutative ring with $1$ and let $R$ be the product of countably many copies of $S$. Then $R^n$ is the product of countably many copies of $S$ for any $n\in\mathbb{N}$, since the union of finitely many countable sets is countable. Therefore $R^n$ and $R^m$ are isomorphic as rings for all $m,n\in\mathbb{N}$, so without further assumptions we cannot deduce $n\leq m$ in case $(2)$. EDIT: I had originally written that this argument also applies in case $(1)$, but it does not; see the comments below. - – Noah Stein Mar 31 '11 at 12:09 Concerning the IBN question (which is about $R$-modules): you need to look at $(S^\infty)^n$ as a module over $S^\infty$ -- not just over $S$. – Rasmus Mar 31 '11 at 12:34 @Rasmus: Ah, thanks. Shall I go ahead and delete the answer, then? – Noah Stein Mar 31 '11 at 13:01 Well, as Anjan Gupta notes, your reasoning is correct for question (ii). For question (i) it's nice to have your link to the relevant wikipedia article. – Rasmus Mar 31 '11 at 16:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359968304634094, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/76422/list
## Return to Question 4 edited tags 3 added 334 characters in body Let $G$ be a group of automorphisms of the countable atomless Boolean algebra $B$. Suppose that every orbit of $G$ on $B$ is an antichain. Does it follow that $G$ preserves a non-zero (probability) measure on $B$? Does the answer change if we extend $B$ to some complete or $\sigma$-complete algebra, and the action of $G$ extends to one in which orbits are still antichains? I'm also interested in group actions that satisfy a very different condition: $G$ is a group such that for some (all) $a \in B \setminus \{0,1\}$ and for all $b \in B \setminus \{0,1\}$ there is some $g \in G$ such that $ga < b$. Do such actions have a name and has anything been proved about them (or about groups that have such actions)? Edit: To clarify, by 'antichain' I just mean a set of pairwise incomparable elements. I didn't know about the stronger meaning used by set theorists. For what it's worth I am mainly interested in using actions to understand algebraic properties of the group $G$, so I probably don't need to consider any exotic algebras of the kind set theorists would find interesting; the most obvious examples, such as the countable atomless Boolean algebra or the standard Borel $\sigma$-algebra, are probably good enough. I definitely do not want to assume that $G$ is the whole automorphism group, however. 2 added 268 characters in body Let $G$ be a group of automorphisms of the countable atomless Boolean algebra $B$. Suppose that every orbit of $G$ on $B$ is an antichain. Does it follow that $G$ preserves a non-zero (probability) measure on $B$? Does the answer change if we extend $B$ to some complete or $\sigma$-complete algebra, and the action of $G$ extends to one in which orbits are still antichains? I'm also interested in group actions that satisfy a very different condition: $G$ is a group such that for some (all) $a \in B \setminus \{0,1\}$ and for all $b \in B \setminus \{0,1\}$ there is some $g \in G$ such that $ga < b$. Do such actions have a name and has anything been proved about them (or about groups that have such actions)? Edit: To clarify, by 'antichain' I just mean a set of pairwise incomparable elements. I didn't know about the stronger meaning used by set theorists. For what it's worth I am mainly interested in using actions to understand algebraic properties of the group $G$. 1 # Antichains and measure-preserving actions on Boolean algebras Let $G$ be a group of automorphisms of the countable atomless Boolean algebra $B$. Suppose that every orbit of $G$ on $B$ is an antichain. Does it follow that $G$ preserves a non-zero (probability) measure on $B$? Does the answer change if we extend $B$ to some complete or $\sigma$-complete algebra, and the action of $G$ extends to one in which orbits are still antichains? I'm also interested in group actions that satisfy a very different condition: $G$ is a group such that for some (all) $a \in B \setminus \{0,1\}$ and for all $b \in B \setminus \{0,1\}$ there is some $g \in G$ such that $ga < b$. Do such actions have a name and has anything been proved about them (or about groups that have such actions)?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9619230628013611, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/31161-vector-equation.html
# Thread: 1. ## vector equation find the vector equation for the line passing through point (5, -1) and perpendicular to the vector i+j. thanks in advance. (: 2. Originally Posted by overduex find the vector equation for the line passing through point (5, -1) and perpendicular to the vector i+j. You need a line through (5,-1) with direction vector <1,-1>. 3. Originally Posted by Plato You need a line through (5,-1) with direction vector <1,-1>. I understand that. but what is the vector equation form. In other words, I want to know how to put it in vector equation form. What is vector equation form? Sorry, this is probably a simple task, but unfortunately my teacher does not teach. 4. $\left\langle {5, - 1} \right\rangle + t\left\langle {1, - 1} \right\rangle = \left\langle {5 + t, - 1 - t} \right\rangle$, that is, $x = 5 + t$ and $y = - 1 - t$.
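A quick way to sanity-check this form (a small Python sketch of mine, not from the thread): the direction vector <1, -1> is perpendicular to the given vector i+j because their dot product is zero, and sweeping the parameter t traces the line.

```python
def line(t):
    """Vector equation r(t) = (5, -1) + t*(1, -1)."""
    return 5 + t, -1 - t

direction = (1, -1)
normal = (1, 1)  # the given vector i + j
dot = direction[0] * normal[0] + direction[1] * normal[1]
print(dot)                            # 0 -> the line is perpendicular to i + j
print([line(t) for t in (-1, 0, 1)])  # [(4, 0), (5, -1), (6, -2)]
```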
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.931210994720459, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/122233-intersection-two-spheres.html
# Thread: 1. ## Intersection of Two Spheres For quite some time I have been intrigued by the curve created by the intersection of two spheres with the following properties/conditions: Let both spheres have the same radius. Let's say a radius of 2. Let one sphere be centered at (0,0,0) and the other sphere be centered at (2,2,2). I think the equations for the two spheres are: x^2+y^2+z^2=4 and (x-2)^2+(y-2)^2+(z-2)^2=4 I have seen a method finding the intersection when both spheres have the same coordinates on the y and z axis and differing coordinates on the x axis. This involves combining the two equations and solving for x. This would yield a circle parallel to the yz plane. So, here is what I did: (x-2)^2+(y-2)^2+(z-2)^2=x^2+y^2+z^2 Multiplying through and rearranging I got x+y+z=3. I think this is the equation of the plane where the two spheres intersect. I am not sure what to do next. I have read that it will be a circle and that there will be two equations describing the curve. (Now that I think about it I guess the intersection of 2 spheres will always be a circle.) I had built a model about 10 years ago out of balsa wood and the curve really did not look circular. But maybe that was due to the inaccuracies of the model. How do I find an equation that describes this curve? I looked in my old calculus book but could find nothing. Any help would be greatly appreciated. 2. You have x+y+z=3, but you also have the original constraints. IF you shift the second center away from (2,2,2) this will be easier to observe. Look at $(x-2)^2+y^2+z^2=4$ and $x^2+y^2+z^2=4$ Then, when you solve for the intersection you have x=1, but NOT all points on the plane x=1. Letting x=1 into either/both of your original constraints you have $y^2+z^2=3$ so the intersection of these two spheres is x=1 and $y^2+z^2=3$ which is a circle in the plane x=1. I was playing around with the algebra in your case and it seems to be a circle. Insert z=3-x-y into the sphere $x^2+y^2+z^2=4$ and see what you get. 3. Originally Posted by matheagle I was playing around with the algebra in your case and I'm not confident that you have a circle. From where I see it, it's the same as having one sphere centered at $\left(2\sqrt{2},0,0\right)$. Just need a rotation. 4. Originally Posted by kid funky fried For quite some time I have been intrigued by the curve created by the intersection of two spheres with the following properties/conditions: Let both spheres have the same radius. Let's say a radius of 2. Let one sphere be centered at (0,0,0) and the other sphere be centered at (2,2,2). The intersection of two spheres, or of any plane with a sphere, is either empty or a circle. In this case, your two spheres each have radius 2 and the distance between their centers is $\sqrt{4+ 4+ 4}= 2\sqrt{3}< 4$ so they intersect in a circle. That circle will lie in the plane that perpendicularly bisects the line between their centers. The midpoint of that line is (1, 1, 1) and a vector in the direction of that line is $\vec{i}+ \vec{j}+ \vec{k}$. The equation of that plane is 1(x-1)+ 1(y- 1)+ 1(z- 1)= 0 or x+ y+ z= 3. Since the distance from (2, 2, 2) to (1, 1, 1) is $\sqrt{3}$ and the radius of each sphere is 2, the radius of the circle of intersection is one leg of a right triangle having one leg of length $\sqrt{3}$ and hypotenuse 2: the radius is $\sqrt{2^2- (\sqrt{3})^2}= 1$. That is, the intersection is a circle with center at (1, 1, 1), radius 1, in the x+ y+ z= 3 plane.
5. ## Thanks Thanks to all. I appreciate the timely response. For a given curve in space is there a way to find an equation or set of equations to define the curve? 6. I've been working on this for some time- mostly going down a blind alley! As I said before, the intersection of two spheres, each of radius 2, with centers at (0, 0, 0) and (2, 2, 2), is a circle of radius 1, with center at (1, 1, 1), in the plane x+ y+ z= 3. As for finding parametric equations for that circle, matheagle's remark, "IF you shift the second center away from (2,2,2) this will be easier to observe", gives a good idea. (Abu-Kahlil said much the same thing [his " $2\sqrt{2}$" should have been " $2\sqrt{3}$"] but it was his "Just need a rotation" that sent me down a blind alley looking for rotation matrices!) If the two spheres were centered at (0, 0, 0) and $(2\sqrt{3}, 0, 0)$, which is the same distance from (0, 0, 0), then their intersection is a circle of radius 1 with center at $(\sqrt{3}, 0, 0)$, in the plane $x= \sqrt{3}$, perpendicular to the x-axis. The parametric equations for that circle are easy: $x= \sqrt{3}$, and we can use the "usual" parametric equations for a circle of radius 1, y= cos(t) and z= sin(t). Now, we need to find a linear transformation that will change one geometric situation into the other (it doesn't have to be linear but that is simplest). That is, a linear transformation that will take the vectors <1, 0, 0>, <0, 1, 0>, and <0, 0, 1> into axes appropriate to this problem. The line through the center of one circle, perpendicular to its plane, is the x-axis, y= z= 0, while the line through the center of the other, perpendicular to its plane is x= y= z. A vector in that direction is <1, 1, 1> and a unit vector in that direction is $\left<1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3}\right>$. For a second coordinate axis, we need a vector perpendicular to that, so in the plane x+ y+ z= 0. Just about any will do, so take z= 0. That makes the equation x+ y= 0 or y= -x so <-1, 1, 0> will do and a unit vector in that direction is $\left<-1/\sqrt{2}, 1/\sqrt{2}, 0\right>$. The third axis must be perpendicular to both, so we take the cross product of the first two: $\left<1/\sqrt{6}, 1/\sqrt{6}, -2/\sqrt{6}\right>$. Now we need a matrix that will map <1, 0, 0> into $\left<1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3}\right>$, <0, 1, 0> into $\left<-1/\sqrt{2}, 1/\sqrt{2}, 0\right>$, and <0, 0, 1> into $\left<1/\sqrt{6}, 1/\sqrt{6}, -2/\sqrt{6}\right>$.
It is well known that the matrix that maps three basis vectors into three given vectors is the matrix having those given vectors as columns: $\begin{bmatrix} \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & 0 & -\frac{2}{\sqrt{6}}\end{bmatrix}$. Take the product of that matrix with each of $\begin{bmatrix} 1 \\ 0 \\ 0\end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \\ 0\end{bmatrix}$, and $\begin{bmatrix} 0 \\ 0 \\ 1\end{bmatrix}$ to see why that works. To change the parametric equations $x= \sqrt{3}$, y= cos(t), z= sin(t) to this new "basis" and so give equations for the original circle, we multiply $\begin{bmatrix} \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & 0 & -\frac{2}{\sqrt{6}}\end{bmatrix}\begin{bmatrix}\sqrt{3} \\ cos(t) \\ sin(t)\end{bmatrix}$ $= \begin{bmatrix} 1-\frac{1}{\sqrt{2}}cos(t)+ \frac{1}{\sqrt{6}}sin(t) \\ 1+ \frac{1}{\sqrt{2}}cos(t)+ \frac{1}{\sqrt{6}}sin(t) \\ 1- \frac{2}{\sqrt{6}}sin(t)\end{bmatrix}$. That is, parametric equations for the circle forming the intersection of the two spheres are $x= 1- 1/\sqrt{2} cos(t)+ 1/\sqrt{6}sin(t)$, $y= 1+ 1/\sqrt{2} cos(t)+ 1/\sqrt{6} sin(t)$, and $z= 1- 2/\sqrt{6} sin(t)$. It is easy to see that those do give the correct figure. $(x-1)^2= 1/2 cos^2(t)- 1/\sqrt{3} sin(t)cos(t)+ 1/6 sin^2(t)$ $(y-1)^2= 1/2 cos^2(t)+ 1/\sqrt{3} sin(t)cos(t)+ 1/6 sin^2(t)$ $(z-1)^2= 2/3 sin^2(t)$ $1/2 cos^2(t)+ 1/2 cos^2(t)= cos^2(t)$, $-1/\sqrt{3} sin(t)cos(t)+ 1/\sqrt{3} sin(t)cos(t)= 0$, and $1/6 sin^2(t)+ 1/6 sin^2(t)+ 2/3 sin^2(t)= sin^2(t)$ so $(x-1)^2+ (y-1)^2+ (z-1)^2= cos^2(t)+ sin^2(t)= 1$ Also, since $-1/\sqrt{2} cos(t)+ 1/\sqrt{2}cos(t)= 0$ and $1/\sqrt{6}sin(t)+ 1/\sqrt{6}sin(t)- 2/\sqrt{6}sin(t)= 0$, we have x+ y+ z= 1+ 1+ 1= 3. The points given by the parametric equations lie on the intersection of that sphere of radius 1 and that plane. Since the center of the sphere, (1, 1, 1), lies on the plane x+ y+ z= 3, their intersection is the circle of radius 1, with center at (1, 1, 1), in the plane x+ y+ z= 3. 7. ## Thank You HallsofIvy, Thank you so much. I sincerely appreciate the amount of time that you have spent on this problem. I will print your response, take it to work with a couple of my old text books and study until I understand fully. Best regards, Kid
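A quick numerical check of the parametrization derived above (a Python sketch of my own; the function name is arbitrary): every point it produces should satisfy both sphere equations and the plane equation.

```python
import math

def circle_point(t):
    """Parametrization of the intersection circle derived in the thread."""
    c, s = math.cos(t), math.sin(t)
    x = 1 - c / math.sqrt(2) + s / math.sqrt(6)
    y = 1 + c / math.sqrt(2) + s / math.sqrt(6)
    z = 1 - 2 * s / math.sqrt(6)
    return x, y, z

for t in (0.0, 1.0, 2.5):
    x, y, z = circle_point(t)
    sphere1 = x**2 + y**2 + z**2                      # should be 4
    sphere2 = (x - 2)**2 + (y - 2)**2 + (z - 2)**2    # should be 4
    plane = x + y + z                                 # should be 3
    print(round(sphere1, 10), round(sphere2, 10), round(plane, 10))
```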
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 42, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9450230598449707, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/80134-using-central-limit-theorem-prove-limit.html
# Thread: 1. ## Using the Central Limit Theorem to prove a limit Use the central limit theorem to prove that, as M tends to infinity: exp(-2M) * SUM(exp(2Mn)/n!) --> 1/2 . The sum goes from n=0 to n=M. I don't know how to format it correctly, sorry. I sort of know what to do with this question. I need to find the right variables so the left side of the CLT boils down to what's above, and the right would be the normal distribution, presumably from 0 to infinity to get a half. A hint or a correction if I'm wrong will be brilliant to start me off, thanks James 2. Originally Posted by phillips101 Use the central limit theorem to prove that, as M tends to infinity: exp(-2M) * SUM(exp(2Mn)/n!) --> 1/2 . The sum goes from n=0 to n=M. I don't know how to format it correctly, sorry. I sort of know what to do with this question. I need to find the right variables so the left side of the CLT boils down to what's above, and the right would be the normal distribution, presumably from 0 to infinity to get a half. A hint or a correction if I'm wrong will be brilliant to start me off, thanks James There's a mistake in your formula (the left-hand side is greater than 1, and it diverges), I think it should be: $e^{-M}\sum_{n=0}^M \frac{M^n}{n!}\to_{M\to\infty} \frac{1}{2}$ Anyway, your intuition is correct ; consider applying the CLT to Poisson random variables of parameter 1 (remember the sum of n independent such variables is Poisson distributed with parameter n). 3. Originally Posted by Laurent There's a mistake in your formula (the left-hand side is greater than 1, and it diverges), I think it should be: $e^{-M}\sum_{n=0}^M \frac{M^n}{n!}\to_{M\to\infty} \frac{1}{2}$ Anyway, your intuition is correct ; consider applying the CLT to Poisson random variables of parameter 1 (remember the sum of n independent such variables is Poisson distributed with parameter n). That's what I thought! I really didn't think it would converge, and none of the standard distributions gave the right answer. Thanks for the confirmation. I'll inquire about the error.
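The corrected statement is easy to test numerically, and the computation also makes the CLT connection visible: $e^{-M}\sum_{n=0}^{M}M^n/n!$ is exactly $P(S_M\le M)$ where $S_M$ is a sum of $M$ independent Poisson(1) variables, so the CLT gives $P(S_M\le M)\to \Phi(0)=1/2$. A small Python sketch (mine, standard library only; the sum is evaluated in log space to avoid overflow for large M):

```python
import math

def poisson_cdf_at_mean(m):
    """P(X <= m) for X ~ Poisson(m): e^{-m} * sum_{n=0}^{m} m^n / n!."""
    # log of the n-th summand: -m + n*log(m) - log(n!)
    logs = [-m + n * math.log(m) - math.lgamma(n + 1) for n in range(m + 1)]
    top = max(logs)
    return math.exp(top) * sum(math.exp(l - top) for l in logs)

for m in (10, 100, 1000, 10000):
    print(m, poisson_cdf_at_mean(m))  # tends to 1/2 from above as m grows
```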
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8996239304542542, "perplexity_flag": "head"}
http://wiki.math.toronto.edu/TorontoMathWiki/index.php/Calculus_of_Variations
# Calculus of Variations ### From TorontoMathWiki Calculus of variations is the study of extrema of functionals. This page provides a brief introduction to some elements of the theory. # Variation and differentiability Consider a functional $\ J:V \to \mathbb R$, where $\ V$ is a normed linear space. We define the increment of $\ J$ at $y \in V$ as $\ \Delta J[h] = J[y + h] - J[y]$ for all $h \in V$. We say that $\ J$ is differentiable at $\ y$ if there exist a linear functional $\ \delta J:V \to \mathbb R$ and $\ \epsilon$ so that $\Delta J[h] = \delta J[h] + \epsilon \|h\|$ and $\ \epsilon \to 0$ as $\|h\| \to 0$. Here, $\ \delta J$ is called the variation or differential of $\ J$ at $\ y$. # Uniqueness of variation Lemma: Let $\ \phi:V \to \mathbb R$ denote a linear functional, where $\ V$ is defined as above. If $\ {\phi[h] \over \|h\|} \to 0$ as $\|h\| \to 0$, then $\ \phi$ is identically zero for all $\ h \in V$. Proof: Suppose $\phi[h_0] \neq 0$ for some $h_0 \in V$. Then, for $\ n = 1, 2, 3, ...$ ${\phi[\frac {h_0}n] \over \|\frac {h_0}n\|} = {\frac 1n \phi[h_0] \over \frac 1n \|h_0\|} = {\phi[h_0] \over \|h_0\|} \neq 0$ and hence does not approach zero as $\frac {h_0}n \to 0$, a contradiction. QED Theorem: For $\ J$ defined as above, $\ \delta J$ is unique. Proof: Suppose, if possible, that there exist two distinct variations $\ \delta J_1$ and $\ \delta J_2$. Then, $\ \Delta J[h] = \delta J_1[h] + \epsilon_1 \|h\|$ $\ \Delta J[h] = \delta J_2[h] + \epsilon_2 \|h\|$ Comparing the two equations, we get $\ (\delta J_1 - \delta J_2)[h] = (- \epsilon_1 + \epsilon_2)\|h\|$ or ${(\delta J_1 - \delta J_2)[h] \over \|h\|} = - \epsilon_1 + \epsilon_2 \to 0$ as $\ h \to 0$. Hence, by the lemma, $\ \delta J_1 - \delta J_2 = 0$ or $\ \delta J_1 = \delta J_2$, a contradiction. QED # Function space norm Consider a space $\ C^0[a,b]$ consisting of all functions $\ f:[a,b] \to \mathbb R$ continuous on $\ [a,b]$. We define the norm as $\|y\|_0 = \max_{x \in [a,b]}|y(x)|$ Generally, a space $\ C^n[a,b]$ consists of all functions $\ f:[a,b] \to \mathbb R$ that are continuously differentiable $\ n$ times. In this case, we define the norm as $\|y\|_n = \sum_{i=0}^n{\max_{x \in [a,b]}|y^{(i)}(x)|}$ It can be shown that $\|y\|_n$ satisfies the conditions for a norm. # Strong and weak extrema Consider some functional $\ J:F \to \mathbb R$, where $\ F$ is a normed function space. Then, we say that $\ J$ has a strong minimum at some $\ y_0 \in F$ if there exists some $\ \delta > 0$ such that $\ J[y_0] \le J[y]$ for all $\|y - y_0 \|_0 < \delta$, where $y \in F$. Similarly, we define a weak minimum at some $\ y_0 \in F$ with $\|\cdot\|_1$ in place of $\|\cdot\|_0$. Notice that a strong minimum is also a weak minimum. Strong and weak maximum are defined similarly with $\ge$ in place of $\le$. We may also define extrema using norm $\| \cdot \|_n$ for $\ n > 1$. # A necessary condition for an extremum Theorem: If a differentiable functional $\ J$ has an extremum at some $\ y_0 \in F$, then $\ \delta J[h] = 0$ for all $h \in F$. Proof: Suppose, if possible, that $\ \delta J[h_0] \neq 0$ for some $\ h_0 \in F$, where $\ h_0 \neq 0$. Let $\ \alpha > 0$ and consider ${\Delta J[\alpha h_0] \over {\| \alpha h_0\|}} = {\delta J[\alpha h_0] \over {\| \alpha h_0 \|}} + \epsilon = {\delta J[h_0] \over {\| h_0 \|}} + \epsilon$. As $\alpha \to 0$, we have $\|\alpha h_0\| = \alpha \|h_0\| \to 0$ and hence $\ \epsilon \to 0$.
That is, if $\ \alpha$ is sufficiently small, then $\ \Delta J[\alpha h_0]$ and $\ \delta J[\alpha h_0]$ share the same sign. However, $\ \delta J[-\alpha h_0] = - \delta J[\alpha h_0]$. Hence, within any neighbourhood of $\ h = 0$, we have $\ \Delta J$ taking on either sign, a contradiction. QED # References 1. Gelfand, I.M. and Fomin, S.V.: Calculus of Variations, Dover Publ., 2000.
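# A worked example To illustrate the definitions above, here is a standard sketch (my own, in the notation of this page). Take $\ V = C^1[a,b]$ with the norm $\|\cdot\|_1$ and let $J[y] = \int_a^b (y'(x))^2 \, dx$. Then $$\Delta J[h] = J[y+h] - J[y] = 2\int_a^b y'(x)h'(x)\,dx + \int_a^b (h'(x))^2\,dx.$$ The first term is linear in $\ h$, and the second satisfies $\int_a^b (h'(x))^2\,dx \le (b-a)\|h\|_1^2$, so it can be written as $\epsilon \|h\|_1$ with $\ \epsilon \to 0$ as $\|h\|_1 \to 0$. Hence $\ J$ is differentiable with variation $\delta J[h] = 2\int_a^b y'(x)h'(x)\,dx$.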
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 80, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9042831063270569, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/17857/how-do-i-troubleshoot-when-i-get-a-partpartd-or-a-partpartw-error/17862
# How do I troubleshoot when I get a Part::partd or a Part::partw error?
Mathematica has produced the following error message: Part::partd: Part specification ... is longer than depth of object? What is `partd`? I think I may have flattened or (unflattened) arrays and am not sure how to locate the exact object?
- 2 Did you click on the little `>>` at the end of the error message? – rm -rf♦ Jan 16 at 0:04
Probably means that you are trying to access an array with more dimensions than it has (e.g. myArray[[i,j]] when myArray is a single dimension). – Guillochon Jan 16 at 0:42
2 @rm-rf that has never worked on my installation, so I generally discount it. – rcollyer Jan 16 at 2:08
## 1 Answer
Two of the most common error messages that users encounter when working with parts of lists are `Part::partd` and `Part::partw` (look up `Message` for the error message syntax). Both of these are because the user is trying to access an invalid part of the expression (the "object" referred to in the error message), but there's a subtle difference between the two:
### `Part::partw`
This error occurs when you're trying to obtain a part of an expression at a position index larger than the `Length` of the expression (or subexpression). For example:
````
exprw = {{a, b}, {c, d, e}};
exprw[[2, 3]]
(* e *)
exprw[[1, 3]]
````
`Part::partw:` "Part 3 of `{a,b}` does not exist"
You can see that we tried to access the third element of a two element list and this results in an error. In general, if you get a `partw` error, look at the size of your index and check if it makes sense to your application. If not, you've found the source of your error. If it does, then your application/model is most likely faulty and you might want to check that instead.
### `Part::partd`
This error occurs when you're trying to index the expression at a depth deeper than the maximum depth of the expression. In general, you can index the elements of an expression at `Level` $n$ by using exactly $n$ indices and you can index any element in an expression of `Depth` $d$ with at most $d-1$ indices. However, when you try to index an expression at a depth greater than its maximum depth or you try to access an element at level $n$ with more than $n$ indices (which again translates to deeper depth), you get a `partd` error. For example, consider:
````
exprd = {{a, b}, {c, d, {e}}};
````
The maximum depth or `Depth[exprd]` is 4. Now try indexing with 4 indices:
````
exprd[[1, 1, 1, 1]]
````
`Part::partd`: "Part specification `{{a,b},{c,d,{e}}}[[1,1,1,1]]` is longer than depth of object"
So if you get a `partd` error, check to see if you're using the right number of indices for the given expression.
- Thanks this explains it quite well. – sebastian c. Jan 16 at 10:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8992009162902832, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/47942/list
## Return to Question 3 open-problem tag added 2 edited tags 1 # Nonnegative to Positive Curvature. This question asks for your intuition and insight, as I'm surprised by how little is known about the difference between nonnegative and positive curvature. I don't want to be completely vague, so I could ask: What are the difficulties and currently blocked paths to solving the Hopf Conjecture? (Does $S^2\times S^2$ support a metric of positive curvature?). But in general, I would like to know what others might know about why it's difficult to determine if a given closed simply-connected space of nonnegative curvature can also admit positive curvature. As far as I know, there are no obstructions; how come? The number of examples of nonnegative curvature compared to that of examples of positive curvature seems to suggest there should be something distinguishing the two.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9683419466018677, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/55819/list
## Return to Answer 2 Changed "at least" to "at most". Here is a problem in convex geometry. Qualitatively it asks: If a finite dimensional normed space has the property that an arbitrary subspace is approximately Euclidean after throwing away a small number of dimensions, must the space itself be approximately Euclidean? Here is the question: Fix a constant $C>1$; let's use $C=10$. Suppose that $B$ is a convex symmetric body in $\mathbb{R}^n$ such that for every subspace $F_1$ of $\mathbb{R}^n$ there is a subspace $F$ of $F_1$ of codimension at most $\log \log \dim F_1$ in $F_1$ and a centrally symmetric ellipsoid $E$ in $F$ with $E\subset B\cap F \subset 10 E$. The question is whether there is a constant $\gamma$, independent of everything except the constant $C=10$, so that there is an ellipsoid $E \subset \mathbb{R}^n$ with $E\subset B \subset \gamma E$. Can you answer this question using only finite dimensional considerations? AFAIK, the only answers use infinite dimensional tools. 1 Here is a problem in convex geometry. Qualitatively it asks: If a finite dimensional normed space has the property that an arbitrary subspace is approximately Euclidean after throwing away a small number of dimensions, must the space itself be approximately Euclidean? Here is the question: Fix a constant $C>1$; let's use $C=10$. Suppose that $B$ is a convex symmetric body in $\mathbb{R}^n$ such that for every subspace $F_1$ of $\mathbb{R}^n$ there is a subspace $F$ of $F_1$ of codimension at least $\log \log \dim F_1$ in $F_1$ and a centrally symmetric ellipsoid $E$ in $F$ with $E\subset B\cap F \subset 10 E$. The question is whether there is a constant $\gamma$, independent of everything except the constant $C=10$, so that there is an ellipsoid $E \subset \mathbb{R}^n$ with $E\subset B \subset \gamma E$. Can you answer this question using only finite dimensional considerations? AFAIK, the only answers use infinite dimensional tools.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9311681985855103, "perplexity_flag": "head"}
http://en.m.wiktionary.org/wiki/Shannon_entropy
# Shannon entropy ## English ### Etymology Named after Claude Shannon, the "father of information theory". ### Noun Shannon entropy (countable and uncountable) 1. information entropy Shannon entropy H is given by the formula $H = - \sum_i p_i \log_b p_i$ where $p_i$ is the probability of character number $i$ showing up in a stream of characters of the given "script". Consider a simple digital circuit which has a two-bit input (X, Y) and a two-bit output (X and Y, X or Y). Assuming that the two input bits X and Y have mutually independent chances of 50% of being HIGH, then the input combinations (0,0), (0,1), (1,0), and (1,1) each have a 1/4 chance of occurring, so the circuit's Shannon entropy on the input side is $H(X,Y) = 4\Big(-{1\over 4} \log_2 {1\over 4}\Big) = 2$. Then the possible output combinations are (0,0), (0,1), and (1,1) with respective chances of 1/4, 1/2, and 1/4 of occurring, so the circuit's Shannon entropy on the output side is $H(X \text{ and } Y, X \text{ or } Y) = 2\Big(-{1\over 4} \log_2 {1\over 4}\Big) - {1\over 2} \log_2 {1\over 2} = 1 + {1\over 2} = {3\over 2}$, so the circuit reduces (or "orders") the information going through it by half a bit of Shannon entropy due to its logical irreversibility.
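The circuit example is easy to reproduce with a few lines of code. This is a Python sketch of my own, not part of the entry:

```python
import math

def shannon_entropy(probs, base=2):
    """H = -sum(p_i * log_b(p_i)), skipping zero-probability outcomes."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

inputs  = [0.25, 0.25, 0.25, 0.25]   # (X, Y): all four combinations equally likely
outputs = [0.25, 0.50, 0.25]         # (X and Y, X or Y): (0,0), (0,1), (1,1)

print(shannon_entropy(inputs))   # 2.0 bits
print(shannon_entropy(outputs))  # 1.5 bits
```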
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8745314478874207, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/gauge-theory?page=2&sort=newest&pagesize=50
# Tagged Questions The gauge-theory tag has no wiki summary. 2 answers 260 views ### Gauge fixing and equations of motion Consider an action that is gauge invariant. Do we obtain the same information from the following: Find the equations of motion, and then fix the gauge? Fix the gauge in the action, and then find the ... 1 answer 98 views ### Weak isospin confinement? According to the Wikipedia article on color confinement: The current theory is that confinement is due to the force-carrying gluons having color charge [...], i.e. because the gauge group is ... 1 answer 58 views ### How is $g^2 N$ held fixed in the large N limit? In 't Hooft's original paper: http://igitur-archive.library.uu.nl/phys/2005-0622-152933/14055.pdf he takes $N \rightarrow \infty$ while $g^2 N$ is held fixed. Is this just a toy model? Or is there ... 3 answers 129 views ### Quantizing first-class constraints for open algebras: can Hermiticity and noncommutativity coexist? An open algebra for a collection of first-class constraints, $G_a$, $a=1,\cdots, r$, is given by the Poisson bracket $\{ G_a, G_b \} = {f_{ab}}^c[\phi] G_c$ classically, where the structure constants ... 1 answer 247 views ### Gauss law in classical U(1) gauge theory I can see that $a_{0}$ is not an independent field and Gauss law is a constraint on the theory arising from field equations. But, I don't get the geometrical picture. Let $A$ be the space of all ... 0 answers 92 views ### Pseudo scalar mass and Pure scalar mass Since the only difference between pseudo scalar and a scalar term is just a change of sign under a parity inversion, is it possible that both of them be present in the same field and interact? For ... 2 answers 263 views ### Intuition for gauge parallel transport (Wilson loops) I'm looking for a geometrical interpretation of the statement that "Wilson loop is a gauge parallel transport". I have seen QFT notes describe U(x,y) as "transporting the gauge transformation", and ... 1 answer 144 views ### For nonabelian Yang-Mills in the Coulomb phase, can soft gluons render the charge orientation of charged particles indefinite? For nonabelian Yang-Mills in the Coulomb phase, can soft gluons render the charge orientation of charged particles indefinite? Let's say the gauge group is a nonabelian simple Lie group G. Suppose ... 2 answers 93 views ### What exactly is the weak portion of the SM gauge group? This Wikipedia article: http://en.wikipedia.org/wiki/Left%E2%80%93right_symmetry states that the weak part of the SM gauge group is not $SU(2)_L \times U(1)_Y$ but \$ \frac{ SU(2)_L \times ... 2 answers 213 views ### Counting degrees of freedom in presence of constraints In a $N$ dimensional phase space if I have $M$ 1st class and $S$ 2nd class constraints, then I have $N-2M-S$ degrees of freedom in phase space. How can I calculate the degrees of freedom in ... 4 answers 395 views ### First class and second class constraints Hello I am working on a project that involves the constraints. I checkout the paper of Dirac about the constraints as well as some other resources. But still confuse about the first class and second ... 1 answer 152 views ### Does spontaneous symmetry breaking affect Noether's theorem? Does spontaneous symmetry breaking affect the existence of a conserved charge? And how does it depend on whether we look at a classical or a quantum field theory (e.g. the weak interacting theory)? ...
0 answers 108 views ### Derivation of the enhancement of U(1)$_L$ x U(1)$_R$ to SU(2)$_L$ x SU(2)$_R$ at the self-dual radius Towards the end of the paragraph with the title String theory's added value 2: enhanced non-Abelian symmetries at self-dual radii and abstract C with current algebras of this article, it is explained ... 1 answer 200 views ### Wilson loops and gauge invariant operators (Part 2) These questions are sort of a continuation of this previous question. I would like to know of the proof/reference to the fact that in a pure gauge theory Wilson loops are all the possible gauge ... 3 answers 350 views ### Gauge fixing choice for the gauge field $A_0$ In many situations, I have seen that the author makes a gauge choice $A_0=0$, e.g. Manton in his paper on the force between the 't Hooft Polyakov monopole. Please can you provide me a ... 2 answers 124 views ### Is the Chern-Simons integral of gauge fields over black hole singularities zero? Suppose we have an evaporating black hole and a nonabelian Yang-Mills theory with a $\theta$ topological term. This counts the total number of instantons minus antiinstantons. Consider the total ... 1 answer 248 views ### Wilson loops and gauge invariant operators (Part 1) I guess the Hilbert space of the theory is precisely the space of all gauge invariant operators (mod equations of motion..as pointed out in the answers) Is it possible that in a gauge theory the ... 1 answer 305 views ### Taking the continuum limit of $U(N)$ gauge theories I would like to draw your attention to appendix $C$ on page 38 of this paper. The equation $C.2$ there seems to be evaluating the sum $\sum_R \chi _R (U^m)$ in equation 3.16 of this paper. I ... 1 answer 125 views ### Is there any good gauge-fixing prescription for discrete gauge symmetries? Nearly all gauge-fixing prescriptions are based upon setting some function involving the gauge fields to be zero. That function is continuous and varies over the real/complex numbers. Trying the same ... 2 answers 150 views ### Are timelike diffeomorphisms really redundancies in description in quantum gravity? Are timelike diffeomorphisms really redundancies in description in quantum gravity? Certainly Yang-Mills gauge transformations can be considered redundancies in description. Ditto for p-form ... 1 answer 68 views ### How do we deal with Gribov ambiguities when calculating in quantum gauge theories? How do we deal with Gribov ambiguities when actually calculating in quantum gauge theories? Any literature references? 1 answer 177 views ### Using the covariant derivative to find force between 't Hooft-Polyakov magnetic monopoles I am reading this research paper authored by NS Manton on the Force between 't Hooft-Polyakov monopoles. I have a doubt in equation 3.6 and 3.7. We assume the gauge field for a slowly accelerating ... 1 answer 314 views ### Noether current for the Yang-mills-higgs lagrangian I am trying to calculate the Noether's current, more specifically, the energy density of the Yang-mills-Higgs Lagrangian. Please refer to the equations in the Harvey lectures on Magnetic Monopoles, ... 0 answers 46 views ### Limit of the scalar field, and potential for a soliton ( finite energy, non dissipative) solution I want to prove that the scalar field of the yang-mills lagrangian tends to some constant value which is a function of theta at infinity and that this value is a zero of the potential, when we ... 3 answers 245 views ### Could general relativity and gauge theories in principle be covered in one course?
It's always nice to point out the structural similarities between (semi-)Riemannian geometry and gauge field theories alla Classical yang Mills theories. Nevertheless, I feel the relation between the ... 2 answers 404 views ### What is the ontological status of Faddeev Popov ghosts? We all know Faddeev-Popov ghosts are needed in manifestly Lorentz covariant nonabelian quantum gauge theories. We also all know they decouple from the rest of matter asymptotically, although they ... 1 answer 168 views ### What is the winding number of a magnetic monopole, and why is it conserved I had asked a similar question about a calculation involving the winding number here. But i haven't got a satisfactory response. So, I am rephrasing this question in a slightly different manner. What ... 2 answers 184 views ### Counting degrees of freedom of gauge bosons Gauge bosons are represented by $A_{\mu}$, where $\mu = 0,1,2,3$. So in general there are 4 degrees of freedom. But in reality, a photon (gauge boson) has two degrees of freedom (two polarization ... 1 answer 235 views ### Why are all observable gauge theories not vector-like? Why are all observable gauge theories not vector-like? Will this imply that the electron and/or fermions do not have mass? How is this issue resolved? Background: The Standard Model is a ... 2 answers 318 views ### Winding number in the topology of magnetic monopoles I am reading on magnetic monopoles from a variety of sources, eg. the Jeff Harvey lectures.. It talks about something called the winding $N$, which is used to calculate the magnetic flux. I searched ... 0 answers 75 views ### How do you simulate a quantum gauge theory in a gauge with negative norms on a quantum computer? How do you simulate a quantum gauge theory in a gauge with negative norms on a quantum computer? There are some gauges with negative norms. It's true that if restricted to gauge invariant states, the ... 0 answers 239 views ### The meaning of Goldstone boson equivalence theorem The Goldstone boson equivalence theorem tells us that the amplitude for emission/absorption of a longitudinally polarized gauge boson is equal to the amplitude for emission/absorption of the ... 1 answer 156 views ### Gauge symmetry description for $\phi^4$? That is a follow-up to this question: Gauge symmetry is not a symmetry? Ok, gauge symmetry is not a symmetry, but ... ... a redundancy in our description, by introducing fake degrees of freedom ... 2 answers 179 views ### What evidence is there for the electroweak higgs mechanism? The wikipedia article on the Higgs mechanism states that there is overwhelming evidence for the electroweak higgs mechanism, but doesn't then back this up. What evidence is there? 0 answers 154 views ### Is the U(1) gauge theory in 2+1D dual to a U(1) or an integer XY model? The compact U(1) lattice gauge theory is described by the action $$S_0=-\frac{1}{g^2}\sum_\square \cos\left(\sum_{l\in\partial \square}A_l\right),$$ where the gauge connection $A_l\in$U(1) is defined ... 0 answers 188 views ### How to determine if an emergent gauge theory is deconfined or not? 2+1D lattice gauge theory can emerge in a spin system through fractionalization. Usually if the gauge structure is broken down to $\mathbb{Z}_N$, it is believed that the fractionalized spinons are ... 1 answer 480 views ### Yukawa Coupling of a Scalar $SU(2)$ Triplet to a Left-Handed Fermionic $SU(2)$ Doublet Suppose we have a field theory with a single complex scalar field $\phi$ and a single Dirac Fermion $\psi$, both massless.
Let us write $\psi _L=\frac{1}{2}(1-\gamma ^5)\psi$. Then, the Yukawa ... 0answers 43 views ### How to have quark condensation, gaugino condensation, ghost condensation, and gluon condensation? For each of those condensation processes to happen, what special conditions should ...? Are there any other condensations from elementary fields? What are the significances/effects of each condensation? ... 2answers 325 views ### The Faddeev-Popov Lagrangian This is a non-abelian continuation of this QED question. The Lagrangian for a non-abelian gauge theory with gauge group $G$, and with fermion fields and ghost fields included is given by ... 4answers 586 views ### What are the distinctions between Yang-Mills theory and QCD? So Yang-Mills theory is a non-abelian gauge theory, and we use it a lot in QCD calculations. But what are the distinctions between Yang-Mills theory and QCD? And distinctions between supersymmetric ... 1answer 245 views ### QED BRST Symmetry This is a homework problem that I am confused about because I thought I knew how to solve the problem, but I'm not getting the result I should. I'll simply write the problem verbatim: "Consider QED ... 1answer 22 views ### Time Evolution of a Manifold Embedding Given a smooth manifold $\mathcal{M}$ with a simplicial complex embedding $\mathsf{S}$, what specific tools or methods can be used to give an analysis of the time evolution of the manifold given some ... 2answers 267 views ### Why do we like gauge potentials so much? Today I read articles and texts about Dirac monopoles and I have been wondering about the insistence on gauge potentials. Why do they seem (or why are they) so important to create a theory about ... 2answers 73 views ### Gauge invariant scalar potentials If $\Phi$ is a multi-component scalar field which is transforming in some representation of a gauge group say $G$ then how general a proof can one give to argue that the potential can only be a ... 1answer 184 views ### Gauge invariance and the form of the Rarita-Schwinger action In Weinberg Vol. I section 5.9 (in particular p. 251 and surrounding discussion), it is explained that the smallest-dimension field operator for a massless particle of spin-1 takes the form of a field ... 2answers 215 views ### Can an Electromagnetic Gauge Transformation be Imaginary? The Hamiltonian of a non-relativistic charged particle in a magnetic field is $$\hat{H}~=~\frac{1}{2m} \left[\frac{\hbar}{i}\vec\nabla - \frac{q}{c}\vec A\right]^2$$. Under a gauge transformation ... 0answers 149 views ### Attempts to explain Higgs coupling as a gauge transformation symmetry As is (supposedly) well known, electromagnetic coupling can be "explained" as a closure term to a Lagrangian comprising a free Dirac field and a free vector field that are required to be invariant ... 2answers 408 views ### How does non-Abelian gauge symmetry imply the quantization of the corresponding charges? I read an unjustified treatment in a book, saying that in QED charge is not quantized by the gauge symmetry principle (which is totally clear for me: Q the generator of $U(1)$ can be anything in ... 1answer 177 views ### SU(2) Yang-Mills EOM I'm just playing around tonight trying to better myself, but I'm having trouble with some indices on my Yang-Mills Lagrangian. I have a gauge group $SU(2)$ and a field strength tensor ... 1answer 211 views ### Can a photon see ghosts? Does it make sense to introduce Faddeev–Popov ghost fields for abelian gauge field theories? 
Wikipedia says the coupling term in the Lagrangian "doesn't have any effect", but I don't really know ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.895867645740509, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/article-writing?sort=active&pagesize=15
# Tagged Questions The article-writing tag has no wiki summary. 1answer 66 views ### Math Behind the Game “Quoridor” I'm going to write an article for middle school students to introduce them to the game "quoridor". The game certainly is interesting, but it will be great to add to the article some serious "math ... 4answers 712 views ### How to write a good mathematical paper? I hesitate to ask this question. However I have read much advice on math.stackexchange, and I couldn't find anything similar. A good time always goes too fast! Two years have fled. In the third year of ... 3answers 88 views ### Should I put interpunction after formulas? I am presently doing my first substantial piece of mathematical writing, hence this, probably somewhat silly, question. How does display-style mathematics interact with punctuation? More ... 2answers 90 views ### Is there something wrong with this statement? Is this sentence OK or is there something wrong? "Hausdorff is a very weak separation axiom for one to discuss whether the space is metrizable." Do you have a better expression? 1answer 107 views ### Looking for a better way of expressing this statement I'm currently writing an article on topology. I'm not good at English writing. It may be good for me to post this question somewhere else; however, I prefer to post here, as it is related to math. My ... 0answers 78 views ### Example of well-written theses Among the books of pure mathematics, Rotman's book on Group Theory and any book of J. P. Serre are best examples, as I feel, of writing a mathematics book (many experts in different areas of ... 4answers 140 views ### Are there rules for the usage of prepositions in Math? Prepositions are often used in various expressions. E.g. $2$ is in the set of natural numbers $\mathbb N$ The symmetric group on 3 letters $S_3$ is the group consisting of all possible ... 1answer 70 views ### An article for some undergraduate journal I'm a student and last year I attended some courses about algebraic geometry. Referring to many books, I wrote personal notes explaining the Cech cohomology of sheaves and some of its applications in ... 4answers 430 views ### Advice for writing good mathematics? It's been a (far-fetched, possibly) goal of mine to some day write a math textbook. I've been thinking about writing this question for a while, but reading an exceedingly mediocre text on Mathematical ... 3answers 298 views ### Looking for philosophical subject for my Bachelor Thesis In May 2013 I have to write a Bachelor Thesis for my bachelor in Mathematics. I prefer to choose a subject which involves philosophy. At the same time I have the feeling that my university wants me to ... 5answers 1k views ### How are mathematicians taught to write with such an expository style? I wasn't sure if this question was appropriate for MSE. One of the major complaints we see in industry is a person's ability to communicate, which includes writing. We see the same thing on questions ... 4answers 370 views ### Short forms like “haven't”, “don't”, “let's” [closed] Is it true that short forms like "haven't", "don't", "let's" should not be used in serious mathematical texts? 4answers 235 views ### Should I include obvious steps in a mathematical paper? I am writing a formal school paper (by formal I mean, with an abstract, a bibliography and 20 pages in length) on mathematics (it is called 'Extended Essay,' for those having knowledge of the ... 
1answer 148 views ### Writing small numbers in Articles [closed] In primary school I learned that numbers less than 13 should be spelled out in writing. I am suddenly not sure anymore how I should handle this situation when it comes to article writing in ... 1answer 127 views ### LaTeX help - multiple answers using set bracket / cases [closed] I'm brand new to LaTeX and am writing up a dissertation. I have read through several LaTeX guides but have not been able to find the answer there. Does anybody know how to use this ... 0answers 27 views ### How would you write this sentence I'm writing a short document about an integer programming (IP) problem instance. I've mentioned that IP is known to be NP-Hard, but that being NP-Hard doesn't automatically qualify this particular ... 0answers 107 views ### Creating a book based on an article (copyright issues) [closed] Elsevier explicitly permits me to make a book based on my article published with Elsevier. What about other publishers? May these forbid me to make a book based on my earlier article? 3answers 164 views ### Publishing an article after a book? If I first publish an article, afterward I may publish a book containing materials from the article. What about the reverse: if I first publish a book, does it make sense to publish a fragment of it as ... 8answers 449 views ### Book about technical and academic writing I'm in the process of writing my Master's Thesis on automata theory. The writing must be in English, which isn't my mother tongue. So the question is, given that this is my first long (hundred ... 2answers 141 views ### Can mathematical definitions of the form “P if Q” be interpreted as “P if and only if Q”? [duplicate] Possible Duplicate: Alternative ways to say “if and only if”? So when I come across mathematical definitions like "A function is continuous if...." "A space is compact ... 2answers 132 views ### Shorthand for $0<i<1$, $0<j<1$, $0<k<1$ Is it good style to write $0<i<1$, $0<j<1$, $0<k<1$ as $0<i,j,k<1$? The following does not seem so clear: $0<i,j<1$ as it may be interpreted as: $0<i$ and \$ ... 2answers 109 views ### Is it bad math style to use the same index symbol in different indexed objects? The title says it all (I'm referring to the case when writing, for example, an article to be published or something similar - something where the writing itself should also be of quality). Example: ... 2answers 195 views ### Basic guidance to write a mathematical article. I'm trying to put together a mathematical article on how to obtain certain infinite series for some well known functions by a method of integrals (I like to call it "The Integral Method" - thank you), ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940161943435669, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/225603/intervals-in-the-real-numbers-and-cardinality-are-they-infinite
# Intervals in the real numbers and cardinality - are they infinite? [duplicate] Possible Duplicate: Bijection between an open and a closed interval I'm a little confused; maybe I'm overthinking something so basic. But a closed interval [a,b] of real numbers has an infinite number of elements, correct? And the same goes for (a,b)? Then, [a,b] has the same cardinality as (a,b)? Thanks. - 1 Not if $a=b{}{}$ – wj32 Oct 30 '12 at 22:38 1 Yes, you are correct (provided a < b). But "infinite" is a bit of an understatement: they each have uncountably many elements. (i.e. there are countably infinite ($\mathbb{Z}$) sets and uncountably infinite (e.g. $\mathbb{R}$) sets.) Each of the intervals you give has the same cardinality as the set of reals. – amWhy Oct 30 '12 at 22:40 1 All of your statements are correct provided that $a<b$. – Brian M. Scott Oct 30 '12 at 22:40 This is at least one amongst a myriad of duplicates. – Asaf Karagila Oct 30 '12 at 22:47 Thank you. I was a little confused because my instructor said "finite intervals" when describing them. – Alti Oct 30 '12 at 22:55 ## marked as duplicate by Asaf Karagila, user17762, Phira, Norbert, Douglas S. Stones Oct 31 '12 at 10:39 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 2 Answers Yes, they have the same cardinality. For convenience, here is a bijection $[0,1]\to[0,1)$: $$f(x)=\begin{cases}\frac1{n+1}&\text{if }x=\frac1n\text{ for some }n\in\mathbb N\\x&\text{otherwise}\end{cases}$$ Then $$g(x)=f(1-f(x))$$ is a bijection $[0,1]\to(0,1)$. - Yes, just as the integers greater than 0 have the same cardinality as the integers greater than 1. Deleting a finite number of elements from any infinite set doesn't change its cardinality. In this case we can find an explicit bijection. Define $f: [0,1) \to (0,1)$ as follows: $$f(x)=\begin{cases}\frac13&\text{if }x=0\\\frac1{3^{k+1}}&\text{if }x=\frac1{3^k}\text{ for some }k\in\mathbb N\\x&\text{otherwise}\end{cases}$$ You can do the same at the other end. -
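For concreteness, here is the first answer's construction executed on exact rationals (a sketch of my own, not from the thread; the "Hilbert hotel" shift only moves the points $1, \frac12, \frac13, \dots$, so everything stays exact with fractions):

```python
from fractions import Fraction

def f(x):
    """[0,1] -> [0,1): shift 1 -> 1/2 -> 1/3 -> ..., fix everything else."""
    if x != 0 and x.numerator == 1:          # x = 1/n for some n >= 1
        return Fraction(1, x.denominator + 1)
    return x

def g(x):
    """The answer's bijection [0,1] -> (0,1): g(x) = f(1 - f(x))."""
    return f(1 - f(x))

# The endpoints land strictly inside (0,1), as claimed:
print(g(Fraction(0)), g(Fraction(1)), g(Fraction(1, 2)))  # 1/2, 1/3, 2/3
```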
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9416680932044983, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/33259/calculate-mass-of-air-in-a-tyre-from-pressure/33263
# Calculate mass of air in a tyre from pressure How can one calculate the mass of air inside a tyre, given a particular tyre size; a pressure, in $kPa = \frac{1000kg}{m\cdot s^2}$; and assuming room temperature, and normal air composition? I can't quite work out what part of the equations I'm missing to remove the $s^2$ component. I realise that surface/volume ratio is important, but for these purposes an approximation to a torus would be fine. For example, assuming a 700cx25 bicycle tyre, we might assume a torus where the diameter between centre of the two cross-section circles is about 630mm, and the diameter of the circles themselves is about 30mm. Let's assume a tyre pressure of 500kPa (~73psi). Rough volume would be $630$mm$\times\pi \times \pi (15$mm$)^2 = 1.4\times10^6$mm$^3 = 0.0014$m$^3$ Rough surface area: $630$mm$\times\pi \times 30$mm$\times\pi = 1.87\times10^5$mm$^2 = 0.187$m$^2$ - ## 1 Answer You need an equation for the density of the gas as a function of temperature and pressure. Assuming the tyre is full of air, this is reasonably close to an ideal gas, so the molar volume is given by: $$V_m = \frac{RT}{P}$$ where R is the ideal gas constant and the average molecular weight of air (20% oxygen, 80% nitrogen, counted as the diatomic molecules O$_2$ and N$_2$: $0.2\times 32 + 0.8\times 28$) is about 28.8 g/mol. From this you can work out the density at the pressure and temperature of your tyre, and since you know the volume this immediately gives you the mass. - Or you could just use the fact that the experimental density of air at room TP is around 1.2 kg/m^3, together with how many bar you inflate your tire to. – Martin Beckett Aug 1 '12 at 12:58
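To put numbers to this answer, here is a minimal sketch (my own, not from the thread). It assumes the quoted 500 kPa is a gauge reading, so atmospheric pressure is added; drop the +101 kPa term if the pressure is meant as absolute. The volume is the torus volume computed in the question.

```python
# Ideal-gas estimate of the air mass in the bicycle tyre above: m = P*V*M / (R*T).
R = 8.314            # J/(mol*K), ideal gas constant
T = 293.0            # K, room temperature (assumption)
P = 500e3 + 101e3    # Pa: 500 kPa gauge + atmospheric (assumption: gauge reading)
V = 0.0014           # m^3, torus volume from the question
M = 0.0288           # kg/mol, mean molar mass of air (0.2*32 + 0.8*28 g/mol)

moles = P * V / (R * T)   # amount of gas
mass = moles * M          # kg; the s^2 cancels once P*V is an energy in joules
print(f"{mass * 1000:.1f} g of air")   # roughly 10 g
```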
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9181681275367737, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/136345-finding-mean-standard-deviation-without-given-data.html
# Thread: 1. ## Finding mean and standard deviation without given data

x    |   0    1    2    3    4    5    6    7    8    9   10
p(x) | .01   .0  .01  .02  .02  .09  .07  .17  .29  .14  .18

The above is a data set from a survey response; it is a rating scale from 0 to 10, 0 being the worst, 10 being the best. The problem is how to find a mean and standard deviation of the ratings, when I'm not even sure of the size of the sample, or even if it IS a sample, although if it's a survey, it most likely is. I'm wondering if I should be looking at it as a point estimator for p, or something else; I'm just confused and lost on where I should begin. 2. Originally Posted by kolorbynumber

x    |   0    1    2    3    4    5    6    7    8    9   10
p(x) | .01   .0  .01  .02  .02  .09  .07  .17  .29  .14  .18

The above is a data set from a survey response; it is a rating scale from 0 to 10, 0 being the worst, 10 being the best. The problem is how to find a mean and standard deviation of the ratings, when I'm not even sure of the size of the sample, or even if it IS a sample, although if it's a survey, it most likely is. I'm wondering if I should be looking at it as a point estimator for p, or something else; I'm just confused and lost on where I should begin.

Your textbook or class notes have the required formulae .... You should know that $\overline{X} = \sum_x x \cdot p(x)$ etc.
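A quick way to evaluate those formulae for the table above (my own sketch, not from the thread). It treats the table as the exact population distribution, which is all one can do when only proportions, not counts, are reported:

```python
xs = range(11)
ps = [0.01, 0.0, 0.01, 0.02, 0.02, 0.09, 0.07, 0.17, 0.29, 0.14, 0.18]

mean = sum(x * p for x, p in zip(xs, ps))               # E[X] = sum_x x*p(x)
var = sum((x - mean) ** 2 * p for x, p in zip(xs, ps))  # E[(X - mean)^2]
print(mean, var ** 0.5)                                 # 7.6 and about 1.94
```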
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9210711717605591, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/96784/is-it-possible-to-split-a-single-sample-from-a-discrete-uniform-distribution-int
# Is it possible to split a single sample from a discrete uniform distribution into two samples from two smaller distributions? Suppose I have a single integer sample $k$ from a discrete uniform distribution such that $0 \le k \lt 2^{32}$. Is it always possible to interpret this sample as a pair of samples $m, n$ from two other discrete uniform distributions such that $0 \le m \lt r$ (for any integer $r$ less than, say, $2^{30}$) and $0 \le n \lt 2^p$ (where $p$ is any integer at all, but ideally as large as possible)? I know that this is trivial when $r = 2^x$; you can take $p = 32 - x, m = k \bmod r, n = \lfloor k / r \rfloor$. But I don't know how to do it in general and still preserve uniformity. - My guess is that it should only be possible when $r*2^p | 2^{32}$ --- at least, assuming you don't want to place any other conditions on $k$ (for example, you might resample from $[0, 2^{32}-1]$ whenever $k > z * r * 2^p$, where $z = \lfloor 2^{32}/(r*2^p) \rfloor$. – duncanm Jan 5 '12 at 23:44 1 Take for example $r=2^{30}-1$, or much more modestly, $r=3$. If we want to simulate a uniform on $\{0,1,2\}$ using a uniform on a set of size $2^{32}$, there will have to be instances when we throw the sampled number away and repeat. For $3$, it can be arranged that this hardly ever happens, since the number of integers between $1$ and $2^{32}-1$ (inclusive) is divisible by $3$, so we need a new sample only if in our sampling we get the number $0$. – André Nicolas Jan 5 '12 at 23:51 ## 1 Answer Let $\Omega$ be a sample space of cardinality $2^n$, and suppose that all the elements $\alpha\in \Omega$ are equally likely. Then any event $E \subseteq \Omega$ has probability of the shape $\frac{m}{2^n}$ for some integer $m$. Thus $P(E)$ is a dyadic rational. If $p$ is not a dyadic rational, we cannot find an event $E\subseteq \Omega$ such that $P(E)=p$. In particular, if the positive integer $r$ is not a power of $2$, we cannot find an $E\subseteq \Omega$ that has probability (exactly) equal to $\frac{1}{r}$, and interpreting as a pair of samples in the desired way is not possible. Sampling from $\Omega$ a bounded number of times will produce the same result. If we do not put a priori bounds on the number of times that we sample, then any probability $p$ can be simulated. -
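A sketch of the resampling scheme the comments and the answer describe (my own illustration, assuming $r\cdot 2^p \le 2^{32}$): accept a 32-bit draw only if it falls below the largest multiple of $r\cdot 2^p$ that fits in $2^{32}$; the accepted residue then splits exactly into the desired pair. In line with the answer, the number of draws cannot be bounded in advance unless $r\cdot 2^p$ divides $2^{32}$.

```python
import random

def split_uniform(r, p, draw=lambda: random.getrandbits(32)):
    """Return (m, n), independent, with m uniform on [0, r) and
    n uniform on [0, 2**p), by rejection on 32-bit draws."""
    span = r << p                          # r * 2**p, assumed <= 2**32
    limit = (1 << 32) - (1 << 32) % span   # largest multiple of span <= 2**32
    while True:
        k = draw()
        if k < limit:        # accept: k % span is then exactly uniform
            k %= span
            return k % r, k // r
```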
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441967010498047, "perplexity_flag": "head"}
http://torus.math.uiuc.edu/cal/math/cal?year=2012&month=08&day=20&interval=year&regexp=Algebraic+Geometry+Seminar&use=Find
Seminar Calendar for Algebraic Geometry Seminar events the year of Monday, August 20, 2012. Questions regarding events or the calendar should be directed to Tori Corkery.
```
      July 2012              August 2012            September 2012
Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa
 1  2  3  4  5  6  7             1  2  3  4                      1
 8  9 10 11 12 13 14    5  6  7  8  9 10 11    2  3  4  5  6  7  8
15 16 17 18 19 20 21   12 13 14 15 16 17 18    9 10 11 12 13 14 15
22 23 24 25 26 27 28   19 20 21 22 23 24 25   16 17 18 19 20 21 22
29 30 31               26 27 28 29 30 31      23 24 25 26 27 28 29
                                              30
```
Thursday, March 15, 2012 Joint number theory / algebraic geometry seminar 11:00 am in 217 Noyes, Thursday, March 15, 2012. Submitted by ford. Noam Elkies (Harvard Math): On the areas of rational triangles. Abstract: By a "rational triangle" we mean a plane triangle whose sides are rational numbers. By Heron's formula, there exists such a triangle of area $\sqrt{a}$ if and only if $a > 0$ and $x y z (x + y + z) = a$ for some rationals $x, y, z$. In a 1749 letter to Goldbach, Euler constructed infinitely many such $(x, y, z)$ for any rational $a$ (positive or not), remarking that it cost him much effort, but not explaining his method. We suggest one approach, using only tools available to Euler, that he might have taken, and use this approach to construct several other infinite families of solutions. We then reconsider the problem as a question in arithmetic geometry: $xyz(x+y+z) = a$ gives a K3 surface, and each family of solutions is a singular rational curve on that surface defined over $\mathbb{Q}$. The structure of the Neron-Severi group of that K3 surface explains why the problem is unusually hard. Along the way we also encounter the Niemeier lattices (the even unimodular lattices in $\mathbb{R}^{24}$) and the non-Hamiltonian Petersen graph. Tuesday, April 3, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, April 3, 2012. Submitted by katz. Sheldon Katz [email] (Department of Mathematics, University of Illinois at Urbana-Champaign): Quantum Cohomology of Toric Varieties. Abstract: The structure of the quantum cohomology ring of a smooth projective toric variety was described by Batyrev and proven by Givental as a consequence of his work on mirror symmetry. This talk is in part expository since some details were never written down by Givental. I conclude with some open questions related to the quantum cohomology ring and the quantum product. An extension of these questions plays a foundational role in the development of quantum sheaf cohomology which has been undertaken jointly with Donagi, Guffin, and Sharpe. Given a smooth projective variety X and a vector bundle E with $c_i(E)=c_i(X)$ for i=1,2, the quantum sheaf cohomology ring of string theory is supposed to be a deformation of the algebra $H^*(X,\Lambda^*E^*)$. If E=TX, quantum sheaf cohomology is the same as ordinary quantum cohomology. Wednesday, April 4, 2012 Algebraic Geometry Seminar 3:00 pm in 145 Altgeld Hall, Wednesday, April 4, 2012. Submitted by seminar. Alain Couvreur (INRIA Saclay and Ecole Polytechnique Paris): A construction of codes based on the Cartier operator. Abstract: We present a new construction of codes from algebraic curves which is suitable to provide codes on small fields. The approach involves the Cartier operator and can be regarded as a natural generalisation of classical Goppa codes. 
As for algebraic geometry codes, lower bounds on the parameters of these codes can be obtained by algebraic geometric methods. Tuesday, August 28, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, August 28, 2012. Submitted by choi29. Julius Ross (University of Cambridge): Maps in Kahler Geometry associated to Okounkov Bodies. Abstract: The Okounkov body is a convex body in Euclidean space that can be associated to a projective manifold with a given flag of submanifolds. This convex body generalises certain aspects of the familiar Delzant polytope for toric varieties, although the Okounkov body will not be polyhedral or rational in general. In this talk I will discuss some joint work with David Witt-Nystrom that involves the study of maps from a manifold to its Okounkov body coming from Kahler geometry that are similar to the moment map in toric geometry. I will start by introducing the Okounkov body and the kind of maps that one might like to have, and then give an inductive construction that works in a neighbourhood of the flag. This is achieved through a homogeneous Monge-Ampere equation associated to the degeneration to the normal cone of a divisor, that can be thought of as a kind of "tubular neighbourhood" theorem in complex geometry. Tuesday, September 25, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, September 25, 2012. Submitted by katz. Sheldon Katz (Illinois Math): Refined Stable Pair Invariants on Local Calabi-Yau Threefolds. Abstract: A refinement of the stable pair invariants of Pandharipande and Thomas is introduced, either as an application of the equivariant index recently introduced by Nekrasov and Okounkov or as a "motivic" Laurent polynomial based on what we call the virtual Bialynicki-Birula decomposition, specializing to the usual stable pair invariants. We propose a product formula for the refined invariants extending the motivic product formula of Morrison, Mozgovoy, Nagao, and Szendroi for local $P^1$, based on the refined BPS invariants of the string theorists Huang, Kashani-Poor, and Klemm. We explicitly compute the invariants in low degree for local $P^2$ and local $P^1 \times P^1$ and check that they agree with the predictions of string theory and with our conjectured product formula. This is joint work with Jinwon Choi and Albrecht Klemm. Tuesday, October 2, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, October 2, 2012. Submitted by choi29. Gabriele La Nave (UIUC Math): Abramovich-Vistoli vs. Alexeev/Kollar--Shepherd-Barron. Abstract: I will discuss why Kontsevich stable maps into DM stacks are stacky in nature and discuss Abramovich-Vistoli's theory of twisted curves and their consequent description of the compactification of the moduli space of "fibered surfaces" in contrast with Kollar--Shepherd-Barron MMP type of compactifications. I will then describe how to use these tools along with some toric geometry to give a complete explicit description of the boundary of the moduli space of elliptic surfaces with sections. Tuesday, October 9, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, October 9, 2012. Submitted by katz. 
David Smyth (Harvard): Stability of finite Hilbert points. Abstract: The classical construction of the moduli space of stable curves via Geometric Invariant Theory relies on the asymptotic stability result of Gieseker and Mumford that the m-th Hilbert point of a pluricanonically embedded curve is GIT-stable for all sufficiently large m. Several years ago, Hassett and Keel observed that if one could carry out the GIT construction with non-asymptotic linearizations, the resulting models could be used to run a log minimal model program for the space of stable curves. A fundamental obstacle to carrying out this program has been the absence of a non-asymptotic analogue of Gieseker's stability result, i.e. how can one prove stability of the m-th Hilbert point for small values of m? In recent work with Jarod Alper and Maksym Fedorchuk, we prove that the m-th Hilbert point of a general smooth canonically or bicanonically embedded curve is GIT-semistable for all m>1. For (bi)canonically embedded curves, we recover Gieseker-Mumford stability by a much simpler proof. Tuesday, October 30, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, October 30, 2012. Submitted by choi29. Luke Oeding [email] (University of California, Berkeley): Hyperdeterminants of polynomials. Abstract: Hyperdeterminants were brought into a modern light by Gelʹfand, Kapranov, and Zelevinsky in the 1990's. Inspired by their work, I will answer the question of what happens when you apply a hyperdeterminant to a polynomial (interpreted as a symmetric tensor). The hyperdeterminant of a polynomial factors into several irreducible factors with multiplicities. I identify these factors along with their degrees and their multiplicities, which both have a nice combinatorial interpretation. The analogous decomposition for the μ-discriminant of a polynomial is also found. The methods I use to solve this algebraic problem come from geometry of dual varieties, Segre-Veronese varieties, and Chow varieties; as well as representation theory of products of general linear groups. Tuesday, November 6, 2012 Algebraic Geometry Seminar 3:00 pm in Altgeld Hall, Tuesday, November 6, 2012. Submitted by choi29. Izzet Coskun (UIC): The birational geometry of the Hilbert scheme of points on surfaces and Bridgeland stability. Abstract: In this talk, I will discuss the cones of ample and effective divisors on Hilbert schemes of points on surfaces. I will explain a correspondence between the Mori chamber decomposition of the effective cone and the Bridgeland decomposition of the stability manifold. This is joint work with Daniele Arcara, Aaron Bertram and Jack Huizenga. Tuesday, November 13, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, November 13, 2012. Submitted by choi29. Peng Shan (MIT): Affine Lie algebras and Rational Cherednik Algebras. Abstract: Varagnolo-Vasserot conjectured an equivalence between the category O of cyclotomic rational Cherednik algebras and a parabolic category O of affine Lie algebras. I will explain a proof of this conjecture and some applications to the characters of simple modules for cyclotomic rational Cherednik algebras and the Koszulity of its category O. This is joint work with R. Rouquier, M. Varagnolo and E. Vasserot. Tuesday, November 27, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, November 27, 2012. Submitted by choi29. 
Daniel Erman (University of Michigan): Syzygies and Boij--Soederberg Theory. Abstract: For a system of polynomial equations, it has long been known that the relations (or syzygies) among the polynomials provide insight into the properties and invariants of the corresponding projective varieties. Boij--Soederberg Theory offers a powerful perspective on syzygies, and in particular reveals a surprising duality between syzygies and cohomology of vector bundles. I will describe new results on this duality and on the properties of syzygies. This is joint work with David Eisenbud. Tuesday, December 4, 2012 Algebraic Geometry Seminar 3:00 pm in 243 Altgeld Hall, Tuesday, December 4, 2012. Submitted by choi29. Dawei Chen (Boston College): Extremal effective divisors on the moduli space of curves. Abstract: The cone of effective divisors plays a central role regarding the birational geometry of a variety X. In this talk we discuss several approaches that verify the extremality of a divisor, with a focus on the case when X is the moduli space of curves.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9027190208435059, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/61474-polynomial.html
# Thread: 1. ## polynomial The remainder when -4x^2 + 2x + 7 is divided by (x - c) is -5. Find a possible whole number value of c. 2. Well... why not divide and see what you get! It's too complicated for me to write out, but I get that x - c goes into -4x^2 + 2x + 7, -4x + (2-4c) times with a remainder of 7 + (2 - 4c)*c. Can you figure it out from there? 3. Not really... I'd rather someone show me a complete worked-out solution (I'd understand how to do this if the -c in x - c were a number). And be quick please everybody... the faster I get a response the more math I can do! Sorry if that seemed rude; anyway, I'd appreciate any more responses. 4. Also, I don't think you need to divide -4x^2 + 2x + 7 by the x - c to find out the answer. I am looking for this other way. 5. The remainder theorem says: If a polynomial $p(x)$ is divided by a linear factor $x - a$, then the remainder is equal to $p(a)$. So, we have to solve the equation: $p(a) = -5 \ \iff \ -4a^2+2a + 7 = -5$ Move -5 to the other side to get: $-4a^2 + 2a + 12 = 0$ This is a simple, factorable quadratic. See if you can go on from here. 6. I don't understand. 7. Originally Posted by tyco3c I don't understand. Quote what you are referring to so that we know what help you need. CB
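For readers stuck where the thread stops, the factoring that the worked reply alludes to goes as follows (an editorial completion, not part of the original exchange):
$$-4a^2 + 2a + 12 = 0 \;\iff\; 2a^2 - a - 6 = 0 \;\iff\; (2a+3)(a-2) = 0,$$
so $a = 2$ or $a = -\tfrac{3}{2}$, and the possible whole number value is $c = 2$. As a check, $-4(2)^2 + 2(2) + 7 = -16 + 4 + 7 = -5$, matching the given remainder.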
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9414089918136597, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/4476/fixed-point-fourier-transform-and-similar-transforms
# Fixed point Fourier transform (and similar transforms) The Fourier transform can be defined on $L^1(\mathbb{R}^n) \cap L^2(\mathbb{R}^n)$, and we can extend this to $X:=L^2(\mathbb{R}^n)$ by a density argument. Now, by Plancherel we know that $\|\widehat{f}\|_2 = \|f\|_2$, so the Fourier transform is an isometry on this space. My question now is, what is a theorem that guarantees that the Fourier transform has a fixed point on $L^2$? I know the Gaussian is a fixed point, but I'm also interested in other integral transforms; I just take the Fourier transform as an example. The Banach Fixed Point Theorem does not work here since we don't have a contraction (operator norm $< 1$). Can we apply the Tychonoff fixed point theorem? Then we would need to show that there exists a non-empty compact convex set $C \subset X$ such that the Fourier transform restricted to $C$ is a mapping from $C$ to $C$. Is this possible? If we have a fixed point, what would be a way to show it is unique? By linearity we obviously have infinitely many fixed points if we have at least two of them. - 3 – anonymous Sep 12 '10 at 13:52 There they directly compute it but I just want to know if we can use one of the fixed point theorems. – Jonas Teuwen Sep 12 '10 at 14:29 2 I don't think you can get the results you want with fixed point theorems. – John D. Cook Sep 12 '10 at 22:29 ## 1 Answer My Functional Analysis Fu has gotten a bit weak lately, but I think the following should work: The Schauder fixed point theorem says that a continuous function on a compact convex set in a topological vector space has a fixed point. Because of the isometry property, the Fourier transform maps the unit ball in $L^2$ to itself. Owing to the Banach-Alaoglu theorem, the unit ball in $L^2$ is compact with respect to the weak topology. The Fourier transform is continuous in the weak topology, because if $( f_n, \phi ) \to (f, \phi)$ for all $\phi \in L^2$, then $$(\hat{f}_n, \phi) = (f_n, \hat{\phi}) \to (f, \hat{\phi}) = (\hat{f}, \phi).$$ - 2 True, but if say the continuous function is $f\mapsto -f$ then the only fixed point is zero, which I don't think Jonas is looking for. – Robin Chapman Sep 13 '10 at 6:53 @Robin Chapman: Could you elaborate on that remark? What function do you mean? – Jonas Teuwen Sep 13 '10 at 17:24 2 @Jonas: Robin is quite rightly pointing out that my approach just shows that there is a fixed point in the unit ball of L^2. However, since 0 is in the unit ball and it is trivially a fixed point of the Fourier transform, this does not tell us anything new. – Michael Ulm Sep 14 '10 at 5:02 1 Ah, right. Okay, then this is not the answer I'm looking for, sorry ;-). Maybe John D. Cook has a point that it might not work with fixed-point theorems? – Jonas Teuwen Sep 14 '10 at 8:52 Since nobody else answered I will accept this as the answer. The used technique can be of some value anyway. – Jonas Teuwen Oct 11 '10 at 18:57
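One standard fact is worth recording next to this thread (an editorial remark, not part of the original posts): the fixed point is never unique, even up to scaling. With the unitary normalization $\hat f(\xi) = (2\pi)^{-1/2}\int_{\mathbb{R}} f(x)\,e^{-ix\xi}\,dx$, the Hermite functions $h_n(x) = H_n(x)\,e^{-x^2/2}$ are eigenfunctions:
$$\widehat{h_n} = (-i)^n\, h_n,$$
so every $h_n$ with $n \equiv 0 \pmod 4$ is a fixed point, and the fixed-point subspace is the infinite-dimensional closed span of $\{h_0, h_4, h_8, \dots\}$; the Gaussian $h_0$ is only the first of them.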
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343177080154419, "perplexity_flag": "head"}
http://mathoverflow.net/questions/68385/calculate-camera-position-from-3x4-projection-matrix
# Calculate camera position from 3x4 projection matrix I have a 3 x 4 projection matrix $P$ given that calculates a homogeneous 2-vector ${\bf i}=(u,v,w)^T$ on some screen from a homogeneous 3-vector ${\bf x}=(x,y,z,w)^T$ in world space by $P \cdot {\bf x} = {\bf i}$. How can I calculate the position of the camera in world space from that? - 1 Dear Jakob, no offense intended, but this forum is for research-level math questions, and I fear that yours isn't quite. You might have more luck at the related site: math.stackexchange.com – André Henriques Jun 21 2011 at 14:12 1 Thanks for your hint - I really was not aware that there are two maths sites in the stackoverflow universe. I'll give it a try. – Jakob Jun 21 2011 at 16:23 ## 2 Answers Sorry for answering my own question, but just now a colleague told me the solution and I want to share it - maybe it is of some use for anybody else some day. 1. Separate $P$ into a 3x3 matrix $P'$ (the first three columns) and a vector $\bf F'$ (the last column). 2. Invert $P'$. 3. The projection reference point (i.e., the 'camera') is then ${\bf C}=-P'^{-1} \cdot {\bf F'}$. (Note the minus sign: the camera centre is the one point that $P$ sends to the zero vector, so $P'{\bf C} + {\bf F'} = 0$.) - Check out Computer Graphics: Principles and Practice in C (2nd Edition) by James D. Foley, Andries van Dam, Steven K. Feiner and John F. Hughes (Hardcover - Aug 14, 1995) And all will be revealed. - Thank you for your suggestion. I will have a deeper look into it, but at first sight the book does not really seem to answer my question. – Jakob Jun 21 2011 at 16:25
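To make the recipe concrete, here is a minimal Python/NumPy sketch (my own illustration, not from the thread). It assumes a finite camera, i.e. that the left 3x3 block of $P$ is invertible:

```python
import numpy as np

def camera_center(P):
    """Camera centre C of a 3x4 projection matrix P = [M | p4].

    C is the unique world point with P @ (C, 1) = 0, hence C = -M^{-1} p4.
    """
    M, p4 = P[:, :3], P[:, 3]
    return -np.linalg.solve(M, p4)

# Self-check: build P from a known centre and recover it.
M = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
C = np.array([1.0, 2.0, 3.0])
P = np.hstack([M, (-M @ C).reshape(3, 1)])
assert np.allclose(camera_center(P), C)
```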
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9185968041419983, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/88719?sort=oldest
Asymptotic Geodesic Flow on Planar Graphs Assume you have a simple and infinite graph. Choose an arbitrary vertex $x_{0}$ and consider $$G_{n}:=\{x\in G:d(x_0,x)\leq n\}$$ with the graph metric (hop metric). Now for each pair of nodes there is a unit flow that travels through the minimum path between the nodes (if there is more than one minimum path it splits equally along all such paths). The total flow in $G_{n}$ is equal to $|G_{n}|^2$. Given a node $v\in G_{n}$ we define $T_{n}(v)$ as the total flow generated in $G_{n}$ passing through $v$. In other words, $T_{n}(v)$ is the sum of all the geodesic paths in $G_{n}$ which are carrying flow and contain the node $v$. Let $M_{n}$ be the maximum flow $$M_{n}:=\max_{v\in G_{n}}{T_{n}(v)}.$$ For any graph $|G_{n}|\leq M_{n}\leq |G_{n}|^2$. It is not very difficult to see that for $G=\mathbb{Z}^2$ $$M_{n}=O(|G_{n}|^{3/2}).$$ My question is: can we do better than this in a planar graph? In other words, does there exist an infinite planar graph $G$ such that $M_{n}=o(|G_{n}|^{3/2})$? If so, how low can we go? Note that the same problem could be reformulated by considering, instead of an infinite graph $G$, a sequence of graphs $\{G_{n}\}$ such that $|G_n|\to\infty$. - 1 Answer You can't do better. Lipton and Tarjan's planar separator theorem says that any $n$-node planar graph $G=(V,E)$ contains a set $S$ of $O(\sqrt{n})$ vertices whose removal separates the graph into components all of which have size at most $2n/3$. We can then partition $V \setminus S$ into sets $X,Y$ each containing at least $(1/3-o(1))n$ vertices; there are order $n^2$ pairs $(u,v) \in X \times Y$, and any path between such a pair $(u,v)$ contains a vertex of $S$. Since $|S|=O(\sqrt{n})$, by the pigeonhole principle it follows that some element of $S$ is in order $n^{3/2}$ of the paths between $X$ and $Y$. - 1 And this works for any flow, not just geodesic. – Ori Gurel-Gurevich Feb 20 2012 at 2:33 Thanks Louigi! It makes total sense. – ght Feb 20 2012 at 2:57
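For readers who want to experiment numerically: up to the bookkeeping conventions for endpoints and ordered versus unordered pairs, $T_n(v)$ is the unnormalized betweenness centrality of $v$, and Brandes' algorithm implements exactly the "split equally among all shortest paths" rule. A sketch using networkx (my own illustration; the conventions only change $M_n$ by a bounded factor):

```python
import networkx as nx

def M_n(G):
    """Max over vertices of the shortest-path flow through that vertex."""
    bc = nx.betweenness_centrality(G, normalized=False, endpoints=True)
    return max(bc.values())

# Finite chunks of Z^2, to eyeball the |G_n|^{3/2} scaling from the question:
for side in (10, 20, 40):
    G = nx.grid_2d_graph(side, side)
    N = G.number_of_nodes()
    print(N, M_n(G) / N ** 1.5)   # the ratio should stay roughly constant
```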
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430304765701294, "perplexity_flag": "head"}
http://mathoverflow.net/questions/79819/a-particular-isomorphism-of-graded-algebras-over-a-regular-local-ring
## A particular Isomorphism of graded algebras over a regular local ring In Hartshorne's "Algebraic Geometry", the following statement is a weaker form of Theorem 8.21A (e), which he quotes from Matsumura's book on commutative algebra: Proposition. Let $R$ be a regular local ring and $I=(x_1,\ldots,x_r)\subset R$ an ideal generated by a regular sequence. Let $A:=R/I$. Then, $$\begin{eqnarray*} \phi: A[T_1,\ldots,T_r] &\overset{_\sim}{\longrightarrow} & \mathrm{gr}_I(R) = \bigoplus\nolimits_{d\ge 0} {I^d}/{I^{d+1}} \\ T_i & \longmapsto & x_i \end{eqnarray*}$$ is an isomorphism of graded $A$-algebras. In Hartshorne, the condition of being regular and local is strengthened to Cohen-Macaulay. However, I only need the above. I tried to look up the proof for the general statement in Matsumura's book, but it seems rather involved (and honestly, a bit convoluted). I would like to use results about regular local rings and, in turn, avoid introducing terminology like Hilbert-Samuel polynomials. So, I guess I am asking for an "easy" proof of the above proposition. It seems rather easy for $r=1$, but I am somehow stuck trying to prove it by induction. - 1 One reference is Bruns-Herzog, Theorem 1.1.8. You don't really need any assumption on $R$. – Hailong Dao Nov 2 2011 at 19:24 That's just perfect. Make it an answer and I'll accept. – Jesko Hüttenhain Nov 2 2011 at 23:13 ## 1 Answer One reference is Bruns-Herzog, Theorem 1.1.8. You don't really need any assumption on $R$. -
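Since the question calls the case $r=1$ easy, it may help to record that case explicitly (an editorial fill-in, in the spirit of the answer's remark that no assumption on $R$ is needed beyond $x$ being a nonzerodivisor). Here $I=(x)$, so $I^d=(x^d)$ and the map
$$\phi: A[T]\longrightarrow \mathrm{gr}_I(R),\qquad T\longmapsto \bar x \in I/I^2$$
is visibly surjective. For injectivity it suffices to check homogeneous elements: if $\bar a\,T^d$ lies in the kernel, then $a\,x^d \in I^{d+1}$, say $a\,x^d = b\,x^{d+1}$, so $(a - bx)\,x^d = 0$; since $x$ is a nonzerodivisor this forces $a = bx \in I$, i.e. $\bar a = 0$ in $A = R/I$.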
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8860535025596619, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/127310/root-calculation-by-hand
# Root Calculation by Hand Is it possible to calculate and find the solution of $\; \large{105^{1/5}} \;$ without using a calculator? Could someone show me how to do that, please? Well, when I use a Casio scientific calculator, I get this answer: " 2.536517482 ". With WolframAlpha, I am able to get a much more detailed query return actually. - 1 $(105)^{1/5}$ is not the square root of $105$. – Salech Alhasov Apr 2 '12 at 17:37 Yes, sorry about that, you're right. I'm editing it right now... – Kerim Atasoy Apr 2 '12 at 17:39 1 I think it's better now, right...? :) – Kerim Atasoy Apr 2 '12 at 17:41 Perfect! +1 to the question! – Salech Alhasov Apr 2 '12 at 17:47 1 @KerimAtasoy Good for you then! – Peter Tamaroff Apr 30 '12 at 16:43 show 4 more comments ## 5 Answers You can try using the binomial theorem for real exponents. You can write this as $$((2.5)^5 + (105 - (2.5)^5))^{1/5} = 2.5 \left(1 + \frac{105 - (2.5)^5}{2.5^5}\right)^{1/5} = \frac{5}{2} \left(1 + \frac{47}{625}\right)^{1/5}$$ Taking the first three terms of the binomial series $$(1+x)^r = 1 + rx + \frac{r(r-1)x^2}{2!} + \frac{r(r-1)(r-2)x^3}{3!} + \dots$$ using $r = \frac{1}{5}$ and $x = \frac{47}{625}$ gives us $$\frac{5}{2} \left(1 + \frac{47}{5*625} - \frac{4 * 47^2}{2*5^2*625^2}\right) = \frac{4954041}{1953125} \approx 2.5365$$ If you need a better approximation, you can include more terms. All this can be done by hand using integer arithmetic, but it is tedious. - 1 VERY GOOD!... :) Thank you very much!... I should better study these terms and subjects also... :) Thank you again... – Kerim Atasoy Apr 2 '12 at 18:55 @KerimAtasoy: You are welcome. But I am curious, what prompted you to ask this question? Are you trying to figure out how calculators work under the covers? – Aryabhata Apr 2 '12 at 18:55 :) Sure, it's a nice pleasure to have such a talk with you, sir. Well, I like calculators very much, but calculating and solving some problems without using a calculator seems better sometimes... :) I've already noticed that there are just a few people concerned about these approaches in real life... :) Yes, I've also been curious about calculators; sometimes I try to study them using some scripting or programming languages. But, actually, it's very hard work to create a very good calculator application, see... :) – Kerim Atasoy Apr 2 '12 at 19:21 2 Notice that using the 1st term of the binomial expansion is similar to a Newton update. To wit, suppose $x$ is an approximation to $105^{\frac{1}{5}}$, then we can write $105^{\frac{1}{5}} = (x^5+(105-x^5))^{\frac{1}{5}} = x(1+\frac{105-x^5}{x^5})^{\frac{1}{5}}$. Expanding the term in parentheses to the first term gives $x(1+\frac{1}{5}\frac{105-x^5}{x^5})$ which simplifies to the Newton update. – copper.hat Apr 2 '12 at 19:27 2 @KerimAtasoy: Yes, there is a lot of theory involved which can be quite interesting. Good luck with your studies! – Aryabhata Apr 2 '12 at 19:28 show 3 more comments I go back to the days BC (before calculators). We did have electricity, but you had to rub a cat's fur to get it. We also had slide rules, from which a $2$ to $3$ place answer could be found quickly, with no battery to go dead in the middle of an exam. Engineering students wore theirs in a belt holster. Unfortunately, slide rules were expensive, roughly the equivalent of two meals at a very good restaurant. For higher precision work, everyone had a book of tables. My largish book of tables has the entry $021189$ beside $105$. 
This means that $\log(105)=2.021189$ (these are logarithms to the base $10$, and of course the user supplies the $2$). Divide by $5$, which is trivial to do in one's head (multiply by $2$, shift the decimal point). We get $0.4042378$. Now use the tables backwards. The log entry for $2536$ is $404149$, and the entry for $2537$ is $404320$. Note that our target $0.4042378$ is about halfway between these. We conclude that $(105)^{1/5}$ is about $2.5365$. The table also has entries for "proportional parts," to make interpolation faster. As for using the table backwards, that is not hard. Each page of the $27$ page logarithms section has in a header the range of numbers, and the range of logarithms. The page I used for reverse lookup is headed "Logs $.398\dots$ to $.409\dots$." There are other parts of the book of tables that deal with logarithms, $81$ pages of logs of trigonometric functions (necessary for navigation, also for astronomy, where one really wants good accuracy). And of course there are natural logarithms, only $17$ pages of these. And exponential and hyperbolic functions, plus a few odds and ends. - 5 Very interesting... Glad that you made this far and sharing your experiences world wide... :) – Kerim Atasoy Apr 2 '12 at 20:37 I'm not exactly sure what you mean by 'without a calculator'. You could try Newton's method to solve $f(x) = 0$, where $f(x) = x^5-105$. The Newton update is then $x_{n+1} = \frac{4}{5}x_n + \frac{1}{5} \frac{105}{{x_n}^4}$. This converges very quickly. Of course, this involves computing the 4th power, and dividing... - :) Well, How about "... without using a calculator" ..? OK, I'm editing it again... :) – Kerim Atasoy Apr 2 '12 at 17:53 I didn't get this method... Could you explain and show it more, please? For example, where that $\; \large{\frac{4}{5}} \;$ comes from...? – Kerim Atasoy Apr 2 '12 at 18:20 11 Newton's method is based on a linear approximation of a function using the Taylor series about $x_0$: $$f(x)=f(x_0)+f'(x_0)(x-x_0)+O(x-x_0)^2\tag{1}$$ where $O(x-x_0)^2$ is considered insignificant. Trying to find the $x$ so that $f(x)=0$, $(1)$ says $$x-x_0=-\frac{f(x_0)}{f'(x_0)}+O(x-x_0)^2\tag{2}$$ which, ignoring $O(x-x_0)^2$, leads to the iteration $$x=x_0-\frac{f(x_0)}{f'(x_0)}\tag{3}$$ Plugging $f(x)=x^5-a$ into $(3)$ yields $$\begin{align} x &=x_0-\frac{x_0^5-a}{5x_0^4}\\ &=\frac45x_0+\frac{a}{5x_0^4}\tag{4} \end{align}$$ – robjohn♦ Apr 2 '12 at 19:02 2 My advice if using this and other formulae like it would be to rewrite $x_n$ as $p_n/q_n$, then plug these into the formula to get an expression for $x_{n+1}=p_{n+1}/q_{n+1}$ - you can then divide this into two series for the $p$s and $q$s. – James Fennell Apr 3 '12 at 19:11 2 In our case we would have $$\frac{p_{n+1}}{q_{n+1}}=\frac{4}{5}\frac{p_n}{q_n} + \frac{a}{5} \frac{q_n^4}{p_n^4} = \frac{4 p_n^5 + aq_n^5}{5p_n^4 q_n}$$ Giving the two series $$p_{n+1} = 4 p_n^5 + aq_n^5 ,\;\; q_{n+1} = 5p_n^4 q_n$$ Doing this has the advantage that all the subsequent calculations will involve integers (as the $p$s and $q$s are integers) and not messy fractions. 
You can then long divide at any time to yield a decimal answer – James Fennell Apr 3 '12 at 19:11 show 3 more comments Another way of doing this would be to use logarithms, just like Euler did: $$105^{1/5} = \mathrm{e}^{\tfrac{1}{5} \log (105)} = \mathrm{e}^{\tfrac{1}{5} \log (3)} \cdot \mathrm{e}^{\tfrac{1}{5} \log (5)} \cdot \mathrm{e}^{\tfrac{1}{5} \log (7)}$$ Use $$\log(3) = \log\left(\frac{2+1}{2-1}\right) = \log\left(1+\frac{1}{2}\right)-\log\left(1-\frac{1}{2}\right) = \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{2^{2k+1}} = 1 + \frac{1}{12} + \frac{1}{80} + \frac{1}{448} = 1 + 0.08333 + 0.0125 + 0.0022 = 1.09803$$ $$\log(5) = \log\frac{4+1}{4-1} + \log(3) = \log(3) + \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{4^{2k+1}} = \log(3) + \frac{1}{2} + \frac{1}{96} +\frac{1}{2560}$$ $$\log(7) = \log\frac{8-1}{8+1} + 2 \log(3) = 2 \log(3) - \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{8^{2k+1}} = 2 \cdot \log(3) - \frac{1}{4} - \frac{1}{768}$$ Thus $$\frac{1}{5} \left( \log(3) + \log(5) + \log(7)\right) = \frac{4}{5} \log(3) + \frac{1}{5} \left( \frac{1}{2} - \frac{1}{4} + \frac{1}{96} - \frac{1}{768} + \frac{1}{2560} \right) = \frac{4}{5} \log(3) + \frac{1993}{38400}= 0.9303 = 1-0.0697$$ Now $$\exp(0.9303) = \mathrm{e} \cdot \left( 1 - 0.0697 \right) = 2.71828 \cdot 0.9303 = 2.5288$$ - :) AMAZING!... I hadn't even noticed these approaches until now, sir. Thank you very much!... :) – Kerim Atasoy Apr 2 '12 at 18:59 You can just do it by trial, but it gets tiring: $2^5\lt 105 \lt 3^5$ so it is between $2$ and $3$. You might then try $2.5^5 \approx 98$ so the true value is a bit higher and so on. An alternate is to use the secant method. If you start with $2^5=32, 3^5=243$, your next guess is $2+\frac {243-105}{243-32}=2.654$ Then $2.654^5=131.68$ and your next guess is $2.654-\frac {131.68-105}{131.68-32}=2.386$ and so on. Also a lot of work. Added: if you work with RF engineers who are prone to use decibels, you can do this example easily. $105^{0.2}=100^{0.2}\cdot 1.05^{0.2}=10^{0.4}\cdot 1.01=4 dB \cdot 1.01= (3 dB + 1 dB)1.01=2 \cdot 1.25 \cdot 1.01=2.525$, good to $\frac 12$%, where $1.05^{0.2}\approx 1.01$ comes from the binomial $(1+x)^n\approx 1+nx$ for $x \ll 1$ - Interesting... I'm wondering how you get these results... :) You guys have very good skills actually... :) – Kerim Atasoy Apr 2 '12 at 18:13 4 @KerimAtasoy: The strategies are well known in numerical analysis. In a sense, I cheated, as I used Wolfram Alpha as a calculator to get the numerics. But I believe they are doable by hand if you are determined enough. – Ross Millikan Apr 2 '12 at 19:17
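The Newton scheme discussed in the comments above is easy to run in exact rational arithmetic; here is a sketch of the $p_n/q_n$ bookkeeping (my own code, not from the thread), starting from the guess $5/2$ used in the first answer:

```python
from fractions import Fraction

def fifth_root(a, steps=4, x0=Fraction(5, 2)):
    """Newton iteration x <- (4/5) x + a / (5 x^4) for x^5 = a,
    carried out over exact fractions so only integer arithmetic occurs."""
    x = x0
    for _ in range(steps):
        x = Fraction(4, 5) * x + Fraction(a, 5) / x ** 4
    return x

print(float(fifth_root(105)))   # 2.536517482..., matching the calculator
```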
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9411239624023438, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/108372/translation-distance-in-the-curve-complex/116618
Translation distance in the curve complex Given a map $\psi: S\rightarrow S,$ for $S$ a closed surface, is there any algorithm to compute its translation distance in the curve complex? I should say that I mostly care about checking that the translation distance is/is not very small. That is, if the algorithm can pick among the possibilities: translation distance is 0, 1, 2, 3, many, then I am happy... I know there are algorithms for computing distances IN the curve complex, but this is not quite the same... - 2 Well, there is an algorithm to check whether the translation distance of $\psi$ is zero versus nonzero, using the fact that Thurston's trichotomy is computable, that pseudo-Anosov elements have nonzero translation distance, and that in the finite order and reducible cases you can decide whether there is a fixed simple closed curve. – Lee Mosher Sep 28 at 20:26 @Lee, I did know that, but even there, what is the complexity of the computation (with $\psi$ given as, say, a simplicial map)? – Igor Rivin Sep 28 at 20:35 1 This depends on knowing the computational complexity of train track algorithms such as the Bestvina-Handel algorithm, about which we know almost nothing. – Lee Mosher Sep 28 at 20:44 4 Answers In the case that $\psi$ is pseudo-Anosov, the best one can do in general, as far as I know, is to get upper and lower bounds which are linear in translation length. These come from train track considerations. Assuming you have an invariant train track $T$ for $\psi$ in your hands (obtained by some algorithmic method of currently unknowable complexity as per my comment), factor it into a sequence of train track splits $$T=T_0, T_1, \ldots, T_k = \psi(T)$$ then using the method in the Masur-Minsky paper "Geometry of the curve complex I: hyperbolicity", one can algorithmically break the split sequence into blocks $$T_0 = T_{m_0}, ..., T_{m_1}, ..., T_{m_a}=T_k$$ such that the diameter of the subsequence from $T_{m_i}$ to $T_{m_{i+1}}$ has a certain constant upper bound and the diameter from $T_{m_i}$ to $T_{m_j}$ has a certain lower bound which is a constant times $|i-j|$. The material needed to do this is described in the section of their paper entitled "the nested train track argument". Other than that, Shackleton's paper "An acylindricity theorem for the mapping class group" contains some algorithmic detail, but not enough to answer your question. - I don't know an algorithm, but here's a possible approach. As Richard and Lee have observed, one may assume that $\psi$ is pseudo-Anosov. In that case, the mapping torus $T_\psi$ is a hyperbolic 3-manifold fibering over $S^1$, with fiber $S$. There is a short exact sequence $\pi_1(S)\to\pi_1(T_\psi)\to \mathbb{Z}$. Here's a characterization of the translation length on the curve complex in terms of the topology of $T_\psi$. The fiber $S$ represents a homology class $[S]\in H_2(T_\psi)$. Let $\Sigma \looparrowright T_\psi$ be an immersed connected surface, such that $[\Sigma]=k[S]$ and such that $\chi(\Sigma)=k\chi(S)$. Moreover, assume that the composite map $\pi_1(\Sigma)\to \pi_1(T_\psi) \to \mathbb{Z}$ is non-trivial, so that $\Sigma$ is not homotopic to a finite-sheeted cover of $S$.
Let $K(\psi)$ be the minimal such $k$, and let $D(\psi)$ be the minimal such $k$ so that the surface has only double curves of intersection. Clearly $K(\psi)\leq D(\psi)$. Claim: The curve complex translation distance of $\psi$ is equal to $D(\psi)$. One direction: Let $k$ be the translation length of $\psi$. There exists a sequence of non-separating curves $c_1,c_2,\ldots, c_k \subset S$, such that $\psi(c_1)=c_k$, and $c_i \cap c_{i+1}=\emptyset$. One creates a surface $\Sigma\subset T_\psi$ by taking $k$ copies of $S$, $S_1 \sqcup \cdots \sqcup S_k \subset T_\psi$ in circular order. Cut out annular neighborhoods of $c_i, c_{i+1}$ inside $S_i$, and insert cross annuli between $S_{i-1}$ and $S_i$ (taking indices $\mod k$) between the 4 copies of $c_i$. This construction generalizes a construction of Cooper-Long-Reid. One can see that the resulting surface has the properties above. Conversely, if one has such an immersed surface with only double curves, one may cut and paste the self-intersection curves to get $k$ parallel copies of $S$. The cross cut curves give a sequence of closed curves in $S$, which one can prove, using the homology condition, forms a closed loop in the curve complex $\mod \psi$. I don't know yet how to make this criterion into an algorithm. I think there is an algorithm to compute $K(\psi)$. For a given genus $g$, Canary proved that there are only finitely many homotopy classes of immersed surfaces of genus $g$. I think this proof can be made effective, and should give one a method to compute $K(\psi)$. This would at least give an algorithmic lower bound, since $K(\psi)\leq D(\psi)$. Also, there is a constant $0< c_S <1$ such that $D(\psi)\leq c_S^{-1} K(\psi)$ (this may be proved using hyperbolic geometry techniques). One could try algorithmically to construct all surfaces realizing $K(\psi)$, and then try to homotope them to have only double curves of intersection, e.g. using normal surfaces. However, there is a result of Gulliver-Scott that an immersed surface with only double curves of intersection might have a minimal area representative which has triple points. So I don't know yet how to make an algorithm by computing $D(\psi)$ using this approach. - In the braid group, Ko and Lee have given a polynomial time test of reducibility using the Garside structure. (See http://arxiv.org/abs/math/0610746) - Please specify what "translation length" means here. Is it $\lim_{n\rightarrow\infty} \frac{1}{n} d(x,\psi^n x)$, or $\min_x d(x,\psi x)$, or something else that you have in mind? -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9431989192962646, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1070/why-is-hkx-not-a-secure-mac-construction
# Why is H(k||x) not a secure MAC construction?

If H(m) is a secure hash function, can't we implement a MAC using H(k||m)? However, it seems the more widely used MACs, such as NMAC and HMAC (both originally defined in Keying hash functions for message authentication) use a much more complicated scheme. Why is this concatenation scheme insecure? - ## 2 Answers The phrase "secure hash function" usually means (for a function $H$) • Preimage resistance: Given a value $h$, it is hard to find a message $x$ so that $h = H(x)$. • Second preimage resistance: Given a message $x$, it is hard to find a message $x' \neq x$ such that $H(x) = H(x')$. • Collision resistance: It is hard to find two messages $x$, $x'$ such that $H(x) = H(x')$. For a secure MAC function $M$, we want: • Unforgeability: Without knowing the key $k$, it is hard to find a message $x$ and authentication tag $m$ such that $m = M(k, x)$, even if given some other such valid message-tag pairs (which are not allowed as answers). Unfortunately, defining $M(k,x) = H(k || x)$ for a secure hash function does not guarantee that the MAC function is unforgeable. In fact, with the hash constructions used in practice (i.e. the Merkle-Damgard construction without a finalizing round, used in MD5 and SHA-1), it is quite easy, given a valid pair $(x,m)$, to create an $(x', m')$ which is still valid: To create a hash with Merkle-Damgard, the message is padded to some block size, and then each block in sequence is fed to a compression function, which updates an internal state. The final state is then output as the hash. So, $H(k||x)$ is the state of the hash machine after inputting $k||x||pad_x$. If we set our hash machine to this state, and then input arbitrary other data $y$, followed by another pad $pad_y$, we reach the state $m' = H(k||x||pad_x||y) = M(k, x||pad_x||y)$. Forgery is done, with $x' = x || pad_x || y$. The HMAC construction is not susceptible to this attack, as the secret key $k$ is applied both before and after the main message, which makes the internal state non-reconstructible. HMAC does not guarantee unforgeability for general secure hash functions, either, but it has a security proof for the Merkle-Damgard construction, if the internal compression function is collision-resistant. - Thanks to the anonymous editor who corrected my $H(x) \neq H(x')$ to $H(x) = H(x')$. – Paŭlo Ebermann♦ May 14 at 19:58 The reason $H(k|m)$ (where $|$ is concatenation) is not the standard comes from the message extension attack. If I, as an attacker, have $H(k|m)$ and $m$, I can compute $H(k|m|p|m')$ (where $p$ is the padding that $H$ would have applied to $k|m$ in computing the digest, and $m'$ is an arbitrary message) without knowing $k$. I would then send $H(k|m|p|m')$ and $m|p|m'$ to the user. The message authentication check would succeed. Clearly this is an issue. -
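For concreteness, the fix that HMAC applies (key material on both sides of the message, so the internal state cannot be extended) is short enough to sketch. This follows the RFC 2104 construction with SHA-256; the key and message values are placeholders, and the final assertion checks the sketch against Python's standard hmac module:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64  # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()  # long keys are hashed first
    key = key.ljust(block, b"\x00")         # then padded to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

key, msg = b"secret key", b"message"
assert hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```

Because the outer hash is keyed, knowing a valid tag does not hand an attacker a resumable internal state, which is exactly what the extension attack above exploits.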
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8997053503990173, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/16323/pull-force-of-an-electromagnet
# Pull Force of an electromagnet

How do I calculate the pull force of a cylindrical electromagnet to iron as a function of distance? Is there any difference between magnets and electromagnets? - – Mascarpone Oct 29 '11 at 10:38 Btw I already found a reference for force between two magnets... can't find a reference between a magnet and a non-magnet – Mascarpone Oct 29 '11 at 11:01 2 I deleted a few inappropriate/irrelevant comments. – David Zaslavsky♦ Oct 30 '11 at 21:24 ## 2 Answers It's a very hard problem. If magnetism behaved like gravity, you could break up the plate and magnet into small dipoles and sum up all of the dipole-dipole interaction energies for all possible pairs (this would make a six-fold integral!). The force is simply the rate of change of the total energy. However, you need to first calculate the induced magnetization of iron, which is complicated and has a "saturation" behavior. The paper: http://www.gris.uni-tuebingen.de/people/staff/spabst/magnets/Magnets_in_Motion.html lays out the needed calculations. Of course, the cylindrical symmetry will reduce the calculations needed. - To quote the K&J Magnetics calculator page: Most online calculators determine pull force based on a theoretical calculation of the flux density. With a few assumptions, flux density (in Gauss) can be related to the expected pull force. Unfortunately, this simplification often fails to match experimentally measured data. Theoretically, you compute the magnetization that the magnet induces in the iron plate and compute $dE/dx$, the change in the interaction energy between the permanent magnet and the iron plate with increasing separation distance, in order to find the pull force. What they seem to be saying is that their calculator uses empirical methods instead, based on a history of discrepancies between experiment and theory. This is not implausible; ferro-magnetism is quite complex, and discrete magnetic domain size and hysteresis (among other effects) can prevent simple versions of theory from matching experiment. It is worth noting that if you enter a separation distance of 0 inches in the calculator that the pull force for the iron plate configurations and the magnet-magnet separation force are the same. This is because the magnet essentially creates a copy of its magnetic field configuration in the iron plate. This, of course, assumes a thick iron plate (compared to the thickness of the magnet), and no longer works when there is a gap between the plate and the magnet. In order to calculate the pull force as a function of distance, you have to compute a consistent magnetic field solution. In other words, you have to solve for the magnetic field configuration that exists when a magnet is a given distance from the iron plate. You can compute the interaction energy of the magnetized iron with the permanent magnet as a function of minute changes in distance, and use the relation \begin{equation} F=\frac{\partial E}{\partial x} \end{equation} to solve for the pull force. Solving for the self-consistent magnetic field configuration is not trivial, as any student of E&M will tell you. J.D. Jackson addresses this subject in his canonical text book, Classical Electrodynamics. In chapter 5, $\S 5.9$, he discusses the equations necessary in order to solve the boundary value problem. This treatment does not address the microscopic vs. macroscopic nature of magnetic fields and only briefly touches on the subject of hysteresis in $\S 5.11$.
However, these are the equations that one would use to compute the field due to a permanent magnet at a given distance from a "virgin" piece of iron. -
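To make the energy-derivative relation concrete numerically: once you have any model of the interaction energy $E(x)$, the pull force follows from a finite difference. The sketch below uses a toy far-field energy $E(x) = -k/x^3$ (the dipole and image-dipole scaling; the constant $k$ and the $1/x^3$ form are illustrative assumptions, not a substitute for the self-consistent field solution described above), with the sign convention $F = -dE/dx$, so attraction comes out negative:

```python
def pull_force(energy, x: float, h: float = 1e-6) -> float:
    """Estimate F = -dE/dx with a central finite difference."""
    return -(energy(x + h) - energy(x - h)) / (2.0 * h)

def toy_energy(x: float, k: float = 1.0) -> float:
    # Placeholder far-field model: dipole attracted to its image, E ~ -k/x^3.
    return -k / x**3

for gap in (0.10, 0.05, 0.02):
    print(gap, pull_force(toy_energy, gap))  # attraction grows steeply as the gap closes
```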
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9046586155891418, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/66721/greedy-approach-to-0-1-knapsack-problem-in-specific-instances
## Greedy approach to 0-1 Knapsack problem in specific instances

The 0-1 knapsack problem is known to be NP-complete, and the greedy approach by Dantzig (based on choosing on the basis of density or value/weight) can be shown to be suboptimal using counterexamples. Many of the counterexamples seem to rely on a worst case feature where the weights of the elements are huge and comparable to the capacity of the knapsack. Suppose in a given instance, this is not so, i.e. weights of all elements are very small compared to the capacity of the knapsack. In this case, will the greedy algorithm perform optimally (or at least near optimally for practical purposes)? I apologize for such a qualitative description, and am seeking a precise formulation for the question, and a possible solution. - ## 2 Answers I don't think this works. Suppose you have a counterexample to the greedy heuristic, with a knapsack of size $S$ and all elements of density at most $D$ and weight at most $W$. Now add a large number $N$ of objects of weight $W$ and density $1000D$, and consider a knapsack of size $S+NW$. Clearly the $N$ 'dense' objects are included in the optimal solution, and after adding them you're left with an instance of the previous counterexample. By making $N$ large enough, you can ensure that the weights of all objects are arbitrarily small compared to the size of the knapsack. - Your counterexample seems correct. So is there any characterization or property of knapsack instances which makes the simple greedy approach optimal in their cases? – Bratt Jun 3 2011 at 11:57 1 Also, the greedy algorithm's performance seems near optimal here, as the difference between its solution and the optimal is just the difference in its performance in the much smaller counterexample we started out with. Can this be quantified? – Bratt Jun 3 2011 at 12:07 As Frederico has already shown, this isn't enough to make the greedy heuristic immune to counterexamples. It's worth mentioning that there is a simple dynamic programming algorithm for the knapsack problem that can be turned into a polynomial time approximation scheme (PTAS) without much difficulty. This can be found in the book by Papadimitriou and Steiglitz among other places. -
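For reference, the dynamic programming algorithm alluded to in the last answer fits in a few lines when the weights are integers. A minimal sketch (names and the test instance are illustrative; this is the pseudo-polynomial $O(n \cdot W)$ table, not the PTAS itself, which additionally rescales the values):

```python
def knapsack_01(values, weights, capacity):
    """Maximum value of a subset of items with total weight <= capacity."""
    dp = [0] * (capacity + 1)  # dp[c] = best value achievable with capacity c
    for v, w in zip(values, weights):
        # scan capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```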
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9377966523170471, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Buffon's_Needle&oldid=12391
# Buffon's Needle

### From Math Images

Buffon's Needle The Buffon's Needle problem is a mathematical method of approximating the value of pi $(\pi = 3.1415...)$ involving repeatedly dropping needles on a sheet of lined paper and observing how often the needle intersects a line. Field: Geometry. Created By: Wolfram MathWorld. # Basic Description The method was first used to approximate π by Georges-Louis Leclerc, the Comte de Buffon, in 1777. Buffon was a mathematician, and he wondered about the probability that a needle would lie across a line between two wooden strips on his floor. To test his question, he apparently threw bread sticks across his shoulder and counted when they crossed a line. Calculating the probability of an intersection for the Buffon's Needle problem was the first solution to a problem of geometric probability. The solution, in the case where the needle is not greater than the width of the wood strips, can be used to design a method for approximating the number π. Subsequent mathematicians have used this method with needles instead of bread sticks, or with computer simulations. We will show that when the distance between the lines is equal to the length of the needle, an approximation of π can be calculated using the equation $\pi \approx {2*\mbox{number of drops} \over \mbox{number of hits}}$ # A More Mathematical Explanation #### Will the Needle Intersect a Line? To prove that the Buffon's Needle experiment will give an approximation of π, we can consider which positions of the needle will cause an intersection. Since the needle drops are random, there is no reason why the needle should be more likely to intersect one line than another. As a result, we can simplify our proof by focusing on a particular strip of the paper bounded by two horizontal lines. The variable θ is the acute angle made by the needle and an imaginary line parallel to the ones on the paper. The distance between the lines is 1 and the needle length is 1. Finally, d is the distance between the center of the needle and the nearest line. Also, there is no reason why the needle is more likely to fall at a certain angle or distance, so we can consider all values of θ and d equally probable. We can extend line segments from the center and tip of the needle to meet at a right angle. A needle will cut a line if the green arrow, d, is shorter than the leg opposite θ. More precisely, it will intersect when $d \leq \left( \frac{1}{2} \right) \sin(\theta)$. See case 1, where the needle falls at a relatively small angle with respect to the lines. Because of the small angle, the center of the needle would have to fall very close to a line for an intersection to occur. In case 2, the needle intersects even though the center of the needle is far from both lines because the angle is so large. #### The Probability of an Intersection In order to show that the Buffon's Needle experiment gives an approximation for π, we need to show that there is a relationship between the probability of an intersection and the value of π. If we graph the possible values of θ along the X axis and d along the Y, we have the sample space for the trials. In the diagram below, the sample space is contained by the dashed lines. Each point on the graph represents some combination of an angle and a distance that a needle might occupy.
There will be an intersection if $d \leq \left( \frac{1}{2} \right) \sin(\theta)$, which is represented by the blue region. The area under this curve represents all the combinations of distances and angles that will cause the needle to intersect a line. The area under the blue curve, which is equal to 1/2 in this case, can be found by evaluating the integral $\int_0^{\frac {\pi}{2}} \frac{1}{2} \sin(\theta) d\theta$ Then, the area of the sample space can be found by multiplying the length of the rectangle by the height. $\frac {1}{2} * \frac {\pi}{2} = \frac {\pi}{4}$ The probability is equal to the ratio of the two areas in this case because each possible value of θ and d is equally probable. The probability of an intersection is $P_{hit} = \cfrac{ \frac{1}{2} }{\frac{\pi}{4}} = \frac {2}{\pi} = .6366197...$ #### Using Random Samples to Approximate Pi The original goal of the Buffon's needle method, approximating π, can be achieved by using probability to solve for π. If a large number of trials is conducted, the proportion of times a needle intersects a line will be close to the probability of an intersection. That is, the number of line hits divided by the number of drops will equal approximately the probability of hitting the line. $\frac {\mbox{number of hits}}{\mbox{number of drops}} \approx P_{hit}$ Also, recall from above that $P_{hit} = \frac {2}{\pi}$ So $\frac {\mbox{number of hits}}{\mbox{number of drops}} \approx \frac {2}{\pi}$ Therefore, we can solve for π: $\pi \approx \frac {2 * {\mbox{number of drops}}}{\mbox{number of hits}}$ #### Watch a Simulation http://mste.illinois.edu/reese/buffon/bufjava.html # Why It's Interesting #### Monte Carlo Methods The Buffon's needle problem was the first recorded use of a Monte Carlo method. These methods employ repeated random sampling to approximate a probability, instead of computing the probability directly. Monte Carlo calculations are especially useful when the nature of the problem makes a direct calculation impossible or unfeasible, and they have become more common as the introduction of computers makes randomization and conducting a large number of trials less laborious. π is an irrational number, which means that its value cannot be expressed exactly as a fraction a/b, where a and b are integers. As a result, π cannot be written as an exact decimal and mathematicians have been challenged with trying to determine increasingly accurate approximations. The timeline below shows the improvements in approximating pi throughout history. In the past 50 years especially, improvements in computer capability allow mathematicians to determine more decimal places. Nonetheless, better methods of approximation are still desired. A recent study conducted the Buffon's Needle experiment to approximate π using computer software. The researchers administered 30 trials for each number of drops, and averaged their estimates for π. They noted the improvement in accuracy as more trials were conducted. These results show that the Buffon's Needle approximation is relatively tedious. Even when a large number of needles are dropped, this experiment gave a value of pi that was inaccurate in the third decimal place. Compared to other computation techniques, Buffon's method is impractical because the estimates converge towards π rather slowly. Regardless of the impracticality of the Buffon's Needle method, the historical significance of the problem as a Monte Carlo method means that it continues to be widely recognized.
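The linked simulation is also easy to reproduce. A minimal sketch (parameter names are illustrative; it already takes the needle length and line spacing of the generalization discussed in the next section, and note that sampling the angle uses the value of π itself, so this demonstrates the method rather than computing π from scratch):

```python
import math
import random

def buffon_estimate(n_drops: int, needle: float = 1.0, spacing: float = 1.0) -> float:
    """Estimate pi by simulated needle drops; assumes needle <= spacing."""
    hits = 0
    for _ in range(n_drops):
        d = random.uniform(0.0, spacing / 2.0)      # center-to-nearest-line distance
        theta = random.uniform(0.0, math.pi / 2.0)  # acute angle with the lines
        if d <= (needle / 2.0) * math.sin(theta):
            hits += 1
    # P(hit) = 2*needle/(pi*spacing), so pi ~ 2*needle*drops/(spacing*hits)
    return 2.0 * needle * n_drops / (spacing * hits)

print(buffon_estimate(1_000_000))  # typically within a few thousandths of pi
```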
#### Generalization of the problem The Buffon's needle problem has been generalized so that the probability of an intersection can be calculated for a needle of any length and paper with any spacing. For a needle shorter than the distance between the lines, it can be shown by a similar argument to the case where d = 1 and l = 1 that the probability of an intersection is $\frac {2*l}{\pi*d}$. Note that this agrees with the normal case, where l = 1 and d = 1, so these variables disappear and the probability is $\frac {2}{\pi}$. The generalization of the problem is useful because it allows us to examine the relationship between length of the needle, distance between the lines, and probability of an intersection. The variable for length is in the numerator, so a longer needle will have a greater probability of an intersection. The variable for distance is in the denominator, so greater space between lines will decrease the probability. To see how a longer needle will affect probability, follow this link: http://whistleralley.com/java/buffon_graph.htm #### Needles in Nature Applications of the Buffon's Needle method are even found in nature. The Centre for Mathematical Biology at the University of Bath found uses of the Buffon's Needle algorithm in a recent study of ant colonies. The researchers found that an ant can estimate the size of an anthill by visiting the hill twice and noting how often it recrosses its first path. Ants generally nest in groups of about 50 or 100, and the size of their nest preference is determined by the size of the colony. When a nest is destroyed, the colony must find a suitable replacement, so they send out scouts to find new potential homes. In the study, scout ants were provided with "nest cavities of different sizes, shapes, and configurations in order to examine preferences" [2]. From their observations, researchers were able to draw the conclusion that scout ants must have a method of measuring areas. A scout initially begins exploration of a nest by walking around the site to leave tracks. Then, the ant will return later and walk a new path that repeatedly intersects the first tracks. The first track will be laced with a chemical that causes the ant to note each time it crosses the original path. The researchers believe that these scout ants can calculate an estimate for the nest's area using the number of intersections between its two visits. The ants can measure the size of their hill using a related and fairly intuitive method: If they are constantly intersecting their first path, the area must be small. If they rarely reintersect the first track, the area of the hill must be much larger so there is plenty of space for a non-intersecting second path. "In effect, an ant scout applies a variant of Buffon's needle theorem: The estimated area of a flat surface is inversely proportional to the number of intersections between the set of lines randomly scattered across the surface." [7] This idea can be related back to the generalization of the problem by imagining if the area between the lines was increased by making the parallel lines much further apart. A larger distance between the two lines would mean a much smaller probability of intersection. We can see in case 3 that when the distance between the lines is greater than the length of the needle, even a very large angle won't necessarily cause an intersection. This method of random motion in nature allows the ants to gauge the size of their potential new hill regardless of its shape.
Scout ants are even able to assess the area of a hill in complete darkness. The animals show that algorithms can be used to make decisions where an array of restrictions may prevent other methods from being effective. # References [1] http://www.maa.org/mathland/mathtrek_5_15_00.html [2] http://mste.illinois.edu/reese/buffon/bufjava.html [3] http://www.absoluteastronomy.com/topics/Monte_Carlo_method [4] The Number Pi. Eymard, Lafon, and Wilson. [5] Monte Carlo Methods Volume I: Basics. Kalos and Whitlock. [6] Heart of Mathematics. Burger and Starbird. [7] http://math.tntech.edu/techreports/TR_2001_4.pdf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.936316192150116, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/159422-reccurence-relations.html
# Thread: 1. ## recurrence relations In my notes I have the expression: $a_n = 3a_{n-1} + 2a_{n-2}$ with $a_0 = 1, a_1 = 2$ but I can't see how we get $a_0 = 1$. When I try I get $3a_{0-1} + 2a_{0-2} = 3 \cdot (-1) + 2 \cdot (-2) = -3 + (-4) = -7$, so why is $2a_{n-2} = 4$ to get $-3 + 4 = 1$? Obviously I am not understanding something... can you point out where I am going wrong please? Thanks 2. Go to this web link. You can change values for $f(1)$ and click the equals sign. You will see how the recursion changes. Now why? Well, we need to know the initial values because we have two defining terms. 3. That's a great link, I'm sorry but I still don't get it. If I'm looking for f(4) then f(2) = 3a(2-1) + 2a(2-2) f(2) = 3*a(1) + 2*a(0) <---- where a(1) = 2 and a(0) = 1 f(2) = 3*2 + 2*1 f(2) = 8 f(3) = 3a(3-1) + 2a(3-2) f(3) = 3*a(2) + 2*a(1) <---- a(2) = 8, a(1) = 2 f(3) = 3*8 + 2*2 f(3) = 28 f(4) = 3a(4-1) + 2a(4-2) f(4) = 3*a(3) + 2*a(2) <---- a(3) = 28, a(2) = 8 f(4) = 3*28 + 2*8 f(4) = 100 is that right? (I think I was reading too deep into things when I was trying to figure out how a(0) = 1; I guess it's just a given, because a(0) = 1 and a(1) = 2 are initial values, we don't need to work out how to get them, we just use them to find higher values??) Just wondering if I have worked through this correctly or not?
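If it helps to check the arithmetic in post 3, the recurrence is only a few lines of code. A minimal sketch (the function name is arbitrary):

```python
def a(n: int) -> int:
    """a_n = 3*a_{n-1} + 2*a_{n-2} with a_0 = 1 and a_1 = 2."""
    prev, cur = 1, 2  # a_0, a_1
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, 3 * cur + 2 * prev
    return cur

print([a(n) for n in range(5)])  # [1, 2, 8, 28, 100] -- matches f(2), f(3), f(4) above
```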
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9232121706008911, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/121060/does-this-subgroup-of-even-braids-have-a-name/121069
## Does this subgroup of "even braids" have a name?

The full braid group on $n$ strands $B_n$ admits a surjective homomorphism $p\colon\thinspace B_n\to \Sigma_n$ onto the symmetric group on $n$ letters, which takes a braid to the induced permutation of its ends. The kernel $P_n$ is well understood; it is the pure braid group on $n$ strands. What about $p^{-1}(A_n)$, where $A_n$ is the alternating group on $n$ letters? Let me call this group $E_n$ for now, because I think it should be called the even braid group. However an internet search using this name (and others such as "orientation preserving braids", "positive braids" and so on) came up blank. Does this group $E_n$ have a name, and has it been studied anywhere in the literature? I would be particularly interested in computations of the cohomology rings of these groups. Update: It occurred to me that there was one obvious name I hadn't searched for, which was "alternating subgroups of braid groups". This led me to the following preprint, http://arxiv.org/abs/1207.3947 which has a section on finding presentations for these groups (the alternating subgroup of the braid group associated to a Coxeter system $(G,S)$ is denoted $\mathcal{B}^+(G)$ in Section 5). So it seems that they do indeed appear in the literature, although not until surprisingly recently! - 3 I'm sure Andy Putman can tell you the current state of knowledge on their cohomology rings. – HW Feb 7 at 12:42 It's a very interesting question. What's your personal motivation for studying these? – Vladimir Dotsenko Feb 7 at 13:56 2 You have a lot of faith in me HW...thankfully I do know something about their cohomology rings... – Andy Putman Feb 7 at 13:57 1 @Vladimir: I like to try to compute the topological complexity (in the sense of Farber arxiv.org/abs/math/0111197) of $K(\pi,1)$'s. My current idea is to use the trace and knowledge of the cohomology of $P_n$ to get information for $B_n$. My methods weren't working, but showed signs that they might if I had half as many elements...which led me to these $E_n$'s! – Mark Grant Feb 7 at 14:19 2 So it seems my faith was justified! – HW Feb 8 at 22:07 ## 3 Answers I don't know if these groups have been studied before, but I can say something about their cohomology rings, at least over $\mathbb{Q}$. Namely, we have $H^k(E_n;\mathbb{Q}) = \mathbb{Q}$ if $k=0,1$ and $H^k(E_n;\mathbb{Q}) = 0$ for $k \geq 2$. Of course, this is the same as the cohomology of the ordinary braid group as computed by Arnold in V. I. Arnold, On some topological invariants of algebraic functions, Trudy Moscov. Mat. Obshch. 21 (1970), 27-46 (Russian), English transl. in Trans. Moscow Math. Soc. 21 (1970), 30-52. Recall that if $H$ is a finite-index normal subgroup of $G$, then $G$ acts on $H^k(H;\mathbb{Q})$ and using the transfer map we have that $H^k(G;\mathbb{Q})$ is equal to the invariants of this action. For braid groups, the action of $B_n$ on $H^k(PB_n;\mathbb{Q})$ factors through an action of the symmetric group $S_n$, so $H^k(PB_n;\mathbb{Q})$ is a representation of $S_n$ and $H^k(B_n;\mathbb{Q})$ is the trivial subrepresentation $\{\text{$v \in H^k(PB_n;\mathbb{Q})$ $|$ $\sigma v = v$ for all $\sigma \in S_n$}\}$. Let's now consider $E_n$. In this case, the above argument shows that $H^k(E_n;\mathbb{Q})$ is the subrepresentation $\{\text{$v \in H^k(PB_n;\mathbb{Q})$ $|$ $\sigma v = v$ for all $\sigma \in A_n$}\}$.
Now, representations of finite groups over $\mathbb{Q}$ decompose into direct sums of irreducible representations. The only two irreducible representations of $S_n$ that restrict to the identity on $A_n$ are the trivial representation and the alternating representation. As we said above, the trivial representation corresponds to $H^k(B_n;\mathbb{Q})$, so we conclude that $$H^k(E_n;\mathbb{Q}) = W \oplus H^k(B_n;\mathbb{Q}),$$ where $W \subset H^k(PB_n;\mathbb{Q})$ is the direct sum of all alternating subrepresentations. The above calculation is thus equivalent to the assertion that the alternating representation does not occur in $H^k(PB_n;\mathbb{Q})$. This follows from the calculation of $H^k(PB_n;\mathbb{Q})$ as a representation of $S_n$ which was done in the paper "Coxeter group actions on the complement of hyperplanes and special involutions" by Felder-Veselov; see here. The above ref to Felder-Veselov was suggested by Vladimir Dotsenko; I originally included the argument below, which only works for $n \gg k$. It's quite hard to decompose $H^k(PB_n;\mathbb{Q})$ into irreducibles; however the paper "Representation Theory and Homological Stability" (see here) by Church and Farb introduces a recipe that they call "representation stability" which describes how the decomposition of $H^k(PB_{n+1};\mathbb{Q})$ into irreducibles can be constructed from the decomposition of $H^k(PB_n;\mathbb{Q})$ into irreducibles, at least for $n$ large. Their results are hard to summarize briefly, but they do imply that the alternating representation does not occur (it is not "stable" in their sense), again at least for $n$ large. - 1 That's very nice. A quick remark on what you say ("it's quite hard to decompose $H^k(PB_n;\mathbb{Q})$ into irreducibles"): a description of $H^*(PB_n;\mathbb{Q})$ as a representation is done by Felder and Veselov (e.g. arxiv.org/abs/math/0311190), they prove that it is isomorphic to $2 Ind^{S_n}_{\langle (12)\rangle}(1)$. This clearly shows that the alternating representation does not occur at all (Frobenius reciprocity). – Vladimir Dotsenko Feb 7 at 16:23 @Vladimir Dotsenko : Thanks for letting me know about that paper; I had not seen it before! It shows that the "large $n$" condition I included is not necessary. – Andy Putman Feb 8 at 21:29 Thanks Andy (and Vladimir) for this very interesting and informative answer. – Mark Grant Feb 9 at 10:10 As well as the property in Andy's answer above, they have another property in common with the full braid groups $B_n$: they satisfy homological stability. There are two ways (that I know of) that one can prove this. The first is in the paper: M. A. Guest, A. Kozlowski, K. Yamaguchi, Homological stability of oriented configuration spaces, J. Math. Kyoto Univ. 36 (1996), no. 4, 809--814. A sketch of the argument is as follows. It's enough to consider rational coefficients and $\mathbb{F}_p$ coefficients, for each prime $p$, separately. With $\mathbb{F}_2$ coefficients $E_n$ automatically inherits homological stability from $B_n$; this is actually true for any family of index-2 subgroups, or more generally double covering spaces, by using the mod-2 Gysin sequence and considering the double covers as 0-sphere bundles. (This only works for mod-2 coefficients since in general there is no Gysin sequence for 0-sphere bundles.)
When $F$ is a field of characteristic not 2, the homology $H_k(E_n;F)$ splits as $$H_k(E_n;F) \cong H_k(B_n;F) \oplus H_k(B_n;F^{(-1)})$$ where $F^{(-1)}$ is the local coefficient system where the odd braids act by multiplication by $-1$. A model of the classifying space of $B_n$ is the configuration space $C_n(\mathbb{R}^2)$ of $n$ unordered points on the plane. There is a result of Boedigheimer, Cohen, Milgram and Taylor which calculates $H_k(C_n(M);F^{(-1)})$ in the range $k<\mathrm{dim}(M)n$ for any even-dimensional manifold $M$, and the above paper uses this to compute the $H_k(B_n;F^{(-1)})$ summand in this range. It turns out to be zero in a stable range, and so homological stability for $E_n$ follows from homological stability for $B_n$. The calculations of the Guest-Kozlowski-Yamaguchi paper actually work more generally, for configuration spaces on any open connected surface, so homological stability is also true for "alternating surface braid groups". The stable range is worse than that for the full braid groups, however: $H_k(B_n;\mathbb{Z})$ is independent of $n$ for approximately $n\geq 2k$, whereas $H_k(E_n;\mathbb{Z})$ only becomes independent of $n$ for approximately $n\geq 3k$. The obstruction to $E_n$ having the better range lies entirely in the 3-torsion. The second way I know of proving homological stability for the alternating braid groups is kind of a shameless plug, as it's something that I wrote (apologies if this is inappropriate; this is my first MO answer). It works more generally for what could be called "alternating configuration spaces" (but which are actually called oriented configuration spaces since that's what they were called by the GKY paper above). It's in the preprint Martin Palmer, Homological stability for oriented configuration spaces, arXiv:1106.4540. and uses a method of taking "resolutions" adapted from that of Oscar Randal-Williams, Resolutions of moduli spaces and homological stability, arXiv:0909.4278. Just for historical completeness, I should also mention that homological stability for the alternating groups $A_n$ (which can be thought of as the alternating braid groups on $\mathbb{R}^\infty$ if one is so inclined) was proved much earlier, as Proposition A on page 130 of the paper: J.-C. Hausmann, Manifolds with a given homology and fundamental group, Comment. Math. Helv. 53 (1978), no. 1, 113--134. using the same kind of decomposition as in the GKY paper. - Dear Martin, thanks very much for the interesting answer and references, and welcome to MO! – Mark Grant Feb 12 at 18:47 Even braid groups are dealt with in Stephen James Tawn: Plat closure of braids, PhD Thesis, University of Warwick, 2008, http://wrap.warwick.ac.uk/854/1/WRAP_THESIS_Tawn_2008.pdf. Is this what you are looking for? - 1 I don't think it's the same, at least superficially. In the OP's question, the braids are not necessarily on an even number of strands. – Vladimir Dotsenko Feb 7 at 13:56 2 I think Vladimir is right, the thesis you link seems to be using "even" only in the sense of number of strands. In particular I couldn't see anything about the alternating group in there. – Mark Grant Feb 7 at 14:14
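As a small computational aside, the homomorphism $p$ and the index-2 condition are easy to realize in code: each generator $\sigma_i$ maps to the transposition $(i\; i+1)$, so a braid word lies in $E_n$ exactly when it uses an even number of generators. A minimal sketch (the word encoding, signed integers for generators and their inverses, is an assumption of this illustration):

```python
def induced_permutation(word, n):
    """Image of a braid word under p: B_n -> S_n; entries +i/-i stand for
    the generator sigma_i or its inverse (1-based strand indices)."""
    perm = list(range(n))
    for g in word:
        i = abs(g) - 1  # sigma_i and its inverse induce the same transposition
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return perm

def is_even_braid(word):
    # each generator maps to a transposition, so sign(p(word)) = (-1)**len(word)
    return len(word) % 2 == 0

print(induced_permutation([1, -2, 1], 3), is_even_braid([1, -2, 1]))  # [2, 1, 0] False
```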
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 81, "mathjax_display_tex": 2, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466095566749573, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/114069/characterization-of-totally-categorical-theories
# Characterization of totally categorical theories

I have what I am sure is a trivial question, but I can't seem to answer it for myself. In model theory, there is a theorem of Hrushovski which shows that if T is a totally categorical theory (i.e., T is complete and has exactly one model of each infinite cardinality up to isomorphism), then (i) T is not finitely axiomatizable, but (ii) T is finitely axiomatizable modulo infinity; that is, there is some sentence p such that T is precisely the set of sentences true in every infinite model of p. My question is to what extent the converse holds. That is, let T be a (EDIT: complete) theory which is finitely axiomatizable modulo infinity, but which is not finitely axiomatizable. Then is T necessarily totally categorical, and if not, what sort of assumptions on T are enough to ensure total categoricity? (The assumption that T is not actually finitely axiomatizable is clearly necessary: otherwise, take the theory DLO of dense linear orders without endpoints, which is countably categorical but not uncountably categorical.) - By "sentence" $p$ you presumably mean an axiom scheme, not a sentence in the traditional sense. – André Nicolas Feb 27 '12 at 20:21 No! That's what's fascinating about it: you only need a single first-order sentence - if you restrict your attention to infinite structures only. (Another way of saying this is that there is some sentence p such that T is the theory of consequences of p together with the scheme of sentences $q_n$, where $q_n$ asserts that the domain has size at least $n$.) – user13568 Feb 27 '12 at 20:34 My confusion is that if $T$ is complete, and has infinite models, and $p$ is true in some or all infinite models, then by completeness $p$ is true in all models. – André Nicolas Feb 27 '12 at 20:48 @André: The important part is that $p$ must be false in all non-models. That does not follow from completeness alone. – Henning Makholm Feb 28 '12 at 14:44 ## 1 Answer Assuming that your logic includes equality, the answer to the first part of the question is "no". For a counterexample, let $T$ be the theory with axioms $$(\exists x_1)(\exists x_2)\cdots(\exists x_n) \bigwedge_{1\le i<j\le n} x_i\ne x_j$$ for all integers $n\ge 2$. Then the models of $T$ are exactly all infinite interpretations of its language. This is not finitely axiomatizable, because in the pure predicate calculus with equality, any sentence with an infinite model also has finite models. However, $T$ is finitely axiomatizable modulo infinity (you can take $p$ to be any propositional tautology, for example), and -- provided we add some relation apart from equality to its vocabulary -- it is obviously very far from categorical; so far that it seems doubtful that there is any natural way to extend your condition to a sufficient one, without the additional conditions being sufficient in themselves. - I assume that your language has at least one relation symbol distinct from $=$. – Arthur Fischer Feb 27 '12 at 20:46 Yes; otherwise everything is trivially categorical. But (waving hands frantically here) the additional relations cannot be of any use in an attempt to axiomatize the theory without also rejecting some infinite models. – Henning Makholm Feb 27 '12 at 20:48 The problem here is that if your language includes other symbols, then T is not a complete theory. In my question, I intended to focus on only complete theories; this has now been fixed. – user13568 Feb 29 '12 at 0:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428106546401978, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/5569/sigma-protocol-for-3sat-problem?answertab=oldest
# Sigma-protocol for 3SAT problem

I have some questions from previous years' exams; I hope you can help me with them. :) Let $g,h$ denote generators of a group $G$ of large prime order $n$ such that $\log_g h$ is unknown to anyone. Consider an instance of the 3SAT problem for Boolean variables $v_1, \ldots , v_l$, given by a Boolean formula $\Phi$ consisting of $m$ clauses, which each consist of $3$ literals: $\Phi = (l_{1,1} \vee l_{1,2} \vee l_{1,3}) \wedge \ldots \wedge (l_{m,1} \vee l_{m,2} \vee l_{m,3})$. Each literal is of the form $l_{i,j}=v_k$ or $l_{i,j}=\overline{v_k}=1-v_k$ (negation of $v_k$), $1 \le k \le l$. Construct a $\Sigma$-protocol for the following relation: $R_{\Phi}=\{ (B_1, \ldots, B_l;x_1,y_1,\ldots,x_l,y_l)\colon \Phi(x_1,\ldots,x_l), \forall_{k=1}^l B_k=g^{x_k}h^{y_k}, x_k \in \{ 0,1 \} \}$. Thanks, Peter. - 1 Could you add the definition of $\Sigma$-protocol? – Paŭlo Ebermann♦ Dec 4 '12 at 13:11 Welcome to Crypto.SE! This looks like an interesting question, but it would help to have a little more information. What have you tried so far? What research have you done on your own so far? As the faq suggests, it is important to "do your homework" and show us what you've done so far. I encourage you to read the links in this comment -- they may provide helpful background about this site! – D.W. Dec 4 '12 at 23:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287261962890625, "perplexity_flag": "middle"}
http://ulissesaraujo.wordpress.com/2011/02/12/correct-sorting-with-frama-c-and-some-thoughts-on-formal-methdos/
# Ulisses Costa Blog

## Correct sorting with Frama-C and some thoughts on Formal Methods

12 02 2011

A couple of years ago, during my masters on Formal Methods, I was working with automatic provers and I also used Frama-C. This is a tool that allows the user to prove C code directly in the source code, using a special notation in the comments, called ACSL notation. Frama-C allows you to make two kinds of proofs, security and safety ones. The safety ones are related to things like array index out-of-bounds accesses. These proofs are related to the language itself and they are easy to do if you use loop invariants and pre and post conditions. If you use a high-level language, like Java, you will have almost no safety problems. Because C is so close to machine-level code, we can do things that we do not intend (or maybe we do, and we use C exactly because it allows this kind of thing). For example:
```// foo.c file
#include <stdio.h>

int main() {
 char *a = "I like you";
 char *b = "I hate you";

 if(&a < &b) a = *(&a + 1);
 else a = *(&a - 1);

 printf("%s\n", a);
}
```
As you can see, I never used the $b$ variable for anything; I just declared it. And the result is:
```[ulissesaraujocosta@maclisses:c]-$ gcc -o foo foo.c
[ulissesaraujocosta@maclisses:c]-$ ./foo
I hate you
```
This lack of safety in the C language is one of the reasons we need to write safety statements. Of course, this kind of thing is also why C is so fast and powerful: the person in charge is always the programmer. If you are interested in these tricks and want to understand more about this, stack smashing and so on, feel free to read more posts on my blog about this subject. The other kind of statements (the security ones) are related to the functionality of the program, and that's basically where the problem, or the effort, is; I will talk about this later on. First let's see the algorithm and the implementation in C.

## Code

The algorithm I use here is just a simple example. I used bubble sort, a sorting algorithm that is not very efficient, but it uses no more memory than what is needed to store the structure you want to sort. To get a visual understanding of the algorithm (and to see its inefficiency) check out this youtube video. This is the implementation of the algorithm:
```void swap(int *i, int *j) {
 int tmp = *i;
 *i = *j;
 *j = tmp;
}

void bubbleSort(int *vector, int tam) {
 int j, i;
 j = i = 0;
 for(i=0; i<tam; i++) {
 for(j=0; j<tam-i-1; j++) {
 if (vector[j] > vector[j+1]) {
 swap(&vector[j],&vector[j+1]);
 }
 }
 }
}
```
## Pre, Post conditions and thinking formally

So, as you can see in the video (or in the code), the algorithm is pretty simple: we cross the array $n$ times, and on each pass we compare adjacent elements and swap them when they are out of order. We have as pre conditions: the size of the $vector$ must be greater than zero, and all the positions in that vector exist, so in Frama-C we use $valid\_range(vector, i, j)$, where $i$ and $j$ are indexes of the $vector$, to say that all elements exist. $tam > 0$ $valid\_range(vector,0,tam-1)$ And as post conditions we must ensure that the array is sorted (I will talk about this predicate later). You may think that this by itself is enough to make a complete proof, but you are wrong. Imagine that my function clears all the elements in the array and fills it with $\{1,2,\ldots,tam\}$: our code would still be proved, and it's wrong!
So, we need to say more… The first thing that can pop into your head is: OK, we will say that we have the same numbers in the beginning and in the end, and you write this: $\forall_a : 0 \leq a < tam : (\exists_b : 0 \leq b < tam : old(vector(b)) \equiv vector(a))$ In fact this is closer (not yet right). Imagine that you give as input $\{4,7,9,1,0,3,4\}$. If your code returns $\{0,1,3,4,7,9\}$ (we miss the repeated $4$) the code will be proved. So, the solution is to make a $Permut$ predicate and prove it for the multiset. So, these are the post conditions: $sorted(vector,0,tam-1)$ $Permut\{Old,Here\}(vector,0,tam-1);$ Frama-C is nice here because, for example, in a post condition, if we want to refer to the state at the beginning (before the function call) we use $Old$, and if we want to refer to the moment after the call we have the $Here$ keyword; remember that we are in the post condition, so this is evaluated at the end ($Here$ means the end of the function call).

## Predicates

So, here is the $Sorted$ predicate. Predicates receive a state $L$ and the parameters (just like a function) and they return boolean values (true or false). Inside we use regular ACSL notation. Here I define that for an array to be sorted each element must be less than or equal to the next one.
```/*@ predicate Sorted{L}(int a[], integer l, integer h) =
 @ \forall integer i; l <= i < h ==> a[i] <= a[i+1];
 @*/
```
The $Permut$ predicate is defined inductively, so we receive two states $L1$ and $L2$ and the array $a$ and the range where we want to permute. We write multiple rules for the permutation: reflexivity, symmetry, transitivity, and finally the most important one, the $Swap$. So basically here we say that a permutation is a sequence of successive swaps.
```/*@ inductive Permut{L1,L2}(int a[], integer l, integer h) {
 @ case Permut_refl{L}:
 @ \forall int a[], integer l, h; Permut{L,L}(a, l, h) ;
 @ case Permut_sym{L1,L2}:
 @ \forall int a[], integer l, h;
 @ Permut{L1,L2}(a, l, h) ==> Permut{L2,L1}(a, l, h) ;
 @ case Permut_trans{L1,L2,L3}:
 @ \forall int a[], integer l, h;
 @ Permut{L1,L2}(a, l, h) && Permut{L2,L3}(a, l, h) ==>
 @ Permut{L1,L3}(a, l, h) ;
 @ case Permut_swap{L1,L2}:
 @ \forall int a[], integer l, h, i, j;
 @ l <= i <= h && l <= j <= h && Swap{L1,L2}(a, i, j) ==>
 @ Permut{L1,L2}(a, l, h) ;
 @ }
 @
 @ predicate Swap{L1,L2}(int a[], integer i, integer j) =
 @ \at(a[i],L1) == \at(a[j],L2)
 @ && \at(a[j],L1) == \at(a[i],L2)
 @ && \forall integer k; k != i && k != j ==> \at(a[k],L1) == \at(a[k],L2);
 @*/
```
So, as you can see, the bubble sort function itself has 18 lines of code, and in the end, with the annotations for the proof, we end up with 90 lines, but we proved it!

## Thoughts

My main point here is to show the thinking we need to have if we want to prove code in general. Pick whatever tool you want: this is the easiest way you will find to prove software written in C. Sometimes, if your functions are too complex, you may need to prove them manually. The problem is not on the Frama-C side; Frama-C only generates the proof obligations to feed to automatic provers, like Yices, CVC3, Simplify, Z3, Alt-Ergo and so on. My point here is to show the cost of proving software. Proving software, especially if the language is very low level (like C, where you need to care about a lot more things), is hard work and is not easy for a programmer without theoretical knowledge. On the other side, you end up with a piece of software that is proved.
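For readers who want to try this themselves: the proof obligations are generated by running Frama-C on the annotated file and handing the goals to one of the provers above. When this post was written the usual route was the Jessie plugin; in more recent Frama-C releases the equivalent workflow is the WP plugin, invoked along the lines of `frama-c -wp -wp-rte file.c` (the file name is a placeholder, and the exact options vary between releases, so check the manual of your version).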
Of course this proof is always requirements oriented. By that I mean: if the requirements are wrong and the program is not doing what you expect, the proof goes along with that. I do not advocate proving all the code on the planet, but rather the proper use of FM (formal methods) tools for critical software.

I have kept using Frama-C since I learned it in 2009; nowadays I use it for small critical functions (because I want to, not because I'm encouraged to do so) and I have to say that the use of FM in industry is still far away. As I told you, Frama-C is the easiest automatic proof tool you will find, at least of the ones I know. Talking with Marcelo Sousa about the use of FM in industry, we came to the conclusion that the people who are making this kind of tools and have the FM knowledge don't start companies. I think if more brilliant people like John Launchbury made companies, FM would definitely be more widely used.

## Source code

Here is all the code together if you want to test it:

```
// #include <stdio.h>

/*@ predicate Sorted{L}(int a[], integer l, integer h) =
  @   \forall integer i; l <= i < h ==> a[i] <= a[i+1];
  @
  @ predicate Swap{L1,L2}(int a[], integer i, integer j) =
  @   \at(a[i],L1) == \at(a[j],L2)
  @   && \at(a[j],L1) == \at(a[i],L2)
  @   && \forall integer k; k != i && k != j ==> \at(a[k],L1) == \at(a[k],L2);
  @*/

/*@ inductive Permut{L1,L2}(int a[], integer l, integer h) {
  @  case Permut_refl{L}:
  @   \forall int a[], integer l, h; Permut{L,L}(a, l, h) ;
  @  case Permut_sym{L1,L2}:
  @    \forall int a[], integer l, h;
  @      Permut{L1,L2}(a, l, h) ==> Permut{L2,L1}(a, l, h) ;
  @  case Permut_trans{L1,L2,L3}:
  @    \forall int a[], integer l, h;
  @      Permut{L1,L2}(a, l, h) && Permut{L2,L3}(a, l, h) ==>
  @        Permut{L1,L3}(a, l, h) ;
  @  case Permut_swap{L1,L2}:
  @    \forall int a[], integer l, h, i, j;
  @       l <= i <= h && l <= j <= h && Swap{L1,L2}(a, i, j) ==>
  @     Permut{L1,L2}(a, l, h) ;
  @ }
  @*/

/*@ requires \valid(i) && \valid(j);
  @ //assigns *i, *j; //BUG 0000080: Assertion failed in jc_interp_misc.ml
  @ ensures \at(*i,Old) == \at(*j,Here) && \at(*j,Old) == \at(*i,Here);
  @*/
void swap(int *i, int *j) {
    int tmp = *i;
    *i = *j;
    *j = tmp;
}

/*@ requires tam > 0;
  @ requires \valid_range(vector,0,tam-1);
  @ ensures Sorted{Here}(vector, 0, tam-1);
  @ ensures Permut{Old,Here}(vector,0,tam-1);
  @*/
void bubbleSort(int *vector, int tam) {
    int j, i;
    j = i = 0;
    //@ ghost int g_swap = 0;
    /*@ loop invariant 0 <= i < tam;
      @ loop invariant 0 <= g_swap <= 1;
      // last i+1 elements of sequence are sorted
      @ loop invariant Sorted{Here}(vector,tam-i-1,tam-1);
      // and are all greater or equal to the other elements of the sequence.
      @ loop invariant 0 < i < tam ==> \forall int a, b; 0 <= b <= tam-i-1 <= a < tam ==> vector[a] >= vector[b];
      @ loop invariant 0 < i < tam ==> Permut{Pre,Here}(vector,0,tam-1);
      @ loop variant tam-i;
      @*/
    for(i=0; i<tam; i++) {
        //@ ghost g_swap = 0;
        /*@ loop invariant 0 <= j < tam-i;
          @ loop invariant 0 <= g_swap <= 1;
          // The jth+1 element of sequence is greater or equal to the first j+1 elements of sequence.
          @ loop invariant 0 < j < tam-i ==> \forall int a; 0 <= a <= j ==> vector[a] <= vector[j+1];
          @ loop invariant 0 < j < tam-i ==> (g_swap == 1) ==> Permut{Pre,Here}(vector,0,tam-1);
          @ loop variant tam-i-j-1;
          @*/
        for(j=0; j<tam-i-1; j++) {
            //@ ghost g_swap = 0;
            if (vector[j] > vector[j+1]) {
                //@ ghost g_swap = 1;
                swap(&vector[j],&vector[j+1]);
            }
        }
    }
}

/*@ requires \true;
  @ ensures \result == 0;
  @*/
int main(int argc, char *argv[]) {
    int i;
    int v[9] = {8,5,2,6,9,3,0,4,1};
    bubbleSort(v,9);
    // for(i=0; i<9; i++)
    //     printf("v[%d]=%d\n",i,v[i]);
    return 0;
}
```

If you are interested in the presentation Pedro and I gave at our university, here it is:

### Information

• Date : February 12, 2011
• Tags: acsl, algorithm, bubbleSort, code, English, formal methods, frama-c, linux, mathematics, msc, programming, proof, research, sort, Theoretical fundations, tools
• Categories : Uncategorized
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 30, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9241413474082947, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/196104-second-order-partial-derivative-question.html
Thread:

1. Second order partial derivative question

Hello experts, I have a simple question.

Given $V$ as a function of $Z$ and $Y$.

Given $Z$ as a function of $R$ and $L$: $Z=R+s*L$

Given $Y$ as a function of $G$ and $C$: $Y=G+s*C$

Assume we also know $\frac{\partial V}{\partial Z}$ and $\frac{\partial^2 V}{\partial Z \partial Y}$.

If we want to know $\frac{\partial V}{\partial R}$, then it will be equal to $\frac{\partial V}{\partial Z} * \frac{\partial Z}{\partial R}$.

Question: How can we find $\frac{\partial^2 V}{\partial R\partial G}$ by using what is given and known?

Regards
Aman
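For the record, the chain-rule computation being asked for is short, using nothing beyond the given quantities (note that $\frac{\partial Z}{\partial R}=1$, $\frac{\partial Y}{\partial G}=1$, and $Z$ does not depend on $G$):

$\frac{\partial^2 V}{\partial R\,\partial G}=\frac{\partial}{\partial G}\left(\frac{\partial V}{\partial Z}\cdot\frac{\partial Z}{\partial R}\right)=\frac{\partial}{\partial G}\,\frac{\partial V}{\partial Z}=\frac{\partial^2 V}{\partial Z\,\partial Y}\cdot\frac{\partial Y}{\partial G}=\frac{\partial^2 V}{\partial Z\,\partial Y}$

so the known mixed derivative is already the answer.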
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318327903747559, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=121369
Physics Forums

## All groups of order 99 are abelian.

Prove all groups of order 99 are abelian:

I'm stuck right now on this proof, here's what I have so far.

proof: Let G be a group such that |G| = 99, and let Z(G) be the center of G. Z(G) is a normal subgroup of G and |Z(G)| must be 1, 3, 9, 11, 33, or 99. Throughout I will make repeated use of the theorem which states that if the factor group G/Z(G) is cyclic, then G is abelian.

Case 1: Assume |Z(G)| = 99; then Z(G) = G, and G is abelian.

Case 2: Assume |Z(G)| = 33; then |G/Z(G)| = 3, a prime, so G/Z(G) is cyclic, and thus G is abelian.

Case 3: Assume |Z(G)| = 9; then |G/Z(G)| = 11, a prime, so G/Z(G) is cyclic and G is abelian.

Case 4: Assume |Z(G)| = 3; then |G/Z(G)| = 33, which factors into (3)(11). There is a theorem which states that if a group has order a product of two distinct primes p, q with p<q, then G is cyclic if q is not congruent to 1 modulo p. Since 11 is not congruent to 1 mod 3, G/Z(G) is cyclic, and so G is abelian.

OK, here's where I get stuck!

Case 5: Assume |Z(G)| = 11; then |G/Z(G)| = 9 = 3^2. Since G/Z(G) has order a prime squared, G/Z(G) is abelian. Thus by the theorem on finitely generated abelian groups, G/Z(G) is either isomorphic to Z_9 or Z_3 x Z_3. If it's isomorphic to Z_9, then G/Z(G) is cyclic and we're done. But if it's isomorphic to Z_3 x Z_3 then I don't know how to proceed.

Case 6: Assume |Z(G)| = 1; then |G/Z(G)| = 99. I'm not sure how to proceed from here. Any suggestions?

Recognitions: Homework Help Science Advisor

How many Sylow-3 subgroups does a group of order 99 have? How many Sylow-11 subgroups?

There is only 1 Sylow 3-subgroup and 1 Sylow 11-subgroup in a group of order 99. Denote these as $S_3$ and $S_{11}$. $|S_3| = 9$ and $|S_{11}| = 11$. $S_{11}$ is cyclic and every non-identity element of $S_{11}$ has order 11. This implies that $S_3 \cap S_{11} = \{e\}$. Therefore $|S_3S_{11}| = 99$, so $S_3S_{11} = G$. Also $G \simeq S_3 \times S_{11}$. $S_3$ has order a prime squared and thus is abelian, and $S_{11}$ is abelian because it is cyclic. Therefore $S_3 \times S_{11}$ is abelian and hence G is abelian. QED

Is this right? Wow, thanks much for your help Matt!

Recognitions: Homework Help Science Advisor

## All groups of order 99 are abelian.

Whenever you're given some group of order a small product of 2 primes (or possibly three primes) it is almost always going to be the case that looking at Sylow subgroups will help. E.g. show that every group of order pqr, where p<q<r are primes, is solvable.

Thanks, that's useful advice. Now that I think of it though, the proof above doesn't rely on the fact that S_3 and S_11 are Sylow p-subgroups. I could have just stated that there exist subgroups of orders 9 and 11 in G (since 9 and 11 are powers of a prime dividing |G|), and the same results would follow. Is there another proof you had in mind when you gave that hint?

Recognitions: Homework Help Science Advisor

No, you can't just conclude that. You need to know that both subgroups are normal, and you do that because you can count the number of conjugates of them using Sylow's theorems. Consider a group of order 6. It has a subgroup of order 2 and a subgroup of order 3 (which is normal) but it is not necessarily isomorphic to C_3xC_2.
And how are you going to state that there is a subgroup of order 9 in G if you aren't going to use the fact that it is a Sylow subgroup?

Oh, you're right. I got two theorems mixed up. Thanks again for your help.

Recognitions: Homework Help

Not really relevant, but I always hated the wording of the theorem about G/Z(G) being cyclic implying G is abelian. If G is abelian, Z(G)=G and G/Z(G) = 1. So a much better way to state this theorem is that G/Z(G) may not be a cyclic group of order greater than one.
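For reference, here is the Sylow counting behind the uniqueness claim in the proof above, which is what forces the two subgroups to be normal. Writing $n_3$ and $n_{11}$ for the number of Sylow 3-subgroups and Sylow 11-subgroups of a group of order $99 = 3^2 \cdot 11$:

$n_3 \mid 11$ and $n_3 \equiv 1 \pmod{3}$; since $11 \equiv 2 \pmod{3}$, this forces $n_3 = 1$.

$n_{11} \mid 9$ and $n_{11} \equiv 1 \pmod{11}$; since neither $3$ nor $9$ is $\equiv 1 \pmod{11}$, this forces $n_{11} = 1$.

A subgroup which is the unique subgroup of its order is sent to itself by every conjugation, so both subgroups are normal.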
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9370295405387878, "perplexity_flag": "middle"}
http://nrich.maths.org/1948/note
# Cube Roots

### Why do this problem?

It provides experience of using algebra and working with surds. It leads into complex cube roots if you want to explore the patterns arising with those solutions.

### Key questions

We have to evaluate $c^{1\over 3} - d^{1\over 3}$: what are $c^3$, $d^3$, $cd$ and $(c-d)^3$?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9487515091896057, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/2703?sort=oldest
## Beilinson conjectures

Continuing an amazingly interesting chain of answers about motivic cohomology, I thought I should learn about the Beilinson conjectures, referred to there.

I have found some references, and they seem to present the conjectures from different sides, e.g. there's the statement about vanishing, but then there are also connections to motivic polylogarithms. What I miss from these articles is a general picture that would allow us to start somewhere natural. So, how would you describe an introduction to the Beilinson conjectures in motivic homotopy?

Sorry for such a loaded question; I really don't know how to make it fit the MathOverflow format better. One could theoretically post lots of specific questions on the topic, but to ask the right questions in this case you might need to know more than I do. Also, I know there are some technical developments, e.g. the language of derived stacks, and my hope would be that somebody could make a connection to these conjectures using some clear and suitable language.

- 4 Could you ask a more precise question please? "What's your perspective about X?" provides very little information about what you want to know. It invites people to spend a lot of time coming up with an answer without any indication that you've given any thought to the question. – Anton Geraschenko♦ Oct 27 2009 at 1:37
- 1 I'd like to second Anton's sentiment above, and suggest the following more precise question, to which I'd personally like an answer: how do we state Beilinson's conjectures in terms of motives? – Thanos D. Papaïoannou Oct 27 2009 at 2:10
- That's a very good specific question I didn't think of! You can post it in that form, if you wish. – Ilya Nikokoshev Oct 27 2009 at 7:40
- The thing is, initially I thought about posting something more specific, but then I would have to post 10 specific questions! My hope was that somebody would answer on the same level of generality as in mathoverflow.net/questions/2146/…. Seeing that there's no answer yet, I'll edit this or do something different. – Ilya Nikokoshev Oct 27 2009 at 7:41

## 1 Answer

Let me talk about Beilinson's conjectures by beginning with $\zeta$-functions of number fields and $K$-theory. Space is limited, but let me see if I can tell a coherent story.

### The Dedekind zeta function and the Dirichlet regulator

Suppose $F$ a number field, with $$[F:\mathbf{Q}]=n=r_1+2r_2,$$ where $r_1$ is the number of real embeddings, and $r_2$ is the number of complex embeddings. Write $\mathcal{O}$ for the ring of integers of $F$. Here's the Dirichlet series defining the Dedekind zeta function: $$\zeta_F(s)=\sum|(\mathcal{O}/I)|^{-s},$$ where the sum is taken over nonzero ideals $I$ of $\mathcal{O}$.

1. This series converges absolutely for $\Re(s)>1$.
2. The function $\zeta_F(s)$ can be analytically continued to a meromorphic function on $\mathbf{C}$ with a simple pole at $s=1$.
3. There is the Euler product expansion: $$\zeta_F(s)=\prod_{0\neq p\in\mathrm{Spec}(\mathcal{O}_F)}\frac{1}{1-|(\mathcal{O}_F/p)|^{-s}}.$$
4. The Dedekind zeta function satisfies a functional equation relating $\zeta_F(1-s)$ and $\zeta_F(s)$.
5. If $m$ is a positive integer, $\zeta_F(s)$ has a (possible) zero at $s=1-m$ of order $$d_m=\begin{cases}r_1+r_2-1&\textrm{if }m=1;\\ r_1+r_2&\textrm{if }m>1\textrm{ is odd};\\ r_2&\textrm{if }m>1\textrm{ is even},\end{cases}$$ and its special value at $s=1-m$ is $$\zeta_F^{\star}(1-m)=\lim_{s\to 1-m}(s+m-1)^{-d_m}\zeta_F(s),$$ the first nonzero coefficient of the Taylor expansion around $1-m$.

Our interest is in these special values of $\zeta_F(s)$ at $s=1-m$. In the middle of the 19th century, Dirichlet discovered an arithmetic interpretation of the special value $\zeta_F^{\star}(0)$. Recall that the Dirichlet regulator map is the logarithmic embedding $$\rho_F^D:\mathcal{O}_F^{\times}/\mu_F\to\mathbf{R}^{r_1+r_2-1},$$ where $\mu_F$ is the group of roots of unity of $F$. The covolume of the image lattice is the Dirichlet regulator $R^D_F$. With this, we have the

Dirichlet Analytic Class Number Formula. The order of vanishing of $\zeta_F(s)$ at $s=0$ is $\operatorname{rank}_\mathbf{Z}\mathcal{O}_F^\times$, and the special value of $\zeta_F(s)$ at $s=0$ is given by the formula $$\zeta_F^{\star}(0)=-\frac{|\mathrm{Pic}(\mathcal{O}_F)|}{|\mu_F|}R^D_F.$$

Now, using what we know about the lower $K$-theory, we have: $$K_0(\mathcal{O})\cong\mathbf{Z}\oplus\mathrm{Pic}(\mathcal{O})$$ and $$K_1(\mathcal{O}_F)\cong\mathcal{O}_F^{\times}.$$ So the Dirichlet Analytic Class Number Formula reads: $$\zeta_F^{\star}(0)=-\frac{|{}^{\tau}K_0(\mathcal{O})|}{|{}^{\tau}K_1(\mathcal{O})|}R^D_F,$$ where ${}^{\tau}A$ denotes the torsion subgroup of the abelian group $A$.

### The Borel regulator and the Lichtenbaum conjectures

Let us keep the notations from the previous section.

Theorem [Borel]. If $m>0$ is even, then $K_m(\mathcal{O})$ is finite.

In the early 1970s, A. Borel constructed the Borel regulator maps, using the structure of the homology of $SL_n(\mathcal{O})$. These are homomorphisms $$\rho_{F,m}^B:K_{2m-1}(\mathcal{O})\to\mathbf{R}^{d_m},$$ one for every integer $m>0$, generalizing the Dirichlet regulator (which is the Borel regulator when $m=1$). Borel showed that for any integer $m>0$ the kernel of $\rho_{F,m}^B$ is finite, and that the induced map $$\rho_{F,m}^B\otimes\mathbf{R}:K_{2m-1}(\mathcal{O})\otimes\mathbf{R}\to\mathbf{R}^{d_m}$$ is an isomorphism. That is, the rank of $K_{2m-1}(\mathcal{O})$ is equal to the order of vanishing $d_m$ of the Dedekind zeta function $\zeta_F(s)$ at $s=1-m$. Hence the image of $\rho_{F,m}^B$ is a lattice in $\mathbf{R}^{d_m}$; its covolume is called the Borel regulator $R_{F,m}^B$. Borel showed that the special value of $\zeta_F(s)$ at $s=1-m$ is a rational multiple of the Borel regulator $R_{F,m}^B$, viz.: $$\zeta_F^{\star}(1-m)=Q_{F,m}R_{F,m}^B.$$ Lichtenbaum was led to give the following conjecture in around 1971, which gives a conjectural description of $Q_{F,m}$.

Conjecture [Lichtenbaum]. For any integer $m>0$, one has $$|\zeta_F^{\star}(1-m)|"="\frac{|{}^{\tau}K_{2m-2}(\mathcal{O})|}{|{}^{\tau}K_{2m-1}(\mathcal{O})|}R_{F,m}^B.$$ (Here the notation $"="$ indicates that one has equality up to a power of $2$.)

### Beilinson's conjectures

Suppose now that $X$ is a smooth proper variety of dimension $n$ over $F$; for simplicity, let's assume that $X$ has good reduction at all primes. The question we might ask is: what could be an analogue of the Lichtenbaum conjectures that might provide us with an interpretation of the special values of $L$-functions of $X$?
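As a quick sanity check of the vanishing-order formula in item 5 above, one can take the simplest case $F=\mathbf{Q}$ (so $r_1=1$, $r_2=0$, and $\zeta_F$ is the Riemann zeta function): $$d_m=\begin{cases}0&\textrm{if }m=1,\textrm{ matching }\zeta(0)=-\tfrac{1}{2}\neq 0;\\ 1&\textrm{if }m>1\textrm{ is odd, matching the simple trivial zeros at }s=-2,-4,\dots;\\ 0&\textrm{if }m>1\textrm{ is even, matching }\zeta(1-m)=-B_m/m\neq 0.\end{cases}$$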
It turns out that since number fields have motivic cohomological dimension $1$, special values of their $\zeta$-functions can be formulated using only $K$-theory, but life is not so easy if we have higher-dimensional varieties; for this, we must use the weight filtration on $K$-theory in detail; this leads us to motivic cohomology. Write $\overline{X}:=X\otimes_F\overline{F}$. Now for every nonzero prime $p\in\mathrm{Spec}(\mathcal{O})$, we may choose a prime $q\in\mathrm{Spec}(\overline{\mathcal{O}})$ lying over $p$, and we can contemplate the decomposition subgroup $D_{q}\subset G_F$ and the inertia subgroup $I_{q}\subset D_{q}$. Now if $\ell$ is a prime over which $p$ does not lie and $0\leq i\leq 2n$, then the inverse $\phi_{q}^{-1}$ of the arithmetic Frobenius $\phi_{q}\in D_{q}/I_{q}$ acts on the $I_{q}$-invariant subspace $H_{\ell}^i(\overline{X})^{I_{q}}$ of the $\ell$-adic cohomology $H_{\ell}^i(\overline{X})$. We can contemplate the characteristic polynomial of this action: $$P_{p}(i,x):=\det(1-x\phi_{q}^{-1}).$$ One sees that $P_{p}(i,x)$ does not depend on the particular choice of $q$, and it is a consequence of Deligne's proof of the Weil conjectures that the polynomial $P_{p}(i,x)$ has integer coefficients that are independent of $\ell$. (If there are primes of bad reduction, this is expected by a conjecture of Serre.) This permits us to define the local $L$-factor at the corresponding finite place $\nu(p)$: $$L_{\nu(p)}(X,i,s):=\frac{1}{P_{p}(i,p^{-s})}$$ We can also define local $L$-factors at infinite places as well. For the sake of brevity, let me skip over this for now. (I can fill in the details later if you like.) With these local $L$-factors, we define the $L$-function of $X$ via the Euler product expansion $$L(X,i,s):=\prod_{0\neq p\in\mathrm{Spec}(\mathcal{O})}L_{\nu(p)}(X,i,s);$$ this product converges absolutely for $\Re(s)\gg 0$. We also define the $L$-function at the infinite prime $$L_{\infty}(X,i,s):=\prod_{\nu|\infty}L_{\nu}(X,i,s)$$ and the full $L$-function $$\Lambda(X,i,s)=L_{\infty}(X,i,s)L(X,i,s).$$ Here are the expected analytical properties of the $L$-function of $X$. 1. The Euler product converges absolutely for $\Re(s)>\frac{i}{2}+1$. 2. $L(X,i,s)$ admits a meromorphic continuation to the complex plane, and the only possible pole occurs at $s=\frac{i}{2}+1$ for $i$ even. 3. $L\left(X,i,\frac{i}{2}+1\right)\neq 0$. 4. There is a functional equation relating $\Lambda(X,i,s)$ and $\Lambda(X,i,i+1-s).$ Beilinson constructs the Beilinson regulator $\rho$ from the part $H^{i+1}_{\mu}(\mathcal{X},\mathbf{Q}(r))$ of rational motivic cohomology of $X$ coming from a smooth and proper model $\mathcal{X}$ of $X$ (conjectured to be an invariant of the choice of $\mathcal{X}$) to Deligne-Beilinson cohomology $D^{i+1}(X,\mathbf{R}(r))$. This has already been discussed here. It's nice to know that we now have a precise relationship between the Beilinson regulator and the Borel regulator. (They agree up to exactly the fudge factor power of $2$ that appears in the statement of the Lichtenbaum conjecture above.) Let's now assume $r<\frac{i}{2}$. Conjecture [Beilinson]. The Beilinson regulator $\rho$ induces an isomorphism $$H^{i+1}_{\mu}(\mathcal{X},\mathbf{Q}(r))\otimes\mathbf{R}\cong D^{i+1}(X,\mathbf{R}(r)),$$ and if $c_X(r)\in\mathbf{R}^{\times}/\mathbf{Q}^{\times}$ is the isomorphism above calculated in rational bases, then $$L^{\star}(X,i,r)\equiv c_X(r)\mod\mathbf{Q}^{\times}.$$ - 4 Thanks a lot for your long expository answer. 
– Anweshi Jan 17 2010 at 21:55 Your answers set a new high standard for MathOverflow, please keep going! – Ilya Nikokoshev Jan 18 2010 at 10:55 1 As a minor note, for $F$ totally real and abelian, the "fudge factor" power of $2$ in Lichtenbaum's conjecture is known to be $2^{r_1}$ (on the right hand side) when $m>0$ is even. This follows from Wiles' proof of Iwasawa's Main Conjecture for these number fields, as explained in: J. Rognes and C. Weibel, Two-primary algebraic K-theory of rings of integers in number fields. Appendix A by M. Kolster. J. Amer. Math. Soc. 13 (2000), no. 1, 1--54. – John Rognes Oct 3 2010 at 12:51
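A closing remark tying the general construction back to the first section: in the zero-dimensional case $X=\mathrm{Spec}(F)$ with $i=0$, the cohomology $H^0_{\ell}(\overline{X})=\mathbf{Q}_{\ell}$ carries the trivial Galois action, so every local factor is $P_{p}(0,x)=1-x$, and the Euler product collapses to $$L(X,0,s)=\prod_{0\neq p\in\mathrm{Spec}(\mathcal{O})}\frac{1}{1-|(\mathcal{O}/p)|^{-s}}=\zeta_F(s),$$ recovering the Dedekind zeta function that the story began with.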
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 117, "mathjax_display_tex": 21, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9168880581855774, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3781021
Physics Forums

"Integration" on operators

Hi! I am having some difficulty in finding a definition of some kind of reverse operation (integration) for a derivative with respect to an operator, which may be defined as follows. Suppose we have a function of $n$, in general non-commuting, operators $$H(q_1 ,..., q_n)$$ then differentiation with respect to one of them can be defined as,

[tex]
\begin{equation}
\frac{\partial H}{\partial q_i}=\lim_{\lambda \rightarrow 0}\frac{\partial H}{\partial \lambda}(q_1 ,...,q_i +\lambda,..., q_n)
\end{equation}
[/tex]

I have found the above definition in the paper "Exponential Operators and Parameter Differentiation in Quantum Physics", R. M. Wilcox, J. Math. Phys. 8, 962 (1967), for which a book by Louisell, "Radiation and Noise in Quantum Electronics", is cited. Unfortunately I do not have access to that book at the moment, where there is a good chance I could find what I am looking for. I would appreciate it if someone could provide a definition, if there is one, and in addition point out some references for further study. Thank you in advance.

Let me rephrase my question to make things a bit simpler. Suppose that you have an equation,

$$\begin{equation} \frac{\partial H}{\partial q_1}(q_1 ,..., q_n)=1 \end{equation}$$

where the derivative is defined according to my previous post. Would that imply that

$$\begin{equation} H(q_1 ,..., q_n)=q_1 + f(q_2 ,..., q_n) \end{equation}$$

or not?

Hello qasdc, I think one has to be careful here. When it comes to operators, there can always be some problem hiding there. Consider the function $H(p,q) = q + pq - qp$. I'm not familiar with this kind of differentiation, but your definition seems to give $$\frac{\partial H}{\partial q} = 1 + p - p = 1.$$ But clearly $pq -qp \neq f(p)$.

"Integration" on operators

Quote by Jano L. Hello qasdc, I think one has to be careful here. When it comes to operators, there can always be some problem hiding there. Consider the function $H(p,q) = q + pq - qp$. I'm not familiar with this kind of differentiation, but your definition seems to give $$\frac{\partial H}{\partial q} = 1 + p - p = 1.$$ But clearly $pq -qp \neq f(p)$.

Jano L., thank you for your answer. However, I do not understand what your point is. The definition of the above derivative is very clear, and the result you got for $H(p,q) = q + pq - qp$ is the correct one, since

[tex]
\begin{equation}
H(p,q) = q + pq - qp = q - [q,p] = q - i \hbar
\end{equation}
[/tex]

from where it is obvious that only the first term can survive, giving $$\frac{\partial H}{\partial q} = 1$$ However, I now believe that the "antiderivative" that I seek is not at all trivial to define, which also explains the lack of answers...

Recognitions: Science Advisor

Quote by qasdc However, I now believe that the "antiderivative" that I seek is not at all trivial to define, which also explains the lack of answers...

A lot depends on the details of the commutation relations among the various $q_i$ operators. In your initial post, you said they're noncommuting but gave no other details, so it's not clear whether one can assume canonical commutation relations (which would simplify things a lot). Can you post more detail on that?

There are general formulas which might be relevant somehow. Here's one... Let f,g be noncommuting quantities and let F(f,g) be any smooth function of f,g.
Then

[tex]
\def\eps{\epsilon}
\Big[ f \,,\, F(f,g) \Big] ~=~ \lim_{\eps\to 0} \frac{F(f, g + \eps [f,g]) ~-~ F(f,g)}{\eps}.
[/tex]

(where I might have missed a factor of i, or something). Looking at the proof of the above (which proceeds by induction on the number of operations in F), it might be possible to prove your desired result by similar means. If you post a bit more detail on your $q_i$, I might give it a go...

Supposing, however, that one can give a sensible meaning to the differentiation-by-operator, I don't see how you could do much better for an "antiderivative" than something that works like this:

[tex]
\frac{\partial H}{\partial q_i} ~=~ 1 ~~~\Rightarrow~~~ H ~=~ q_i + X
[/tex]

where X is an arbitrary quantity such that

[tex]
\frac{\partial X}{\partial q_i} ~=~ 0
[/tex]

I.e., take an ansatz for H as a general analytic expression and "differentiate" term by term to see what you get. Depending on the details of your Lie algebra, you might get lucky.

qasdc, my point was that in general, for any two finite matrices, $AB-BA$ is a function of both $A$ and $B$, which is not a multiple of the unit matrix. The case where $AB-BA = i\hbar$ is a very special one. For finite matrices, such a case is even impossible.
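For the record, the standard one-line argument for that last impossibility claim: the trace kills every commutator of finite matrices,

[tex]
\mathrm{tr}(AB - BA) ~=~ \mathrm{tr}(AB) - \mathrm{tr}(BA) ~=~ 0,
[/tex]

while $\mathrm{tr}(i\hbar \mathbf{1}_n) = i\hbar n \neq 0$, so $AB - BA = i\hbar \mathbf{1}_n$ has no solution among $n \times n$ matrices.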
Also note that in this formalism, q and p are to be treated like independent "variables", i.e., [tex] \Pdrv{p}{q} = 0 = \Pdrv{q}{p} [/tex] Slightly more generally, if f,g are noncommuting quantities such that $[f,g]$ commutes with both f and g, then [tex] [f, F(f,g)] ~=~ \Pdrv{F}{g} [f,g] ~=~ [f,g] \Pdrv{F}{g} [/tex] (The proof is a corollary of the other result I posted earlier.) $$H(q,p)=qpq$$ and $$F(q,p)=F(q)=q^2$$ It is obvious that $H(q,p)$ satisfies the condition to be an integral of $F(q)$ but so do, $$G_1(q,p)=q^2 p$$ $$G_2(q,p)=pq^2$$ and generally any, $$I(q,p)=qpq+f(q)$$ In that sense, it seems that $I(q,p)$ is the most general form of the "integral" but my problem is how can one define that "integration" operation formally so that we arrive naturally at this result. It may help to always rearrange each term in your initial expression into a "standard" order, e.g., with all q's standing in front of p's (or vice versa -- as long as you're consistent). So instead of working with the H you wrote above, rewrite it as [tex] H = qpq = q(qp-1) = q^2p - q [/tex] Then your I(q,p) idea above should get you close to the answer. I.e., work out the integral as if for independent commuting variables, but use a function of the conjugate variable as the integration constant, and maintain your "standard" ordering convention carefully. Also, have a think about how the Leibniz rule would work if you were going in the other direction. (I.e., write it out in terms of commutators, and then again in terms of formal derivatives.) Finally, strangerep I would appreciate if you could provide me with any related bibliography that you are aware of? The thing about commutators involving functions of canonically conjugate variables is quite common in QM textbooks -- but I can't recommend any particular one since I've always just worked out that stuff by hand when needed. The more sophisticated formulas I mentioned were taught to me by Arnold Neumaier via unfinished unpublished papers, so I'm not at liberty to say much more than I have already. But I'm not sure you really need that stuff anyway if canonical commutations relations are sufficient. Do you have a more specific Hamiltonian in mind? What is the underlying problem or application? Sorry for my late reply and thank you for another very informative post. I have been aware of most of what you mention in your last post, except for the formula: $$[f, F(f,g)] ~=~ \frac{\partial F}{\partial g} [f,g] ~=~ [f,g] \frac{\partial F}{\partial g}$$ Normal ordering, for example, is treated in the book of Luisell that I mentioned on my first post. Using that, one could then define an anti-derivative, at least for functions that can be expanded in a series in $q, p$. But one might not want to restrict himself that much... I do not think there is any point in getting in any more details about this, we would probably exceed the purpose of this forum. If, however, I find anything interesting regarding it or if I feel I need someone to further discuss about these, I will certainly contact you by a pm. I made this thread because I wanted to ensure that this "integral" definition is not something trivial that I just happened to be ignoring... But as it turned out this is not the case. PS: By the way, by mentioning Arnold Neumaier, I came across this book: "Classical and quantum mechanics via Lie algebras", which seems interesting and I might have a look at it. Recognitions: Science Advisor Quote by qasdc [...] 
one could then define an anti-derivative, at least for functions that can be expanded in a series in $q, p$. But one might not want to restrict himself that much... The formulas I've mentioned are not necessarily restricted to functions that can be expanded in a series. With care, they can be applied to quotients, continued fractions, and even general analytic functions with poles. I do not think there is any point in getting in any more details about this [...] OK. Thread Tools | | | | |-------------------------------------------------|----------------------------|---------| | Similar Threads for: "Integration" on operators | | | | Thread | Forum | Replies | | | Computers | 14 | | | Calculus & Beyond Homework | 7 | | | Advanced Physics Homework | 2 | | | Calculus & Beyond Homework | 11 | | | Calculus & Beyond Homework | 9 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9509109854698181, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/24360/how-to-fit-the-coefficients-of-a-first-order-linear-auto-regressive-function-wit
# How to fit the coefficients of a first-order linear auto-regressive function without noise in the model, and the coefficients are equal?

I have a simple auto-regressive function: $x_{i+1} = c - cx_{i}$

It is `linear` and `first order`. There is no noise in the model, although there is in the data. I am using `Matlab` and have a vector (time series) of numbers that has arisen from a process governed by the above equation. I would like to obtain the value of the coefficient $c$ for the least squares fit to this data.

How can this be done in Matlab by using a library function or toolbox? I have looked at the `ar` command, but fail to see how it can be used in my case. I am considering writing instead a function that takes the data and calculates the root-mean-squared error over a set of possible values for $c$, choosing the one that is smallest.

-

- 1 For any $1 \leq i < n$, we have $c = x_{i+1} / (1-x_i)$. – cardinal Mar 9 '12 at 0:53
- @cardinal, I think that I made a mistake in the question, which I edited; there is no noise modeled, although the data contains noise. – Vass Mar 9 '12 at 0:57
- 2 What do you mean "There is no noise in the model but there is in the data"? That doesn't seem to make a whole lot of sense on the surface. – cardinal Mar 9 '12 at 0:59
- @cardinal, what I want to say is that the data I am fitting to has noise, and I want to find the best value of `c` without including any extra parameters to account for the noise – Vass Mar 9 '12 at 1:03
- 4 Even in the regression setting, the gist of my statement still stands. If $\mathbb E( X_{i+1} \mid X_i ) = c - c X_i$, then the least-squares estimate of $c$ is $$\hat c = \frac{\sum_i x_{i+1}(1-x_i)}{\sum_i (1-x_i)^2} \> .$$ – cardinal Mar 9 '12 at 1:05

## 1 Answer

It makes no sense to say there is no noise in the model but there is in the data. I think your model should be a standard AR(1) with a parameter constraint: $$x_{i+1} = c(1-x_i) + e_i$$ where $e_i$ is white noise. Then the least squares estimate of $c$ is $$\hat{c} = \frac{\sum x_{i+1}(1-x_i)}{\sum (1-x_i)^2}.$$

-

- (+1) Is this a subtle hint I should be posting my comments as an answer? :) – cardinal Mar 9 '12 at 1:37
- Sorry. I think your comment must have gone up at about the same time as my answer. In any case, I didn't see it before I posted. If we both derived the same result, it is probably correct! – Rob Hyndman Mar 9 '12 at 2:41
- No reason to be sorry at all! :) It was interesting how similar it was. I'm glad you posted it! Cheers. – cardinal Mar 9 '12 at 2:42
- @RobHyndman and @cardinal, is this available in Matlab to be computed as a function (just out of curiosity)? Also, in the case where the equation has 2 parameters and a squared `x_i` term, is there an analogous method of evaluation? – Vass Mar 9 '12 at 7:52
- 1 @Vass. You will need to write your own Matlab function. It should be about 3 lines at most. For the version with squared x_i, you will need to derive the estimators and then write the corresponding function. – Rob Hyndman Mar 9 '12 at 10:17
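The closed-form estimate in the answer really is only a few lines in any array language. Here is an illustrative sketch in Python/NumPy (the function name and the simulated test data are made up for the example; the Matlab version is the same few lines with element-wise operators):

```
import numpy as np

def fit_c(x):
    # Least-squares estimate of c in the model x[i+1] = c*(1 - x[i]) + e[i]:
    #   c_hat = sum x[i+1]*(1 - x[i]) / sum (1 - x[i])^2
    x = np.asarray(x, dtype=float)
    u = 1.0 - x[:-1]                    # regressors (1 - x_i)
    return np.dot(x[1:], u) / np.dot(u, u)

# Self-check: simulate from the model with c = 0.7 and small noise.
rng = np.random.default_rng(0)
x = [0.2]
for _ in range(5000):
    x.append(0.7 * (1.0 - x[-1]) + 0.01 * rng.standard_normal())
print(fit_c(x))   # prints a value close to 0.7
```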
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9370516538619995, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/7640/expected-number-of-tosses-so-that-1-out-of-3-bins-has-2-balls-in-it
# Expected number of tosses so that 1 out of 3 bins has 2 balls in it

You have 3 bins. You can toss balls one at a time into the bins until any one of them has 2 balls in it, and then you stop. The tosses are independent, and each bin is equally likely to be hit. It's impossible not to make it into a bin on any given toss. What is the expected number of tosses so that 1 bin has 2 balls in it?

-- Not sure if it's important, but by the pigeonhole principle we know that the maximum number of tosses required to do this would be 4.

-

## 2 Answers

It can't happen in 1 toss. In 2 tosses, the first toss doesn't matter (probability 1), but the second must match the first (probability $\frac{1}{3}$), so it happens in 2 tosses with probability $1\cdot\frac{1}{3}=\frac{1}{3}$. In 3 tosses, the first toss doesn't matter, the second toss must not match the first toss ($\frac{2}{3}$), and the third toss must match one of the first two ($\frac{2}{3}$), so it happens in exactly 3 tosses with probability $1\cdot\frac{2}{3}\cdot\frac{2}{3}=\frac{4}{9}$. In 4 tosses, the first toss doesn't matter, the second toss must not match the first toss, the third toss must not match either of the first two tosses ($\frac{1}{3}$), and the fourth toss will match one of the first three no matter what (1), so it happens in exactly 4 tosses with probability $1\cdot\frac{2}{3}\cdot\frac{1}{3}\cdot 1=\frac{2}{9}$. Note that the sum of these three probabilities is 1, which confirms that there is no need to look past 4 tosses. The expected number of tosses is $2\cdot\frac{1}{3}+3\cdot\frac{4}{9}+4\cdot\frac{2}{9}=\frac{26}{9}$.

-

Hint: So what is the chance that you get success in two throws? Call it x. In exactly three throws? Call it y. And that tells you the chance that you will need four is 1-x-y. Then what is the expected number of throws?

-
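The value $\frac{26}{9}\approx 2.89$ computed above is easy to confirm by simulation; a short illustrative Python sketch:

```
import random

def tosses_until_pair(bins=3):
    # Toss balls uniformly into bins until some bin holds 2 balls.
    counts = [0] * bins
    n = 0
    while True:
        n += 1
        b = random.randrange(bins)
        counts[b] += 1
        if counts[b] == 2:
            return n

trials = 200000
avg = sum(tosses_until_pair() for _ in range(trials)) / trials
print(avg, 26 / 9)   # the two values agree to about two decimal places
```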
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9513418078422546, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/118607-what-am-i-missing-here.html
# Thread:

1. ## What am I missing here?

(All integrations are with respect to $dx$.)

It's known that:

$e^x=\sum_{n=0}^{\infty} \frac{x^n}{n!} \quad \quad (1)$

Now if we integrate both parts we have that:

$\int e^x=\int \sum_{n=0}^{\infty} \frac{x^n}{n!} \Rightarrow$

$e^x=\int\frac{x^0}{0!}+\int\frac{x^1}{1!}+...\Rightarrow$

and correct me if I am wrong, but:

$\int\frac{x^n}{n!}=\frac{x^{n+1}}{(n+1)!}$

So

$e^x=\frac{x^1}{1!}+\frac{x^2}{2!}+...\Rightarrow$

$e^x=\sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1)!}\quad \quad (2)$

And comparing (1) and (2):

$\sum_{n=0}^{\infty} \frac{x^{n}}{n!}=\sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1)!}$

What am I missing here? Thank you a lot.

2. Originally Posted by gdmath
It's known that: $e^x=\sum_{n=0}^{\infty} \frac{x^n}{n!} \quad (1)$ Now if we integrate both parts we have that: $\int e^x=\int \sum_{n=0}^{\infty} \frac{x^n}{n!} \Rightarrow e^x=\int\frac{x^0}{0!}+\int\frac{x^1}{1!}+...$ and correct me if I am wrong, but: $\int\frac{x^n}{n!}=\frac{x^{n+1}}{(n+1)!}$ So $e^x=\sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1)!} \quad (2)$ And comparing (1) and (2): $\sum_{n=0}^{\infty} \frac{x^{n}}{n!}=\sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1)!}$ What am I missing here?

You are missing the fact that $\int f(x)\, dx= F(x)+ C$. That is, what you have shown is that $\sum_{n=0}^{\infty} \frac{x^{n}}{n!}=\sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1)!}+ C$, and that's obviously true: let $k= n+1$ on the right and each term becomes $\frac{x^k}{k!}$. When $n= 0$, $k= 1$, so your equation becomes $\sum_{n=0}^{\infty} \frac{x^{n}}{n!}=\sum_{k=1}^{\infty} \frac{x^{k}}{k!}+ C$, which is true with $C= 1$, the missing "first term" on the right.

3. Thank you. Indeed this explains the result. However, I rely on the fact that $e^x$ is the "neutral" element in calculus (as the unit is for multiplication/division). So what about if we differentiate the initial relation? I think we are led to a similar inconvenient result:

$\sum_{n=0}^{\infty} \frac{x^{n}}{n!}=\sum_{n=0}^{\infty} \frac{x^{n-1}}{(n-1)!}$

Of course, in this last case $n!$ (for negative $n$) must be defined properly for the last relation to be valid.

4. Originally Posted by gdmath
So what about if we differentiate the initial relation? I think we are led to a similar inconvenient result. $\sum_{n=0}^{\infty} \frac{x^{n}}{n!}=\sum_{n=0}^{\infty} \frac{x^{n-1}}{(n-1)!}$

$\sum_{n=0}^{\infty}\frac{x^n}{n!} = \frac{1}{1} + \frac{x}{1} + \frac{x^2}{2} + \frac{x^3}{6} + ...$

Differentiating that, we get:

$0 + 1 + x + \frac{x^2}{2} + ... = \sum_{n=0}^{\infty} \frac{x^n}{n!} = e^x$

Why was your result wrong? The derivative of a constant, $c$, is $0$ -- not $\frac{x^{n-1}}{(n-1)!}$.
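A tidy way to see where the constant goes is to integrate over $[0,x]$ instead of taking antiderivatives:

$\int_0^x e^t\,dt = e^x - 1$

while term-by-term integration of the series gives $\sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1)!}$. So the correct statement is $e^x - 1 = \sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1)!}$, which is exactly relation (2) with the missing constant $C=1$ restored.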
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9201467633247375, "perplexity_flag": "middle"}
http://nrich.maths.org/6739/solution
# Weekly Problem 40 - 2009

##### Stage: 3 Short Challenge Level:

Label the vertices of the quadrilateral $A$, $B$, $C$ and $D$ as shown. By Pythagoras theorem, $AC^2=AD^2+DC^2=7^2+9^2=130$, so again by Pythagoras, $AB^2=130-BC^2=130-3^2=121$. Therefore $AB=11$cm. The area of the quadrilateral is the area of the top right angled triangle plus the area of the bottom right angled triangle $=\frac{7\times 9}{2}+\frac{3\times 11}{2}=\frac{63+33}{2}=\frac{96}{2}=48$cm$^2$.

This problem is taken from the UKMT Mathematical Challenges.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8978753685951233, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/244494/motivating-isomorphism-of-varieties/251343
# Motivating (iso)morphism of varieties

I am reading course notes on algebraic geometry, where a morphism of varieties is defined as follows ($k$ is an algebraically closed field):

Let $X$ be a quasi-affine or quasi-projective $k$-variety, and let $Y$ be a quasi-affine or quasi-projective $k$-variety. A map $f:X\to Y$ is called a morphism of $k$-varieties if $f$ is continuous, and if for every open subvariety $U$ of $Y$ and every regular function $h:U\to k$ the composition $h\circ f$ is a regular function on $f^{-1}(U)$.

I have trouble seeing the motivation for this definition. The above notion of morphism seems to imply that the 'structure' of a variety (what distinguishes it from a mere set) is the following:

• A topology, the Zariski topology, which is extremely coarse (weak) compared to the Euclidean topology in the case $k=\mathbb{C}$.
• For every Zariski-open $U\subseteq X$ a specification of which functions $U\to k$ are considered 'nice', i.e. regular.

Given that this is indeed the structure we want to assign to a variety, I agree with the above notion of morphism. In particular, I can see that two varieties are isomorphic exactly when their structure is 'the same' (so that the difference between them is just that their points have different names). However, I fail to see why the above two bullets accurately capture what we want them to. In fact, I do not know when we want to consider two varieties as isomorphic, and why. I do not know why we want the curves defined by $x^2+y^2=1$ and $x^2+y^2=2$ to be deemed isomorphic, but not the affine plane and the punctured affine plane (except by just pointing back to the definition I am trying to motivate, and showing that no isomorphism exists algebraically, but that's not enlightening).

I do know this in the category of smooth manifolds: I expect two spheres to be diffeomorphic because I can stretch one smoothly to exactly match the other. I expect a sphere and a torus not to be diffeomorphic because no matter how hard I try, I cannot stretch the sphere and make it coincide with a torus.

Another example: the affine line and the cusp are not isomorphic, and the difference lies exactly in the singularity of the cusp (its... well, cusp). Is this what we want to encode, the behaviour of varieties near singularities? (I suspect this is only part of what we want to encode.) Do we want two varieties to be isomorphic if there is a bicontinuous bijection between them that maps singularities to singularities of the same kind? (Here, I do not know what I mean by the 'kind' of a singularity, and in fact I don't even know what exactly I mean by a singularity.) I expect that isomorphic varieties will have 'analogous' singularities at corresponding points, but I suppose there is more to the structure of a variety than this (indeed, not all smooth varieties are isomorphic).

What do we want the structure of a variety to entail intuitively? What intuitive/geometric information is encoded in the 'structure' as outlined in the bullets above?

Edit. I want to know what is encoded in the structure of a variety on a vague and intuitive level. I do not require mathematical justification for the answers at all (no need to prove that this is what we encode).

-

- 7 This is the sort of question that will answer itself once you know more algebraic geometry... The definition you gave is the result of a couple of centuries of refining the notion, so it is quite understandable that it is not immediately comprehensible.
The same thing happens with the definition of manifolds or connections, which would have seemed pretty arcane even to Gauss himself. – Mariano Suárez-Alvarez♦ Nov 25 '12 at 20:31
- The best way to deal with your question is for you to read up a bit ahead, browse books like Reid's book for undergrads, and so on (as opposed to studying those books, say...), to get a feeling for what people actually do, which is the best way to understand what they want to do. – Mariano Suárez-Alvarez♦ Nov 25 '12 at 20:34
- An isomorphism is a morphism which is a homeomorphism and for which the inverse homeomorphism is also a morphism. What could be simpler? [Singularities will take care of themselves: don't worry about them] – Georges Elencwajg Nov 25 '12 at 22:47
- @GeorgesElencwajg: I agree with your statement, but I don't really understand the point you're making. My problem is not with the definition of an isomorphism, or with any technical issue. My problem is motivating that the structure of a variety carries interesting information, and getting some intuition for what information it carries. – Daan Michiels Nov 25 '12 at 23:51
- @MarianoSuárez-Alvarez: You make a fair point. However, it seems to me that there should be some intuitive way of explaining what the structure of a variety encodes without justifying that this is what it encodes. When first reading the axioms defining a topological space, it is completely unclear what the structure (the set of opens satisfying some weird rules) encodes, and it takes a lot of experience and getting-used-to before one realizes that these axioms encode what you want them to. Nevertheless, there are vague but very useful formulations of what the structure of a topological (...) – Daan Michiels Nov 26 '12 at 0:03

## 2 Answers

Mariano Suárez-Alvarez's point about understanding the intuition as you learn the theory more is correct, but I'd like to give a partial answer to help guide your intuition. After all, it is possible to spend months or years learning algebraic geometry and come away with little intuition of what the whole subject is about.

First, algebraic varieties are geometric spaces which look locally like affine varieties. In this sense, the theory is developed similarly to, say, the theory of manifolds, where a manifold is defined to be a space that is locally Euclidean. Of course, that limits the local study of manifolds: any two manifolds (of the same dimension) are locally isomorphic. Not so for algebraic varieties, as there is a wide variety of affine varieties. So I think you should begin by restricting your question to affine varieties. And the key is that affine varieties are completely determined by their ring of globally regular functions. In other words, two (irreducible) closed subsets of affine space are isomorphic iff we can find a global 'change of variables' that identifies the global regular functions on the two spaces. Rescaling $(x,y) \mapsto (\sqrt{2}x,\sqrt{2}y)$ yields the isomorphism between $x^2+y^2=1$ and $x^2+y^2=2$.

I'll modify your non-example (because $\mathbb{A}^2 \setminus \{0\}$ is not affine) and explain why $\mathbb{A}^1$ and $\mathbb{A}^1 \setminus \{0\}$ are not isomorphic. Their rings of regular functions are $k[T]$ and $k[T,T^{-1}]$ respectively, which are not isomorphic. So there can be no 'change of variables' that identifies the two spaces.
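One way to make that last non-isomorphism completely concrete (a standard argument, spelled out): a ring isomorphism would restrict to an isomorphism of unit groups, and $$k[T]^{\times}=k^{\times},\qquad k[T,T^{-1}]^{\times}=\{\,cT^{n} : c\in k^{\times},\ n\in\mathbb{Z}\,\}\cong k^{\times}\times\mathbb{Z}.$$ Since $k$ is algebraically closed, $k^{\times}$ is divisible (every unit has $m$-th roots for all $m$), while $k^{\times}\times\mathbb{Z}$ is not (for instance, $T=(cT^{m})^{2}$ would need $2m=1$), so the unit groups, and hence the rings, cannot be isomorphic.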
One important caveat: when I say there is a global 'change of variables' from $X \subset \mathbb{A}^n$ to $X' \subset \mathbb{A}^{n'}$, I am talking about using polynomial maps that are restricted from the respective affine spaces, but they only need to be defined on the spaces $X$ and $X'$. For example $\mathbb{A}^1 \setminus \{0\}$ (viewed as $t \neq 0$) and $xy=1$ are isomorphic via $t \mapsto (t, 1/t)$ and $(x,y) \mapsto x$. Of course, $1/t$ is only a valid change of variables when $t \neq 0$, but fortunately we are only looking at points where $t \neq 0$. The global story is similar, except that we cannot just compare globally regular functions. (For example, the only globally regular functions on any projective variety are the constant functions, yet intuitively there ought to be many different projective varieties up to isomorphism.) So now we require a global 'change of variables' so that regular functions on local pieces match up with the regular functions on the corresponding local pieces. I am not sure if this explanation is what you are looking for. Algebraic geometry is very much a function oriented theory. We compare spaces by looking at the functions on them. One can take such an approach to manifolds as well. But for manifolds we also have an intuition for what the possible changes of variables are ('stretching' and 'twisting' and the like). It's much harder to tell such a story in algebraic geometry because algebraic varieties are so much more diverse. There are still some basic intuitions, such as: you can't have an isomorphism between a smooth variety and a singular variety, because isomorphisms give rise to (vector space) isomorphisms of tangent spaces. But there are lots of possible singularities, and getting a hold on them is a major on-going project in the field. For example, you could study plane curves in depth and learn to tell apart singularities in this case (using blowups). But then you'll quickly discover the singularities on surfaces are more complicated and those on higher dimensional varieties still more complicated and hard to get a handle on.

- 1 Yes, this is quite helpful. There are two things I struggle with in your answer. First (detail, I suppose): you write 'polynomial' and then use 1/t. This confuses me. Second: I understand that manifolds are locally euclidean, and that varieties are locally affine varieties, but in the latter the Zariski topology is used (correct?). For me, 'locally in the Zariski topology' is intuitively very different from 'locally' as 'near a point' because the Zariski topology is a very unusual topology (at least for me, as I'm used to working in spaces that are at least Hausdorff). Any thoughts? – Daan Michiels Dec 4 '12 at 15:09 1 'Polynomial' is the loose way of describing regular maps - functions which can locally (in the Zariski topology) be represented as ratios of polynomials. I was trying not to be sloppy, but I wasn't careful enough in my writing. For the second point, yes the Zariski topology is unintuitive at first, but that gets alleviated in part through experience. Many things that may seem intuitive now were not always so. Most of us have at least come to terms with the abstract definition of compactness via open covers, even though 'closed and bounded' is a lot more intuitive characterization ... – Michael Joyce Dec 4 '12 at 15:39 2 for Euclidean spaces.
But we learn the abstraction both for the ability to prove things more easily with the abstract definition as well as to provide a theory which generalizes past Euclidean spaces. Similarly, the Zariski topology feels more natural once you learn the various algebraic tricks to recover intuition as much as possible in this more abstract setting. And over time you will appreciate the power of this framework for its generality. And eventually you will come full circle to the sticking point that there are not enough open sets in the Zariski topology, so ... – Michael Joyce Dec 4 '12 at 15:42 1 you'll learn about things like the etale topology and, more generally, Grothendieck topologies to address that problem while staying in an algebraic setting. Ultimately, this is a long-winded attempt at saying that there are many geometric problems where it is both helpful and natural to stay with an algebraic framework. It takes a good amount of time and study to appreciate the reasons. – Michael Joyce Dec 4 '12 at 15:47 And one last comment on the Zariski topology: it's the only topology that is universal enough to be able to work with algebraic varieties defined over any (algebraically closed, if you like) field. There is no way to generalize the other open sets in the classical Euclidean topology over $\mathbb{R}$ or $\mathbb{C}$ to work over arbitrary fields, though there are generalizations to some fields (such as the $p$-adic numbers, $\mathbb{Q}_p$) which are very important in their own right. – Michael Joyce Dec 4 '12 at 15:51

One theorem you might have come across is that the set of morphisms $X \to Y$, for $X$ and $Y$ affine varieties, is in canonical bijection with the set of homomorphisms $A(Y) \to A(X)$ of their coordinate rings. In particular, up to isomorphism, there is a unique affine variety with a given coordinate ring (so long as that ring is actually of the right form to appear as a coordinate ring, that is, a finitely-generated, reduced algebra over the base field $k$). More generally, if $Y$ is affine but $X$ may not be, the morphisms $X \to Y$ are given by the homomorphisms of rings from $A(Y)$ to the ring of global regular functions on $X$. The reason this is so is that we've defined affine varieties to have just enough structure to be controlled by their coordinate rings. The Zariski topology and the definition of a morphism are just ways of formalizing this. Now, not all varieties are affine, but they're all locally affine, so again, a morphism of varieties $X \to Y$ can be thought of as a bunch of ring homomorphisms from affine coordinate rings of affine subsets of $Y$ that agree on the necessary intersections. An affine variety can be thought of as telling you (1) a geometric object and (2) what polynomial functions on that geometric object look like. Under this point of view, the reason that the affine line and a cuspidal curve aren't isomorphic is that their coordinate rings aren't isomorphic. The coordinate ring of the line is just $k[t]$, but, for example, the coordinate ring $k[x,y]/(y^2 - x^3) \cong k[t^2,t^3]$ of the cuspidal curve $y^2 = x^3$ in $\mathbb{A}^2$ differs by not being integrally closed. That is, there is a rational function (here $y/x$) that behaves like a regular function on the curve, but doesn't make sense as a regular function when extended to the plane.
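To see the integrality claim concretely (again an added sympy sketch, not part of the original answer): on the cusp, $t = y/x$ satisfies the monic equation $t^2 - x = 0$, so it is integral over the coordinate ring even though it is not a polynomial on the ambient plane.

```python
# On y^2 = x^3, the function t = y/x satisfies t^2 - x = 0.
import sympy as sp

x, y = sp.symbols('x y')
t = y / x
residue = (t**2 - x).subs(y**2, x**3)   # impose the defining relation of the cusp
print(sp.simplify(residue))             # 0
```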
The non-isomorphism between the two coordinate rings precisely tells you about the cusp and the various problems it causes. At this point, there are two motivations. One is that a lot of geometric constructions have nice, algebraic definitions. For instance, the actions of passing to a closed subvariety or omitting a closed subvariety, which in terms of the Zariski topology are just taking open and closed subsets, correspond respectively to taking a quotient or localization of a ring. Even this simple example shows why the algebraic structure is nice to have: while it's obvious what a closed subvariety should look like over $\mathbb{C}$, what its codimension should be, et cetera, you lose the nice picture as soon as you move to a non-topological field. If you care about solving polynomials over fields other than $\mathbb{C}$, you need a theory that mimics the geometric features of $\mathbb{C}$ (and thus allows you to talk about dimension, tangent vectors, smoothness, and so on) without using its analytic topology. A slightly more complicated example is flatness. A ring homomorphism $R \to S$ is flat if, whenever $M \to N$ is an injection of $R$-modules, $S \otimes_R M \to S \otimes_R N$ is an injection of $S$-modules. Serre found that this slightly arcane algebraic condition is actually the best way to talk about deformations of varieties. Namely, given a morphism $f:X \to Y$ of varieties, you can think of the various fibers $f^{-1}(y)$ for $y \in Y$ to be deformations of each other; if the morphism locally corresponds to a flat ring homomorphism, then these deformations are actually nicely behaved (they'll all be the same dimension, and so on). If you're interested in solving polynomial equations over fields, this should hopefully be enough motivation. This is a fascinating subject on its own, of course. For example, Serre's GAGA established deep relations between the algebraic geometry of varieties over $\mathbb{C}$ and their analytic geometry as manifolds; statements like Fermat's Last Theorem are naturally statements about solutions to polynomials over fields; and I hear that solutions to polynomials over finite fields are important to modern-day cryptography, among other things. If this isn't enough, here's the second piece of motivation. Even if you don't care about varieties, there are a lot of reasons to care about rings. Varieties can be thought of as nicer versions of rings, ones that we can patch together and use geometric intuition on, and some statements purely about rings can be cleanly proved in the language of varieties. The one caveat is that only certain rings appear as affine coordinate rings of affine varieties. For example, if you're interested in number theory or diophantine equations, you might want to study rings that aren't algebras over any field; other areas of math naturally give rise to non-noetherian rings, which then don't arise as finitely generated algebras over fields. A major program of Grothendieck in the 1960's, which has greatly influenced modern-day algebraic geometry, was to generalize varieties to more general objects, called schemes, that do for arbitrary rings what varieties do for finitely generated, reduced algebras over a field. Now, a scheme is much harder to picture than a variety, but it can be tremendously helpful in terms of applying some sort of geometric intuition to arbitrary rings. 
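To make the earlier flatness remark concrete (an added stock example, not from the answer itself): the family $\mathrm{Spec}\,k[x,y,t]/(xy-t) \to \mathrm{Spec}\,k[t]$ is flat, since $k[x,y,t]/(xy-t) \cong k[x,y]$ is torsion-free over the principal ideal domain $k[t]$; accordingly the fibers vary nicely, a hyperbola $xy=c$ for each $t=c\neq 0$ degenerating to the union of the two coordinate axes at $t=0$, all of dimension one. By contrast, $k[t,x]/(tx,x^2)$ has $t$-torsion (the class of $x$ is killed by $t$), so the corresponding family is not flat: its fiber is a single reduced point over every $t\neq 0$ but a length-two fat point over $t=0$.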
So, in summary, varieties have rings attached to them given by the polynomials 'defined on' that variety, and a morphism of varieties is supposed to pull polynomials on its codomain back to polynomials on its domain (which Michael Joyce's answer explained quite well). The structure of a variety is meant to encode precisely this information, and thus to allow us to use our geometric intuition over fields where analytic reasoning is no longer possible. More generally, defining schemes allows us to use this geometric intuition to understand arbitrary rings. Algebraic geometry is an old, wide, and fascinating field, and I guarantee you that if you continue to study it, you will find some key idea that puts all this into context and makes everything make sense for you personally. A few good sources to begin that journey are Harris's Algebraic Geometry: A First Course (for the variety-centric point of view), Hartshorne's Algebraic Geometry and Vakil's excellent online notes (if you're interested in schemes), and Eisenbud and Harris's The Geometry of Schemes (only after you've gained a bit of experience -- it attempts to provide motivation and geometric intuition for many ideas about schemes). Best of luck with it! - Thank you Paul, I like your answer. I particularly appreciate the description of flatness. I have decided to accept Michael's answer because it was helpful, but award you the bounty because your answer comes closer to the intuition I am looking for. – Daan Michiels Dec 6 '12 at 15:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504073262214661, "perplexity_flag": "head"}
http://mathhelpforum.com/number-theory/42077-prove-gcd-two-consecutive-fibonacci-numbers-1-a.html
# Thread:

1. ## Prove that the gcd of two consecutive Fibonacci numbers is 1.

Prove that the gcd of two consecutive Fibonacci numbers is 1. Also identify the ratio $F_{n+1}/F_n$ of consecutive Fibonacci numbers and try to identify the limit.

2. Originally Posted by mathwizard Prove that the gcd of two consecutive Fibonacci numbers is 1. By definition, $F_{n+1} = F_n + F_{n-1}$. Assume that $(F_{n+1},F_n) = d$. This implies $d|F_{n+1}, d|F_n$ and thus $d|F_{n-1}$. We can thus recursively (or reverse inductively) claim that $d|F_n \Rightarrow d|F_{n-1}$. Proceeding until n=2, we find that our hypothesis implies $d|F_2, d|F_1 = 1 \Rightarrow d|1 \Rightarrow d=1$. Thus $(F_{n+1},F_n) = 1$ for all positive integers n. Originally Posted by mathwizard Also identify the ratio $F_{n+1}/F_n$ of consecutive Fibonacci numbers and try to identify the limit. By definition, $F_{n+1} = F_n + F_{n-1}$. If you prove that the limit exists, finding it can be easy. So by definition, $\frac{F_{n+1}}{F_n} = 1 + \frac{F_{n-1}}{F_n}$. Let the limit of the ratio be L, then $\lim_{n \to \infty} \frac{F_{n+1}}{F_n} = \lim_{n \to \infty} \frac{F_{n}}{F_{n-1}} = L$. So $\frac{F_{n+1}}{F_n} = 1 + \frac{F_{n-1}}{F_n} \Rightarrow L = 1 + \frac1{L}$. So solve the quadratic and choose the appropriate root.

3. Originally Posted by mathwizard Prove that the gcd of two consecutive Fibonacci numbers is 1. We can prove something really shocking. Let $F_n$ denote the $n$-th Fibonacci number. We will prove $\gcd (F_n, F_m) = F_{\gcd(n,m)}$. WLOG say $m\geq n$. Let us apply the Euclidean algorithm to get, $\left\{ \begin{array}{c} m = q_1n + r_1 \\ n = q_2 r_1+r_2 \\ r_1 = q_3 r_2 + r_3 \\ ... \\ r_{k-2} = q_k r_{k-1} + r_k \\ r_{k-1} = q_{k+1}r_k + 0 \end{array} \right.$ The Euclidean algorithm tells us that if $a = qb + r$ then $\gcd(a,b) = \gcd(b,r)$. It turns out (we will prove this soon) that we have a similar situation with Fibonacci numbers, i.e. $\gcd(F_a,F_b) = \gcd(F_b,F_r)$. If we assume this fact for now then: $\gcd(F_n,F_m) = \gcd( F_{r_1}, F_n) = ... = \gcd(F_{r_{k-1}},F_{r_k})$. Thus, the problem reduces to computing $\gcd (F_{r_{k-1}},F_{r_k})$. This ends up being (we will show this soon) equal to $F_{r_k}$, which is in fact $F_{\gcd(n,m)}$ by the Euclidean algorithm. To establish the above facts that we used we need an identity. This identity is $F_{m+n} = F_{m-1}F_n+F_mF_{n+1}$ for $m\geq 2$. This is established by using basic induction. With this identity we can prove that if $n,m\geq 1$ then $F_n|F_{nm}$. This is proven by induction as well: the inductive step is to note $F_{n(k+1)} = F_{nk+n}= F_{nk-1}F_n+F_{nk}F_{n+1}$ and now use induction. We are now in the position to prove that if $a=qb+r$ then $\gcd(F_a,F_b) = \gcd(F_b,F_r)$ (note that the fact $\gcd(F_{r_{k-1}},F_{r_k}) = F_{r_k}$ will follow, since $r_k | r_{k-1}$). By the identity we have $\gcd(F_a,F_b) =\gcd(F_{qb+r},F_b) = \gcd(F_{qb-1}F_r+F_{qb}F_{r+1},F_b)$. But since $b|qb$ we know (from the above paragraph) that $F_b | F_{qb}$. Therefore, $\gcd(F_{qb-1}F_r+F_{qb}F_{r+1},F_b) = \gcd(F_{qb-1}F_r,F_b)$. If we can show that $\gcd(F_{qb-1},F_b)=1$ then it will mean that $\gcd(F_{qb-1}F_r,F_b) = \gcd(F_r,F_b)$. Thus, the problem is reduced to proving $\gcd(F_{qb-1},F_b)=1$. Let $d=\gcd(F_{qb-1},F_b)$. Since $d|F_b$ and $F_b|F_{qb}$ (from above) it means $d|F_{qb}$. But then $d|F_{qb-1}$ and $d|F_{qb}$. It means $d=1$, since $\gcd(F_{qb-1},F_{qb})=1$ as Isomorphism showed.

4. Originally Posted by ThePerfectHacker We can prove something really shocking.
Let $F_n$ denote the $n$-th Fibonacci number. We will prove $\gcd (F_n, F_m) = F_{\gcd(n,m)}$. Good show, TPH! That is really a beautiful result.

5. This is not related to mathwizard's question, but, I hope, some members find it interesting. Anyway ... my apologies for posting something off-topic! I think the most fascinating fact about the Fibonacci sequence is that it's periodic modulo any $n \geq 2.$ For example, the Fibonacci sequence modulo 3 becomes: $1,1,2,0,2,2,1,0,1,1,2,0,2,2,1,0, ... \ .$ Here $1,1,2,0,2,2,1,0$ repeats in the sequence, so the period is 8. As far as I know, finding a general formula for the period of the Fibonacci sequence modulo a given number n is an open problem!!

6. Originally Posted by mathwizard Also identify the ratio $F_{n+1}/F_n$ of consecutive Fibonacci numbers and try to identify the limit. Consider the equation $x^2=x+1$. There are two solutions, $\phi = \tfrac{1}{2}(1+\sqrt{5})$ and $\theta = \tfrac{1}{2}(1-\sqrt{5})$. This means, $\phi^2 = \phi + 1$. Multiply by $\phi$ to get $\phi^3 = \phi^2 + \phi = (\phi+1)+\phi=2\phi+1$. Multiply again to get, $\phi^4 = 2\phi^2 + \phi = 2(\phi+1)+\phi = 3\phi+2$. Do it again, $\phi^5 = 5\phi +3$. The pattern should be clear, $\phi^n = F_n \phi+F_{n-1}$. This can of course be fully justified with induction. Likewise, doing it with $\theta$ we get $\theta^n = F_n \theta + F_{n-1}$. Subtracting the two equations we get $\phi^n - \theta^n = F_n ( \phi - \theta)$. Thus, $F_n = \tfrac{1}{\sqrt{5}}\left[ (\tfrac{1+\sqrt{5}}{2})^n - (\tfrac{1-\sqrt{5}}{2})^n \right]$. Therefore, $\tfrac{F_{n+1}}{F_n} = \tfrac{(1+\sqrt{5})^{n+1} - (1-\sqrt{5})^{n+1}}{2(1+\sqrt{5})^n - 2(1-\sqrt{5})^n}$. This gives us a formula for computing the Fibonacci ratio. Since $\left|\tfrac{1-\sqrt{5}}{2}\right|<1$, the $\theta^n$ terms die off as $n\to\infty$, so the ratio converges to $\phi = \tfrac{1}{2}(1+\sqrt{5})$, the golden ratio.
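A quick numeric sanity check of both claims in this thread (an added sketch; the original forum exchange contains no code):

```python
# Verify gcd(F_n, F_m) = F_gcd(n, m) on a grid, and watch F_{n+1}/F_n approach phi.
from math import gcd, sqrt

def fib(n):
    a, b = 0, 1              # F_0 = 0, F_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(gcd(fib(n), fib(m)) == fib(gcd(n, m))
           for n in range(1, 40) for m in range(1, 40))

phi = (1 + sqrt(5)) / 2
print(fib(31) / fib(30), phi)   # both ~1.6180339887
```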
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 70, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169296026229858, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/146978-ordinal-arithmetic.html
# Thread:

1. ## Ordinal Arithmetic

Hi, Wikipedia says, regarding ordinals: Left division with remainder: for all α and β, if β > 0, then there are unique γ and δ such that α = β·γ + δ and δ < β. Why is this true? Can anyone help construct an induction for it or something? Thanks x

2. Hi You may know that, if $\beta>0,$ right multiplication ( $y\mapsto\beta.y$ ) is strictly increasing. Therefore, the class of ordinals $\{x\ ;\ \beta.x>\alpha\}$ is non-empty and has a minimum, which must be a successor (or else, let's call this minimum $\lambda$: since right multiplication is continuous, we would have $\beta.\lambda=\sup\{\beta.\delta\ ;\ \delta<\lambda\},$ and since $\alpha<\beta.\lambda,$ this means there is a $\delta<\lambda$ such that $\alpha<\beta.\delta,$ absurd), so let's name this minimum $\gamma+1$ and consider $\gamma.$ It is such that $\beta.\gamma\leq\alpha$ and is maximal for this property. Use now that right addition is also a strictly increasing continuous function, therefore there is a minimal ordinal $x$ such that $\beta.\gamma+x>\alpha;$ you find it again to be a successor, let's say $x=\delta+1,$ and conclude: minimality gives $\beta.\gamma+\delta\leq\alpha<\beta.\gamma+(\delta+1)=(\beta.\gamma+\delta)+1,$ hence $\beta.\gamma+\delta=\alpha.$ ( Also, if $\beta\leq\delta,$ since right addition is a (strictly) increasing function, you get $\beta.\gamma+\delta\geq\beta.\gamma+\beta=\beta.(\gamma+1)>\alpha$ by the choice of $\gamma+1,$ contradicting $\beta.\gamma+\delta\leq\alpha;$ therefore $\delta<\beta$ )
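As a concrete illustration (an added example, not part of the original thread): take $\alpha=\omega^2+\omega\cdot 3+5$ and $\beta=\omega.$ The least $x$ with $\omega\cdot x>\alpha$ is $(\omega+3)+1,$ so $\gamma=\omega+3;$ then the least $x$ with $\omega\cdot(\omega+3)+x>\alpha$ is $5+1,$ so $\delta=5<\omega,$ and indeed $\alpha=\omega\cdot(\omega+3)+5=\omega^2+\omega\cdot 3+5.$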
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9464721083641052, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/14056/how-does-gravitational-lensing-account-for-einsteins-cross/14631
# How does gravitational lensing account for Einstein's Cross?

Einstein's Cross has been attributed to gravitational lensing. However, most examples of gravitational lensing are crescents known as Einstein's rings. I can easily understand the rings and crescents, but I struggle to comprehend the explanation that gravitational lensing accounts for Einstein's cross. I found this explanation, but it was not satisfactory. Image from http://upload.wikimedia.org/wikipedia/commons/thumb/c/c8/Einstein_cross.jpg/300px-Einstein_cross.jpg

- Relevant thought: are the four dots mirror images of the original light source or are they distorted chunks of a crescent where the rest of the crescent has been blocked? Perhaps there are other possibilities I have not considered. – JoeHobbit Sep 12 '11 at 15:27 I would add... I'm also interested to know if the Einstein cross was a specific prediction that people had obtained from GR or if the term was coined after the image was discovered. That could help explain a lot. – AlanSE Sep 14 '11 at 2:17 – JoeHobbit Mar 25 at 3:58

## 4 Answers

The middle galaxy in Einstein's cross has an elliptical mass distribution which is wider in the direction of the short leg of the cross (originally, this said long leg), with a center of mass where you see the galaxy. The object is slightly to the right of the center of the ellipse, in the direction of the long leg of the cross (the original answer had the direction reversed). This type of lensing is achievable in such a configuration, when the lensing object is relatively close to us, so that the rays pass the central region, where the quadrupole moment asymmetry of the gravitational field is apparent.

### Lensing map

Given a light source, call the line between us and the source the z axis, and parametrize outgoing light rays by the x-y coordinates of their intersection with an x-y plane one unit distance away from the source in our direction. This is a good parametrization for the tiny angles one is dealing with. The light rays are parametrized by a two dimensional vector v. These light rays then go through a lensing region, and come out going in another direction. Call their intersection point with the x-y plane going through our position v'. The lensing problem is completely determined once you know v' as a function of v. We can only see those rays that come to us, that is, those rays where v'(v) is zero. The number and type of images are entirely determined by the number and type of zeros of this vector field. This is why it is difficult to reconstruct the mass distribution from strong lensing--- many different vector fields can have the same zeros. The only thing we can observe is the number of zeros, and the Jacobian of the vector field v' at the zero. The Jacobian tells you the linear map between the source and the observed image: the shear, magnification, inversion. The lensing map is always asymptotically linear, v'(v) = v for large v, because far away rays are not lensed, and the scale of v is adjusted to make this constant 1.

### Generic strong lensing

In a generic strong lensing problem, the vector field v' has only simple zeros. The Jacobian is a diagonalizable matrix with nonzero eigenvalues. This means that each image is perfectly well defined, not arced or smeared. The image is arced only in the infinitely improbable case that you have a singular Jacobian. But we see gravitational arcs all the time! The reason for this is that for the special case of a spherically symmetric source, the Jacobian is always singular.
The source, the center of symmetry, and us make a plane, and this plane includes the z-axis, and necessarily includes the direction of the image. The Jacobian at a zero of v' always has a zero eigenvalue in the direction perpendicular to this plane. This means that the spherically-symmetric far-field of any compact source will produce arcs, or smears. When the lensing object is very far, the rays that get to us are far away from the source, and we see far-field arcs and smears. When the lensing galaxy is close, the lensing field has no special symmetry, and we see points with no smearing. So despite the intuition from point sources and everyday lenses, Einstein's cross is the generic case for lensing; the arcs and smears are special cases. You can see this by holding a pen-light next to a funhouse-mirror. Generically, at any distance, you will see the pen light reflected at multiple images, but only near special points do you get smearing or arcing.

### Topological considerations

There is a simple topological theorem about this vector field v'. If you make a large circle in the v plane, and go around it counterclockwise, the value v'(v) along this circle makes a counterclockwise loop once around. This is the winding number of the loop. You can easily prove the following properties of the winding number:

• Every loop has a winding number.
• If you divide a loop in two, the winding numbers of the two parts add up to the winding number of the loop.
• The winding number of a small circle is always 0, unless the vector field is zero somewhere inside the circle.

Together they tell you what type of zeros can occur in a vector field based on its behavior at infinity. The winding number of the vector field in a small circle around a zero is called its index. The index is always +1 or -1 generically, because any other index happens only when these simple zeros collide, so it is infinitely improbable. I will call the +1 zeros "sources", although they can be sources, sinks, or rotation/spiral points. The -1 zeros are called "saddles". The images at saddles are reflected. The images at sources are not. These observations prove the zero theorem: the number of sources minus the number of saddles is equal to the winding number of a very large circle. This means that there are always an odd number of images in a generic vector field, and always one more source than saddle. A quick search reveals that this theorem is known as the "odd number theorem" in the strong lensing community.

### The odd number paradox

This theorem is very strange, because it is exactly the opposite of what you always see! The generic images, like Einstein's cross, almost always have an even number of images. The only time you see an odd number of images is when you see exactly one image. What's the deal? The reason can be understood by going to one dimension less, and considering the one-dimensional vector field x'(x). In two dimensions, the images are given by the zeros of a real-valued function. These zeros also obey the odd-number theorem--- the asymptotic value of x'(x) is negative for negative x and positive for positive x, so there are an odd number of zero crossings. But if you place a point-source between you and the object, you generically see exactly two images! The ray above that is deflected down, and the ray below that is deflected up. You don't ever see an odd number. How does the theorem fail?
The reason is that the point source has tremendously large deflections when you get close, so that the vector field is discontinuous there. Light rays that pass very close above the point are deflected very far down, and light rays that pass very close below are deflected far up. The discontinuity has a +1 index, and it fixes the theorem. If you smooth out the point source into a concentrated mass distribution, the vector field becomes continuous again, but one of the images is forced to be right behind the continuous mass distribution, with extremely small magnification. So the Einstein cross has five images: there are four visible images, and one invisible image right behind the foreground galaxy. This requires no fine tuning--- the fifth image occurs where the mass distribution is most concentrated, which is also where the galaxy is. Even if the galaxy were somehow transparent, the fifth image would be extremely dim, because it is where the gradient of the v' field is biggest, and the smaller this gradient, the bigger the magnification.

### Einstein's cross

After analyzing the general case, it is straightforward to work out qualitatively what is happening in Einstein's cross. There is a central mass, as in all astrophysical lenses, so there is an invisible central singularity/image with index +1. The remaining images must have 2 sources and 2 saddles. The most likely configuration is that the two sources are the left and right points on the long leg of the cross, and the two saddles are the top and bottom points (in my original answer, I had the orientation backwards; to justify the orientation choice, see the quantitative analysis below). You can fill in the qualitative structure of the v'(v) vector field by drawing its flow lines. The image below is the result. It is only a qualitative picture, but you get to see which way the light is deflecting (I changed the image to reflect the correct physics):
The General Relativity deflection is twice this, because the space-space metric components contribute an equal amount, as is easiest to see in Schwartschild coodinates in the large radius region, and this is a famous prediction of GR. When the deflections are small, and they always small fractions of a degree in the actual images, the total deflection is additive over the point masses that make up the lensing mass. Further, the path of the light ray from the distant light source is only near the lensing source for a very small fraction of the total transit, and this lensing region is much smaller than the distance to us, or the distance between the light source and the lensing mass. These two observations mean that you can squash all the material in the lensing mass into a single x-y plane, and get the same deflection, up to corrections which go as the ratio of the radius of a galaxy to the distance from us/source to the galaxy, both of which are safely infinitesimal. The radius of a galaxy and dark-matter cloud is a million light years, while the light source and us are a billion light years distant. You convert $\Delta\theta$ to x-y plane coordinates I am using by multiplying by a unit distance. This gives the amount and direction of deflection from a given point mass. The total deflection of the light ray at distance B is given by the sum over all point masses in the galaxy and its associated dark-matter of this vector contribution, which is four time the mass (twice the Schwartschild radius) divided by the distance, pointing directly toward the mass. This sum is $\Delta v$. What is important to note is that this sum is equal to the solution to a completely different problem, namely the 2-d gravitational field of (four times) the squashed planar mass. In 2d, gravity goes like 1/r. The planar gravitational field of the planar mass distribution gives $\Delta v$, and it is most important to note that this means that $\Delta v$ is the gradient of the 2d gravitational potential: $$\Delta V= - \nabla \phi$$ Where $$\phi(x) = \int \rho(u) \ln(|x-u|) d^2u$$ where the two dimensional density $\rho(u)$ is the integral of the three dimensional density in the z direction (times $4G\over c^2$). This is important, because you can easily determine $\phi$ from the mass distribution by well known methods for solving Laplace's equation in 2d, and there are many exact solutions. The impact parameter B is equal to $vR_1$, the original direction the light ray goes times the distance from the light-source to the lensing object, and the position this light ray reaches when it gets to us is: $$v'(v) = v(R_1 + R_2) + \Delta v(vR_1) R_2$$ Choosing a new normalization for v so that vR_1 is the new v, and choosing a normalization for v' so that v'(v) is v at large distances: $$v'(v) = v - {R_1\over R_1+R_2} \nabla \phi(v)$$ This is important, because it means that the whole thing is a gradient, the gradient of: $$v'(v) = - \nabla(\phi'(v))$$ $$\phi'(v) = {R_1\over R_1+R_2} \phi(v) - {v^2\over 2}$$ The resulting potential also has a 2d interpretation--- it is the gravitational potential of the planar squashed mass distribution in a Newton-Hooke background, where objects are pushed outward by a force proportional to their distance. The 2-d gravity potential is easy to calculate, often in closed form, and to find the lensing profile, you just look for the maxima, minima, and saddles of the 2-d potential plus a quadratically falling potential. This solves the problem for all practical astrophysical situations. 
I found it remarkable that the deflection field is integrable, but perhaps there is a simpler way of understanding this.

### Point mass

The 2d potential of a point mass is $$\phi(v) = \ln(|v|)$$ and for an object directly behind it, you get $$\phi'(v) = A \ln(|v|) - {|v|^2\over 2}$$ This gives a central singularity (or if you spread out the mass in the center, a dim image right on top of the mass) plus a perfect ring where $r=\sqrt{A}$. This is the ring image. Moving the light source off center just shifts the relative position of the two potential centers. The new potential is: $$\phi'(v) = {A\over 2} \ln(x^2 + y^2) - {(x-a)^2 + y^2\over 2}$$ Setting the x and y derivatives of the potential to zero, you find two critical points (not counting the singular behavior at x=y=0). The two points both have a singular Jacobian, so they give very large magnifications, and smears or arcs. The two images occur at $$y=0, \qquad x= {a\over 2} \pm \sqrt{A + \left({a\over 2}\right)^2}$$ So the smear on the side where the object is at is moved further; at large values of a, the second image is right on top of the lensing mass, and at small values of a, the two images are moved in the direction of the displacement by half the amount of displacement.

### Quadrupole mass distribution

Consider two masses of size ${1\over 2}$ at positions $\pm a$. This gives a potential which is a superposition of the two masses: $$\phi(x,y) = {1\over 4} \ln((x-a)^2 + y^2 ) + {1\over 4} \ln((x+a)^2 + y^2) = {1\over 2} \ln(r^2) + {a^2 (x^2 - y^2)\over 2r^4} + O(a^4)$$ The part in addition to the ordinary $M\ln(r)$ potential of a point source is a quadrupole. Lensing in a quadrupole has a simple algebraic solution. Differentiating, and subtracting the linear part gives $${Ax\over r^2} \left( 1 + {a^2\over r^4}(6y^2 - 2x^2) - {r^2\over A} \right) =0$$ $${Ay\over r^2} \left( 1 + {a^2\over r^4}(2y^2 - 6x^2) - {r^2\over A} \right) =0$$ The x=0,y=0 point is at the singular position. The real critical points are at the other simultaneous solutions: $$x=0, y= \pm\sqrt{A}\sqrt{{1\over 2} \pm \sqrt{{1\over 4} + {2a^2\over A}}}$$ $$y=0, x= \pm\sqrt{A}\sqrt{{1\over 2} \pm \sqrt{{1\over 4} - {2a^2\over A}}}$$ Of these eight points, two are imaginary (taking the minus sign inside the square root for y), and two are outside the domain of validity of the solution (taking the minus sign inside the square root for x--- the point is at $\sqrt{2}a$, which is right by the point masses making the quadrupole). This leaves four points. But they are all local maxima; none of these are saddles. The saddles are found by solving the nontrivial equations in parentheses for x and y. Taking the difference of the two equations reveals that $x=\pm y$, which gives the four saddle solutions: $$\pm x=\pm y = \sqrt{A}$$ There are eight images for a near-center source lensed by a quadrupole mass. For small values of a, the two images along the line of the two masses are pulled nearer by a fractional change which is ${a^2\over A}$, the two images perpendicular to the line of the two masses are pulled apart by a fractional change of ${a^2\over A}$, while the four images on the diagonals are at the location of the point-source ring. For me, this was surprising, but it is obvious in hindsight. The quadrupole field and the Newton-Hooke field both point along the lines y=x on the diagonal, and the combined field goes from pointing inward near the origin to pointing outward far away, so it must have a zero.
The zeros are topological and stable to small deformations, so if you believe that the field of the galaxy is spherical plus quadrupole, the Einstein cross light source has to be far enough off-center to change the topology of the critical points.

### Quadrupole mass distribution/off-center source

To analyze moving off center qualitatively, it helps to understand how saddles and sources respond to movement. If you move the light source, you move the Newton-Hooke center. The result is that the points that were previously sources and saddles now have a nonzero vector value. When the position of a source slowly gets a nonzero vector value, that means that the source is moving in the direction opposite this value. If a saddle gets a nonzero value, the saddle is moving in the direction of this value reflected in the attracting axis of the saddle. This means that if you start with a very asymmetric quadrupole, and you slide the source along the long axis of the source-saddle-source-saddle-source-saddle-source-saddle ellipse towards one of the sources at the end of the long axis, one of the short-axis sources and the short-axis saddles approach each other. They annihilate when they touch, and they touch at a finite displacement, since the result must smoothly approach the spherically symmetric solution. Right after the sources and saddles annihilate, you get a cross, but it doesn't look too much like Einstein's cross--- the surviving two saddles and two sources are more asymmetric, and the narrow arm is much narrower than the wide arm.

### Line source

For the lensing from a line source, you write the 2-d potential for a line oriented along the y-axis (it's the same as a plane source in 3d, a point source in 1d, or a d-hyperplane source in d+1 dimensions--- a constant field pointing toward the object on either side): $$\phi(x) = B|x|$$ And subtract off the Newton-Hooke source part, with a center at $x=a$: $$\phi'(x) = B|x| - {1\over 2} ((x-a)^2 + y^2)$$ The critical points are on the x-axis by symmetry, and they are very simple to find: $$y=0, x= B+a$$ $$y=0, x= -B+a$$ These are the two images from a long dark-matter filament, or any other linear extended source. Cosmic strings give the same sort of lensing, but the string-model of cosmic strings gives ultra-relativistic sources which produce a conical deficit angle, and are technically not covered by the formalism here. But the result is the same--- doubled images. If you spread out the line source so that it is a uniform density between two lines parallel to the y-axis (this would come from squashing a square beam of uniform mass density into a plane), the lensing outside the two lines is unaffected, by the 2-d Gauss's law. The interior is no longer singular, and you get a third image, as usual, at x=y=0.

### Elongated density plus point source

The next model I will consider is a string plus a point. This is to model an elongated mass density with a concentration of mass in the center. The far-field is quadrupolar, and this was analyzed previously, but now I am interested in the case where the mass-density is comparable in length to the lensed image, or even longer. Spreading out the string into a strip does nothing to the lensing outside the strip, and spreading out the point to a sphere also does nothing to the lensing outside the sphere, so this is a good model of many astrophysical situations, where there is an elongated dark matter cloud, perhaps a filament, with a galaxy concentrated somewhere in the middle of the filament.
The 2-d potential, plus on-center Newton-Hooke, is $$\phi'(x) = {A\over 2} \ln(x^2 + y^2) + B|x| - {x^2 + y^2\over 2}$$ The solution to the critical point equations gives images at $$y=0, x= {B\over 2} + \sqrt{A + ({B\over 2})^2}$$ $$y=0, x= -{B\over 2} - \sqrt{A + ({B\over 2})^2}$$ where one of the two solutions of each quadratic equation is unphysical. This lensing is obvious--- it's the same as the string because the light-source is right behind the center mass. Looking along the string itself, there are two more critical points: the x-direction field becomes zero (it is singular for an infinitely narrow string, but ignore that), and the gradient of the potential is in the y direction, by symmetry, and for y near zero it is inward pointing, and for large y it is outward pointing, so there is a critical point. The string potential has a minimum on the string, so in the x-direction you have a minimum, but the Newton-Hooke potential is taking over from the point source potential at the critical point, so in the y-direction these two points are potential maxima. These are two saddle points. The two critical points are at: $$x=0, y=\pm\sqrt{A}$$ And this is very robust to thickening the string and the point into strips/spheres, or blobs, so long as the shape is about the same. This is a generic source-saddle-source-saddle cross. In the string case, the two saddles become infinitely dim, because the Jacobian blows up, but in the physical case where the thickness of the string is comparable to the lensing region, the Jacobian is of the same order for the sources and the saddles. Moving the light-source off center towards positive x, perpendicular to the string orientation, pushes the left source inwards, the right point outward, and the two saddles back and out. This is exactly the Einstein cross configuration.

### Point/strip --- Best Fit

Consider a strip of dark matter which is as wide or wider than the lensing configuration, with a point galaxy in the middle. This gives the lensing potential: $$\phi'(x,y) = {A\over 2} \ln(x^2 + y^2 ) + {B\over 2} x^2 - {(x-a)^2 + y^2\over 2}$$ valid inside the strip. Outside the strip, instead of quadratic growth, the potential grows linearly, like for the string. The strip is more useful, because it is simultaneously the simplest elongated model to solve for an off-center object, and also the most accurate Einstein cross. The parameter a tells you how far to the right of center the light-source is. The equations for the critical points are: $$x\left({A\over r^2} - (1-B) \right) + a = 0$$ $$y\left({A\over r^2} - 1\right) =0$$ There are two solutions when y=0, at $$x= {a\over 2(1-B)} \pm \sqrt{ {(a/2)^2\over (1-B)^2} + {A\over 1-B}}$$ These are the two sources, on the x axis, as in the string-point problem. There are two additional solutions when ${A\over r^2} - 1=0$, and these are at $$x= {-a\over B} , y = \pm \sqrt{A-{a^2\over B^2}}$$ And these are the usual saddles of line-string lensing. For a small a, the two saddles move off the symmetry line, opposite to the source displacement, and the long arm of the cross extends to the right. This is a perfect fit to Einstein's cross. To see how good a fit it is, look at the following plot of the lensing produced by $$\phi' = \ln(x^2 + y^2) - {0.9\,(x-0.04)^2 + y^2\over 2}$$ The black circle is the center of symmetry of the point/strip, the cross next to it is the true position of the quasar, and the four crosses are the locations of the critical points, while the contour-line density at the saddles/sources tells you the inverse brightness. This matches the data perfectly.
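For what it's worth, the quoted best-fit potential is easy to check numerically; this sketch (an added illustration) recovers the four critical points and their off-center cross arrangement:

```python
# Critical points of phi' = ln(x^2 + y^2) - (0.9*(x - 0.04)^2 + y^2)/2.
import numpy as np
from scipy.optimize import fsolve

def grad(v):
    x, y = v
    r2 = x*x + y*y
    return [2*x/r2 - 0.9*(x - 0.04), 2*y/r2 - y]

for guess in [(1.5, 0.0), (-1.5, 0.0), (-0.4, 1.4), (-0.4, -1.4)]:
    print(np.round(fsolve(grad, guess), 4))
# -> sources near (1.5109, 0) and (-1.4709, 0) on the long arm, and saddles
#    near (-0.36, +/-1.3676): the off-center cross described in the text.
```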
### Summary

The quadrupole lensing has a hard time reproducing Einstein's cross exactly, although it can get cross-like patterns. The reason is the eight images for an on-center light source. This means that to get a cross, two saddle-source pairs have to annihilate. Once they do, the remaining saddles and sources are not in such a nice cross; they tend to be too close together, not spread out nicely like the image. The quadrupole crosses are already approaching the asymptotic spherical limit, where the saddles and sources become the degenerate spherical arcs. The brightnesses of the saddles and sources are not approximately the same, and the brightness of the far image on the long leg of the cross is not approximately the same as the brightness of the near image; it is not a good model. This means that we should consider dark matter around the galaxy extending in an elongated ellipse, with the elongation along the short leg of the cross. The light-source is slightly to the right of center. This reproduces Einstein's cross exactly. This is almost surely the orientation of the dark-mass distribution in the galaxy, but the details of the distribution are not revealed just from the critical points, which is all strong lensing provides.

- 2 That is the most detailed answer I've ever seen on stackexchange :-) – BarsMonster Sep 16 '11 at 15:28 Thanks! I finally used something from Differential Equations!!! – JoeHobbit Sep 17 '11 at 16:16

The most important factor in creating these sorts of distributions is the non-spherically-symmetric aspects of the galaxy, creating a very warped lens. Not only is the visible part of a galaxy usually disc-like in nature, the majority of the mass is located in dark-matter halos located around the galaxy. Observational evidence suggests that these halos are "flat," in that they are oblong and not spherical, creating a somewhat counterintuitive lens shape. The limiting case of this is the idea of a "cosmic string," a theoretical 1-d topological defect in space-time which is essentially a long dense string. There was quite a bit of news in astronomy/astrophysics circles about the so-called Twin Quasar which was thought to be evidence of gravitational lensing by such an object (essentially, an extremely mirror-symmetric lens), although since then it has been shown otherwise. Look at the simulation here: http://www-ra.phys.utas.edu.au/~jlovell/simlens/ Here is another article with some examples: http://www.aeos.ulg.ac.be/lens_en.php

- 1 Given that galaxies are plates (and clusters I thought could be plate-like too), this would seem to explain a dual image very well. While your references show an example of a 4-image, it's not clear what shape produced that. – AlanSE Aug 28 '11 at 2:44 This is not the explanation--- every spherically symmetric source produces arcs and smears, never points. The failure here is spherical symmetry--- the mass distribution is extended along the long direction. – Ron Maimon Sep 13 '11 at 17:31 Ah, my wording was imprecise, "multiple images" is a confusing term. The main point of the answer was the second sentence. I have edited my answer to be more clear – Benjamin Horowitz Sep 13 '11 at 17:49 Your wording was not imprecise, it was completely precise, it was just wrong. Your answer is correct now, after your edits. But you still didn't explain the saddle point images to the right and to the left. – Ron Maimon Sep 13 '11 at 19:21 Two crests are definitely considered "multiple images".
You really don't need to be quite so hostile to mis-interpretations of words. – Benjamin Horowitz Sep 13 '11 at 19:29

We don't know enough about this galaxy acting as a "lens" to tell for sure. It could be any number of things. It could be a ring, formed by the galactic core acting as a lens, but cut in 4 chunks by the spiral arms. It could be caused by non-uniform distribution of mass in the galaxy. There might be other causes as well. Some simulations:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 32, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9351457953453064, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/collision?sort=unanswered&pagesize=30
# Tagged Questions

The collision tag has no wiki summary.

2answers 142 views ### Perpendicular Elastic Collision (different masses, different velocities) I'm stuck on a mechanics problem and I can't make any headway past momentum and kinetic energy being conserved. Here is the problem: Two hover cars are approaching an intersection from ...

1answer 115 views +50 ### Firing machine question Suppose we have a firing machine on a frictionless surface at point $x=0$. It fires a bullet of mass $m$ every $T$ seconds. Each bullet has the same constant velocity $v_0$. There's a body of mass ...

1answer 24 views ### Two dimensional elastic collisions with varying angle of incidence If in an elastic collision I know all initial values and that mass for each object remains constant throughout the collision (but different from one another) how can I determine their final velocity ...

1answer 159 views ### Calculating a 2D collision between two perfectly circular disks Assume I have two disks, $p_1$ and $p_2$, of radius $r$, with their own velocities (preferably in $(x,y)$ form, but $(m, \theta)$ works too) and masses (unit-less, but same unit) collide in two ...

1answer 272 views ### Kinetic energy in the center of mass In a collision of a particle of mass $m_1$ moving with speed $v_1$ with a stationary particle of mass $m_2$ not all the original kinetic energy can be converted into heat or internal energy. what ...

0answers 56 views +50 ### How multiple objects in contact are resolved in an inelastic collision, when edge normals don't "line up" In a case I understand, let's say I have an object A moving at velocity V toward 3 objects in contact B, C, and D: The momentum of A is the mass of A times its velocity. To figure out how the ...

0answers 88 views ### Elastic collision of rotating bodies How would you explain in detail elastic collision of two rotating bodies to someone with basic understanding of classical mechanics? I'm writing simple physics engine, but now only simulating ...

0answers 251 views ### kinetic energy in collisions We were hoping you could help us understand collision energy. Vehicle A is driving West at 35mph and weighs 1437kg. Vehicle B is driving North at 35 mph and weighs 1882kg. Vehicle B crashes into the ...

0answers 52 views ### Matrix element approximation In the formula for the decay width of $\Upsilon(4S)$ to B-mesons from $\text{e}^+\text{e}^-$ collisions: \Gamma_{\Upsilon(4S)\to B\bar{B}}=\frac{\left|\underline{P}_B \right|}{8\pi ...

0answers 13 views ### How to write down the detailed balance (microreversed) amplitude I know that time-reversal of a reaction and the detailed balance (microreversed, or reciprocal) reaction are different. Textbooks on scattering theory explain how to relate the S-matrix elements of a ...

0answers 157 views ### How to calculate the resulting velocities and rotation speed after two concave polygons collide in 2d so I've been searching google for how to do this, but all anyone seems to care about is collision detection =p, and no results come close to teaching me how to calculate 2d elastic collision between ...

0answers 50 views ### Continuum mechanics and effects of stress Going to word this question a bit more straightforward than I may have before. Also, I'm trying to use baby formulas so I can grasp exactly what's going on. Object A has an elasticity of ...
0answers 57 views ### Hypersonic collisions: how to understand Preliminary: I'm not good with tensor calculus, but it's okay for me to work with reasonably complex differential equations. What I need is understanding of process of high speed (5-500 km/s) ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174454212188721, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/46033/can-you-measure-the-speed-of-water-coming-out-of-a-hose-by-its-arc
# Can you measure the speed of water coming out of a hose by its arc?

Water comes out of a horizontally stationed hose and creates an arc as it heads towards the ground. Can I determine the speed the water was traveling at when it exited the hose by measuring the arc which is created? For example, let's say that I measure that the water has dropped 2 inches vertically when measuring 1 foot horizontally away from the nozzle - at what speed did it exit the nozzle?

- To a reasonable approximation, sure. At least if you wait until a steady state condition is attained. How close do you need to get? Is the flow uniform to that level? – dmckee♦ Dec 5 '12 at 19:56 This is something I was wondering about while taking a shower, not my thesis. So assume any simplifying assumptions... – ytoledano Dec 5 '12 at 20:00

## 1 Answer

To first order you can ignore air resistance [1] and treat it as a perfect ballistics problem. So the vertical deviation of the stream from its initial straight line, $\Delta y$, tells you the time elapsed since leaving the nozzle by $$t = \sqrt{\frac{2 \Delta y}{g}} .$$ Then the distance along the initial stream direction to the point from which the $y$ measurement was made is $\Delta x$, and we have $$v_i = \frac{\Delta x}{t} = \Delta x\sqrt{\frac{g}{2 \Delta y}} .$$ For problems like this the largest errors are likely to be the mechanics of the measurement and the non-uniform initial velocity of the stream rather than air resistance. A common place to see this in action is at any "jumping jets" fountain. If you watch closely you will see that the initial part of any particular jet has a lower trajectory than the rest, and that the main body generally has a beautiful parabolic trajectory.

[1] Because once the stream is established no air is being displaced, and at "hose" or "shower" velocities there is little viscous friction in the boundary layers.

- Nice thinking in the footnote! Do you have a qualitative idea of how surface tension will affect this? I'm thinking it will tend to straighten the jet, and make it go slightly further. – Jaime Dec 5 '12 at 20:38 Not really sure--this isn't my field of expertise--but I think that surface tension's biggest contribution will be in holding the stream together. Now, if it did come apart the claim about no air being displaced would probably fail and we'd lose the near-zero friction, resulting in the stream slowing and falling below the ideal trajectory. – dmckee♦ Dec 5 '12 at 21:34 – Bernhard Dec 6 '12 at 10:21 @Bernhard Thanks. Something to add to my reading. – dmckee♦ Dec 6 '12 at 15:55
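Plugging the question's own numbers (a 2 inch drop over 1 foot of horizontal travel) into the answer's formula, as an added illustration:

```python
# v = dx * sqrt(g / (2*dy)) with dy = 2 in and dx = 1 ft, converted to SI.
from math import sqrt

g  = 9.81            # m/s^2
dy = 2 * 0.0254      # 2 inches in metres
dx = 1 * 0.3048      # 1 foot in metres

v = dx * sqrt(g / (2 * dy))
print(f"{v:.2f} m/s ({v / 0.3048:.1f} ft/s)")   # ~2.99 m/s, ~9.8 ft/s
```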
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9478737711906433, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/12160/roots-of-legendre-polynomial/12209
# Roots of Legendre Polynomial

I was wondering if the following properties of the Legendre polynomials are true in general. They hold for the first ten or fifteen polynomials.

1. Are the roots always simple (i.e., multiplicity $1$)?
2. Except for low-degree cases, the roots can't be calculated exactly, only approximated (unlike Chebyshev polynomials).
3. Are roots of the entire family of Legendre Polynomials dense in the interval $[0,1]$ (i.e., it's not possible to find a subinterval, no matter how small, that doesn't contain at least one root of one polynomial)?

If anyone knows of an article/text that proves any of the above, please let me know. The definition of these polynomials can be found on Wikipedia.

- The answer to question 2 depends on what you mean by "calculated exactly." – Qiaochu Yuan Nov 28 '10 at 3:34
1. Yes. It is a deep theorem of the theory of orthogonal polynomials that all their roots within their support interval are simple. 2. There are no explicit closed forms for the general roots of a Legendre polynomial, but there are asymptotic expansions for the roots. – J. M. Nov 28 '10 at 3:39
3. Note that the roots of successive Legendre polynomials are interlacing (they form a Sturm sequence). – J. M. Nov 28 '10 at 3:40
@user3971: what a specific definition of "calculated exactly"... what makes a radical so easy to calculate compared to other functions? Even to solve a cubic equation by the cubic formula "by radicals" requires one to compute cosines in the general case and I see no reason this is essentially easier than computing any other function whose Taylor series decays rapidly. – Qiaochu Yuan Nov 28 '10 at 3:44

## 5 Answers

To resolve the second question, note first that the Legendre polynomials are odd functions for odd order (0 then is one root of the polynomial), and even functions for even order. Thus, with regards to solubility in terms of radicals, you should be able to derive (possibly complicated!) radical expressions at least up until $P_9(x)$. To use that as an example, note that $$\frac{P_9(\sqrt{x})}{\sqrt{x}}$$ is a quartic; thus, one can use the quartic formula to derive explicit expressions for its roots, and then you can easily derive the roots of $P_9(x)$.

$P_{10}(x)$ is where your trouble starts. If we take a look at the polynomial $$P_{10}(\sqrt{x})$$ we have a quintic to contend with. I'll skip the relatively tedious details, but you can verify that its Galois group is not a solvable group, and thus the solution cannot be expressed in terms of radicals (you can use theta or hypergeometric functions, though). So, not much hope on the symbolic front.

On the numeric front, things are much easier. The slickest way of getting accurate values of the roots of the Legendre polynomial is to use the Jacobi matrix in my previous answer. Since there exist stable and efficient algorithms (e.g. QR algorithm or divide-and-conquer) for the symmetric eigenproblem (in LAPACK, for instance), and things can be set such that only eigenvalues are returned, you have a good way of generating good approximate values of Legendre polynomial roots. (In the context of Gaussian quadrature, where the roots of orthogonal polynomials play a pivotal role, the scheme is referred to as the Golub-Welsch algorithm.) Alternatively, as I mentioned in the comments, there exist asymptotic approximations for the roots, which can then be subsequently polished with a few applications of Newton-Raphson.
One such asymptotic approximation is due to Francesco Tricomi. Letting $\xi_{n,k}$ be the $k$-th root of $P_n(x)$, ordered in decreasing order, we have $$\xi_{n,k}\approx\left(1-\frac1{8n^2}+\frac1{8n^3}\right)\cos\left(\pi\frac{4k-1}{4n+2}\right)$$ where terms of order $O(n^{-4})$ and beyond are omitted. Other asymptotic approximations due to Luigi Gatteschi use roots of Bessel functions, but I won't say more about those.

-

I think there is a simpler proof that the roots are simple. The Legendre polynomial $P_n(x)$ satisfies the differential equation $$(1-x^2) y'' - 2x y' + n(n+1) y = 0$$ Note that we scale the polynomials so that $P_n(1) = 1$, so if $\alpha$ is a root, then $\alpha \neq 1$; likewise $P_n(-1) = (-1)^n$, so $\alpha \neq -1$. Suppose $\alpha$ is a root of multiplicity $> 1$. Then we must have that $P_n(\alpha) = P_n'(\alpha) = 0$, and the above equation implies that $P_n''(\alpha) = 0$. By induction, by differentiating the above equation we can show that $$(1-x^2)\, y^{(k+2)} - f_k(x)\, y^{(k+1)} + g_k(x)\, y^{(k)} = 0$$ where $y^{(l)}$ is the $l$-th derivative of $y$. Since $\alpha \neq \pm 1$, we see that $P_n^{(k)}(\alpha) = 0$ for all $k$, and thus the only root of $P_n(x)$ is $\alpha$. I believe it is easy to show that $P_n(x) \neq C(x-\alpha)^n$.

-

I'll answer question 1 only for now, but I might edit this to address the others later. One should note that corresponding to any set of orthogonal polynomials, there exists a symmetric tridiagonal matrix, called a Jacobi matrix, whose characteristic polynomial is the monic (leading coefficient is 1) version of the set of orthogonal polynomials considered. To use the Legendre polynomials as an explicit example, we first note that the monic Legendre polynomials satisfy the following three-term recurrence relation: $$\hat{P}_{n+1}(x)=x \hat{P}_n(x)-\frac{n^2}{4 n^2-1}\hat{P}_{n-1}(x)$$ where $\hat{P}_n(x)=\frac{(n!)^2 2^n}{(2n)!}P_n(x)$ is the monic Legendre polynomial. From this, we can derive an explicit expression for the corresponding Jacobi matrix (here I give the 5-by-5 case): $$\begin{pmatrix}0&\frac{1}{\sqrt{3}}&0&0&0\\\frac{1}{\sqrt{3}}&0&\frac{2}{\sqrt{15}}&0&0\\0&\frac{2}{\sqrt{15}}&0&\frac{3}{\sqrt{35}}&0\\0&0&\frac{3}{\sqrt{35}}&0&\frac{4}{\sqrt{63}}\\0&0&0&\frac{4}{\sqrt{63}}&0\end{pmatrix}$$ (the general pattern is that you have $\frac{n}{\sqrt{4 n^2-1}}$ in the $(n,n+1)$ and $(n+1,n)$ positions, and 0 elsewhere.)

We now note that $\frac{n}{\sqrt{4 n^2-1}}$ can never be 0, and then use the fact that if a symmetric tridiagonal matrix has no zeroes in its sub- or superdiagonal, then all its eigenvalues have multiplicity 1. (A proof of this fact can be found in Beresford Parlett's The Symmetric Eigenvalue Problem.) Thus, all the roots of the Legendre polynomial are simple roots.

A more conventional proof of this fact is on page 27 of Theodore Chihara's An Introduction to Orthogonal Polynomials. Briefly, the argument is that $P_n(x)$ changes sign at least once within $[-1,1]$ (and thus has at least one zero of odd multiplicity within the support interval) since $$\int_{-1}^1 P_n(u)\mathrm du=0$$ Now, the polynomial $$P_n(x)\prod_{j=1}^k(x-\xi_j)$$ where the $\xi_j$ are the distinct zeroes of odd multiplicity within $[-1,1]$, should be greater than or equal to zero within $[-1,1]$, and thus its integral over $[-1,1]$ should be greater than zero.
However, since $$\int_{-1}^1 P_n(u) u^k\mathrm du=0\qquad\text{if}\qquad k < n$$ we have a contradiction, and thus all the roots of the Legendre polynomial are simple (and within the support interval $[-1,1]$).

- If you can obtain a copy of Chihara's book, do so; it's a very good way to get a feel for the subject of orthogonal polynomials. – J. M. Nov 28 '10 at 14:27

The density of the roots of any family of orthogonal polynomials follows from this result: If $\{p_n\}$ is a family of orthogonal polynomials with roots in $[-1,1]$ and $N(a,b,n)$ represents the number of roots of $p_n$ in $[\cos(b),\cos(a)]$ then $$\lim_{n\to \infty} \frac{N(a,b,n)}{n} = \frac{b-a}{\pi}$$

-

The Abramowitz–Stegun Handbook of Mathematical Functions claims on page 787 that all the roots are simple: http://convertit.com/Go/ConvertIt/Reference/AMS55.ASP?Res=150&Page=787

-
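To make the Golub-Welsch idea above concrete, here is a small Python/NumPy sketch (my own illustration, not part of the thread) that builds the Jacobi matrix described in the answers and compares its eigenvalues with Tricomi's asymptotic approximation:

```
import numpy as np

def legendre_roots(n):
    # Roots of P_n as eigenvalues of the symmetric tridiagonal Jacobi matrix
    k = np.arange(1, n)
    beta = k / np.sqrt(4 * k**2 - 1)      # off-diagonal entries k/sqrt(4k^2 - 1)
    J = np.diag(beta, 1) + np.diag(beta, -1)
    return np.sort(np.linalg.eigvalsh(J))

def tricomi_approx(n):
    # Tricomi's asymptotic approximation to the roots, sorted increasingly
    k = np.arange(1, n + 1)
    theta = np.pi * (4 * k - 1) / (4 * n + 2)
    return np.sort((1 - 1 / (8 * n**2) + 1 / (8 * n**3)) * np.cos(theta))

n = 10
print(np.max(np.abs(legendre_roots(n) - tricomi_approx(n))))
# already small at n = 10; a couple of Newton steps would polish it further
```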
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9088682532310486, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/95580/how-to-calculate-the-angle-at-which-a-ship-moves
# How to calculate the angle at which a ship moves?

Knowing the angle of the sail and the angle of the wind, how can you calculate the resulting angle of the boat's movement?

- I don't have time to make this a full answer, but break the velocity down into its components (do you know the velocity)? If not, break it into components using the velocity as a variable. Then, I am not sure how much you are supposed to read into the sail direction - what assumptions are you making? – analysisj Jan 1 '12 at 16:46
If the sail of the boat is at a certain angle, surely the boat will go at a different angle? It's like light refraction. – Deza Jan 1 '12 at 16:51
Its effect should vary based on the shape of the sail... – analysisj Jan 1 '12 at 16:54
It depends whether there is resistance against lateral movement - a keel or centreboard. Also whether the steering mechanism (rudder) can be ignored. And many other factors - e.g. sails are not flat. – Mark Bennet Jan 1 '12 at 16:56
Actually, for a real sailboat, the (first-order) answer is that it moves through the water in the direction of its keel. The relation between wind and sail determines whether it is speeding up or slowing down. For a second order approximation, the wind can also push the keel sideways through the water, but supposedly that effect is deliberately minimized by the hull design -- it needs to be minimal or the boat wouldn't be able to tack. – Henning Makholm Jan 1 '12 at 17:11

## 1 Answer

In a simple model (of a boat in stationary, frictionless water), three directions enter into the motion of a sailboat:

• The direction of its keel, $k$
• The direction of the wind, $w$
• The normal direction of its sail, $s$

Speaking without precision about physical quantities, the force on the sail is the component of the wind normal to the sail. The force on the boat is the component of the force on the sail parallel to the direction of the keel. So the force on the boat is $\langle w,s\rangle \langle s, k\rangle k$. In a more complicated model, one must take into account any currents pushing the boat and the interaction between the boat's hull and the water.

- Nitpick: what you have there is not the "force on the boat", but the force after being partially canceled by sidewards forces on the keel. To this one must add/subtract friction on the hull as it moves through the water. And of course everything needs to be done in a frame of reference that moves with the current. – Henning Makholm Jan 2 '12 at 2:03
Yes, true. I have added a caveat to my answer. – Neal Jan 2 '12 at 18:01
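A minimal numeric rendering of the answer's formula (my own sketch; the unit vectors and angles below are arbitrary example values, not from the question):

```
import numpy as np

def unit(angle_deg):
    # Unit vector in the plane at the given angle from the x-axis
    a = np.radians(angle_deg)
    return np.array([np.cos(a), np.sin(a)])

k = unit(0)    # keel direction: the boat points along +x
w = unit(80)   # wind direction
s = unit(45)   # sail normal direction

# Force on the sail ~ component of the wind normal to the sail;
# force on the boat ~ its component along the keel: <w,s><s,k> k
force_along_keel = np.dot(w, s) * np.dot(s, k) * k
print(force_along_keel)  # positive x-component: the boat is pushed forward
```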
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9403362274169922, "perplexity_flag": "middle"}
http://www.physicsforums.com/showpost.php?p=3295848&postcount=6
Well Grasshopper,

$$\forall x\, P(x) \Leftrightarrow \neg \exists x\, \neg P(x)$$

Which is why you can't prove it. To prove it, something must first exist in the universe of discourse; call it "a". Then

$$a\ \text{exists} \;\wedge\; \forall x\, P(x) \;\Rightarrow\; \exists x\, P(x)$$

is provable. Your proposition is invalid for all models based on the empty set. Or, in other words, your proposition is valid for all universes of discourse except the one universe in which nothing exists. Logic is spooky....
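A tiny brute-force illustration of the point (my own sketch, not part of the post): over every non-empty finite domain, ∀x P(x) → ∃x P(x) holds for every predicate P, but over the empty domain the antecedent is vacuously true while the conclusion is false.

```
from itertools import product

def implication_holds(domain, P):
    forall = all(P[x] for x in domain)  # vacuously True on the empty domain
    exists = any(P[x] for x in domain)  # always False on the empty domain
    return (not forall) or exists       # forall -> exists

for size in range(4):
    domain = list(range(size))
    # Try every predicate P : domain -> {False, True}
    ok = all(
        implication_holds(domain, dict(zip(domain, values)))
        for values in product([False, True], repeat=size)
    )
    print(size, ok)  # False only for the empty domain (size 0)
```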
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306742548942566, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/60064?sort=oldest
## Condition of possibility = Co-Implication

Sorry, but I do not know another place to post this question. Condition of possibility is an important philosophical concept. Naively, this concept could be formally defined this way: $q$ is a condition of possibility of $p$ iff $\neg q$ implies $\neg p$, the latter being equivalent to $p$ implies $q$. When we write $\hookrightarrow$ for is a condition of possibility of and $\rightarrow$ for implies, we get $q \hookrightarrow p$ iff $p \rightarrow q$. So, condition of possibility is something like co-implication.

My question is: While in category theory many concepts and co-concepts are treated as strongly related (= inter-definable) but each in its own right, and while in logic many concepts are treated as strongly related (= inter-definable) but each in its own right: Why wasn't the - philosophically important - concept of condition of possibility found worthy of being named and treated in its own right in (formal) logic?

-

## 4 Answers

You've answered your own question, in a way: if $q \hookrightarrow p$ is equivalent to $p \to q$ then the difference is only a matter of notation. You say that in logic many concepts are treated as strongly related (= inter-definable) but each in its own right, but until Gentzen came along it was common in logic (and still is in some quarters) to try to get away with as few connectives as possible, so that even implication would be defined away and not studied in its own right.

In intuitionistic logic, however, and in non-classical logics generally, it's often not possible to define the usual connectives in terms of a subset of them. In particular, $p \to q$ intuitionistically implies $q \hookrightarrow p$ but not the other way round. I don't know if intuitionistic logicians have studied $\lnot q \to \lnot p$ as a connective in its own right. I do know, though, that I wouldn't call it co-implication -- I would reserve that name for the category-theoretic dual, say $\leftarrow$, of implication, where $p \mapsto p \leftarrow q$ would be the left adjoint to $r \mapsto q \vee r$. Andrzej Filinski studied this in Declarative continuations (LNCS, I forget which volume) and Tristan Crolard in Subtractive logic (Theoretical Computer Science, again I don't have the exact reference to hand).

- In intuitionistic logic, $\neg q\to\neg p$ is equivalent to $p\to\neg\neg q$. So I doubt that it has been studied as a separate connective. – Andreas Blass Mar 30 2011 at 16:41
+1 for the hint to what you would reserve the name of co-implication for. – Hans Stricker Mar 30 2011 at 17:12

This "co-implication" is just ordinary implication where you plugged in different variables. If you take a compound formula $A\to B$, there is no telling whether it is an instance of $p\to q$ or $q\to p$. It is thus rather pointless to treat these two formally as separate connectives, unless you are in an unusual context (such as when you for whatever reason need a name for all $16$ binary Boolean connectives). There is more than one implication connective in e.g. some substructural logics (in particular, non-commutative ones), but these have a different motivation.
- (Maybe I'm missing something, but) $\neg q$ implies $\neg p$ is studied, as the contrapositive of implication (where that is $p$ implies $q$). Contraposition (the inference from $p$ implies $q$ to its contrapositive) is valid in a huge range of logics; I think in general that equivalences like this are studied in terms of validity, not in terms of the individual concepts, because historically the goal of logic has been to formalize reasoning. Contraposition itself is taken for granted because it is so often valid -- you have to really work at it to find a logic that doesn't validate the positive form. (I think some fuzzy logics don't validate it, but they don't even validate modus ponens. The reverse inference is easier to get rid of, e.g. it isn't valid in intuitionistic logic.) (Also, wouldn't "co-implication" be more naturally the negation of implication, not its contrapositive?) - As the other responders noted, you first need to find a formal setting in which what you call "co-implication" is operationally distinguished from ordinary implication. Emil Jeřábek mentioned non-commutative logic, but I think he might have been too quick to dismiss its relevance here. In particular the "right implication" of non-commutative logic (distinguished from "left implication") seems to me to be what you are looking for. Have a look at the "Lambek calculus" (introduced by Lambek in his 1958 article, The mathematics of sentence structure), and then more generally "categorial type logics". Lambek's original motivation was syntax of natural language, but eventually (following a 1983 essay by van Benthem) this idea became part of a general approach to relating natural language syntax and semantics. -
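For the classical case, the equivalence the question and answers turn on is easy to machine-check; a throwaway truth-table sketch (my own addition):

```
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product([False, True], repeat=2):
    # contraposition: p -> q iff not-q -> not-p,
    # i.e. the asker's "q is a condition of possibility of p" reading
    assert implies(p, q) == implies(not q, not p)

print("p -> q and not-q -> not-p agree on all classical valuations")
```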
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9626407027244568, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/8170/list
## Return to Answer

Here is another generalization of planar graphs. Start with a d-dimensional polytope P with n vertices. For every 2-dimensional face F, triangulate F by non-crossing diagonals. So if F has k sides you add (k-3) edges. It is known that the total number of edges you get (including the original edges of the polytope) is at least $dn - {{d+1} \choose {2}}$. A polytope is called "elementary" if equality holds. We can consider the following classes of graphs:

1) E_d = Graphs of elementary d-polytopes and all their subgraphs

2) F_d = Graphs obtained from elementary d-polytopes by triangulating all 2-faces by non-crossing diagonals, and all their subgraphs.

For d=3 both classes are the class of planar graphs. Some properties of planar graphs are known or conjectured to extend.

1) (robustness; conjectured) We can start, instead of with polytopes, with arbitrary polyhedral (d-1)-dimensional pseudomanifolds. But it is conjectured that we will get precisely the same class of graphs.

2) (duality; known) If P is elementary, so is its dual P*.

3) (coloring; conjectured) Graphs in E_d (and perhaps even in F_d) are (d+1)-colorable.
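A sanity check one can run on the edge bound (my own note, not part of the answer): for d = 3 the bound $dn - {{d+1} \choose {2}}$ becomes $3n - 6$, exactly the maximum number of edges of a planar graph on n vertices, matching the statement that the d = 3 case recovers the planar graphs.

```
from math import comb

def edge_bound(d, n):
    # dn - C(d+1, 2): minimum edge count after triangulating all 2-faces
    return d * n - comb(d + 1, 2)

print(edge_bound(3, 10))  # 24 = 3*10 - 6, the maximal planar edge count for n = 10
```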
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944796085357666, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/80754/list
## Return to Answer

It is certainly the case that classifying the minimal subsystems of homeomorphisms of compact 2-manifolds presents profound and fundamental difficulties. This is because some very simple transformations, such as analytic diffeomorphisms of the 2-torus, have extremely rich families of minimal sets.

Let $T \colon X \to X$ be a linear Anosov diffeomorphism of the 2-torus. The topological entropy of $T$ is finite and positive but may be arbitrarily large. If a natural number $k$ is specified, then we may find a linear Anosov diffeomorphism $T$ of the 2-torus $X$ such that the shift transformation on $k$ symbols may be homeomorphically embedded into the dynamical system $(X,T)$ as a compact invariant subset. In particular, every minimal subsystem of the $k$-shift embeds into $(X,T)$ as a minimal subsystem.

This is problematic because the $k$-shift has an enormous number of minimal subsystems, all of which will be inherited by the Anosov system. Indeed, the combinatorial version of the Jewett-Krieger theorem implies that every ergodic measure-preserving transformation of an abstract probability space which has entropy strictly less than $\log k$ may be embedded into the $k$-shift as a uniquely ergodic minimal subsystem. In particular, for a linear Anosov diffeomorphism of the 2-torus with topological entropy large enough, every ergodic measurable dynamical system with measure-theoretic entropy up to some threshold arises as a uniquely ergodic minimal subsystem. This already presents us with an enormous number of minimal subsystems, because for each $h \geq 0$ there exist uncountably many ergodic measurable dynamical systems of entropy $h$ which are not pairwise equivalent.

This is then compounded by the fact that some minimal systems of $(X,T)$ will not arise from such an embedding, the fact that the embedding of the abstract ergodic system into the $k$-shift is in general not unique, the fact that the embedding of the $k$-shift into $(X,T)$ is in general not unique, and the fact that the $k$-shift itself has additional minimal subsystems. Indeed, there is a further theorem due to Denker, Grillenberger and Sigmund which implies that for any finite collection of abstract ergodic transformations all having entropy strictly below $\log k$, we can find a minimal subsystem of the $k$-shift which has an embedded copy of each of these transformations as its only ergodic measures.

On the basis of the above considerations I think that a satisfactory classification of the minimal subsystems of homeomorphisms of the 2-torus is improbable!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9264329671859741, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/160912-function-image.html
# Thread:

1. ## function image?

I have this question and I don't quite know how to approach it:

g : R → R, $g(x) = \frac {2x^2-x}{3x^2+x+1}$

I need to find Img(g) and determine if it is a one-to-one function. Then it asks: what will happen to the image if we reduce the interval to [0,∞)?

Thanks in advance.

2. This question really belongs in Calculus. It is asking for the range of g. How do you normally go about finding the range?
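One standard approach to such range questions (my own sketch, not the poster's): set $y = \frac{2x^2-x}{3x^2+x+1}$, clear the denominator to get $(3y-2)x^2 + (y+1)x + y = 0$, and ask for which $y$ this quadratic in $x$ has a real solution (discriminant $\geq 0$). A quick numeric cross-check:

```
import numpy as np

def g(x):
    return (2 * x**2 - x) / (3 * x**2 + x + 1)  # denominator has no real roots

x = np.linspace(-50, 50, 200_001)
y = g(x)
print(y.min(), y.max())  # approaches -1/11 = -0.0909... and 1

# Discriminant condition: (3y-2)x^2 + (y+1)x + y = 0 has a real root iff
# (y+1)^2 - 4y(3y-2) >= 0, i.e. -11y^2 + 10y + 1 >= 0, i.e. y in [-1/11, 1].
# Also g(0) == g(1/2) == 0, so g is not one-to-one.
```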
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9241829514503479, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Polynomial_time
# Time complexity

In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.[1]:226 The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity. For example, if the time required by an algorithm on all inputs of size n is at most $5n^3 + 3n$, the asymptotic time complexity is $O(n^3)$.

Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform. Thus the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.

Since an algorithm's performance time may vary with different inputs of the same size, one commonly uses the worst-case time complexity of an algorithm, denoted as T(n), which is defined as the maximum amount of time taken on any input of size n. Time complexities are classified by the nature of the function T(n). For instance, an algorithm with T(n) = O(n) is called a linear time algorithm, and an algorithm with $T(n) = O(2^n)$ is said to be an exponential time algorithm.

## Table of common time complexities

Further information: Computational complexity of mathematical operations

The following table summarizes some classes of commonly encountered time complexities. In the table, poly(x) = $x^{O(1)}$, i.e., polynomial in x.

| Name | Complexity class | Running time (T(n)) | Examples of running times | Example algorithms |
| --- | --- | --- | --- | --- |
| constant time | | $O(1)$ | 10 | Determining if an integer (represented in binary) is even or odd |
| inverse Ackermann time | | $O(\alpha(n))$ | | Amortized time per operation using a disjoint set |
| iterated logarithmic time | | $O(\log^* n)$ | | Distributed coloring of cycles |
| log-logarithmic | | $O(\log \log n)$ | | Amortized time per operation using a bounded priority queue[2] |
| logarithmic time | DLOGTIME | $O(\log n)$ | $\log n$, $\log(n^2)$ | Binary search |
| polylogarithmic time | | $\text{poly}(\log n)$ | $(\log n)^2$ | |
| fractional power | | $O(n^c)$ where $0 < c < 1$ | $n^{1/2}$, $n^{2/3}$ | Searching in a kd-tree |
| linear time | | $O(n)$ | $n$ | Finding the smallest item in an unsorted array |
| "n log star n" time | | $O(n \log^* n)$ | | Seidel's polygon triangulation algorithm |
| linearithmic time | | $O(n \log n)$ | $n \log n$, $\log n!$ | Fastest possible comparison sort |
| quadratic time | | $O(n^2)$ | $n^2$ | Bubble sort; insertion sort |
| cubic time | | $O(n^3)$ | $n^3$ | Naive multiplication of two n×n matrices; calculating partial correlation |
| polynomial time | P | $2^{O(\log n)} = \text{poly}(n)$ | $n$, $n \log n$, $n^{10}$ | Karmarkar's algorithm for linear programming; AKS primality test |
| quasi-polynomial time | QP | $2^{\text{poly}(\log n)}$ | $n^{\log \log n}$, $n^{\log n}$ | Best-known $O(\log^2 n)$-approximation algorithm for the directed Steiner tree problem |
| sub-exponential time (first definition) | SUBEXP | $O(2^{n^\varepsilon})$ for all $\varepsilon > 0$ | $O(2^{\log n \log \log n})$ | Assuming complexity theoretic conjectures, BPP is contained in SUBEXP.[3] |
| sub-exponential time (second definition) | | $2^{o(n)}$ | $2^{n^{1/3}}$ | Best-known algorithm for integer factorization and graph isomorphism |
| exponential time (with linear exponent) | E | $2^{O(n)}$ | $1.1^n$, $10^n$ | Solving the traveling salesman problem using dynamic programming |
| factorial time | | $O(n!)$ | $n!$ | Solving the traveling salesman problem via brute-force search |
| exponential time | EXPTIME | $2^{\text{poly}(n)}$ | $2^n$, $2^{n^2}$ | |
| double exponential time | 2-EXPTIME | $2^{2^{\text{poly}(n)}}$ | $2^{2^n}$ | Deciding the truth of a given statement in Presburger arithmetic |

## Constant time

An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) is bounded by a value that does not depend on the size of the input. For example, accessing any single element in an array takes constant time as only one operation has to be performed to locate it. However, finding the minimal value in an unordered array is not a constant time operation as a scan over each element in the array is needed in order to determine the minimal value. Hence it is a linear time operation, taking O(n) time. If the number of elements is known in advance and does not change, however, such an algorithm can still be said to run in constant time.

Despite the name "constant time", the running time does not have to be independent of the problem size, but an upper bound for the running time has to be bounded independently of the problem size. For example, the task "exchange the values of a and b if necessary so that a ≤ b" is called constant time even though the time may depend on whether or not it is already true that a ≤ b. However, there is some constant t such that the time required is always at most t.

Here are some examples of code fragments that run in constant time:

```
int index = 5;
int item = list[index];

if (condition true) then
    perform some operation that runs in constant time
else
    perform some other operation that runs in constant time

for i = 1 to 100
    for j = 1 to 200
        perform some operation that runs in constant time
```

If T(n) is O(c) for some constant value c, this is equivalent to, and stated in standard notation as, T(n) being O(1).

## Logarithmic time

Further information: Logarithmic growth

An algorithm is said to take logarithmic time if T(n) = O(log n). Due to the use of the binary numeral system by computers, the logarithm is frequently base 2 (that is, $\log_2 n$, sometimes written lg n). However, by the change of base equation for logarithms, $\log_a n$ and $\log_b n$ differ only by a constant multiplier, which in big-O notation is discarded; thus O(log n) is the standard notation for logarithmic time algorithms regardless of the base of the logarithm.

Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search. An O(log n) algorithm is considered highly efficient, as the number of operations required grows very slowly as the input size increases.

A very simple example of this type of f(n) is an algorithm that cuts a string in half. It will take O(log n) time (n being the length of the string) since we chop the string in half before each print (we make the assumption that console.log and str.substring run in constant time). This means, in order to increase the number of prints, we have to double the length of the string.

```
// Function to recursively print the right half of a string
var right = function(str){
    var length = str.length;
    // Helper function
    var help = function(index){
        // Recursive case: print the right half
        if(index < length){
            // Print characters from index until the end of the string
            console.log(str.substring(index, length));
            // Recursive call: run help on the right half
            help(Math.ceil((length + index)/2));
        }
        // Base case: do nothing
    }
    help(0);
}
```
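As another concrete companion to this section, here is a short binary search sketch (my own addition, not part of the original article; it assumes the input list is already sorted):

```
def binary_search(items, target):
    # Return an index of target in the sorted list items, or -1 if absent.
    # Each iteration halves the search interval, so the loop runs
    # O(log n) times on a list of length n.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 11, 13], 7))  # -> 3
```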
## Polylogarithmic time

An algorithm is said to run in polylogarithmic time if $T(n) = O((\log n)^k)$ for some constant k. For example, matrix chain ordering can be solved in polylogarithmic time on a Parallel Random Access Machine.[4]

## Sub-linear time

An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) = o(n). In particular this includes algorithms with the time complexities defined above, as well as others such as the $O(n^{1/2})$ Grover's search algorithm.

Typical algorithms that are exact and yet run in sub-linear time use parallel processing (as the $NC^1$ matrix determinant calculation does), non-classical processing (as Grover's search does), or alternatively have guaranteed assumptions on the input structure (as the logarithmic time binary search and many tree maintenance algorithms do). However, languages such as the set of all strings that have a 1-bit indexed by the first log(n) bits may depend on every bit of the input and yet be computable in sub-linear time.

The specific term sublinear time algorithm is usually reserved for algorithms that are unlike the above in that they run over classical serial machine models and are not allowed prior assumptions on the input.[5] They are however allowed to be randomized, and indeed must be randomized for all but the most trivial of tasks. As such an algorithm must provide an answer without reading the entire input, its particulars heavily depend on the access allowed to the input. Usually for an input that is represented as a binary string $b_1,\ldots,b_k$ it is assumed that the algorithm can in time O(1) request and obtain the value of $b_i$ for any i.

Sub-linear time algorithms are typically randomized, and provide only approximate solutions. In fact, the property of a binary string having only zeros (and no ones) can be easily proved not to be decidable by a (non-approximate) sub-linear time algorithm. Sub-linear time algorithms arise naturally in the investigation of property testing.

## Linear time

An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that for large enough input sizes the running time increases linearly with the size of the input. For example, a procedure that adds up all elements of a list requires time proportional to the length of the list. This description is slightly inaccurate, since the running time can significantly deviate from a precise proportionality, especially for small values of n.

Linear time is often viewed as a desirable attribute for an algorithm. Much research has been invested into creating algorithms exhibiting (nearly) linear time or better. This research includes both software and hardware methods. In the case of hardware, some algorithms which, mathematically speaking, can never achieve linear time with standard computation models are able to run in linear time. There are several hardware technologies which exploit parallelism to provide this. An example is content-addressable memory. This concept of linear time is used in string matching algorithms such as the Boyer-Moore algorithm and Ukkonen's algorithm.

## Linearithmic/quasilinear time

A linearithmic function (portmanteau of linear and logarithmic) is a function of the form n · log n (i.e., a product of a linear and a logarithmic term). An algorithm is said to run in linearithmic time if T(n) = O(n log n). Compared to other functions, a linearithmic function is $\omega(n)$, $o(n^{1+\varepsilon})$ for every ε > 0, and $\Theta(n \log n)$. Thus, a linearithmic term grows faster than a linear term but slower than any polynomial in n with exponent strictly greater than 1.
An algorithm is said to run in quasilinear time if $T(n) = O(n \log^k n)$ for some constant k. Quasilinear time algorithms are also $o(n^{1+\varepsilon})$ for every ε > 0, and thus run faster than any polynomial in n with exponent strictly greater than 1.

In many cases, the n · log n running time is simply the result of performing a $\Theta(\log n)$ operation n times. For example, binary tree sort creates a binary tree by inserting each element of the n-sized array one by one. Since the insert operation on a self-balancing binary search tree takes O(log n) time, the entire algorithm takes linearithmic time. Comparison sorts require at least a linearithmic number of comparisons in the worst case because $\log(n!) = \Theta(n \log n)$, by Stirling's approximation. They also frequently arise from the recurrence relation T(n) = 2 T(n/2) + O(n).

Some famous algorithms that run in linearithmic time include:

• Comb sort, in the average and worst case
• Quicksort in the average case
• Heapsort, merge sort, introsort, binary tree sort, smoothsort, patience sorting, etc. in the worst case
• Fast Fourier transforms
• Monge array calculation

## Sub-quadratic time

An algorithm is said to be subquadratic time if $T(n) = o(n^2)$. For example, most naïve comparison-based sorting algorithms are quadratic (e.g. insertion sort), but more advanced algorithms can be found that are subquadratic (e.g. Shell sort). No general-purpose sorts run in linear time, but the change from quadratic to sub-quadratic is of great practical importance.

## Polynomial time

An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm, i.e., $T(n) = O(n^k)$ for some constant k.[1][6] Problems for which a polynomial time algorithm exists belong to the complexity class P, which is central in the field of computational complexity theory. Cobham's thesis states that polynomial time is a synonym for "tractable", "feasible", "efficient", or "fast".[7]

Some examples of polynomial time algorithms:

• The quicksort sorting algorithm on n integers performs at most $An^2$ operations for some constant A. Thus it runs in time $O(n^2)$ and is a polynomial time algorithm.
• All the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) can be done in polynomial time.
• Maximum matchings in graphs can be found in polynomial time.

### Strongly and weakly polynomial time

In some contexts, especially in optimization, one differentiates between strongly polynomial time and weakly polynomial time algorithms. These two concepts are only relevant if the inputs to the algorithms consist of integers.

Strongly polynomial time is defined in the arithmetic model of computation. In this model of computation the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) take a unit time step to perform, regardless of the sizes of the operands. The algorithm runs in strongly polynomial time if:[8]

1. the number of operations in the arithmetic model of computation is bounded by a polynomial in the number of integers in the input instance; and
2. the space used by the algorithm is bounded by a polynomial in the size of the input.

Any algorithm with these two properties can be converted to a polynomial time algorithm by replacing the arithmetic operations by suitable algorithms for performing the arithmetic operations on a Turing machine. If the second of the above requirements is not met, then this is no longer true. Given the integer $2^n$ (which takes up space proportional to n), it is possible to compute $2^{2^n}$ with n multiplications using repeated squaring. However, the space used to represent $2^{2^n}$ is proportional to $2^n$, and thus exponential rather than polynomial in the space used to represent the input. Hence, it is not possible to carry out this computation in polynomial time on a Turing machine, but it is possible to compute it by polynomially many arithmetic operations.
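A small illustration of the paragraph above (my own sketch): only n multiplications are performed, but the operand's bit-length doubles at every step, so the space used grows exponentially in n.

```
def repeated_square(n):
    # Compute 2^(2^n) with n squarings, tracking the operand's bit-length
    x = 2
    for step in range(n):
        x = x * x                        # one arithmetic operation...
        print(step + 1, x.bit_length())  # ...but the operand size doubles
    return x

repeated_square(5)  # 5 multiplications; final bit-length is 2^5 + 1 = 33
```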
Conversely, there are algorithms which run in a number of Turing machine steps bounded by a polynomial in the length of the binary-encoded input, but do not take a number of arithmetic operations bounded by a polynomial in the number of input numbers. The Euclidean algorithm for computing the greatest common divisor of two integers is one example. Given two integers $a$ and $b$, the running time of the algorithm is bounded by $O((\log a + \log b)^2)$ Turing machine steps. This is polynomial in the size of a binary representation of $a$ and $b$, as the size of such a representation is roughly $\log a + \log b$. At the same time, the number of arithmetic operations cannot be bounded by the number of integers in the input (which is constant in this case: there are always only two integers in the input). Due to the latter observation, the algorithm does not run in strongly polynomial time. Its real running time depends on the magnitudes of $a$ and $b$ and not only on the number of integers in the input.

An algorithm which runs in polynomial time but which is not strongly polynomial is said to run in weakly polynomial time.[9] A well-known example of a problem for which a weakly polynomial-time algorithm is known, but which is not known to admit a strongly polynomial-time algorithm, is linear programming. Weakly polynomial-time should not be confused with pseudo-polynomial time.
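A quick sketch of the Euclidean-algorithm point (my own illustration): the number of division steps grows with the magnitudes of the two inputs, even though the input always consists of exactly two integers. Consecutive Fibonacci numbers are the classical worst case:

```
def gcd_steps(a, b):
    # Euclid's algorithm, returning (gcd, number of division steps)
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

fib = [1, 2]
while len(fib) < 40:
    fib.append(fib[-1] + fib[-2])

for i in (10, 20, 39):
    g, steps = gcd_steps(fib[i], fib[i - 1])
    print(fib[i].bit_length(), steps)
# the step count grows with the bit-length of the inputs, so the algorithm
# is weakly, but not strongly, polynomial
```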
### Complexity classes

The concept of polynomial time leads to several complexity classes in computational complexity theory. Some important classes defined using polynomial time are the following.

• P: The complexity class of decision problems that can be solved on a deterministic Turing machine in polynomial time.
• NP: The complexity class of decision problems that can be solved on a non-deterministic Turing machine in polynomial time.
• ZPP: The complexity class of decision problems that can be solved with zero error on a probabilistic Turing machine in polynomial time.
• RP: The complexity class of decision problems that can be solved with 1-sided error on a probabilistic Turing machine in polynomial time.
• BPP: The complexity class of decision problems that can be solved with 2-sided error on a probabilistic Turing machine in polynomial time.
• BQP: The complexity class of decision problems that can be solved with 2-sided error on a quantum Turing machine in polynomial time.

P is the smallest time-complexity class on a deterministic machine which is robust in terms of machine model changes. (For example, a change from a single-tape Turing machine to a multi-tape machine can lead to a quadratic speedup, but any algorithm that runs in polynomial time under one model also does so on the other.) Any given abstract machine will have a complexity class corresponding to the problems which can be solved in polynomial time on that machine.

## Superpolynomial time

An algorithm is said to take superpolynomial time if T(n) is not bounded above by any polynomial. It is $\omega(n^c)$ time for all constants c, where n is the input parameter, typically the number of bits in the input. For example, an algorithm that runs for $2^n$ steps on an input of size n requires superpolynomial time (more specifically, exponential time).

An algorithm that uses exponential resources is clearly superpolynomial, but some algorithms are only very weakly superpolynomial. For example, the Adleman–Pomerance–Rumely primality test runs for $n^{O(\log \log n)}$ time on n-bit inputs; this grows faster than any polynomial for large enough n, but the input size must become impractically large before it cannot be dominated by a polynomial with small degree.

A problem that has been proven to require superpolynomial time cannot be solved in polynomial time, and so is known to lie outside the complexity class P. Cobham's thesis conjectures that these algorithms are impractical, and in many cases they are. Since the P versus NP problem is unresolved, no algorithm for an NP-complete problem is currently known to run in polynomial time.

## Quasi-polynomial time

Quasi-polynomial time algorithms are algorithms which run slower than polynomial time, yet not so slowly as to be exponential time. The worst case running time of a quasi-polynomial time algorithm is $2^{O((\log n)^c)}$ for some fixed c. The best-known classical algorithm for integer factorization, the general number field sieve, which runs in time about $2^{\tilde{O}(n^{1/3})}$, is not quasi-polynomial since the running time cannot be expressed as $2^{O((\log n)^c)}$ for some fixed c. If the constant "c" in the definition of quasi-polynomial time algorithms is equal to 1, we get a polynomial time algorithm, and if it is less than 1, we get a sub-linear time algorithm.

Quasi-polynomial time algorithms typically arise in reductions from an NP-hard problem to another problem. For example, one can take an instance of an NP-hard problem, say 3SAT, and convert it to an instance of another problem B, but the size of the instance becomes $2^{O((\log n)^c)}$. In that case, this reduction does not prove that problem B is NP-hard; this reduction only shows that there is no polynomial time algorithm for B unless there is a quasi-polynomial time algorithm for 3SAT (and thus all of NP). Similarly, there are some problems for which we know quasi-polynomial time algorithms, but no polynomial time algorithm is known. Such problems arise in approximation algorithms; a famous example is the directed Steiner tree problem, for which there is a quasi-polynomial time approximation algorithm achieving an approximation factor of $O(\log^3 n)$ (n being the number of vertices), but showing the existence of such a polynomial time algorithm is an open problem.

The complexity class QP consists of all problems which have quasi-polynomial time algorithms. It can be defined in terms of DTIME as follows.[10]

$\mbox{QP} = \bigcup_{c \in \mathbb{N}} \mbox{DTIME}(2^{(\log n)^c})$

### Relation to NP-complete problems

In complexity theory, the unsolved P versus NP problem asks if all problems in NP have polynomial-time algorithms. All the best-known algorithms for NP-complete problems like 3SAT etc. take exponential time. Indeed, it is conjectured for many natural NP-complete problems that they do not have sub-exponential time algorithms. Here "sub-exponential time" is taken to mean the second definition presented below.
(On the other hand, many graph problems represented in the natural way by adjacency matrices are solvable in subexponential time simply because the size of the input is the square of the number of vertices.) This conjecture (for the k-SAT problem) is known as the exponential time hypothesis.[11] Since it is conjectured that NP-complete problems do not have quasi-polynomial time algorithms, some inapproximability results in the field of approximation algorithms make the assumption that NP-complete problems do not have quasi-polynomial time algorithms. For example, see the known inapproximability results for the set cover problem.

## Sub-exponential time

The term sub-exponential time is used to express that the running time of some algorithm may grow faster than any polynomial but is still significantly smaller than an exponential. In this sense, problems that have sub-exponential time algorithms are somewhat more tractable than those that only have exponential algorithms. The precise definition of "sub-exponential" is not generally agreed upon,[12] and we list the two most widely used ones below.

### First definition

A problem is said to be sub-exponential time solvable if it can be solved in running times whose logarithms grow smaller than any given polynomial. More precisely, a problem is in sub-exponential time if for every ε > 0 there exists an algorithm which solves the problem in time $O(2^{n^\varepsilon})$. The set of all such problems is the complexity class SUBEXP which can be defined in terms of DTIME as follows.[3][13][14][15]

$\text{SUBEXP}=\bigcap_{\varepsilon>0} \text{DTIME}\left(2^{n^\varepsilon}\right)$

Note that this notion of sub-exponential is non-uniform in terms of ε in the sense that ε is not part of the input and each ε may have its own algorithm for the problem.

### Second definition

Some authors define sub-exponential time as running times in $2^{o(n)}$.[11][16][17] This definition allows larger running times than the first definition of sub-exponential time. An example of such a sub-exponential time algorithm is the best-known classical algorithm for integer factorization, the general number field sieve, which runs in time about $2^{\tilde{O}(n^{1/3})}$, where the length of the input is n. Another example is the best-known algorithm for the graph isomorphism problem, which runs in time $2^{O(\sqrt{n \log n})}$.

Note that it makes a difference whether the algorithm is allowed to be sub-exponential in the size of the instance, the number of vertices, or the number of edges. In parameterized complexity, this difference is made explicit by considering pairs $(L,k)$ of decision problems and parameters k. SUBEPT is the class of all parameterized problems that run in time sub-exponential in k and polynomial in the input size n:[18]

$\text{SUBEPT}=\text{DTIME}\left(2^{o(k)} \cdot \text{poly}(n)\right).$

More precisely, SUBEPT is the class of all parameterized problems $(L,k)$ for which there is a computable function $f : \mathbb N\to\mathbb N$ with $f \in o(k)$ and an algorithm that decides L in time $2^{f(k)} \cdot \text{poly}(n)$.

#### Exponential time hypothesis

Main article: Exponential time hypothesis

The exponential time hypothesis (ETH) is that 3SAT, the satisfiability problem of Boolean formulas in conjunctive normal form with at most three literals per clause and with n variables, cannot be solved in time $2^{o(n)}$. More precisely, the hypothesis is that there is some absolute constant c > 0 such that 3SAT cannot be decided in time $2^{cn}$ by any deterministic Turing machine.
With m denoting the number of clauses, ETH is equivalent to the hypothesis that kSAT cannot be solved in time $2^{o(m)}$ for any integer k ≥ 3.[19] The exponential time hypothesis implies P ≠ NP.

## Exponential time

An algorithm is said to be exponential time if T(n) is upper bounded by $2^{\text{poly}(n)}$, where poly(n) is some polynomial in n. More formally, an algorithm is exponential time if T(n) is bounded by $O(2^{n^k})$ for some constant k. Problems which admit exponential time algorithms on a deterministic Turing machine form the complexity class known as EXP.

$\text{EXP} = \bigcup_{c \in \mathbb{N}} \text{DTIME}\left(2^{n^c}\right)$

Sometimes, exponential time is used to refer to algorithms that have $T(n) = 2^{O(n)}$, where the exponent is at most a linear function of n. This gives rise to the complexity class E.

$\text{E} = \bigcup_{c \in \mathbb{N}} \text{DTIME}\left(2^{cn}\right)$

## Double exponential time

An algorithm is said to be double exponential time if T(n) is upper bounded by $2^{2^{\text{poly}(n)}}$, where poly(n) is some polynomial in n. Such algorithms belong to the complexity class 2-EXPTIME.

$\mbox{2-EXPTIME} = \bigcup_{c \in \mathbb{N}} \mbox{DTIME}(2^{2^{n^c}})$

Well-known double exponential time algorithms include:

• Decision procedures for Presburger arithmetic
• Computing a Gröbner basis (in the worst case)
• Quantifier elimination on real closed fields, which requires at least doubly exponential time in the worst case

## References

1. ^ a b Sipser, Michael (2006). Introduction to the Theory of Computation. Course Technology Inc. ISBN 0-619-21764-2.
2. Mehlhorn, Kurt; Näher, Stefan (1990). "Bounded ordered dictionaries in O(log log N) time and O(n) space". Information Processing Letters 35 (4): 183. doi:10.1016/0020-0190(90)90022-P.
3. ^ a b Babai, László; Fortnow, Lance; Nisan, Noam; Wigderson, Avi (1993). "BPP has subexponential time simulations unless EXPTIME has publishable proofs". Computational Complexity (Berlin, New York: Springer-Verlag) 3 (4): 307–318. doi:10.1007/BF01275486.
4. Bradford, Phillip G.; Rawlins, Gregory J. E.; Shannon, Gregory E. (1998). "Efficient Matrix Chain Ordering in Polylog Time". SIAM Journal on Computing (Philadelphia: Society for Industrial and Applied Mathematics) 27 (2): 466–490. doi:10.1137/S0097539794270698. ISSN 1095-7111.
5. Kumar, Ravi; Rubinfeld, Ronitt (2003). "Sublinear time algorithms". SIGACT News 34 (4): 57–67.
6. Papadimitriou, Christos H. (1994). Computational Complexity. Reading, Mass.: Addison-Wesley. ISBN 0-201-53082-1.
7. Cobham, Alan (1965). "The intrinsic computational difficulty of functions". Proc. Logic, Methodology, and Philosophy of Science II. North Holland.
8. Grötschel, Martin; Lovász, László; Schrijver, Alexander (1988). "Complexity, Oracles, and Numerical Computation". Geometric Algorithms and Combinatorial Optimization. Springer. ISBN 0-387-13624-X.
9. Schrijver, Alexander (2003). "Preliminaries on algorithms and complexity". Combinatorial Optimization: Polyhedra and Efficiency 1. Springer. ISBN 3-540-44389-4.
10. ^ a b Impagliazzo, R.; Paturi, R. (2001). "On the complexity of k-SAT". Journal of Computer and System Sciences (Elsevier) 62 (2): 367–375. doi:10.1006/jcss.2000.1727. ISSN 1090-2724.
11. Aaronson, Scott (5 April 2009). "A not-quite-exponential dilemma". Shtetl-Optimized. Retrieved 2 December 2009.
12. Moser, P. (2003). "Baire's categories on small complexity classes". Lecture Notes in Computer Science (Berlin, New York: Springer-Verlag): 333–342. ISSN 0302-9743.
13. Miltersen, P.B. (2001). "Derandomizing complexity classes".
Handbook of Randomized Computing (Kluwer Academic Publishers): 843.
14. Kuperberg, Greg (2005). "A subexponential-time quantum algorithm for the dihedral hidden subgroup problem". SIAM Journal on Computing (Philadelphia: Society for Industrial and Applied Mathematics) 35 (1): 188. ISSN 1095-7111.
15. Regev, Oded (2004). "A subexponential time algorithm for the dihedral hidden subgroup problem with polynomial space". arXiv:quant-ph/0406151v1 [quant-ph].
16. Flum, Jörg; Grohe, Martin (2006). Parameterized Complexity Theory. Springer. p. 417. ISBN 978-3-540-29952-3. Retrieved 2010-03-05.
17. Impagliazzo, R.; Paturi, R.; Zane, F. (2001). "Which problems have strongly exponential complexity?". Journal of Computer and System Sciences 63 (4): 512–530. doi:10.1006/jcss.2001.1774.
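To make the growth classes surveyed above concrete, it can help to evaluate one representative function from each class at a few input sizes. The following Python sketch is illustrative only: the particular exponents (a cubic polynomial, quasi-polynomial with c = 2, sub-exponential with ε = 1/2) are arbitrary choices, not canonical ones.

```python
import math

# One representative growth function per class; exponents are arbitrary choices.
def polynomial(n):        # n^3, a typical polynomial bound
    return n ** 3

def quasi_polynomial(n):  # 2^((log2 n)^2), quasi-polynomial with c = 2
    return 2 ** (math.log2(n) ** 2)

def sub_exponential(n):   # 2^(n^(1/2)), the first definition with eps = 1/2
    return 2 ** (n ** 0.5)

def exponential(n):       # 2^n
    return 2 ** n

# Note: 2^((log2 n)^2) exceeds 2^sqrt(n) for all n below 2^16 even though it
# is asymptotically smaller; constants matter at practical input sizes.
for n in [16, 64, 256]:
    print(f"n={n:4d}  poly={polynomial(n):.3g}  quasi={quasi_polynomial(n):.3g}  "
          f"subexp={sub_exponential(n):.3g}  exp={exponential(n):.3g}")
```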
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 31, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8853045105934143, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Enumeration
# Enumeration

For enumeration types in programming languages, see enumerated type.

An enumeration of a collection of items is a complete, ordered listing of all of the items in that collection. The term is commonly used in mathematics and theoretical computer science to refer to a listing of all of the elements of a set. In statistics the term categorical variable is used rather than enumeration. The precise requirements for an enumeration (for example, whether the set must be finite, or whether the list is allowed to contain repetitions) depend on the branch of mathematics and the context in which one is working.

Some sets can be enumerated by means of a natural ordering (such as 1, 2, 3, 4, ... for the set of positive integers), but in other cases it may be necessary to impose a (perhaps arbitrary) ordering. In some contexts, such as enumerative combinatorics, the term enumeration is used more in the sense of counting – with emphasis on determination of the number of elements that a set contains, rather than the production of an explicit listing of those elements.

## Enumeration in combinatorics

Main article: Enumerative combinatorics

In combinatorics, enumeration means counting, i.e., determining the exact number of elements of finite sets, usually grouped into infinite families, such as the family of sets each consisting of all permutations of some finite set. There are flourishing subareas in many branches of mathematics concerned with enumerating in this sense objects of special kinds. For instance, in partition enumeration and graph enumeration the objective is to count partitions or graphs that meet certain conditions.

## Enumeration in set theory

In set theory, the notion of enumeration has a broader sense, and does not require the set being enumerated to be finite.

### Enumeration as listing

When an enumeration is used in an ordered list context, we impose some sort of ordering structure requirement on the index set. While we can make the requirements on the ordering quite lax in order to allow for great generality, the most natural and common prerequisite is that the index set be well-ordered. According to this characterization, an ordered enumeration is defined to be a surjection with a well-ordered domain. This definition is natural in the sense that a given well-ordering on the index set provides a unique way to list the next element given a partial enumeration.

### Enumeration in countable vs. uncountable context

The most common use of enumeration in set theory occurs in the context where infinite sets are separated into those that are countable and those that are not. In this case, an enumeration is simply a surjection with domain ω, the ordinal of the natural numbers. This definition can also be stated as follows:

• As a surjective mapping from $\mathbb{N}$ (the natural numbers) to S (i.e., every element of S is the image of at least one natural number). This definition is especially suitable to questions of computability and elementary set theory.

We may also define it differently when working with finite sets. In this case an enumeration may be defined as follows:

• As a bijective mapping from S to an initial segment of the natural numbers. This definition is especially suitable to combinatorial questions and finite sets; then the initial segment is {1,2,...,n} for some n which is the cardinality of S.
In the first definition it varies whether the mapping is also required to be injective (i.e., every element of S is the image of exactly one natural number), and/or allowed to be partial (i.e., the mapping is defined only for some natural numbers). In some applications (especially those concerned with computability of the set S), these differences are of little importance, because one is concerned only with the mere existence of some enumeration, and an enumeration according to a liberal definition will generally imply that enumerations satisfying stricter requirements also exist. Enumeration of finite sets obviously requires that either non-injectivity or partiality is accepted, and in contexts where finite sets may appear one or both of these are inevitably present.

#### Examples

• The natural numbers are enumerable by the function f(x) = x. In this case $f: \mathbb{N} \to \mathbb{N}$ is simply the identity function.

• $\mathbb{Z}$, the set of integers, is enumerable by

$f(x):= \begin{cases} -(x+1)/2, & \mbox{if } x \mbox{ is odd} \\ x/2, & \mbox{if } x \mbox{ is even}. \end{cases}$

$f: \mathbb{N} \to \mathbb{Z}$ is a bijection since every natural number corresponds to exactly one integer. The following table gives the first few values of this enumeration:

| x    | 0 | 1  | 2 | 3  | 4 | 5  | 6 | 7  | 8 |
|------|---|----|---|----|---|----|---|----|---|
| ƒ(x) | 0 | −1 | 1 | −2 | 2 | −3 | 3 | −4 | 4 |

• All finite sets are enumerable. Let S be a finite set with n elements and let K = {1,2,...,n}. Select any element s in S and assign ƒ(n) = s. Now set S' = S − {s} (where − denotes set difference). Select any element s' ∈ S' and assign ƒ(n − 1) = s'. Continue this process until all elements of the set have been assigned a natural number. Then $f: \{1,2,\dots,n\} \to S$ is an enumeration of S.

• The real numbers have no countable enumeration as proved by Cantor's diagonal argument and Cantor's first uncountability proof.

#### Properties

• There exists an enumeration for a set (in this sense) if and only if the set is countable.

• If a set is enumerable it will have an uncountable infinity of different enumerations, except in the degenerate cases of the empty set or (depending on the precise definition) sets with one element. However, if one requires enumerations to be injective and allows only a limited form of partiality such that if ƒ(n) is defined then ƒ(m) must be defined for all m < n, then a finite set of N elements has exactly N! enumerations.

• An enumeration e of a set S with domain $\mathbb{N}$ induces a well-order ≤ on that set defined by s ≤ t if and only if $\min e^{-1}(s) \le \min e^{-1}(t)$. Although the order may have little to do with the underlying set, it is useful when some order of the set is necessary.

### Ordinal enumeration

In set theory, there is a more general notion of an enumeration than the characterization requiring the domain of the listing function to be an initial segment of the natural numbers, where the domain of the enumerating function can assume any ordinal. Under this definition, an enumeration of a set S is any surjection from an ordinal α onto S. The more restrictive version of enumeration mentioned before is the special case where α is a finite ordinal or the first limit ordinal ω. This more generalized version extends the aforementioned definition to encompass transfinite listings. Under this definition, the first uncountable ordinal $\omega_1$ can be enumerated by the identity function on $\omega_1$, so that these two notions do not coincide.
More generally, it is a theorem of ZF that any well-ordered set can be enumerated under this characterization, and the resulting enumeration coincides, up to relabeling, with the generalized listing enumeration. If one also assumes the Axiom of Choice, then every set can be enumerated in this way, so that this notion coincides, up to relabeling, with the most general form of enumeration. Since set theorists work with infinite sets of arbitrarily large cardinalities, the default definition among this group of mathematicians of an enumeration of a set tends to be any arbitrary α-sequence exactly listing all of its elements. Indeed, in Jech's book, which is a common reference for set theorists, an enumeration is defined to be exactly this. Therefore, in order to avoid ambiguity, one may use the term finitely enumerable or denumerable to denote one of the corresponding types of distinguished countable enumerations.

### Enumeration as comparison of cardinalities

Formally, the most inclusive definition of an enumeration of a set S is any surjection from an arbitrary index set I onto S. In this broad context, every set S can be trivially enumerated by the identity function from S onto itself. If one does not assume the axiom of choice or one of its variants, S need not have any well-ordering. Even if one does assume the axiom of choice, S need not have any natural well-ordering. This general definition therefore lends itself to a counting notion where we are interested in "how many" rather than "in what order." In practice, this broad meaning of enumeration is often used to compare the relative sizes or cardinalities of different sets. If one works in Zermelo-Fraenkel set theory without the axiom of choice, one may want to impose the additional restriction that an enumeration must also be injective (without repetition), since in this theory the existence of a surjection from I onto S need not imply the existence of an injection from S into I.

## Enumeration in computability theory

In computability theory one often considers countable enumerations with the added requirement that the mapping from $\mathbb{N}$ to the enumerated set must be computable. The set being enumerated is then called recursively enumerable (or computably enumerable in more contemporary language), referring to the use of recursion theory in formalizations of what it means for the map to be computable. In this sense, a subset of the natural numbers is computably enumerable if it is the range of a computable function. In this context, enumerable may be used to mean computably enumerable. However, these definitions characterize distinct classes since there are uncountably many subsets of the natural numbers that can be enumerated by an arbitrary function with domain ω and only countably many computable functions. A specific example of a set with an enumeration but not a computable enumeration is the complement of the halting set.

Furthermore, this characterization illustrates a place where the ordering of the listing is important. There exists a computable enumeration of the halting set, but not one that lists the elements in an increasing ordering. If there were one, then the halting set would be decidable, which is provably false. In general, being recursively enumerable is a weaker condition than being a decidable set.

## References

• Jech, Thomas (2002). Set theory, third millennium edition (revised and expanded). Springer. ISBN 3-540-44085-2.
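As a small computational appendix to the examples section above: the integer enumeration given there is easy to check mechanically. Here is a minimal Python sketch of that bijection; the inverse map is my own addition (not part of the article) and witnesses that f is a bijection.

```python
def f(x: int) -> int:
    """The article's enumeration of the integers, f: N -> Z."""
    return -(x + 1) // 2 if x % 2 == 1 else x // 2

def f_inverse(z: int) -> int:
    """Inverse map Z -> N (my addition), witnessing that f is a bijection."""
    return 2 * z if z >= 0 else -2 * z - 1

# Reproduce the table from the article and check bijectivity on a range.
print([f(x) for x in range(9)])  # [0, -1, 1, -2, 2, -3, 3, -4, 4]
assert all(f_inverse(f(x)) == x for x in range(1000))
```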
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8941264748573303, "perplexity_flag": "head"}
http://alanrendall.wordpress.com/2012/04/
# Hydrobates

A mathematician thinks aloud

## Archive for April, 2012

### Low throughput biology

April 24, 2012

In modern biology there is a strong tendency to collect huge quantities of data with high throughput techniques. This data is only useful if we have good techniques of analysing it to obtain a better understanding of the biological systems being studied. One approach to doing this is to build mathematical models. An idea which is widespread is that the best models are those which are closest to reality in the sense that they take account of as many effects as possible and use as many measured quantities as possible. Suppose for definiteness that the model is given by a system of ordinary differential equations. Then this idea translates into using systems with many variables and many parameters. There are several problems which may come up. The first is that some parameters have not been measured at all. The second is that those which have been measured are only known with poor accuracy and different parameters have been measured in different biological systems. A third problem is that even if the equations and parameters were known perfectly we would still be faced with the difficult problem of analysing at least some aspects of the qualitative behaviour of solutions of a dynamical system of high dimension. The typical way of getting around this is to put the equations on the computer and calculate the solutions numerically for some initial data. Then we have the problem that we can only do the calculations for a finite number of initial data sets, and it is difficult to know how typical the solutions obtained really are. To have a short name for the kind of model just described I will refer to it as a ‘complex model’.

In view of all these difficulties with complex models it makes sense to complement the above strategy by one which goes in a very different direction. The idea for this alternative approach is to build models which are as simple as possible subject to the condition that they include a biological effect of interest. The hope is then that a detailed analysis of the simple model will generate new and useful ideas for explaining biological phenomena or will give a picture of what is going on which may be crude but is nevertheless helpful in practice, perhaps even more helpful than a complex model. It often happens that in analysing a complex model many of the parameters have to be guessed (perhaps just in an order-of-magnitude way) or estimated by some numerical technique. It is then justified to ask whether adding more variables and corresponding parameters really means adding information.

How can we hope to understand complex models at all? If these were generic dynamical systems with the given number of unknowns and parameters this would be hopeless. Fortunately the dynamical systems arising in biology are far from generic. They have arisen by the action of evolution optimizing certain properties under strong constraints. Given that this is the case it makes sense to try and understand in what ways these systems are special. If key mechanisms can be identified then we can try to isolate them and study them intensively in relatively simple situations.

My intention is not to deny the value of high throughput techniques. What I want to promote is the idea that it is bad if the pursuit of those approaches leads to the neglect of others which may be equally valuable. On a theoretical level this means the use of ‘simple models’ in contrast to ‘complex models’.
There is a corresponding idea on the experimental side which may be even more necessary. This is to focus on the study of certain simple biological systems as a complement to high throughput techniques. This alternative might be called ‘low throughput biology’. It occurred to me that if I had hit on this idea under this name then it might also have been introduced by others. Searching for the phrase with Google I only found a few references, and as far as I could see the phrase was generally associated with a negative connotation. Rather than setting low throughput and high throughput techniques in opposition, like David and Goliath, it would be better to promote cooperation between the two. I have come across one good example of this in the work of Uri Alon and his collaborators on network motifs. This work is well explained in the lectures of Alon on systems biology which are available on YouTube. The idea is to take a large quantity of data (such as the network of all transcription factors of E. coli) and to use statistical analysis to identify qualitative features of the network which make it different from a random network. These features can then be isolated, analysed and, most importantly, understood in an intuitive way.

Posted in mathematical biology | 3 Comments »

### Dynamics of the MAP kinase cascade

April 7, 2012

The MAP kinase cascade is a group of enzymes which can iteratively add phosphate groups to each other. More specifically, when a suitable number of phosphate groups have been added to one enzyme in the cascade it becomes activated and can add a phosphate to the next enzyme in the row. I found this kind of idea of enzymes modifying each other, with the main purpose of activating each other, fascinating when I first came across it. (The first example I saw was actually the complement cascade which occurs in immunology.) This type of structure is just asking to be modelled mathematically and not surprisingly a lot of work has been done on it. Here I will survey some of what is known.

The MAP kinase cascade is a structure which occurs in many types of cells. It has three layers. The first layer consists of a protein which can be phosphorylated once. The second layer consists of a protein which can be phosphorylated twice by the same enzyme. This enzyme is the phosphorylated form of the protein in the first layer. The third layer also consists of a protein which can be phosphorylated twice by the same enzyme. This enzyme is the doubly phosphorylated form of the protein in the second layer. The protein in the third layer is the one which is called MAP kinase (mitogen activated protein kinase, MAPK). A kinase is an enzyme which phosphorylates something else, and so it is not surprising that the protein in the second layer is called a MAP kinase kinase (MAPKK). The protein in the first layer is accordingly called a MAP kinase kinase kinase (MAPKKK). The roles of the players in this scheme can be taken by different enzymes. For concreteness I name those which occur in the case of human T cells. There the proteins in the first, second and third layers are called Raf, MEK and ERK, respectively. The protein which phosphorylates Raf, and hence starts the whole cascade, is Ras. It, or rather the corresponding gene ras, is famous as an oncogene. This means that when the gene is not working properly cancer can result. In fact many drugs used in cancer treatment target proteins belonging to the MAP kinase cascade.
A model for the MAP kinase cascade was written down by Huang and Ferrell (PNAS, 93, 10078). They used a description of Michaelis-Menten type where for each basic substance three species are included in the network. These are the substance itself (free substrate), the enzyme and the complex of the two. Of course since in the MAP kinase cascade certain proteins act both as substrate and enzyme in different reactions there is some overlap between these. For clarity this may be called the ‘extended Michaelis-Menten’ description to contrast it with the ‘effective Michaelis-Menten’ description arising from the extended version by a quasi-steady state limiting process. Note that for a given basic reaction network with $m$ species the extended MM description has more than $m$ species but still has mass-action kinetics, whereas the effective MM description has $m$ species but kinetics more complicated than mass action. Phosphatases catalysing the reverse reactions are also included in the model. The phosphatase which removes both phosphate groups of ERK is called MKP3. In the paper the steady states of the model are studied and an input-output relation is computed numerically. The activity of the MAPK is plotted as a function of the concentration of the first enzyme (Ras in the example). A sigmoidal curve is found, which corresponds to what is called ultrasensitivity. The dynamical properties of the model are not discussed. In particular it is not discussed whether there might be multistability (more than one stable stationary solution for fixed values of the parameters) or periodic solutions. The authors also did experiments whose results agreed well with the theoretical predictions. The experiments were done with extracts from the oocytes (immature egg cells) of the frog Xenopus laevis.

The possible dynamic behaviour was investigated in later papers. In some of these the effect of adding an additional feedback was considered. This kind of feedback is probably important in real biological systems. It may, for instance, explain why the results of experiments on whole oocytes are different from those done with extracts. Here, for mathematical simplicity, I will restrict to the case without additional feedback, in other words to the original Huang-Ferrell model. Multistability in this type of model was found in a paper of Markevich, Hoek and Kholodenko (J. Cell Biol. 164, 353). They investigate both extended and effective MM dynamics numerically and find bistability for both. In the extended MM model, which is the one I am most interested in here, the phosphorylation is supposed to be distributive. In other words, the kinase is released between the two phosphorylation steps. The alternative to this is called processive phosphorylation. In a paper of Conradi et al. this result is compared with chemical reaction network theory (CRNT). It is found that while techniques from CRNT yield results agreeing with those of Markevich et al. for the case where both the kinase and the phosphatase act in a distributive way, if one of these is replaced by a processive mechanism it can be proved using the Deficiency One Algorithm of CRNT that there is no multistationarity. The case with distributive phosphorylation is the special case $n=2$ of what is called a multiple futile cycle with $n$ steps. Wang and Sontag (J. Math Biol. 57, 29) proved upper and lower bounds for the number of steady states in this type of system under certain assumptions on the parameters.
In particular this confirms that there can be three steady states (without determining their stability). Going beyond the single layer to the full cascade opens up more possibilities. Numerical evidence has been presented by Qiao et al. (PLoS Comp. Biol. 9, 2007) that there are periodic solutions. To understand why these should exist it might be best to think of them as relaxation oscillations.

Posted in dynamical systems, mathematical biology | 2 Comments »
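As an illustrative appendix to this post: the ‘extended Michaelis-Menten’ description can be made concrete with a toy single-layer phosphorylation/dephosphorylation cycle. The Python sketch below is not the Huang-Ferrell cascade; the substrate, kinase, phosphatase and both enzyme-substrate complexes are explicit species with pure mass-action kinetics, and every rate constant is an invented illustrative value.

```python
from scipy.integrate import solve_ivp

k_on, k_off, k_cat = 1.0, 0.5, 2.0   # S + E <-> SE -> Sp + E (kinase side)
l_on, l_off, l_cat = 1.0, 0.5, 1.0   # Sp + F <-> SpF -> S + F (phosphatase side)

def rhs(t, y):
    S, E, SE, Sp, F, SpF = y
    v1 = k_on * S * E - k_off * SE     # net binding of kinase
    v2 = k_cat * SE                    # phosphorylation
    v3 = l_on * Sp * F - l_off * SpF   # net binding of phosphatase
    v4 = l_cat * SpF                   # dephosphorylation
    return [-v1 + v4, -v1 + v2, v1 - v2, v2 - v3, -v3 + v4, v3 - v4]

y0 = [1.0, 0.2, 0.0, 0.0, 0.2, 0.0]   # arbitrary initial concentrations
sol = solve_ivp(rhs, (0.0, 50.0), y0)

total_substrate = y0[0] + y0[2] + y0[3] + y0[5]  # conserved quantity
print("fraction phosphorylated at t=50:", sol.y[3, -1] / total_substrate)
```

Note how the conservation laws (total substrate, total kinase, total phosphatase) are built into the right-hand side: each row sums the fluxes so that the conserved combinations have zero time derivative.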
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9634242653846741, "perplexity_flag": "head"}
http://cms.math.ca/Events/winter12/res/ac
2012 CMS Winter Meeting

Fairmont Queen Elizabeth (Montreal), December 7 - 10, 2012

Algebraic Combinatorics
Org: Christophe Hohlweg and Franco Saliola (UQAM) [PDF]

DREW ARMSTRONG, University of Miami
Rational Catalan Combinatorics [PDF]

In this talk I will define the Catalan number ${\rm Cat}(x)$ corresponding to a rational number $x\in\mathbb{Q}$ outside the interval $[-1,0]$. It satisfies the symmetry ${\rm Cat}(x)={\rm Cat}(-x-1)$. Then I will define the "derived" Catalan number $${\rm Cat}'(x):={\rm Cat}(1/(x-1))={\rm Cat}(x/(1-x)).$$ (This is a categorification of the Euclidean algorithm.) It satisfies the symmetry ${\rm Cat}'(x)={\rm Cat}'(1/x)$. I will make the bold assertion that every nice class of Catalan objects has a generalization counted by ${\rm Cat}(x)$. I will provide evidence, in the form of lattice paths, noncrossing partitions and associahedra. In the case of associahedra, the symmetry ${\rm Cat}'(x)={\rm Cat}'(1/x)$ is a topological statement about Alexander duality.

CAROLINA BENEDETTI, York University
Schubert polynomials and $k$-Schur functions [PDF]

In this talk we study operators associated to the graph on dual $k$-Schur functions given by the affine grassmannian order. These operators are analogous to the ones given in the $r$-Bruhat order by Bergeron-Sottile. This allows us to understand combinatorially the multiplication of a Schubert polynomial by a Schur function from the multiplication in the space of dual $k$-Schur functions. (Joint work with Nantel Bergeron.)

CHRIS BERG, UQAM, LaCIM
Strong Schur functions and down operators for the affine nilCoxeter algebra [PDF]

I will introduce the concept of strong Schur functions, first defined by Lam, Lapointe, Morse and Shimozono. Taking the combinatorial concepts from their definition, I will introduce a family of operators on the nilCoxeter algebra which were used in the proofs of several conjectures on strong Schur functions. This is all joint work with Franco Saliola and Luis Serrano.

NANTEL BERGERON, York University
Hopf monoid for supercharacter and NCSym [PDF]

I am combining recent work with Thiem and with Aguiar-Thiem. With Nat Thiem, we have defined a new basis of symmetric functions in noncommutative variables that depends on a parameter $q$. This new basis is natural in the context of (coarser) supercharacter theory of the unipotent upper triangular matrices over a finite field $F_q$. I will introduce this in the setting of Hopf monoids and show how to compute the antipode and primitives as done with Aguiar and Thiem.

CHRISTOPHE REUTENAUER, Université du Québec à Montréal
Constructing bases of finite index subgroups of free groups using Sturmian sequences [PDF]

Joint work with Jean Berstel, Clelia De Felice, Dominique Perrin, Giuseppina Rindone. The Schützenberger theory of bifix codes is extended to subsets of $F$, the set of factors of a Sturmian (or epiSturmian) sequence. It is shown that such a code, if maximal, is the basis of a subgroup of the free group, of index equal to the degree $d$ of the code; $d$ is the number of partial decodings of long words. This result extends considerably the classical fact (Morse-Hedlund) that the number of factors of length $d$ of any Sturmian sequence is $d+1$.

ED RICHMOND, UBC
Coxeter groups, palindromic Poincaré polynomials and triangle group avoidance [PDF]

Let $W$ be a Coxeter group. For any $w\in W,$ let $P_w$ denote its Poincaré polynomial (i.e. the generating function of the principal order ideal of $w$ with respect to length).
If $W$ is the Weyl group of some Kac-Moody group $G,$ then $P_w$ is the usual Poincaré polynomial of the corresponding Schubert variety $X_w.$ In this talk, I will discuss joint work with W. Slofstra on detecting when the sequence of coefficients of a Poincaré polynomial is the same read forwards and backwards (i.e. palindromic). The polynomial $P_w$ satisfies this property precisely when the Schubert variety $X_w$ is rationally smooth. It turns out that this property is easy to detect when the Coxeter group $W$ avoids certain rank 3 parabolic subgroups (triangle groups). One consequence is that, for many Coxeter groups, the number of elements with palindromic Poincaré polynomials is finite. Explicit enumerations and descriptions of these elements are given in special cases.

VIVIEN RIPOLL, UQAM
Limit points of root systems of infinite Coxeter groups [PDF]

Let $W$ be an infinite Coxeter group, and consider the root system constructed from its geometric representation. We study the set $E$ of limit points of the directions of roots. As motivational examples, we describe with pictures in ranks 2, 3, 4 the fractal shape of this limit set $E$. We define a natural geometric action of $W$ on $E$ and explain its properties (transitivity, orbit of a point). We also study the extreme points of the convex hull of $E$ and its relation with the "imaginary cone" of $W$. (Joint works with M. Dyer, Ch. Hohlweg and J.-P. Labbé, arXiv:1112.5415.)

HUGH THOMAS, University of New Brunswick
Lexicographically first subwords in Coxeter groups and quiver representations [PDF]

Fix a reduced word for the longest element $w_0$ of a finite Coxeter group $W$, and then, for each $w\in W$, find the lexicographically first reduced subword for $w$ in the fixed word for $w_0$. Sorting order, introduced by Drew Armstrong in 2007, is inclusion order on these subwords. Lexicographically first reduced subwords also play an important role in total positivity, where they go by the name of "positive distinguished subexpressions": leftmost reduced words in a maximal Grassmannian permutation naturally index cells in the totally positive part of the corresponding Grassmannian. It turns out that for certain reduced words for $w_0$ (those which are $c$-sorting words), including those relevant for total positivity of Grassmannians, the lexicographically first subwords can be described using quiver representations. I will explain this (without assuming prior knowledge of representation theory of quivers). This talk will be based on joint work with Steffen Oppermann and Idun Reiten, arXiv:1205.3268.

STEPHANIE VAN WILLIGENBURG, UBC
Maximal supports and Schur-positivity among connected skew shapes [PDF]

The Schur-positivity order on skew shapes is defined by $B < A$ if the difference of their respective Schur functions is a positive linear combination of Schur functions. It is an open problem to determine those connected skew shapes that are maximal with respect to this ordering. In this talk we see that to determine the maximal connected skew shapes in the Schur-positivity order it is enough to consider a special class of ribbon shapes. We also explicitly determine the support for these ribbon shapes. This is joint work with Peter McNamara.
MIKE ZABROCKI, York University
Another Schur-like basis in the algebra of Non-Commutative Symmetric Functions [PDF]

The algebra of non-commutative symmetric functions (NSym) is the free algebra generated by one element of each degree and it is the dual Hopf algebra of the quasi-symmetric functions (QSym) (see for example: "Noncommutative symmetric functions" by I. Gelfand et al. and "Multipartite P-partitions and inner products of skew Schur functions" by I. Gessel). Since this algebra was first studied, it was believed that the analogue of the Schur symmetric functions in NSym must be the ribbon basis. Recent results however have called that into question, since the dual to the quasi-Schur functions of J. Haglund et al. is 'Schur-like' and its commutative image in the ring of symmetric functions is a Schur function. In this talk I will present another candidate for a 'Schur-like' basis that is based on a determinantal formula and show that it has many interesting properties including a (right) Pieri and Littlewood-Richardson rule, a formula using creation operators, a projection onto the Schur symmetric functions, a Murnaghan-Nakayama rule, a generalization to Hall-Littlewood symmetric functions, etc. These 'Schur-like' bases make us rethink what might be possible in NSym and QSym. This is joint work with Chris Berg, Nantel Bergeron, Franco Saliola and Luis Serrano.
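A numerical aside on the first abstract of the session: rational Catalan numbers are easy to experiment with. The Python sketch below assumes the standard convention ${\rm Cat}(a,b) = \frac{1}{a+b}\binom{a+b}{a}$ for coprime positive integers $a, b$, with the classical Catalan numbers appearing as the case $b = a+1$; how this parametrization lines up with the ${\rm Cat}(x)$ of Armstrong's abstract is my reading, not something stated in the abstract itself.

```python
from math import comb, gcd

def rational_catalan(a: int, b: int) -> int:
    """Cat(a, b) = binomial(a+b, a) / (a+b), assuming gcd(a, b) = 1."""
    assert gcd(a, b) == 1, "a and b must be coprime"
    q, r = divmod(comb(a + b, a), a + b)
    assert r == 0  # exact divisibility holds when a and b are coprime
    return q

# Classical Catalan numbers appear as Cat(n, n+1): 1, 2, 5, 14, 42, 132
print([rational_catalan(n, n + 1) for n in range(1, 7)])
# A genuinely rational example: Cat(3, 5) = C(8, 3) / 8 = 56 / 8
print(rational_catalan(3, 5))  # 7
```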
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 1, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8673628568649292, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/41269/distance-between-two-ranges/41293
# Distance between two ranges

I'm working on a clustering algorithm to group similar objects that are represented by ranges of real numbers. Let's say that I have a group of people who are buying sugar. Each of them defines the minimum and maximum amount he would be willing to buy. What I would like to do is to put in one cluster those people that want to buy a similar amount. For example, one of the result clusters would be for people who need between 3 and 4 kg of sugar, while the other one might be for between 6 and 8 kg.

For this, I have to create a distance function which would tell how similar the requirements of the buyers are, i.e., what is the distance (or difference) between two ranges. What would be the best way to do this?

## 1 Answer

One way to do this would be to consider the Hausdorff distance. Given two subsets $A$, $B$ of a metric space $X$ (for example, two intervals in $\mathbb{R}$) the Hausdorff distance between them is the larger of the two values

$$\sup_{a\in A}\, \inf_{b\in B}\, d(a,b)$$

$$\sup_{b\in B}\, \inf_{a\in A}\, d(a,b)$$

That is, first we consider a point $a \in A$, find the least distance to a point $b\in B$, and maximise this over $A$. Then we do the same thing with the roles of $A$ and $B$ reversed. Finally we take the larger of these two values.

Under the Hausdorff metric, the set of non-empty compact subsets of $X$ is itself made into a metric space (so, for example, all distances are non-negative, if the distance is zero then the sets coincide, and distances satisfy the triangle inequality).

How does this work in your case? Let's write $A=[a_1,a_2]$ and $B=[b_1,b_2]$ and take the special case $a_1<a_2<b_1<b_2$, so that the intervals are non-overlapping and every $a\in A$ is less than every $b\in B$. Then the first expression has the value $b_1-a_1$ and the second has the value $b_2-a_2$, so the distance between the intervals is

$$d(A,B)=\max(b_1-a_1, b_2-a_2)$$

How about if $B$ is contained inside $A$, so we have $a_1<b_1<b_2<a_2$? Then the first quantity is $\max(b_1-a_1, a_2-b_2)$ and the second quantity is $0$, so the distance between the intervals is

$$d(A,B)=\max(b_1-a_1, a_2-b_2)$$

You can proceed through all possible cases in this way. In fact, my conjecture is that for any two such intervals, we always have

$$d(A,B) = \max(|a_1-b_1|, |a_2-b_2|)$$

although my enthusiasm doesn't quite stretch to working through all the cases. Shouldn't be too hard to do it yourself though!
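The conjecture at the end of the answer can at least be stress-tested numerically. The Python sketch below (mine, not from the thread) computes the Hausdorff distance between two closed intervals exactly, using the fact that the distance from a point to an interval is a convex function of the point, so the supremum over an interval is attained at an endpoint; it then compares the result with the conjectured closed form on random cases.

```python
import random

def point_to_interval(p, lo, hi):
    """Distance from point p to the closed interval [lo, hi]."""
    return max(lo - p, p - hi, 0.0)

def hausdorff(A, B):
    """Exact Hausdorff distance between closed intervals A and B.

    dist(a, B) is convex in a, so its supremum over the interval A is
    attained at an endpoint of A; likewise with the roles reversed.
    """
    (a1, a2), (b1, b2) = A, B
    forward = max(point_to_interval(a, b1, b2) for a in (a1, a2))
    backward = max(point_to_interval(b, a1, a2) for b in (b1, b2))
    return max(forward, backward)

# Compare with the conjectured closed form on random interval pairs.
random.seed(0)
for _ in range(10000):
    a1, a2 = sorted(random.uniform(-5, 5) for _ in range(2))
    b1, b2 = sorted(random.uniform(-5, 5) for _ in range(2))
    lhs = hausdorff((a1, a2), (b1, b2))
    rhs = max(abs(a1 - b1), abs(a2 - b2))
    assert abs(lhs - rhs) < 1e-12
print("conjecture confirmed on all sampled interval pairs")
```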
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9603829979896545, "perplexity_flag": "head"}
http://mathoverflow.net/questions/111807/define-a-posterior-probability-of-y-given-x-when-the-model-is-not-probabilistic
Define a posterior probability of y given x when the model is not probabilistic

Suppose we have a very simple online k-means where each new data-point is assigned to its nearest center (the mean is updated incrementally). Each center (cluster) is labelled with the most common label of the data-points assigned to that cluster. In this special configuration: is it possible to compute a sort of "posterior probability"? I.e., can the posterior probability of a class label $y$ given a data-point $x$ ($P(y|x)$) just be $1/\text{distance}(x, m_y)$, where $m_y$ is a center labelled with $y$ which is nearest to $x$?

Try and explain your question to someone who might have some interest, but doesn't already understand everything that you know. Also, maybe this would be better on stats.stackexchange.com – Anthony Quas Nov 8 at 23:56
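One practical caveat with $1/\text{distance}(x, m_y)$ is that it is unnormalized and blows up when $x$ coincides with a center. A common ad hoc alternative, offered here as a hedged suggestion rather than an answer from the thread, is a softmax over negative scaled distances, which at least yields a proper distribution over labels:

```python
import numpy as np

def pseudo_posterior(x, centers, labels, beta=1.0):
    """Heuristic distribution over class labels for a point x.

    centers: (k, d) array-like of cluster centers; labels: length-k labels.
    Softmax of -beta * distance is normalized by construction; beta controls
    how sharply the nearest center dominates. This is an ad hoc choice,
    not a calibrated posterior.
    """
    c = np.asarray(centers, dtype=float)
    d = np.linalg.norm(c - np.asarray(x, dtype=float), axis=1)
    s = -beta * d
    w = np.exp(s - s.max())  # shift by the max for numerical stability
    w /= w.sum()
    post = {}
    for label, weight in zip(labels, w):  # pool weights of same-labelled centers
        post[label] = post.get(label, 0.0) + float(weight)
    return post

centers = [[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]]
print(pseudo_posterior([0.5, 0.5], centers, ["a", "b", "a"], beta=2.0))
```

The temperature beta is a free parameter; as beta grows this recovers hard nearest-center assignment, and nothing here makes the output a calibrated posterior in the Bayesian sense.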
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319308996200562, "perplexity_flag": "middle"}