| column | type | min | max |
|---|---|---|---|
| url | stringlengths | 17 | 172 |
| text | stringlengths | 44 | 1.14M |
| metadata | stringlengths | 820 | 832 |
http://mathoverflow.net/questions/71050?sort=votes
## Can the twin prime problem be solved with a single use of a halting oracle? It occurred to me that if it were possible to determine whether a given program halts, that could be used to answer the twin primes conjecture: A) Write a program which takes input n and then counts upward until it's found n pairs of twin primes B) Write a program which for any input n returns true if A halts and false otherwise C) Write a program which counts upward running B on every n until B returns false D) If C halts, there are finitely many twin primes, otherwise infinite. I was wondering if there was a way to do this without nesting halting problems... ie if you only get one chance to ask whether a program halts, is that sufficient to answer the twin primes conjecture? - The title doesn't match the question. Perhaps you could change the title to something like, "Can the twin prime problem be solved with a single use of a halting oracle?". – S. Carnahan♦ Jul 23 2011 at 6:16 .. and the answer would be yes, to S. Carnahan's rewrite, if you get the code number for the program correct. Gerhard "Ask Me About System Design" Paseman, 2011.07.22 – Gerhard Paseman Jul 23 2011 at 6:24 that is a better name. thanks – dspyz Jul 23 2011 at 7:04 2 I feel like it is worth remarking that there are reasonable strengthenings of the twin prime conjecture which are $\Pi_1$. For example, in the notation of bit.ly/n8AyP4, the statement that $10^{-100}<\frac{2C_2n}{\pi_2(n)\ln(n)^2}<10^{100}$ for all $n$. – Kevin Ventullo Jul 24 2011 at 0:38 ## 3 Answers Note that your program is actually using a lot more than the halting oracle $0'$. It is using $0''$ — the halting oracle for machines using the $0'$ oracle. The oracle $0''$ is capable of deciding any $\Pi_2$ statement (like the twin prime conjecture) with a single definite query. Let's look at the twin prime conjecture in further detail. For any fixed $N$, the $\Pi_1$ statement "there are no twin prime pairs after $N$" can be resolved by a single query to $0'$. Thus, if there are only finitely many twin primes, then there is a single query to $0'$ that will let us know that — the catch is that we don't know which query will give us the answer. Note that we can still get by with finitely many queries to $0'$ by trying all natural numbers $N$ in order until we get a positive answer to the query "there are no twin prime pairs after $N$" (assuming the twin prime conjecture is actually false). To say "there are infinitely many twin primes" is a $\Pi_2$ statement. In general, one cannot positively decide a $\Pi_2$ statement by a single query to $0'$. However, the twin prime conjecture is a very specific $\Pi_2$ statement, so these general case arguments do not necessarily apply. For example, it is conceivable that the existence of infinitely many twin primes is in fact equivalent to the existence of a magic twinmaker, which is a certain $\Pi_1$ property of a natural number. In this case, we could resolve the twin prime conjecture by making a single query to $0'$: we could ask whether "there are no twin prime pairs after $N$" for some suitably chosen $N$, or we could ask whether "$N$ is a magic twinmaker" for some suitably chosen $N$. Again, the catch is that we don't know $N$ and, moreover, we don't even know which of the two questions to ask! However, the situation is not so bad: we could still get by with only finitely many queries to $0'$ without making lucky guesses.
We go through all the natural numbers $N$ in order, in each case asking whether "there are no twin primes after $N$" or whether "$N$ is a magic twinmaker" until we get a positive answer. Since one of the two cases must occur for some $N$, we will eventually get a positive answer. Unfortunately, this magic twinmaker concept is completely made up for the purpose of illustration. It could be that the twin prime conjecture is a generic $\Pi_2$ statement, in which case we cannot expect to decide it positively with a single query to $0'$. - I have no disagreement with the answer of François Dorais, but I have a different take on the problem. Let $S$ be any statement of number theory, such as the Twin prime conjecture, Goldbach's conjecture, etc., of any quantifier complexity. Let us say that $S$ is $ZF$-decidable if $ZF$ either proves $S$, or $ZF$ proves the negation of $S$ (here $ZF$ is Zermelo-Fraenkel set theory). Proposition. Under the assumption that $ZF$ is arithmetically sound (i.e., it proves no false arithmetical sentence), there is a recursive function $f$ such that the truth of any $ZF$-decidable statement $S$ of number theory can be determined by one query "Is $f(S)$ $\in K?$" (where $S$ is identified with its Gödel number, and $K$ is the halting oracle). The above proposition follows immediately from the well-known fact that $K$ is a complete r.e. set; i.e., every recursively enumerable set $X$ is Turing-reducible to $K$; indeed, given such an $X$ there is even a 1-1 recursive function $f$ such that $n\in X$ iff $f(n) \in K$. The $X$ at work here is the set of (Gödel numbers of) theorems of $ZF$. Therefore, if the twin prime conjecture is decidable within $ZF$, and $ZF$ is arithmetically sound, then its truth-value can be determined by a single query to the halting oracle. Two closing comments are in order: (1) It is well-known that if a statement $S$ of number theory is $ZFC$-decidable (where $ZFC$ is $ZF$ plus the axiom of choice), then $S$ is $ZF$-decidable. The proof is nontrivial and makes a detour through Gödel's constructible universe, and absoluteness considerations (this is due to Kreisel; according to McIntyre, it was surprisingly missed by Gödel himself). (2) There is nothing special about $ZF$ here: the above proposition holds for any axiomatic system $T$ with a recursive set of axioms, including those weaker than $ZF$, such as $PA$ (Peano arithmetic) or stronger than $ZF$, e.g., $ZF$ with "large cardinals". - I just saw according to this thread that it's an open problem: http://boards.straightdope.com/sdmb/archive/index.php/t-569801.html - 1 @GH: I am afraid you are confusing the fact that there is no automatic way to solve the halting problem (for all computer programs), with the fact that perhaps we can prove that a particular program halts. The question talks about one particular computer program, for which it is unknown whether it halts or not; indeed this particular program halts iff the twin prime conjecture holds. – boumol Jul 23 2011 at 10:31 1 @GH: One of the ways to formulate this kind of question is as follows: Is an explicit algorithm A using the oracle for the halting problem known such that A answers “Yes” if and only if the twin prime conjecture holds? – Tsuyoshi Ito Jul 23 2011 at 15:03 1 @boumol: I disagree.
The question asks how we can decide the twin prime conjecture with a halting oracle. Actually, within a formal system like ZFC one can reformulate the twin prime conjecture as a single halting problem: the program generates all conclusions of ZFC and stops if the twin prime conjecture is among them. This program halts iff the twin prime conjecture is valid within ZFC. – GH Jul 23 2011 at 19:22 2 @GH "There is no halting oracle" No, this is incorrect. There is a halting oracle. There's just no computable halting oracle. – Sam Alexander Jul 23 2011 at 21:12 1 I dropped the condition “use the oracle only once” by mistake. What I meant to say was that one way to formulate this question rigorously is: Is an explicit algorithm A using the oracle for the halting problem only once known such that A answers “Yes” if and only if the twin prime conjecture holds? Without some kind of formulation, “The twin prime conjecture is (or is not) decidable” does not make much sense. – Tsuyoshi Ito Jul 23 2011 at 21:21
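As a concrete illustration of program A from the question at the top of this thread, here is a small R sketch (an editorial addition, not part of the thread) that counts upward until it has found n twin prime pairs. Program B, which decides whether A halts on every input, is exactly the step that needs a halting oracle and is of course not implementable here.

```r
# Program A from the question: count upward until n twin prime pairs are found.
# (Illustrative sketch only; programs B and C would require a halting oracle.)
is_prime <- function(m) {
  if (m < 2) return(FALSE)
  if (m < 4) return(TRUE)                 # 2 and 3 are prime
  all(m %% 2:floor(sqrt(m)) != 0)
}

program_A <- function(n) {
  found <- 0
  p <- 1
  while (found < n) {
    p <- p + 2                            # scan odd candidates p, test the pair (p, p + 2)
    if (is_prime(p) && is_prime(p + 2)) found <- found + 1
  }
  c(p, p + 2)                             # the n-th twin prime pair encountered
}

program_A(5)   # 29 31 -- A halts for every n iff there are infinitely many twin primes
```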
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9310258626937866, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/29408/what-is-the-expectation-of-a-normal-random-variable-divided-by-uniform-random-va?answertab=votes
# What is the expectation of a normal random variable divided by uniform random variable? I have two random variables: $x = N(0, \sigma^2)$ and $y =U[0, b]$. I need to compute $E(x/(1+y))$. How does one go about doing this? They are independent so the joint pdf is just the product of the two pdfs but can the integral be computed in closed form or is this something that should just be done numerically? - Is this homework? Hint: Try to reason by symmetry… – Neil G May 29 '12 at 17:48 1. No this is not homework. 2. That's true, I should have been more specific: The equation in my model is actually $\frac{1}{1+y}$ so it does not diverge. – Alex May 29 '12 at 17:51 ## 2 Answers From your description (and comments) you're trying to calculate $E \left( \frac{X}{1+Y} \right)$ where $X \sim N(0,\sigma^2)$ and $Y \sim {\rm Uniform}(0,b)$. By independence, $$E \left( \frac{X}{1+Y} \right) = E(X) \cdot E \left( \frac{1}{1+Y} \right)$$ We know $E(X) = 0$, therefore $$E \left( \frac{X}{1+Y} \right) = 0$$ as long as $E \left( \frac{1}{1+Y} \right)$ is finite. We know that $\frac{1}{1+Y}$ is bounded within $\left(\frac{1}{1+b},1 \right)$ with probability 1, therefore its mean also is. - 1 Isn't $Y\sim U(0,b)$ and, therefore, $1/(1+Y)$ bounded by $(1/(1+b),1)$ with probability 1? Because $b>0$, the expectation is finite anyways as you said. – Néstor May 29 '12 at 18:01 1 You're right. I was assuming $U(0,1)$. Will fix that. – Macro May 29 '12 at 18:02 Since it's not homework, then the expectation is zero by symmetry. How could there be an argument suggesting the answer is $k>0$ without a similar argument suggesting $-k$? - I presume you mean the symmetry under $x \to -x$. But you do need to establish that the expectation even exists before you can apply that ... :-). – whuber♦ May 29 '12 at 17:58 1 @whuber: Yes, you're right. I initially wrote it up as the expectation is either zero or doesn't exist, but then he edited his question and it was clear that it did exist. Macro has a nice explicit answer. – Neil G May 29 '12 at 17:59 The problem would have been more interesting if X had a mean different from 0. Then you would have to calculate E(1/(1+y)) – Michael Chernick May 29 '12 at 19:23
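For anyone who wants a quick numerical sanity check of the accepted answer, here is a short R simulation (an editorial addition; the values $\sigma=2$ and $b=3$ are arbitrary):

```r
# Monte Carlo check that E[X/(1+Y)] = 0 when X ~ N(0, sigma^2) and Y ~ U(0, b)
# are independent. The exact value of E[1/(1+Y)] is log(1+b)/b.
set.seed(1)
sigma <- 2; b <- 3; N <- 1e6
x <- rnorm(N, mean = 0, sd = sigma)
y <- runif(N, min = 0, max = b)
mean(x / (1 + y))      # close to 0
mean(1 / (1 + y))      # close to log(1 + b) / b
log(1 + b) / b         # exact value, about 0.462
```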
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9768710732460022, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-math-topics/48664-advanced-calculus-help.html
# Thread: 1. ## Advanced Calculus Help A) For each part, find a function f: R -> R that has the desired properties: neither onto nor one-to-one B) Under what conditions does A\(A\B) = B? C) Define f:J -> N(natural numbers) where f(n) = 2n-1 for each n element of N(natural numbers). D) Given A = {1,2,3,4,5}, B = {2,3,4,5,6,7} and C = {a,b,c,d,e} state an example of f: A ->B, g: B-> C, such that g(f) (the composition of g with f) is 1-1 but g is not 1-1. 2. Originally Posted by algebrapro18 B) Under what conditions does A\(A\B) = B? See #2 of http://www.mathhelpforum.com/math-he...ts-proofs.html by kathrynmath. 3. I read that thread and still can't see the answer. 4. ## Solution for B As we have mentioned previously, $A\backslash B=A\cap B^{c}$, where $B^{c}$ is the complement set of $B$. Noting that $(B^{c})^{c}=B$, and simplifying the given expression, we get $A\backslash(A\backslash B)=A\backslash(A\cap B^{c})=A\cap(A\cap B^{c})^{c}=A\cap(A^{c}\cup B)=\underset{\emptyset}{\underbrace{(A\cap A^{c})}}\cup(A\cap B)=A\cap B.\qquad(*)$ Since (*) is equal to $B$, we infer that $B\subset A$ is true. 5. Thanks, that helps me with part B; now all I need help with is C because I got A and D done by myself. 6. Originally Posted by algebrapro18 Thanks, that helps me with part B; now all I need help with is C because I got A and D done by myself. Can you please explain C a little bit more? What is J or should we find what it is? Also do we still want the function f not to be onto and into again? 7. C) Define f:J -> N(natural numbers) where f(n) = 2n-1 for each n which is an element of N. I need to use what was given (the line above is all that is given) to find: the image of f, if f is 1-1, if f is onto, and if f is onto I need to find its inverse, the domain of the inverse, and the range of the inverse. Here is what I have so far: The Im(f) has to be all odd numbers because that is what you get when you plug numbers into f(n)=2n-1. From there I get stumped. 8. ## Solution for C Originally Posted by algebrapro18 C) Define f:J -> N(natural numbers) where f(n) = 2n-1 for each n element of N(natural numbers). Although the following remark is not applicable for this exercise, I would like to tell it. Remark. Let $f:A\to B$ be a function and $A,B$ be finite sets of the same size: if f is onto, then it is one-to-one; vice versa, if f is one-to-one, then it is onto. If $J\not\subset K:=\Big\{1,\frac{3}{2},2,\frac{5}{2},\ldots\Big\}$, then $f$ cannot be a function with an image which is a subset of $\mathbb{N}$ (pick an element which is not in the set $K$, and see that it is mapped into the set $\mathbb{R}\backslash\mathbb{N}$). Therefore, we must have $J\subset K$. If $J=K$, then $f$ is one-to-one and onto. If $J\neq K$, then $f$ is only one-to-one. Just try to figure it out by yourself by letting $f(n)=2n-1=m$, where $n\in K$ and $m\in\mathbb{N}$. Note that $f$ is strictly increasing, which indicates that it is one-to-one. Then obtain the set $K$ by isolating $n$... 9. In English please... I understood about 1 percent of what you said... 10. ## In English Originally Posted by algebrapro18 If $f$ is a function from $J$ to $\mathbb{N}$ defined by $f(n)=2n-1$, we see that $2n-1\in\mathbb{N}$ for all $n\in J$. This means that for every $n\in J$, there exists $m\in\mathbb{N}$ such that $2n-1=m$ holds. Note that for every $m\in\mathbb{N}$, we may not find $n\in J$. Therefore, we see that $n=\frac{m+1}{2}\text{ for }m\in\mathbb{N}$ holds.
Since $m\in\mathbb{N}$, this indicates that $n\in K:=\Big\{1,\frac{3}{2},2,\frac{5}{2},3,\frac{7}{2},\ldots\Big\}.$ Hence, the domain of $f$ can be picked to be any subset of $K$, i.e. $J\subset K$. Now consider the following possible cases. If $J=K$, then $f$ is one-to-one and onto. If $J\neq K$, then $f$ is only one-to-one. Hint. $f$ is strictly increasing, and it is hence one-to-one. I guess it is clearer now?
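A tiny numerical illustration of the last two posts (an editorial addition, using only a finite piece of the set $K$):

```r
# On K = {1, 3/2, 2, 5/2, ...} the map f(n) = 2n - 1 is one-to-one and its
# images are exactly positive integers; here we check a finite piece of K.
n  <- seq(1, 20, by = 0.5)          # the elements of K up to 20
fn <- 2 * n - 1
all(fn == round(fn) & fn >= 1)      # TRUE: every image is a positive integer
anyDuplicated(fn) == 0              # TRUE: f is one-to-one on this piece
setequal(fn, 1:39)                  # TRUE: onto {1, ..., 39} for this piece of K
```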
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 51, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9351258873939514, "perplexity_flag": "middle"}
http://eventuallyalmosteverywhere.wordpress.com/2012/12/18/exploring-the-supercritical-random-graph/
# Exploring the Supercritical Random Graph Posted on December 18, 2012 by I’ve spent a bit of time this week reading and doing all the exercises from some excellent notes by van der Hofstad about random graphs. I think they are absolutely excellent and would not be surprised if they become the standard text for an introduction to probabilistic combinatorics. You can find them hosted on the author’s website. I’ve been reading chapters 4 and 5, which approaches the properties of phase transitions in G(n,p) by formalising the analogy between component sizes and population sizes in a binomial branching process. When I met this sort of material for the first time during Part III, the proofs generally relied on careful first and second moment bounds, which is fine in many ways, but I enjoyed vdH’s (perhaps more modern?) approach, as it seems to give a more accurate picture of what is actually going on. In this post, I am going to talk about using the branching process picture to explain why the giant component emerges when it does, and how to get a grip on how large it is at any time after it has emerged. Background A quick tour through the background, and in particular the notation will be required. At some point I will write a post about this topic in a more digestible format, but for now I want to move on as quickly as possible. We are looking at the sparse random graph $G(n,\frac{\lambda}{n})$, in the super-critical phase $\lambda>1$. With high probability (that is, with probability tending to 1 as n grows), we have a so-called giant component, with O(n) vertices. Because all the edges in the configuration are independent, we can view the component containing a fixed vertex as a branching process. Given vertex v(1), the number of neighbours is distributed like $\text{Bi}(n-1,\frac{\lambda}{n})$. The number of neighbours of each of these which we haven’t already considered is then $\text{Bi}(n-k,\frac{\lambda}{n})$, conditional on k, the number of vertices we have already discounted. After any finite number of steps, k=o(n), and so it is fairly reasonable to approximate this just by $\text{Bi}(n,\frac{\lambda}{n})$. Furthermore, as n grows, this distribution converges to $\text{Po}(\lambda)$, and so it is natural to expect that the probability that the fixed vertex lies in a giant component is equal to the survival probability $\zeta_\lambda$ (that is, the probability that it is infinite) of a branching process with $\text{Po}(\lambda)$ offspring distribution. Note that given a graph, the probability of a fixed vertex lying in a giant component is equal to the fraction of the vertex in the giant component. At this point it is clear why the emergence of the giant component must happen at $\lambda=1$, because we require $\mathbb{E}\text{Po}(\lambda)>1$ for the survival probability to be non-zero. Obviously, all of this needs to be made precise and rigorous, and this is treated in sections 4.3 and 4.4 of the notes. Exploration Process A common functional of a rooted branching process to consider is the following. This is called in various places an exploration process, a depth-first process or a Lukasiewicz path. We take a depth-first labelling of the tree v(0), v(1), v(2),… , and define c(k) to be the number of children of vertex v(k). We then define the exploration process by: $S(0)=0,\quad S(k+1)=S(k)+c(k)-1.$ By far the best way to think of this is to imagine we are making the depth-first walk on the tree. 
S(k) records how many vertices we have seen (because they are connected by an edge to a vertex we have visited) but have not yet visited. To clarify understanding of the definition, note that when you arrive at a vertex with no children, this should decrease by one, as you can see no new vertices, but have visited an extra one. This exploration process is useful to consider for a couple of reasons. Firstly, you can reconstruct the branching process directly from it. Secondly, while other functionals (eg the height, or contour process) look like random walks, the exploration process genuinely is a random walk. The distribution of the number of children of the next vertex we arrive at is independent of everything we have previously seen in the tree, and is the same for every vertex. If we were looking at branching processes in a different context, we might observe that this gives some information in a suitably-rescaled limit, as rescaled random walks converge to Brownian motion if the variance of the (offspring) distribution is finite. (This is Donsker’s result, which I should write something about soon…) The most important property is that the exploration process returns to 0 precisely when we have exhausted all the vertices in a component. At that point, we have seen exactly the vertices which we have explored. There is no reason not to extend the definition to forests, that is a union of trees. The depth-first exploration is the same – but when we have exhausted one component, we move onto another component, chosen according to some labelling property. Then, running minima of the exploration process (ie times when it is smaller than it has been before) correspond to jumping between components, and thus excursions above the minimum to components themselves. The running minimum will be non-positive, with absolute value equal to the number of components already exhausted. Although the exploration process was defined with reference to and in the language of trees, the result of a branching process, this is not necessary. With some vertex denoted as the root, we can construct a depth-first labelling of a general graph, and the exploration process follows exactly as before. Note that we end up ignoring all edges except a set that forms a forest. This is what we will apply to G(n,p). Exploring G(n,p) When we jump between components in the exploration process on a supercritical (that is $\lambda>1$) random graph, we move to a component chosen randomly with size-biased distribution. If there is a giant component, as we know there is in the supercritical case, then this will dominate the size-biased distribution. Precisely, if the giant component takes up a fraction H of the vertices, then the number of components to be explored before we get to the giant component is geometrically distributed with parameter H. All other components have size O(log n), so the expected number of vertices explored before we get to the giant component is O(log n)/H = o(n), and so in the limit, we explore the giant component immediately. The exploration process therefore gives good control on the giant component in the limit, as roughly speaking the first time it returns to 0 is the size of the giant component. Fortunately, we can also control the distribution of S_t, the exploration process at time t. We have that: $S_t+(t-1)\sim \text{Bi}(n-1,1-(1-p)^t).$ This is not too hard to see. $S_t+(t-1)$ is number of vertices we have explored or seen, ie are connected to a vertex we have explored. 
Suppose the remaining vertices are called unseen, and we began the exploration at vertex 1. Then any vertex with label in {2,…,n} is unseen if it successively avoids being in the neighbourhood of v(1), v(2), … v(t). This happens with probability $(1-p)^t$, and so the probability of being an explored or seen vertex is the complement of this. In the supercritical case, we are taking $p=\frac{\lambda}{n}$ with $\lambda>1$, and we also want to speed up S, so that all the exploration processes are defined on [0,1], and rescale the sizes by n, so that it records the fraction of the graph rather than the number of vertices. So we consider the rescaling $\frac{1}{n}S_{nt}$. It is straightforward to use the distribution of S_t to deduce that the asymptotic mean is $\mathbb{E}\frac{1}{n}S_{nt}=\mu_t = 1-t-e^{-\lambda t}$. Now we are in a position to provide more concrete motivation for the claim that the proportion of vertices in the giant component is $\zeta_\lambda$, the survival probability of a branching process with $\text{Po}(\lambda)$ offspring distribution. It helps to consider instead the extinction probability $1-\zeta_\lambda$. We have: $1-\zeta_\lambda=\sum_{k\geq 0}\mathbb{P}(\text{Po}(\lambda)=k)(1-\zeta_\lambda)^k=e^{-\lambda\zeta_\lambda},$ where the second equality is a consequence of the simple form for the moment generating function of the Poisson distribution. As a result, we have that $\mu_{\zeta_\lambda}=0$. In fact we also have a central limit theorem for S_t, which enables us to deduce that $\frac{1}{n}S_{n\zeta_\lambda}=0$ with high probability, as well as in expectation, which is precisely what is required to prove that the giant component of $G(n,\frac{\lambda}{n})$ has size $n(\zeta_\lambda+o(1))$.
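To make the final claim a little more tangible, here is a rough simulation sketch (my own, not from van der Hofstad's notes): explore the component of a fixed vertex of $G(n,\frac{\lambda}{n})$ by drawing Binomial numbers of new neighbours among the unseen vertices, and compare the large components found with $\zeta_\lambda$ obtained from $1-\zeta_\lambda=e^{-\lambda\zeta_\lambda}$.

```r
# Exploration of the component containing vertex 1 in G(n, lambda/n):
# each explored vertex gains Bi(unseen, p) new neighbours among the unseen vertices.
lambda <- 1.5; n <- 5000; p <- lambda / n

component_fraction <- function() {
  unseen <- n - 1; active <- 1; explored <- 0
  while (active > 0) {
    k <- rbinom(1, unseen, p)   # new neighbours found at this step
    unseen <- unseen - k
    active <- active + k - 1
    explored <- explored + 1
  }
  explored / n
}

sims <- replicate(200, component_fraction())
mean(sims > 0.01)               # fraction of starts that land in a giant component
mean(sims[sims > 0.01])         # relative size of that component
uniroot(function(z) 1 - z - exp(-lambda * z), c(1e-6, 1))$root   # zeta_lambda, about 0.58
```

Both simulated quantities should sit near $\zeta_{1.5}\approx 0.58$: in the limit, the probability that a fixed vertex lies in the giant component and the giant component's relative size coincide.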
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 27, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9304785132408142, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/34587/determining-a-correction-factor-and-applying-it-to-a-second-set-of-numbers
# Determining a correction factor and applying it to a second set of numbers 1st- I would like to figure out the correction factor between two sets of numbers (time series). 2nd- I want to apply that correction factor to a second set of numbers. As an example- say I have two temperature sensors, and I place them in the same exact environment. Even though they should be reading the same temperature, I can expect a little bit of offset due to error between the sensors. Something like:

````
Sensor 1   Sensor 2
10         10
10         9
11         11
12         11.8
13         12.9
14         13.9
````

It looks like Sensor 2 consistently reads a little low (or conversely that Sensor 1 reads a little high). Since I don't know the actual temperature this is a relative problem. To start, I'm not sure about the best way to figure out the correction factor. It's pretty easy to run a regression analysis or a correlation analysis by plotting an x y scatter plot and calculating a best fit line. But, I'm not sure if this is the way to go. But the real question is: once I figure out this correction factor, how do I apply it to a second set of numbers in a way that will reflect the error associated with the first set of numbers (basically that sensor 2 is reading slightly lower than sensor 1)? To continue the above example, say I take the same two sensors, but now place them in different environments where they will be exposed to different temperatures. Now I have a second set of numbers (below) from the same instruments but no longer reading the same temperature. How do I relate the original correction factor to a set of numbers from different environments?

````
Sensor 1   Sensor 2
10         13
11         14
12         14
12         14
13         14
````

For the first set of numbers the regression analysis yields the equation y=1.0925x -1.3125. My initial thoughts were that I could use the regression equation from the first set of numbers as my correction factor, and then apply that to the second set of numbers in order to adjust them to account for the inherent error of the sensors. But since the sensors are now in two different environments, I can no longer just plug numbers into a y=mx+b type linear regression equation. I also thought about adjusting the second set of data by just adding the y intercept value to sensor 2, but this obviously does not work as the y intercept is too large in this case. So I think I have been barking up the wrong tree. So I'm not sure if a regression analysis is the way to go. But in the end, all I'm looking for is a way to quantify the error between the two sensors (as found in the first set of numbers) and then apply that to all future deployments. - I think you should start with examining how the disagreement between the sensors depends on the temperature. Plot the difference between the two readings against the sum of them. – ttnphns Aug 18 '12 at 16:39 ## 2 Answers I am not sure whether you are interested in estimating the temperature or whether you want to know what the value of sensor 2 would be for a given value of sensor 1. Still, I think that the tool you need is total least squares (TLS), also known as orthogonal regression. Contrary to normal linear regression, where you assume that all the error is in the $Y$ variable, in orthogonal regression, you assume there is an error on both variables. This picture from the Wikipedia link above shows the difference: the error is assumed to be orthogonal to the regression line (and not vertical).
If you are familiar with Principal Component Analysis (PCA), the orthogonal regression line is the first principal component of your dataset. There seems to be no TLS function in R, so the easiest is actually to do it by PCA. It can go like this.

````
X <- c(10, 10, 11, 12, 13, 14)
Y <- c(10, 9, 11, 11.8, 12.9, 13.9)
pca <- prcomp(cbind(X,Y))

# Error (residual variance).
pca[['sdev']][2]^2          # 0.05581165

PC1 <- pca[['rotation']][,1]

# Estimated temperatures.
cbind(X,Y) %*% PC1

# Y / X slope from the point (mean(X), mean(Y)).
PC1[['Y']] / PC1[['X']]     # 1.115854
````

This means that you can model your system as $(Y - \bar{Y}) = 1.116 \cdot (X - \bar{X})$ or $Y = 1.116 \cdot X + (\bar{Y} - 1.116 \cdot \bar{X})$ if you need $Y$ as a function of $X$. - If the regression only involves one independent variable x then orthogonal regression assumes the error variance in both x and y are the same. Otherwise for the error in variables model the direction to minimize the squared difference is in a different direction than orthogonal to the line. – Michael Chernick Aug 18 '12 at 18:00 Thanks for the answer. And I apologize if this seems like a simplistic question, but what should I do with the answer (the 1.115854 value) should I add it to one of the columns of data? Or should I do something else with it? Thanks! – Vinterwoo Aug 27 '12 at 19:21 Sorry for answering late. I added the last line to clarify how you can use the result. – gui11aume Sep 27 '12 at 18:24 If the difference between the sensors is basically a constant plus an independent error term the regression model could be applied. What is not clear though is whether or not the difference changes in magnitude as a function of the actual temperature. This would probably show up as a change in the residual variance as a function of temperature or the slope of the line. The fact that the slope is slightly greater than 1 indicates that the difference increases a little as the temperature increases. However even if the assumptions required for the regression approach to be valid you should use a lot more than 5 observations to fit the model and estimate the offset (correction) for sensor 2 assuming sensor 1 is correct. -
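To connect this back to the second part of the question (how to use the fitted relation on later deployments), here is one possible sketch in R. It is an editorial addition and assumes the sensor-to-sensor relation estimated in the co-located run stays stable in the new environments, which is exactly the assumption the answers suggest checking.

```r
# Fit the orthogonal-regression relation on the co-located run, then use it to
# map later sensor-2 readings onto sensor 1's scale. (Sketch only, under the
# assumption that the between-sensor relation does not drift.)
X1 <- c(10, 10, 11, 12, 13, 14)            # sensor 1, co-located run
Y1 <- c(10, 9, 11, 11.8, 12.9, 13.9)       # sensor 2, co-located run
pca   <- prcomp(cbind(X1, Y1))
slope <- pca$rotation["Y1", 1] / pca$rotation["X1", 1]   # about 1.116, as in the answer

# Later deployment: sensor 2 readings expressed on sensor 1's scale.
Y2 <- c(13, 14, 14, 14, 14)
Y2_corrected <- mean(X1) + (Y2 - mean(Y1)) / slope
round(Y2_corrected, 2)
```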
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9214383959770203, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/164731-countability-proof.html
# Thread: 1. ## Countability proof. I need help on proving (1) prove that if $B\subseteq A$ and A is countable, then B is countable (2) prove that if $B\subseteq A$, A is infinite, and B is finite, then A\B is infinite I don't even know where to start... THx 2. I have a theorem which says Suppose A is a set. The following statements are equivalent: 1. A is countable 2. Either A = empty or there is a function f:z+ -> A that is onto. 3. There is a function f:A -> z+ that is one-to-one I am supposed to use those however I am stuck.... 3. If $A$ is denumerable then $A=\{a_1,a_2,\ldots\}$. If $B=\emptyset$ then $B$ is finite. If $B\neq \emptyset$ let $n_1$ be the least positive integer such that $a_{n_1}\in B$, let $n_2$ be the least positive integer such that $n_2>n_1$ and $a_{n_2}\in B$ ... Could you continue? Regards. Fernando Revilla 4. You can use the equivalent condition #3 to show (1). Namely, suppose $B\subseteq A$ and there exists a one-to-one function $f:A\to\mathbb{Z}^+$. Consider the restriction of f on B, i.e., the function whose domain is B and that acts just like f on any $x\in B$. Is this restriction still one-to-one? For (2), note that $A = B\cup(A\setminus B)$. What happens when $A\setminus B$ is finite? 5. I get the (2) part, however could you explain more on (1) part please??? 6. could you explain more on (1) part please??? Whom are you asking? Fernando suggests using condition #2 to prove (1), i.e., that B is countable. Going with #2, let's assume that there is an onto function $f:\mathbb{Z}^+\to A$ and that B is nonempty, i.e., there exists some fixed $x_0\in B$. I think it is easier to construct another function $g:\mathbb{Z}^+\to B$ as follows: $g(n)= \begin{cases} x_0 & f(n)\notin B\\ f(n) & \text{otherwise} \end{cases}$ You have to check that indeed $g:\mathbb{Z}^+\to B$ and that g is onto. If you are going to use #3 to show (1), then assume that there exists a one-to-one $f:A\to\mathbb{Z}^+$. Consider $g(x)$ such that $g(x)=f(x)$ for $x\in B$ and $g(x)$ is undefined otherwise. You need to show that $g:B\to\mathbb{Z}^+$ and that g is one-to-one.
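A toy instance of the construction in the last post (an editorial addition, not part of the thread), with $A=\mathbb{Z}^+$, the onto map $f(n)=n$, and $B$ the even positive integers:

```r
# g sends n to a fixed x0 in B whenever f(n) falls outside B, and to f(n) otherwise;
# checking on an initial segment that g maps Z+ onto B.
x0 <- 2
f  <- function(n) n                            # an onto map Z+ -> A = Z+
g  <- function(n) ifelse(f(n) %% 2 == 0, f(n), x0)
hits <- unique(g(1:100))
all(seq(2, 100, by = 2) %in% hits)             # TRUE: every even number up to 100 is hit
```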
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9442981481552124, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/187620/blow-up-of-ode-solution
# Blow-up of ODE solution I am a newcomer to ODE. The relevant theorem that I can think of is about the maximum open interval of existence of the solution. But I have not learned to find the interval on which the solution exists. $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ is $C^{1}$ and bounded on $\mathbb{R}^{n}.$ Is it possible to have a solution of $\dot{x}=f(x)$ that blows up in finite time? - ## 1 Answer No. Since $x(t) = x_0 + \int_0^t f(x(\tau)) \, d \tau$, and $f$ is bounded by, say, $B$, you have $$\|x(t)\| \leq \|x_0\| + \int_0^t \|f(x(\tau))\| d \tau \leq \|x_0\| + t B.$$ Hence $x(t)$ is bounded when the time is bounded. - (+1) Nice answer. – Mhenni Benghorbal May 5 at 1:14
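A small numerical illustration of the bound (an editorial addition): with a bounded right-hand side, crude Euler iterates stay inside the linear envelope $\|x_0\|+tB$, while an unbounded right-hand side such as $f(x)=x^2$ really does blow up in finite time.

```r
# Forward Euler for x' = f(x); crude, but enough to contrast the two cases.
euler <- function(f, x0, T, h = 1e-3) {
  x <- x0
  for (step in seq_len(round(T / h))) x <- x + h * f(x)
  x
}

euler(function(x) sin(x), x0 = 1, T = 50)   # bounded f (B = 1): stays well below 1 + 50 (it approaches pi)
euler(function(x) x^2,    x0 = 1, T = 1.2)  # unbounded f: explodes; the true solution 1/(1-t) blows up at t = 1
```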
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9533600807189941, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/51899-implicit-differentation.html
# Thread: 1. ## Implicit Differentiation $e^y+24-e^{-1} = 5x^2+4y^2$ I can't figure this out.... I believe this stage is correct $e^y+e^{-1}=10x+8y*y'-y'$ But I need help on further solving it. Help appreciated. 2. Actually the first step should be $y'e^y = 10x + 8yy'$ 3. Where did you put the $e^{-1}$? 4. $e^{-1}$ is a constant and its derivative is 0. 5. I thought it was $-e^{-1}$. Anyway, in case of constant I get $(10x)/(-1*e^y-8y)$. Where could I have gone wrong there? 6. I get $y' = \frac{10x}{e^y - 8y}$ 7. The reason I have negative 1 is because there are two y', and when I place it on the other side (-y') I factored and assumed 1. Why is this not correct? Thanks however.
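For anyone who wants to check the final formula numerically, here is a short R verification (an editorial addition): solve the original relation for y near a chosen point and compare a finite-difference slope with $y'=\frac{10x}{e^y-8y}$.

```r
# Numerical check of y' = 10x / (e^y - 8y) on the curve e^y + 24 - e^(-1) = 5x^2 + 4y^2.
g <- function(x, y) exp(y) + 24 - exp(-1) - 5 * x^2 - 4 * y^2
y_of_x <- function(x) uniroot(function(y) g(x, y), c(-3, 0), tol = 1e-12)$root  # branch near y = -1

x0 <- 2
y0 <- y_of_x(x0)                                   # -1 here, since (2, -1) satisfies the relation
h  <- 1e-5
slope_numeric <- (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2 * h)
slope_formula <- 10 * x0 / (exp(y0) - 8 * y0)
c(slope_numeric, slope_formula)                    # the two agree, about 2.39
```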
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9680085182189941, "perplexity_flag": "middle"}
http://engineering.wikia.com/wiki/Lift_(force)
# Lift (force) Lift consists of the sum of all the fluid dynamic forces on a body perpendicular to the direction of the external flow around that body. There are a number of ways of explaining the production of lift; some are more complicated than others, some have been shown to be false. The simplest explanation is that the wing deflects air downward, and the reaction pushes the wing up. More complicated explanations focus on the air pressure around the wing, but these approaches are merely different expressions of the same underlying physical principles. ## Reaction due to accelerated air In air (or comparably in any fluid), lift is created as an airstream passes by an airfoil and is deflected downward. The force created by this deflection of the air creates an equal and opposite force upward on an airfoil according to Newton's third law of motion. The deflection of airflow downward during the creation of lift is known as downwash. It is important to note that the deflection of the air does not simply involve the air molecules "bouncing off" the bottom of the airfoil. Rather, air molecules closely follow both the top and bottom surfaces of the airfoil, and so the airflow is deflected downward by both the upper and lower surfaces. The downward deflection during the creation of lift can also be described as a "turning" of the airflow. Nearly any shape will produce lift if curved or tilted with respect to the air flow direction. However, most shapes will be very inefficient and create a great deal of drag. One of the primary goals of airfoil design is to devise a shape that produces the most lift while producing the least lift-induced drag. The airflow normally follows the curvature of the wing surface as it changes direction - this is known as flow-attachment, also called the Coanda effect. It is possible to measure lift using the reaction model. The force acting on the airfoil is the negative of the time-rate-of-change of the momentum of the air. In a wind tunnel, the speed and direction of the air can be measured (using, for example, a Pitot tube or Laser Doppler velocimetry) and thence the lift derived. ## Bernoulli's principle The force on the wing can also be examined in terms of the pressure differences above and below the wing. (This method of explanation is mathematically equivalent to the Newton's 3rd law explanation as developed above.) The relationship between the velocities and pressures above and below the wing is nearly predicted by Bernoulli's equation. More generally, the resulting force (Lift + Drag) is the integral of pressure on the contour of the wing. $\mathbf{L}+\mathbf{D} = \oint_{\partial\Omega}p\mathbf{n} \; d\partial\Omega$ where: • L is the Lift, • D is the Drag, • $\partial\Omega$ is the frontier of the domain, • p is the value of the pressure, • n is the normal to the profile. If the velocity of the fluid at each point is known, Bernoulli's law can be applied to determine the pressure: ${v^2 \over 2}+{p\over\rho}=\text{constant}$ where: • v is the velocity of the fluid • p is the pressure • $\rho$ is the density Thus, Bernoulli's law allows v to be substituted for p in the integral to compute the total aerodynamic force, and the resulting equation suffices to predict both lift and drag. However it assumes a priori knowledge of the velocity vector field in the vicinity of the wing; in particular Bernoulli's law does not explain why the air speeds up or slows down near the wing.
Also, Bernoulli's law makes assumptions on the flow, some of which may not be applicable in all situations. In particular, the simple form above ignores compressibility. For compressible flow, a more complicated form of Bernoulli's law needs to be applied. ## Circulation A third way of calculating lift is a mathematical construction called circulation. Again, it is mathematically equivalent to the two explanations above. It is often used by practicing aerodynamicists as a convenient quantity, but is not often useful for a layperson's understanding. The circulation is the line integral of the velocity of the air, in a closed loop around the boundary of an airfoil. It can be understood as the total amount of "spinning" (or vorticity) of air around the airfoil. When the circulation is known, the section lift can be calculated using: $l = \rho \times V \times \Gamma$ where $\rho$ is the air density, $V$ is the free-stream airspeed, and $\Gamma$ is the circulation. The Helmholtz theorem states that circulation is conserved. When an aircraft is at rest, there is no circulation. As the flow speed increases (that is, the aircraft accelerates in the air-body-fixed frame), a vortex, called the starting vortex, forms at the trailing edge of the airfoil, due to viscous effects in the boundary layer. Eventually the vortex detaches from the airfoil and gets swept away from it rearward. The circulation in the starting vortex is equal in magnitude and opposite in direction to the circulation around the airfoil. Theoretically, the starting vortex remains connected to the vortex bound in the airfoil, through the wing-tip vortices, forming a closed circuit. In reality the starting vortex gets dissipated by a number of effects, as do the wing-tip vortices far behind the aircraft. ## Coefficient of lift Aerodynamicists are among the most frequent users of dimensionless numbers. The coefficient of lift is one such term. When the coefficient of lift is known, for instance from tables of airfoil data, lift can be calculated using the Lift Equation: $L = C_L \times \rho \times {V^2\over 2} \times A$ where: • $C_L$ is the coefficient of lift, • $\rho$ is the density of air (1.225 kg/m3 at sea level)* • V is the freestream velocity, that is the airspeed far from the lifting surface • A is the surface area of the lifting surface • L is the lift force produced. This equation can be used in any consistent system of units. For instance, if the density is measured in kilograms per cubic metre, the velocity is measured in metres per second, and the area is measured in square metres, the lift will be calculated in newtons. Or, if the density is in slugs per cubic foot, the velocity is in feet per second, and the area is in square feet, the resulting lift will be in pounds force. * Note that at altitudes other than sea level, the density can be found using the Barometric formula. Compare with: Drag equation. ## Common explanation of lift is false There is a common explanation put forward in many mainstream sources that explains lift as follows: due to the greater curvature (and hence longer path) of the upper surface of an aerofoil, the air going over the top must go faster in order to "keep up" with the air flowing around the bottom since they have to both traverse the airfoil in the same amount of time. Bernoulli's law is then cited to say that due to the faster speed on top the pressure is lower. Despite the fact that this "explanation" is probably the most common of all, it is false.
There is no physical principle that implies the air over the top must keep up with the air below, and experimental evidence shows that it does not. Such an explanation would mean that an aircraft could not fly inverted, which is demonstrably not the case. It also fails to account for aerofoils which are fully symmetrical yet still develop significant lift, or for sails which are thin membranes with no path-length difference between their two sides. Although the assumption of equal transit time is not correct, some of the phenomena described by this explanation are. In particular: • there are regions of low pressure above the wing and regions of high pressure below the wing • the air speeds up as it passes over the top of the wing and slows down as it passes the bottom, and • Bernoulli's law can be used to relate the velocities and pressures. However, Bernoulli's law does not explain why the air changes speed; it only says that speed and pressure are related. Without some reason why the air changes speed, any explanation based on speed differences is incomplete (and of course any explanation that incorrectly describes why the speed is different is itself incorrect). Note that while this explanation depends on Bernoulli's law, the fact that this theory has been discredited does not imply that Bernoulli's law is incorrect. It is interesting to note that Albert Einstein, in attempting to design a practical aircraft based on this principle, came up with an aerofoil section that featured a large hump on its upper surface, on the basis that an even longer path must aid lift if the principle is true. Its performance was terrible, and we can suppose that in fact this was the point that Einstein was trying to prove. There is a book on this topic: "Understanding Flight", published by McGraw-Hill, ISBN 0071363777, by David Anderson and Scott Eberhardt. The authors are a physicist and an aeronautical engineer. They explain flight in non-technical terms and specifically address the Bernoulli myth. Although currently accepted theories of aerodynamic lift were developed as early as 1907, this incorrect explanation didn't appear until 1936 and became popularized later, especially after World War II. It is unclear why this explanation has gained such currency, except by repetition and perhaps the fact that it is easy to grasp intuitively without mathematics and gets some of the description right. Note that this explanation does not appear in peer-reviewed papers (except when the author points it out as incorrect) and any text book claiming to be a serious work on the topic will not promote this explanation.
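As a worked example of the lift equation from the "Coefficient of lift" section, here is a short R calculation. The numbers ($C_L$, speed, wing area) are made-up illustrative values, not data from the article.

```r
# L = C_L * rho * V^2 / 2 * A, in SI units (result in newtons).
C_L <- 0.5        # lift coefficient, e.g. read off airfoil tables
rho <- 1.225      # air density at sea level, kg/m^3
V   <- 70         # freestream airspeed, m/s
A   <- 16         # lifting surface area, m^2
L   <- C_L * rho * V^2 / 2 * A
L                 # about 24,000 N for these made-up numbers
```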
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944255530834198, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/tagged/gaussian-process?sort=unanswered&pagesize=15
# Tagged Questions The gaussian-process tag has no wiki summary. 1answer 53 views ### How to test if two samples are distributed from the same Gaussian process Given a sequence $\mathbf{x} = (x_1,x_2,\dots,x_n)$ which is sampled from some Gaussian process $GP(\mu_1,\Sigma_1)$ and a "target" sequence $\mathbf{y} = (y_1,y_2,\dots,y_n)$ sampled from another ... 1answer 224 views ### Gaussian process predictor I am building GP regressor , my input data is 1-d column vector and so is my target. I have divided my data into training and testing sets. I trained the model to learn the hyper-paramters and then ... 0answers 50 views ### Confusion related to a derivation I was reading this paper http://cs.ru.nl/~perry/publications/2011/ICANN2011/groot-icann2011.pdf and I am a bit confused how this was derived \$p(f|Y) \propto p(f)*p(Y|f) \propto ... 0answers 48 views ### Confusion related to calculation of likelihood I was reading this paper related to Learning from multiple annotator using Gaussian processes. The idea is if we don't have the actual ground truth of a certain data, but only the labels from some ... 0answers 152 views ### Combining normal distributions Imagine that I take two separate measures and I get two separate normal distributions N1(m1, s1^2) N2(m2, s2^2) How can I find a single normal distribution N3 ... 0answers 79 views ### Estimating a 1-D Brownian motion process using noisy observations This question is a follow-up on my previous question. Suppose I have a Brownian motion process that is defined as follows: at time $i=1$ random variable $X_1\sim\mathbf{N}(\mu,\sigma^2)$, and, for ... 0answers 41 views ### How to implement multiple GP submodels in PYMC I'm hoping someone can give me some guidance on implementing Gaussian processes (GP) with PYMC. In particular, I'm not sure how to use multiple GP submodels properly within a single pymc model. More ... 0answers 39 views ### Confusion related to kriging I was going through the wiki article related to kriging http://en.wikipedia.org/wiki/Kriging. However, I couldn't follow some derivations. In the first figure for simple kriging, how come the ... 0answers 133 views ### Gaussian process - dimensionality reduction Specific question on Gaussian Processes and dimensionality reduction. I saw a a method for dimensionality reduction for the squared exponential covariance function (not ARD) whereby one uses a GxD ... 0answers 56 views ### With what probability the standard deviation of GP capture the measurement? An interesting property of Gaussian Processes is estimating the uncertainty range. This uncertainty range of prediction can potentially capture the actual measurements. I am wondering, how many times ... 0answers 133 views ### Guassian Process Regression - feature selection I'm using guassian process regression to do some modeling. One issue I'm encountering is feature selection for some of my models, which often have many relevant features. I'm not sure what the best ... 0answers 115 views ### Inferring a Gaussian from noisy data Assume a noise comes from a specific point on a line, noise which I can detect but not completely accurately. My uncertainty we assume to be Gaussian. I want to gather evidence about the real ... 0answers 99 views ### Similarity matrix and multiple-regression Let, $S_{n*n}$ represent a similarity matrix, among $n$ observation, my case n = 215. and $Y=\{y_1, y_2, ...,y_n\}$ contains a response value for each $x_n$ observation. For each observation we have ... 
0answers 31 views ### Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance The Gaussian, or squared exponential covariance is $k_{SE}(s,t) = \exp \left\{ -\frac{1}{2l} (s - t)^2 \right\}$. It is a common covariance function used in Gaussian processes. The Karhunen-Loeve ... 0answers 35 views ### Confusion related to derivation of gaussian process regression I was going through these slides related to gaussian process regression and I have a certain confusion http://www.eurandom.tue.nl/events/workshops/2010/YESIV/Prog-Abstr_files/Ghahramani-lecture2.pdf ... 0answers 63 views ### Time derivative of a gaussian process I am currently working on biomass. I am trying to quantify how much the level of uncertainties in biomass estimations will affect the level of uncertainty in biomass fluxes. For example, I know the ... 0answers 89 views ### How to incorporate prior knowledge in GPML? I am using the MATLAB code for Rasmussen & Williams' book Gaussian Processes for Machine Learning. How can one incorporate prior knowledge in Gaussian process regression? Say, that the variance ...
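Since several of the questions above involve the squared-exponential covariance, here is a minimal R sketch (an editorial addition; the length-scale and grid are arbitrary) that builds that covariance matrix and draws one sample path from the corresponding zero-mean Gaussian process:

```r
# k(s, t) = exp(-(s - t)^2 / (2 * l)), as in the Karhunen-Loeve question above.
set.seed(42)
l <- 0.1
s <- seq(0, 1, length.out = 200)
K <- exp(-outer(s, s, "-")^2 / (2 * l))
K <- K + 1e-6 * diag(length(s))          # small jitter so chol() succeeds numerically
path <- t(chol(K)) %*% rnorm(length(s))  # one draw from N(0, K)
plot(s, path, type = "l")
```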
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8875707983970642, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/91333/intuition-around-why-sine-of-x-angle-always-equals-same-result/108460
# Intuition around why the sine of an angle X always equals the same result

My understanding so far: sine represents a ratio of two sides of an interior angle within a right-angle triangle. So given the three lengths of a triangle you can find the sine of any of the 3 interior angles. Also, if you are given the actual angle of an interior angle, you can get the sine using a calculator. Thus, I deduce from these two statements that in a right triangle any interior angle of a specific number of degrees represents a constant ratio, whatever the area of the triangle. I can visualize that in my head to a certain extent: scaling the triangle's sides equally increases the area of the triangle but not the ratio of the sides. But I'm wondering if I'm missing anything in terms of intuition around this? - It is hard to tell what is being asked here. The sine is a function, so it outputs the same value every time you give it the same input. Angles in similar triangles have the same trig ratios, as those angles are the same. – The Chaz 2.0 Dec 14 '11 at 3:59 @TheChaz You're right, my question is vague. One good point that you helped me clarify in your comment was the fact that sine is a function. Being a function, it can't map to multiple results from the same input. I guess I just struggled with the fact that the sine of X degrees is always the same, no matter the size of the triangle. I still can't quite grasp the impact that a 90 degree triangle has on maintaining a constant ratio. For example, what happens in non-90 degree triangles with sine? – drc Dec 14 '11 at 4:12 Trig ratios don't have the same visible connection to sides in non-right triangles (there's a word for those - just can't remember it!). The sine of, say, the 45 degree angle in a 35-45-100 triangle would still be 1/sqrt2, but you'd have to draw altitudes to see such ratios. – The Chaz 2.0 Dec 14 '11 at 4:24 Here's what I suspect is meant. Take two right triangles with the same shape but different sizes; each has a $\theta^\circ$ angle. Look at the ratio of opposite to hypotenuse. Lo and behold, it's the same in both triangles, despite the difference in sizes. So the question would be: why? How is that proved? – Michael Hardy Dec 14 '11 at 4:26 OBLIQUE triangles! Had to dig out a precal book... :) – The Chaz 2.0 Dec 14 '11 at 4:26

## 3 Answers

I think Michael Hardy has rephrased the question well, and I think The Chaz has answered it by referring to similar triangles. If two right triangles both have an angle $\theta$, then they are similar, so the ratio opposite-to-hypotenuse will be the same in both, so the sine depends only on the angle and not on the area. - The proof assumes that we are in Euclidean geometry - similarity depends on this - so we need the parallel postulate. In non-Euclidean geometry the situation is interestingly different, in that the angle-sum of a triangle depends on its size. It is not clear from the question what kind/sophistication of proof is required. – Mark Bennet Dec 14 '11 at 8:03 OP says the sine is a ratio of two sides of a right-angle triangle. I think that puts us firmly in Euclidean geometry. But if you want to write up a treatment of trig functions in a non-Euclidean setting, be my guest. – Gerry Myerson Dec 14 '11 at 9:31 Fair enough. I was just thinking that it might be worth noting what feeds our intuition about these things - then it might be less counterintuitive to imagine other possibilities.
– Mark Bennet Dec 14 '11 at 12:05 Good point.${}$ – Gerry Myerson Dec 14 '11 at 12:36 "For example what happens in non-90 degree triangles with sine?" There are a ton of interesting trig equalities that apply to any triangle, such as the fact that the ratio of each side to the angle opposite is the same for all 3 angles/sides - the Law of Sines. You should totally read "Trigonometry" by Gelfand and Saul - hugely informative and fun to read. - What is at stake here is that in euclidean geometry thanks to the parallel axiom we have the notion of similarity: We can scale our figures by an arbitrary factor $\lambda>0$, whereby all incidences stay intact, all lengths are multiplied by $\lambda$, and all angles stay the same. This additional transformation group is not present in other geometries (which apart from the parallel axiom satisfy much the same axioms): You cannot enlarge a spherical triangle "linearly" by a factor $\lambda>0$ whereby all angles stay the same. -
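To make the similar-triangles answers above concrete, here is a small numerical sketch (not from the thread itself); the legs 3 and 4 and the scale factor 10 are arbitrary choices.

```python
import math

def angle_and_ratio(a, b):
    """Right triangle with legs a and b: the angle opposite b (in degrees)
    and the ratio opposite/hypotenuse."""
    hyp = math.hypot(a, b)
    return math.degrees(math.atan2(b, a)), b / hyp

theta_small, ratio_small = angle_and_ratio(3, 4)    # a 3-4-5 triangle
theta_big,   ratio_big   = angle_and_ratio(30, 40)  # same shape, scaled by 10

print(theta_small, theta_big)               # identical angles: ~53.13 degrees
print(ratio_small, ratio_big)               # identical ratios: 0.8
print(math.sin(math.radians(theta_small)))  # the calculator's sine of that angle: ~0.8
```

Scaling both legs by the same factor changes the area but neither the angle nor the ratio, which is exactly the scale invariance that the answers attribute to similarity.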
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9315446615219116, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-topics/59722-find-slope-line.html
# Thread:

1. ## Find the slope of a line

Find the slope of the line joining $(2,4)$ and $(2+h, f(2+h))$ in terms of the nonzero number $h$.

OK, I know that to find the slope of a line the formula is $m=\frac {y_2-y_1}{x_2-x_1}$, but when I try to solve for it I get $\frac {f(2+h)-4}{h}$. Is that right?

2. Hello there, you are not given a specific value for $h$, so that seems correct to me. Hope this helps!
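The thread never pins down $f$, so purely as an illustration, here is a sketch that assumes $f(x)=x^2$ (a hypothetical choice, consistent with the given point since $f(2)=4$) and simplifies the difference quotient above with SymPy.

```python
import sympy as sp

h = sp.symbols('h', nonzero=True)
f = lambda x: x**2                 # hypothetical f, chosen so that f(2) = 4

slope = (f(2 + h) - 4) / h         # the expression found in the thread
print(sp.simplify(slope))          # -> h + 4
print(sp.limit(slope, h, 0))       # -> 4, the slope of the tangent line at x = 2
```

Letting $h \to 0$ recovers the derivative $f'(2)=4$, which is the point of writing the slope "in terms of the nonzero number $h$".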
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9478649497032166, "perplexity_flag": "head"}
http://mathinsight.org/differentiability_multivariable_definition
# Math Insight

### The definition of differentiability in higher dimensions

#### Suggested background

The definition of differentiability in multivariable calculus formalizes what we meant in the introductory page when we referred to differentiability as the existence of a linear approximation. The introductory page simply used the vague wording that a linear approximation must be a “really good” approximation to the function near a point. What does this “really good” mean?

For a scalar valued function of two variables, $f: \R^2 \to \R$, this condition meant the existence of a tangent plane, such as the one shown in this applet, which is the example used in the introductory page.

Neuron firing rate function with tangent plane. A fictitious representation of the firing rate $r(i,s)$ of a neuron in response to an input $i$ and nicotine level $s$. The graph of the function has a tangent plane at the location $(i,s)=(3,3)$ of the green point, so the function is differentiable there. By rotating the graph, you can see how the tangent plane touches the surface at the point $(3,3)$. There is no tangent plane to the graph at any point (such as the red point) along the fold of the graph; the function $r(i,s)$ is not differentiable at any point along the fold.

Although we have geometric intuition that helps us understand a tangent plane, this intuition doesn't help us understand the general case of a linear approximation of a function $\vc{f}: \R^n \to \R^m$. Even for the two-dimensional case where differentiability is based on existence of a tangent plane, we should still be precise about what we mean by the existence of this linear approximation. Otherwise, we could never be certain if a given plane was really tangent to the graph.

How can we formalize this definition? For starters, let's write our candidate linear approximation around the point $\vc{a}$ as \begin{align*} \vc{L}(\vc{x}) = \vc{f}(\vc{a}) + \vc{T}(\vc{x}-\vc{a}) \end{align*} where $\vc{T}(\vc{x})$ is a linear transformation. We want to derive a condition to test if $\vc{L}$ really is the linear approximation of $\vc{f}$ around $\vc{a}$. The condition needs to capture the sense that $\vc{L}(\vc{x})$ is “really close” to $\vc{f}(\vc{x})$ when $\vc{x}$ is near $\vc{a}$.

One possibility might be that the limit of $\vc{L}(\vc{x})$ as $\vc{x}$ approaches $\vc{a}$ should be the same as the limit of $\vc{f}(\vc{x})$ as $\vc{x}$ approaches $\vc{a}$. That sounds reasonable. That would mean that the distance between $\vc{L}(\vc{x})$ and $\vc{f}(\vc{x})$ should go to zero as we get close to $\vc{a}$. We could write this candidate condition as $$\lim_{\vc{x} \to \vc{a}} \| \vc{f}(\vc{x})-\vc{L}(\vc{x})\| = 0.$$ You can't get any closer than zero distance, so certainly this condition should be a good one.

The problem with this definition becomes immediately clear if you try to apply it to the one variable linear approximation, i.e., the tangent line of a curve. As shown in the following figure, many lines satisfy the above condition, as it only specifies that the line passes through the point $(a,f(a))$. Since the lines and the function are continuous, their distance clearly goes to zero as $x$ approaches $a$. How can we pick out the tangent line (shown in blue) to the graph of $f(x)$ (shown in green) from among all these candidate lines?
In hindsight, we can see that our above condition was doomed to failure. Reviewing the definition of a linear transformation reminds us that for any linear transformation $\vc{T}(\vc{x})$, it must be true that $\vc{T}(\vc{0})=\vc{0}$. So, given the above definition of $\vc{L}(\vc{x})$, we can see that $\vc{L}(\vc{a}) = \vc{f}(\vc{a})$ no matter what we choose for $\vc{T}(\vc{x})$. The condition $$\lim_{\vc{x} \to \vc{a}} \| \vc{f}(\vc{x})-\vc{L}(\vc{x})\| = 0$$ doesn't restrict the choice of $\vc{T}(\vc{x})$ at all. We need a new, stronger condition that enforces that not only does $\| \vc{f}(\vc{x})-\vc{L}(\vc{x})\|$ go to zero as $\vc{x} \to \vc{a}$, but that it goes to zero fast.

To derive this condition, let's go back to single variable functions $f(x)$. In fact, let's simplify to the case where $f(x)$ is a quadratic function, which, without loss of generality, we can write as $$f(x) = A + B(x-a) + C(x-a)^2,$$ where $A$, $B$, and $C$ are just real constants. A linear transformation in one variable must be of the form $T(x)=mx$, so our candidate linear approximation can be written $$L(x) = f(a)+T(x-a) = A+m(x-a).$$ The difference between $f$ and $L$ is $$|f(x)-L(x)| = |B(x-a)+C(x-a)^2-m(x-a)|.$$ We want a condition that this expression must go to zero very fast as $x \to a$, and the condition must be strong enough to uniquely determine $m$. Since all the terms in $|f(x)-L(x)|$ include a factor $(x-a)$, we know that it must go to zero at least as fast as $|x-a|$. If we want to impose a stronger condition, we must insist that it go to zero faster than $|x-a|$.

How do we impose such a condition? We could divide by $|x-a|$ and insist that the result still goes to zero as $x \to a$. Then our candidate condition would be $$\lim_{x \to a} \frac{|f(x)-L(x)|}{|x-a|} = 0.$$ Does this work? Plug in our example quadratic function, and the condition becomes $$\lim_{x \to a} \frac{|B(x-a)+C(x-a)^2-m(x-a)|}{|x-a|} = 0,$$ which simplifies to $$\lim_{x \to a} |B+C(x-a)-m| = 0.$$ Since $C(x-a) \to 0$ as $x \to a$, we see that our condition does indeed put a nice constraint on our linear transformation $T(x)=mx$. We are forced to conclude that $L(x)$ is a linear approximation of $f$ at $x=a$ only if $m=B$. This worked perfectly. Given that $B=f'(a)$, we see that this condition gives that the slope of the linear approximation (or tangent line) $L(x)$ is the derivative, consistent with what we learned in single-variable calculus.

Motivated by our single variable example, we propose the following condition for differentiability in general: $$\lim_{\vc{x} \to \vc{a}} \frac{\| \vc{f}(\vc{x})-\vc{L}(\vc{x})\|}{\|\vc{x}-\vc{a}\|} = 0.$$ This condition means that $\| \vc{f}(\vc{x})-\vc{L}(\vc{x})\|$ goes to zero very fast as $\vc{x} \to \vc{a}$, faster than the distance $\|\vc{x}-\vc{a}\|$ between $\vc{x}$ and $\vc{a}$ goes to zero. This condition is what we meant in the introduction to differentiability when we said that the linear approximation $\vc{L}(\vc{x})$ must be a “really good” approximation to $\vc{f}(\vc{x})$ when $\vc{x}$ is close to $\vc{a}$.

#### Definition of differentiability

We summarize our results with the definition of differentiability. For simplicity, we'll write the definition directly in terms of the linear transformation $\vc{T}(\vc{x})$ without explicitly referring to the linear approximation $\vc{L}(\vc{x})$.
Definition: The function $\vc{f}: \R^n \to \R^m$ is differentiable at the point $\vc{a}$ if there exists a linear transformation $\vc{T}: \R^n \to \R^m$ that satisfies the condition $$\lim_{\vc{x} \to \vc{a}} \frac{\| \vc{f}(\vc{x})-\vc{f}(\vc{a}) - \vc{T}(\vc{x}-\vc{a})\|}{\|\vc{x}-\vc{a}\|} = 0.$$ The $m \times n$ matrix associated with the linear transformation $\vc{T}$ is the matrix of partial derivatives, which we denote by $\jacm{f}(\vc{a})$. We can refer to $\jacm{f}(\vc{a})$ as the total derivative (or simply the derivative) of $\vc{f}$.

#### What does that limit really mean?

It was a lot of work to derive a sensible expression for the definition of differentiability. You deserve to rest after getting through this. But don't rest for long, because there are some more important subtleties lurking in that limit definition.

It turns out that a limit in two or more dimensions is a bit more complicated than a limit in one dimension. One has to worry about the many different ways in which you can approach the point $\vc{a}$. In one dimension, you can only approach from either above or below, but more dimensions give you a lot more room to move around. For a function to be differentiable, we need the limit defining the differentiability condition to be satisfied no matter how you approach the limit $\vc{x} \to \vc{a}$. This requirement can lead to some surprises, so you have to be careful. For example, don't make the mistake of assuming that the existence of partial derivatives is enough to ensure differentiability. You've been warned! Maybe it'd be best to check out the subtleties of differentiability in higher dimensions so you'll be prepared for any function you might meet.

#### Cite this as

Nykamp DQ, “The definition of differentiability in higher dimensions.” From Math Insight. http://mathinsight.org/differentiability_multivariable_definition

Keywords: derivative, differentiability, linear approximation, linear transformation
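As a quick numerical illustration of the definition (not part of the original article), the sketch below checks the limit condition for the hypothetical function $f(x,y)=x^2+xy$ at the point $(1,2)$, whose matrix of partial derivatives there is $(2x+y,\ x) = (4,\ 1)$. The ratio from the definition shrinks roughly in proportion to the step size. Of course, checking one direction of approach numerically is only a sanity check, not a proof, which is exactly the caveat raised in the last section.

```python
import numpy as np

# Hypothetical test function f : R^2 -> R and its 1x2 matrix of partial derivatives at a.
f = lambda v: v[0]**2 + v[0]*v[1]
a = np.array([1.0, 2.0])
Df = np.array([[2*a[0] + a[1], a[0]]])        # [[4, 1]]

rng = np.random.default_rng(0)
u = rng.standard_normal(2)
u /= np.linalg.norm(u)                         # a fixed unit direction of approach

for t in (1e-1, 1e-2, 1e-3, 1e-4):
    x = a + t*u
    num = np.linalg.norm(f(x) - f(a) - Df @ (x - a))
    print(t, num / np.linalg.norm(x - a))      # the ratio from the definition; shrinks ~ like t
```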
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 84, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.941675066947937, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/132348/how-to-evaluate-int-01-sin-frac1xdx/132355
# how to evaluate $\int_0^1\sin(\frac{1}{x})dx$?

How can I evaluate the integral $$\int_0^1\sin\left(\frac{1}{x}\right)dx?$$ Maybe it needs the cosine integral to evaluate it, but I cannot understand it very well. Thanks a lot. - Make a substitution so that you have $\sin t$ in there and then see what you get. – GEdgar Apr 16 '12 at 2:59

## 1 Answer

This integral cannot be done in terms of elementary functions, in general. That being said, we can try to get somewhere. Let $\frac{1}{x} = t$. Then $-\frac{1}{x^2} dx = dt$, which means that $dx = -\frac{1}{t^2} dt$. So $$\int_0^1 \sin\!\left(\frac{1}{x}\right)\, dx = \int_\infty^1 -\frac{\sin(t)}{t^2}\, dt = \int_1^\infty \frac{\sin(t)}{t^2}\,dt = \sin(1) + \int_1^\infty \frac{\cos(t)}{t}\, dt$$ by integration by parts, using the substitution $u = \sin(t)$, $du = \cos(t)\, dt$, $dv = \frac{1}{t^2}\, dt$, $v = -\frac{1}{t}$. Using the definition of the cosine integral $$\mathrm{ci}(x) = -\int_x^\infty \frac{\cos(t)}{t}\, dt,$$ we get that $$\int_0^1 \sin\!\left(\frac{1}{x}\right)\, dx = \sin(1) - \mathrm{ci}(1).$$ And I don't believe this number is going to be elementarily expressible, but it can be approximated. (The approximate value of the integral is around 0.504.) (Thank you to J.M. for his correction of a piece of misused terminology.) - Thanks a lot, but do you know how to prove that it cannot be done in closed form? Maybe it is very hard. I'll add you on Facebook. – noname1014 Apr 16 '12 at 3:52 I'm trying to recall how to prove that. The only thing that comes to mind is Liouville's Theorem, which is the fundamental theorem from differential Galois theory. Unfortunately, I can't recall all of the details, but I suspect that the integral in question cannot be expressed in closed form because $\mathrm{si}(x) = -\int_x^\infty \frac{\sin(t)}{t}\, dt$ cannot be expressed in closed form in terms of elementary functions. – Nicholas Stull Apr 16 '12 at 4:07 And it is (almost a direct consequence) true that $\mathrm{ci}(x)$ also cannot be expressed in closed form in terms of elementary functions. But again, I can't quite recall the details of the proof. – Nicholas Stull Apr 16 '12 at 4:08 Whoa, time out! Let's fix your wording a bit, mmkay? OP's integral does have a closed form; what it doesn't have is an expression in terms of elementary functions, which is why OP mentioned the need for the sine integral and cosine integral. That these special functions cannot be expressed elementarily can be shown via, say, Risch's algorithm. – J. M. Apr 17 '12 at 1:03 @J.M., my apologies for that slip-up. What I meant to say was that they couldn't be expressed elementarily, but in typing it up I misused the terminology. Thanks for the correction, and the reference. – Nicholas Stull Apr 17 '12 at 1:11
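For readers who want to see the value $0.504$ come out of software, here is a small SciPy check (not part of the original thread); it reuses the substitution $t=1/x$ from the answer to avoid the infinitely many oscillations of $\sin(1/x)$ near $x=0$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

si1, ci1 = sici(1.0)                     # sici returns the pair (Si(1), Ci(1))
print(np.sin(1.0) - ci1)                 # closed form sin(1) - Ci(1): ~0.5040670

# Cross-check via the substituted integral  int_1^oo sin(t)/t^2 dt,
# whose tail is absolutely convergent; quad may still warn about slow
# convergence, and raising `limit` helps.
val, err = quad(lambda t: np.sin(t) / t**2, 1, np.inf, limit=200)
print(val, err)                          # ~0.5040670 with a small error estimate
```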
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9469238519668579, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/67298/list
Return to Answer

2 (added 40 characters in body) This is indeed true. See for example Lemma 2.60 in Kollar-Mori, Birational Geometry of Algebraic Varieties. In particular, it is shown that a Cartier divisor $D$ is big if and only if $mD \sim A + E$ for some ample divisor $A$ and effective divisor $E$. This is also proven in Corollary 2.2.7 of Lazarsfeld's Positivity in Algebraic Geometry I. Anyway, they make no assumptions on the dimension or singularities of the ambient variety; you also don't need the nef assumption.

1 This is indeed true. See for example Lemma 2.60 in Kollar-Mori, Birational Geometry of Algebraic Varieties. In particular, it is shown that a Cartier divisor $D$ is big if and only if $mD \sim A + E$ for some ample divisor $A$ and effective divisor $E$. This is also proven in Corollary 2.2.7 of Lazarsfeld's Positivity in Algebraic Geometry I. Anyway, they make no assumptions on the dimension or singularities of the ambient variety.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8874906301498413, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/1324-proof-induction.html
# Thread:

1. ## induction, how do i begin?

Prove by induction that for every natural $n$ and every $x>0$: $e^x > 1 + x/1! + x^2/2! + \dots + x^n/n!$. It says we can assume that we know basic properties of derivatives and integrals, but I just am not sure where to begin.

2. Originally Posted by cen0te: "prove by induction that for every natural n and every x>0, $e^x > 1 + x/1! + x^2/2! + \dots + x^n/n!$"

Induction follows a pattern:

1. First we show that what we wish to prove for general $n$ actually holds for the first $n$ which is relevant to the problem. In this case it's $n=0$, and we need to show that for every $x>0$: $e^x > 1$, which I presume you know is true.

2. Second we show that if what we have to prove is true when $n=k$, then it is also true for $n=k+1$. So assume it true for $n=k$; then we have $e^x > 1 + x/1! + x^2/2! + \dots + x^k/k!$ for all $x > 0$. Now consider: $\int_0^y e^x\,dx = [e^x]_0^y = e^y - 1 \quad [1]$ and also: $\int_0^y \left(1 + x/1! + x^2/2! + \dots + x^k/k!\right) dx = \left[x + x^2/2! + x^3/3! + \dots + x^{k+1}/(k+1)!\right]_0^y = y + y^2/2! + y^3/3! + \dots + y^{k+1}/(k+1)! \quad [2]$ Now the integrand in [1] is strictly greater than the integrand in [2] at every point over which the integral is taken. So $\int_0^y e^x\,dx > \int_0^y \left(1 + x/1! + x^2/2! + \dots + x^k/k!\right) dx,$ which can be rewritten, from what we have found above, as $e^y - 1 > y + y^2/2! + y^3/3! + \dots + y^{k+1}/(k+1)!.$ So rearranging and replacing $y$ by $x$ we have $e^x > 1 + x + x^2/2! + x^3/3! + \dots + x^{k+1}/(k+1)!.$

3. That is, we have proven that if what we have to prove for all $n \geq 0$ is true for $n=k$, then it is true for $n=k+1$; also we have proven that it is true for $n=0$, so by the principle of induction we have proven it true for all $n \geq 0$. RonL

3. Ok, I think I understand your proof. I understand induction; it is calculus that I am horribly rusty on. I have not taken any in over 5 years, and I do not even remember the notation. Many thanks for your help. As an aside, could the statement be reworked to support $x<0$?

4. You can also use Taylor's Theorem, but that too requires calculus. $P(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)(x-a)^n}{n!}$

5. Originally Posted by cen0te: "... As an aside, could the statement be reworked to support x<0?" It's not true for $x<0$, since the series has alternating signs and so is alternately greater and less than $e^x$ as $n$ increases. RonL

6. Originally Posted by cen0te: "... I understand induction, it is calculus that I am horribly rusty on. ..." The problem is that the instructions given with the problem indicate that you are expected to use calculus in the proof; induction is not the natural way to prove it. If you know the series expansion for $e^x$ the result is hovering around the obvious pile. RonL

7. There is a second part to the homework problem which says "State and prove the corresponding theorem for x<0." So I have tried, but I am not sure what is being sought. I assume the statement needs to be reworked, but again... I am at a loss.

8. Originally Posted by cen0te: There is a second part to the homework problem which says "State and prove the corresponding theorem for x<0."
So I have tried, but I am not sure what is being sought. I assume the statement needs to be reworked, but again... I am at a loss.

I don't know if this is what they are looking for, but: suppose $x<0$, and let $y=-x$; then $y>0$, so for all $n \in \mathbb{N}$ (the set of natural numbers 0, 1, ... or 1, 2, ..., depending on your preference for how they are defined), $e^y > 1 + y/1! + y^2/2! + \dots + y^n/n!.$ Now substitute $-x$ for $y$ to get $e^{-x} > 1 - x/1! + x^2/2! - \dots + (-1)^n x^n/n!,$ or $e^x < \frac{1}{1 - x/1! + x^2/2! - \dots + (-1)^n x^n/n!}.$ RonL

9. Hmm, I actually thought of that, but for some reason I thought that was basically just cheating. I had assumed that it would have something to do with how $e^x$ with $x<0$ would just keep approaching 0. However, I can't come up with anything that satisfies me, so I will go with your more experienced suggestion. Thanks again.

10. Originally Posted by cen0te: There is a second part to the homework problem which says "State and prove the corresponding theorem for x<0." So I have tried, but I am not sure what is being sought. I assume the statement needs to be reworked, but again... I am at a loss.

For all $x < 0$ and all positive integers $n$, $(-1)^n e^x < (-1)^n\left(1 + x/1! + x^2/2! + \dots + x^n/n!\right).$
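As a numerical sanity check of the inequality, and of the remark in the thread that for $x<0$ the partial sums are alternately greater and less than $e^x$, one can compare $e^x$ with the partial sums directly. This is not a replacement for the induction; the sample values of $x$ and $n$ below are arbitrary.

```python
import math

def partial_sum(x, n):
    """1 + x/1! + x^2/2! + ... + x^n/n!"""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# For x > 0 the exponential really does exceed every partial sum.
for x in (0.5, 2.0, 7.0):
    for n in (1, 3, 10):
        assert math.exp(x) > partial_sum(x, n)

# For x < 0 the difference alternates in sign with n, as noted in the thread.
for n in (1, 2, 3, 4):
    print(n, math.exp(-1.0) - partial_sum(-1.0, n))   # signs: +, -, +, -
```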
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9651560187339783, "perplexity_flag": "middle"}
http://www.physicsforums.com/showpost.php?p=2278941&postcount=15
From first principles, an outcome of any sequence of 100 flips consists of a sequence of H (for Heads) and T (for Tails) of length 100, so $$\Omega$$ would be the set of all $$2^{100}$$ sequences, from all Ts through all Hs. One particular $$\omega$$ would be this one: $$\omega = \underbrace{HH \cdots H}_{\text{length } 50} \overbrace{TT \cdots T}^{\text{length } 50}$$ For the r.v.s I defined, and for this $$\omega$$, $$X(\omega) = Y(\omega) = 50.$$ Notice the incredible amount of savings we have in the move from the original sample space $$\Omega$$, which has $$2^{100}$$ elements, to the set of values of $$X$$ (and of course $$Y$$) - there are only 101 different values to "keep track of". I hope this helps.
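The definitions of $X$ and $Y$ come from an earlier post in the thread that is not reproduced here; purely as an assumption for illustration, the sketch below takes $X$ to count heads and $Y$ to count tails, which is consistent with $X(\omega)=Y(\omega)=50$ for the displayed outcome.

```python
# Assumed definitions (not given in this excerpt): X = number of heads, Y = number of tails.
omega = "H" * 50 + "T" * 50      # the particular outcome written out in the post

X = omega.count("H")
Y = omega.count("T")
print(X, Y)                      # 50 50

print(2**100)                    # |Omega|: about 1.27e30 distinct sequences
print(len(range(0, 101)))        # but X takes only 101 distinct values
```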
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415226578712463, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/57540/list
Return to Question 2 added 32 characters in body A covering map $p:X\to Y$ between topological spaces can be viewed as a fiber bundle $\Sigma\to X\to Y$ with a discrete group $\Sigma=Gal(X/Y)$ as fiber. Such a fiber bundle leads to a long exact sequence of homotopy groups. In this case, if $Y$ is contractible then, of course, so is $X$. I'm wondering what happens if the covering map $p$ is ramified. Is there any relation between the homotopy groups of $\pi_n(X),\pi_n(Y)$ and $\Sigma$? I'm guessing that perhaps the fixed set $X^\Sigma$ might be involved. I'm particularly interested in two cases: 1) When $\Sigma=\Sigma_2$, the two-element group. This occurs often in toric topology. 2) What conditions can force $Y$ to be contractible (or just weakly null-homotopic). 1 Is there a long exact sequence associated to a ramified covering? A covering map $p:X\to Y$ between topological spaces can be viewed as a fiber bundle $\Sigma\to X\to Y$ with a discrete group $\Sigma=Gal(X/Y)$ as fiber. Such a fiber bundle leads to a long exact sequence of homotopy groups. In this case, if $Y$ is contractible then, of course, so is $X$. I'm wondering what happens if the covering map $p$ is ramified. Is there any relation between the homotopy groups of $\pi_n(X),\pi_n(Y)$ and $\Sigma$? I'm guessing that perhaps the fixed set $X^\Sigma$ might be involved. I'm particularly interested in two cases: 1) When $\Sigma=\Sigma_2$, the two-element group. This occurs often in toric topology. 2) What conditions can force $Y$ to be contractible.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507249593734741, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/40958/how-does-gravity-work?answertab=oldest
# How does gravity work? The math, I read, is $F = G\frac{m_{1}m_{2}}{r^{2}}$ What I'm unclear about is how does it work in practice. Say there are two identical pebbles massing 1 kilogram each outside Sol's gravity well close enough to attract each other. What happens then? And, How long does each 'stage' take? Do the pebbles start to orbit around each other and eventually coalesce? Do they, without coalescing, begin to attract other particles around them? - Gravity's just a theory. ;) – AdamRedwine Oct 16 '12 at 18:18 4 "How does gravity work?" "Fine, thanks!" – Keith Thompson Oct 16 '12 at 19:27 1 – Fabian Oct 16 '12 at 23:19 Why the downvote? – Everyone Oct 19 '12 at 19:59 ## 1 Answer Assuming you've placed your two pebbles far from any other masses the only force they will feel is their mutual gravitational attraction. Suppose you place them one metre apart, then the force they feel is given by your equation, and since in your example $m_1$, $m_2$ and $r$ are all equal to one the force each pebble will feel is just $G$ or $6.673 \times 10^{-11}$ Newtons. Each pebble will therefore start accelerating towards the other pebble at $6.673 \times 10^{-11}$ ms$^{-2}$. Lets say you've made your pebbles from granite, which has a density of about 2700 kg/m$^3$, so the radius of the (spherical) pebbles is 0.045 m. When the pebbles collide, after a bit of bouncing around they'll settle down and remain in contact at a spacing of 0.09m. At this spacing the force between them will be about $8.2 \times 10^{-9}$ Newtons. You could make the pebbles orbit each other. The general equations for two bodies orbiting each other are a bit complex if you're not up to speed with calculus, but if you're happy with a circular orbit it's easy to calculate the orbital velocity. For an object moving in a circular orbit the acceleration towards the centre is simply $v^2/r$. In the example above we calculated the acceleration to be $6.673 \times 10^{-11}$ ms$^{-2}$, and the radius of the orbit is half the spacing so $r = 0.5m$. That means the orbital velocity is given by: $$6.673 \times 10^{-11} = \frac{v^2}{0.5}$$ so: $$v = 5.78 \times 10^{-6} \space \text{m/sec}$$ So if you placed the pebbles a metre apart and set each one moving at 5.78 microns per second they would follow circular orbits about their centre of mass. If you want to know more about how two bodies orbit each other have a look at the Wikipedia article on the two body problem. - 2 Just to add: The pebbles won't coalesce because the force pulling them together is very small - much smaller than the material strength. Once you get more than few 10km size lumps of rock their own gravity is strong enough to squash them into a sphere – Martin Beckett Oct 16 '12 at 17:59 A quick follow-up for your leisure time - Once the two pebbles are at a separation of 0.09m, do any other pebbles nearby experience two separate gravity 'field', or a single one? I would think the latter – Everyone Oct 16 '12 at 18:30 1 @Everyone Why would you expect anything like that? There is nothing magic about the "1" in 1 meter. The length of the meter is entirely arbitrary. The gravitational field of extended bodies is actually the sum of the separate field from each infinitesimal bit of mass. It just happened that you can often take the results as if all the mass was at the CoM. – dmckee♦ Oct 16 '12 at 21:50 The second paragraph above mentioned the two pebbles settle down permanently at a distance of 0.09m. So I wondered whether neighbours then begin to experience a stronger 'pull'. 
– Everyone Oct 17 '12 at 9:14
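Here is a short script reproducing the numbers quoted in the answer (the granite pebble radius, the force once the pebbles touch, and the circular-orbit speed). The slight difference from the quoted $8.2\times10^{-9}$ N comes only from the answer rounding the contact spacing to 0.09 m.

```python
import math

G   = 6.673e-11     # gravitational constant used in the answer, m^3 kg^-1 s^-2
m   = 1.0           # mass of each pebble, kg
rho = 2700.0        # granite density from the answer, kg/m^3

# Radius of a 1 kg granite sphere, and the force once the pebbles touch
# (centres separated by two radii).
r = (3.0 * m / (4.0 * math.pi * rho)) ** (1.0 / 3.0)
print(r)                               # ~0.0446 m, i.e. the quoted 0.045 m
print(G * m * m / (2.0 * r) ** 2)      # ~8.4e-9 N (the quoted 8.2e-9 N uses the rounded 0.09 m)

# Circular orbit at 1 m separation: acceleration a = G*m/r^2, orbital radius 0.5 m,
# and v^2 / 0.5 = a, so v = sqrt(0.5 * a).
a = G * m / 1.0**2
print(a, math.sqrt(0.5 * a))           # ~6.67e-11 m/s^2 and ~5.78e-6 m/s
```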
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399874210357666, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/19294?sort=votes
## Dispensing with the notion of infinity for the sake of coverings [closed]

Instead of taking a one-to-one correspondence to mean that each set has the same number of elements, why not use the concept of coverings from topology? The irrational numbers cover the whole numbers, but not vice versa. A hierarchy of coverings instead of infinities. Wouldn't that make those infinities more manageable in those terms? (Yes, I know topology can be expressed in set theory.) - This is in danger of being closed as "not a real question". As the answers below testify, at present it is simply unclear what you mean. Please clarify... – Pete L. Clark Mar 25 2010 at 14:54 In particular, why do you want to "dispense with the notion of infinity"? What is unmanageable about the current notions of infinity? – Pete L. Clark Mar 25 2010 at 14:56

## 2 Answers

By "$X$ covers $Y$" I assume you mean there exists a surjection $f:X \to Y$. The theory you're describing is exactly the same as the standard theory of cardinal numbers. In fact, if $X$ "covers" $Y$ and $Y$ "covers" $X$, then there is a bijection between $X$ and $Y$. The proof is pretty and easy and is a good homework problem. You could also look it up in the beginning of any book that introduces the cardinals. Aside: I don't see what this has to do with topology. I also don't understand what you mean by "Dispensing with the notion of infinity..." -

I don't quite know what you mean by "coverings of topology", but it is possible to formalize a notion of size for infinite sets which relies on the part-whole conception, rather than the bijective correspondence conception. These two views are mutually exclusive, in the sense that while size for finite sets satisfies both properties, infinite sets can only support one of the two. But either choice can be made to work! So, if you require the notion of size to equate two bijective sets, then the even numbers are equal in size to the natural numbers (this is the traditional Cantorian view). You could also take a mereological view, and say that one set is smaller than another if every element of the first set is a member of the second set. In this interpretation, the even numbers are smaller than the natural numbers. A recent issue of the Review of Symbolic Logic had an article about these issues, including both some history of mathematics and more recent logical systems which formalize the mereological view. See Paolo Mancosu's "Measuring the Size of Infinite Collections of Natural Numbers: Was Cantor's Theory of Infinite Number Inevitable?" -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9480395317077637, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/01/15/some-compact-subspaces/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician

## Some compact subspaces

Let's say we have a compact space $X$. A subset $C\subseteq X$ may not be itself compact, but there's one useful case in which it will be. If $C$ is closed, then $C$ is compact.

Let's take an open cover $\{F_i\}_{i\in\mathcal{I}}$ of $C$. The sets $F_i$ are open subsets of $C$, but they may not be open as subsets of $X$. But by the definition of the subspace topology, each one must be the intersection of $C$ with an open subset of $X$. Let's just say that each $F_i$ is an open subset of $X$ to begin with. Now, we have one more open set floating around. The complement of $C$ is open, since $C$ is closed! So between the collection $\{F_i\}$ and the extra set $X\setminus C$ we've got an open cover of $X$. By compactness of $X$, this open cover has a finite subcover. We can throw out $X\setminus C$ from the subcover if it's in there, and we're left with a finite open cover of $C$, and so $C$ is compact.

In fact, if we restrict to Hausdorff spaces, $C$ must be closed to be compact. Indeed, we proved that if $C$ is compact and $X$ is Hausdorff then any point $x\in X\setminus C$ can be separated from $C$ by a neighborhood $U\subseteq X\setminus C$. Since there is such an open neighborhood, $x$ must be an interior point of $X\setminus C$. And since $x$ was arbitrary, every point of $X\setminus C$ is an interior point, and so $X\setminus C$ must be open.

Putting these two sides together, we can see that if $X$ is compact Hausdorff, then a subset $C\subseteq X$ is compact exactly when it's closed.

Posted by John Armstrong | Point-Set Topology, Topology
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 35, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319846034049988, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/6324/are-there-integer-solutions-to-9x-8y-1
Are there integer solutions to $9^x - 8^y = 1$? This came up in proving non-regularity of a certain language (powers of 2 over the ternary alphabet). Any clue to the above equation could help me move forward. Edit: Of course, $x = 1, y = 1$ is a solution. I am looking for non-trivial solutions. - 5 yes x=1,y=1 is a soution – anonymous Oct 8 '10 at 16:11 Thanks Chandru1, I edited it. – user813 Oct 8 '10 at 16:22 @avinash: out of curiosity, how does this come up in proving non-regularity of the language you describe? The approach which first occurs to me is totally different. Are you using the pumping lemma? – Qiaochu Yuan Oct 8 '10 at 21:14 @Qiaochu: Yes, I was trying pumping lemma. But it's going nowhere. How did you solve it? – user813 Oct 9 '10 at 3:48 @avinash: the approach I thought of is unnecessarily complicated. There is a theorem of Berstel which implies that if the language you're looking at is regular, then the number L_n of words of length n must be eventually periodic, and I believe one derives a contradiction from here. – Qiaochu Yuan Oct 9 '10 at 9:43 show 1 more comment 3 Answers Except $x=1$ and $y=1$ there aren't any. We have that $$3^{2x} - 1 = 8^y$$ i.e $$(3^x + 1)(3^x - 1) = 8^y$$ Thus we must have that $$3^x + 1 = 2^m, 3^x - 1 = 2^n$$ Thus $$2^m - 2^n = 2$$ i.e $$2^n(2^{m-n} - 1) = 2$$ Thus $n=1$ and $m=2$. - You didn't use the assumption that x>1; you got n=1 which implies x=1. :-) (Many times a proof by contradiction can be simplified if you try to rewrite it without the contradication… it's usually worth a shot to see if it gets simpler.) – ShreevatsaR Oct 8 '10 at 16:58 @Shree: Right, I will just delete that line. – Aryabhata Oct 8 '10 at 17:07 @Moron, pardon my ignorance, but can you explain why it must be true that $3^x+1=2^m$ and $3^x-1=2^n$? – user1736 Oct 8 '10 at 17:18 4 @user: If A*B = a power of 2 = 2^y, then, both must be powers of 2. (I am counting 1 to be a power of 2 = 2^0). This is because any prime that divides A, must divide 2^y and hence must be 2. – Aryabhata Oct 8 '10 at 17:22 2 @Moron: This is so elementary and beautiful. Thanks. – user813 Oct 8 '10 at 17:44 show 1 more comment The only solution is $x=y=1$. This is a special case of Catalan's conjecture which was proven by Mihailescu in 2002: the only solution in natural numbers of $x^a - y^b = 1$ with $a,b\gt 1$ is $x=3$, $a=2$, $y=2$, and $b=3$. But as Moron points out, it can be deduced much more elementarily. - Equation $\rm\ 3^{2x}-2^{3y}=1\$ is an instance of various special cases of Catalan's Conjecture. First,$\$ making the specialization $\rm\ \ \: z,\:p^n = 3^x,2^{3y}\$ below yields $\rm\ x = 1 = y\$ as desired. LEMMA$\ \$ $\rm z^2 - p^n = 1\ \ \Rightarrow\ \ z,\:p^n = \:3\ ,\:2^3\$ or $\ 2,\:3\$ for $\rm\ \ z,\:p\:,n\in \mathbb N,\ \ p\:$ prime Proof $\rm\ \ \ (z+1)\:(z-1)\: =\: p^n\ \ \Rightarrow\ \ z+1 = p^{\:j},\ \ z-1 = p^k\$ for some $\rm\ j,\:k\in \mathbb N$ $\rm\quad \:\Rightarrow\ \ \ \ 2\ =\ p^{\:j} - p^k\ =\ p^k\: (p^{\:j-k}-1) \ \Rightarrow\ p^k=2\$ or $\rm\ p^k = 1 \ \Rightarrow\ \ldots$ Second, it's simply the special case $\rm\: X = 3^x,\ Y = 2^y\:$ of $\rm\ X^2 - Y^3 = 1\:,\:$ solved by Euler in 1738. Nowadays one can present this solution quite easily using elementary properties of $\rm\ \mathbb Z[\sqrt[3]{2}]\:$, e.g see p.44 of Metsankyla: Catalan's Conjecture: another old diophantine problem solved. See also this MO thread and this MO thread and Schoof: Catalan's Conjecture. 
Note also that Catalan equations are a special case of the theory of generalized Fermat (FLT) equations, e.g. see Darmon's exposition. -
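As a purely numerical complement to the algebraic argument above (not a substitute for it), a brute-force scan over a small window of exponents finds only the trivial solution; the bound of 200 is an arbitrary choice.

```python
# Sanity check of 9**x - 8**y == 1 over 1 <= x, y < 200 (Python handles the big
# integers exactly, so no floating-point issues here).
solutions = [(x, y)
             for x in range(1, 200)
             for y in range(1, 200)
             if 9**x - 8**y == 1]
print(solutions)        # [(1, 1)]
```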
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9251320362091064, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/19159-centralizer-subgroup.html
# Thread: 1. ## Is Centralizer subgroup? Problem: Let G be a group, prove that the centralizers of G is a subgroup of G. Proof: By definitions, the centralizers of G, $C(a) = \{ g \in G:ga=ag \} \forall a \in G$ Now, the identity element, e, has the property of $ea=ae$, thus e is in C(a). So C(a) is not an empty set. Assume C(a) contains more than just {e}, since if that is the case C(a) would be a subgroup. Let x,y be in C(a), then: $xa=ax,ya=ay \forall a \in G$ Then $(xy)a=x(ya)=x(ay)=(xa)y=a(xy)$ Thus xy is in C(a). Now, $(xa)^{-1}=(ax)^{-1}$ $x^{-1}a^{-1}=a^{-1}x^{-1}$ So x^{-1} is in C(a), thus C(a) is a subgroup of G. Q.E.D. Now, I am not sure if I have proven the last part correctly, that is, the inverse of x is in C(a), would anyone please have a look? Oh, and the test is tomorrow morning, normally I would never ask for a free answer, as I would like to work it out myself if at all possible. But would anyone please give me the correct answer if I'm wrong as this problem might show up in the test? If you aren't comfortable with it, I fully understand, I really do appreciate the help I'm getting from here, thanks! K 2. Originally Posted by tttcomrader Problem: Let G be a group, prove that the centralizers of G is a subgroup of G. Proof: By definitions, the centralizers of G, $C(a) = \{ g \in G:ga=ag \} \forall a \in G$ Now, the identity element, e, has the property of $ea=ae$, thus e is in C(a). So C(a) is not an empty set. Assume C(a) contains more than just {e}, since if that is the case C(a) would be a subgroup. Let x,y be in C(a), then: $xa=ax,ya=ay \forall a \in G$ Then $(xy)a=x(ya)=x(ay)=(xa)y=a(xy)$ Thus xy is in C(a). Now, $(xa)^{-1}=(ax)^{-1}$ $x^{-1}a^{-1}=a^{-1}x^{-1}$ So x^{-1} is in C(a), thus C(a) is a subgroup of G. Q.E.D. Everything else was perfect. The only thing you need to show is $x\in C(a)\implies x^{-1} \in C(a)$. This means, $xa = ax$ Thus, $a=x^{-1}ax$ Thus, $ax^{-1}=x^{-1}a$. Q.E.D. Which book you use? 3. Oh, man, I didn't have a chance to look at your reply this morning. But the test was easy, I think I miss one or two questions, should be an A. Thanks. Btw, we use "Contemporary Abstract Algebra" by Joseph A. Gallian. 4. Originally Posted by tttcomrader Oh, man, I didn't have a chance to look at your reply this morning. But the test was easy, I think I miss one or two questions, should be an A. Thanks. Btw, we use "Contemporary Abstract Algebra" by Joseph A. Gallian. Remember you got an A all because of me. 5. Originally Posted by tttcomrader Then $(xy)a=x(ya)=x(ay)=(xa)y=a(xy)$ Thus xy is in C(a). I was having a problem with this step. Is it ok to assume that the operation is associative? If so, then why? Proving closure is the only hang-up I was having on this problem. 6. Yes. Since $a,x,y \in G$, and all elements in $G$ are associative under it's operation, then any subset of $G$ automatically inherits the associativity property. 7. Originally Posted by spoon737 Yes. Since $a,x,y \in G$, and all elements in $G$ are associative under it's operation, then any subset of $G$ automatically inherits the associativity property. Right, because associativity is a property of ALL groups in the first place. I forgot that detail!
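None of this is in the original thread, but the subgroup checks are easy to watch in a small concrete group. The sketch below computes the centralizer of a chosen element $a$ in $S_3$ (permutations encoded as tuples, an arbitrary encoding) and verifies the identity, closure, and inverse properties discussed above.

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]], for permutations stored as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    """Inverse permutation: position j holds the index i with p[i] = j."""
    return tuple(sorted(range(len(p)), key=lambda i: p[i]))

G = list(permutations(range(3)))          # S_3
e = (0, 1, 2)
a = (1, 0, 2)                             # a fixed element (a transposition); any a in G works

C = [g for g in G if compose(g, a) == compose(a, g)]   # the centralizer C(a)
print(C)                                  # [(0, 1, 2), (1, 0, 2)]

assert e in C                                            # identity
assert all(compose(x, y) in C for x in C for y in C)     # closure
assert all(inverse(x) in C for x in C)                   # inverses
```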
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553042650222778, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/32409/examples-of-folk-theorems/33699
## Examples of “folk theorems” ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) In this post, Justin gives a quote about Raoul Bott that has this line in it: He talked about 'folk' theorems... theorems everyone knew, but were never written down. What are some good/interesting examples of these types of theorems? - 3 I gave a sort of weak example on page 1 of math.uga.edu/~pete/coveringnumbersv2.pdf -- this is a simple linear algebra result that often gets asked as a problem (e.g. on MO!) but is rarely discussed in standard texts. – Pete L. Clark Jul 18 2010 at 23:18 1 Any response to this question ceases to be an example! – Qiaochu Yuan Sep 12 2011 at 1:01 ## 11 Answers The example I first learned was the following: a 2-D TQFT is equivalent to a Frobenius algebra. This is discussed and stated as a folk theorem by Voronov; later, a careful proof was written up and published by Lowell Abrams. See also the book by Joachim Kock. - I think it was also proved in a paper of Sawin: ams.org/mathscinet-getitem?mr=1359651 – Agol Jun 14 at 19:46 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. In the context of game theory, the term «folk theorem» has a rather specific meaning... - 1 Another class of examples of Folk theorems can be found in the references here, ams.org/mathscinet/search/… :) – Mariano Suárez-Alvarez Jul 19 2010 at 4:21 In category theory there is a 'folk' model structure on the category Cat, where the weak equivalences are the equivalences of categories. There is a similar model structure on 2Cat, with weak equivalences being equivalences of 2-categories (weak ones, I presume) The former was not written down for a long time, but the latter was published by Steve Lack. Andre Joyal is not in favour of the name 'folk model structure', and there was discussion on this at the nForum (starting at that comment and continuing). That the existence of this model structure is a 'folk' theorem is a bit of folklore itself, as pointed out by Joyal at this comment. - It seems now that people are trying to change the name of this to be the 'canonical' model structure on Cat – David White Mar 18 at 16:57 This is true, and you can see it in the discussion I link to. – David Roberts Mar 20 at 22:59 On Folk Theorems is an old classic from computer science. Although the title suggests it's about folk theorems in general, it's mostly about the theorem which states, roughly, that programs written in imperative programming languages only need one loop. - I was at a queueing theory lecture recently where the lecturer talked about Little's Theorem and Wolff's PASTA theorem as having been around as folk theorems for a long time before they were published with proof. - In Fudenberg's book Game Theory, the following was listed as a folk theorem: The folk theorem for repeat games assert that if players are sufficiently patient then any feasible, individual rational payoffs can be enforced by an equilibrium. Thus, in the limit of extreme patience, repeated play allows any payoff to be an equilibrium outcome. - There are quite a few examples in additive combinatorics of theorems or tricks that were talked about and 'known' a few years before anyone published a proof of them. 
For example, let $\phi(n)$ be the largest number such that every set A of n reals contains a subset B of cardinality $\phi(n)$ such that no element of A can be represented as the sum of two distinct elements of B ('B is sum-free with respect to A'). It was remarked by both Klarner and Erdos that $\phi(n)\geq\log n-O(1)$ for large n, but it was ten years before Choi published a proof of this (a simple application of Turan's theorem on independent sets in graphs). Presumably phenomena like this occurs because those who think of it see it as too simple or straightforward to be worth the bother of publishing. A different type of example is the idea that if $f:G\to\mathbb{C}$ is a function on a finite abelian group $G$ with a small $L^2$ norm, then it can be decomposed as the sum of structured parts (with a small error term). For example, $f=f_1+f_2+f_3$, where $f_1$ is the linear combination of a small number of characters, $f_2$ is Gowers uniform and $f_3$ has $L^2$ norm less than $\epsilon$. This kind of folk theorem arises because it is a commonly applied heuristic that can be made precise in a variety of different ways, often jury-rigged for a specific application. - Stark, The Gauss class-number problems, available at http://www.uni-math.gwdg.de/tschinkel/gauss-dirichlet/stark.pdf writes, on page 251, "We define the Epstein zeta functions, $$\zeta(s,Q)=(1/2)\sum_{m,n\ne0,0}Q(m,n)^{-s}$$ ... Theorem 4.1 (Folk Theorem.) Let $c\gt1/4$ be a real number and set $$Q(x,y)=x^2+xy+cy^2,$$ with discriminant $d=1-4c\lt0$. Then for $c\gt41$, $\zeta(s,Q)$ has a zero $s$ with $\sigma\gt1$." He follows this with a "Folk proof." - My advisor once told me that the following statement (which I read in Ravenel's Complex Cobordism and Stable Homotopy Groups of Spheres) was a Folk Theorem: For $p>2$ and in a certain range, the Adams Spectral Sequence coincides with the homology Bockstein spectral sequence It turns out the range is $t<(2p-1)s-2$, and a reference is Haynes Miller's paper - There are some "folkish" elements in something I just published: The "80/20" account of Pareto's law circulates among management people who don't care about mathematics, and the probability density proportional to $x \mapsto x^{-\alpha - 1}$ on $(x_0,\infty)$ for some $x_0>0$ (and 0 elsewhere) is found in many probability and statistics books, and actually gets used in various fields to which mathematics gets applied. But the idea that they are in some sense the same thing seems to have circulated only in a "folk" manner for many years until now. - Another nice type of 'folk theorems' I have seen is of a sort where some relatively straight forward generalization of a well established theorem is assumed and then used for its heuristic or explanatory value. I find this is often used in fields where mathematicians interact with non-mathematicians and although it is completely non-rigorous (and sometimes even misleading!) most often it helps in exposition and for building intuition. An example would be the "folk theorem of evolutionary game theory" (as used by Hofbauer and Sigmund, BAMS 2003) on certain kinds of correspondences of Nash equilibrium and dynamic approaches. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.958890974521637, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/53106/does-increasing-the-density-of-a-solution-decrease-the-rate-of-temperature-chang?answertab=active
# Does increasing the density of a solution decrease the rate of temperature change? I did an experiment to compare whether salt water (5% concentration of salt) or fresh water of the same volume took longer to heat up to a certain temperature. We found that salt water took longer to heat up than fresh water. Is this due to density? Specific heat capacity? Or should I have gotten different results? - ## 1 Answer The thermal conductivity of saline water is less than that of pure water. See this page for graphs of thermal conductivity against salt content. Note that a secondary effect is that adding salt to water actually lowers the specific heat, and this will increase the rate of temperature change. See the question Why does salty water heat up quicker than pure water? and its answers. In particular, follow the link I provide to the paper by Zwicky. However, because you're comparing equal volumes, you have more mass to heat up, since the density of sea water is greater than the density of pure water. If you take sea water (about 3.5% salt - I chose this because data is easily Googlable) the specific heat is 3.993 kJ/kg/K, compared with water at 4.184 kJ/kg/K. However, the density of seawater is 1037 kg/m$^3$, so the specific heat per cubic metre is almost exactly the same as for pure water. - Thanks, this is perfect – VikeStep Feb 6 at 6:30
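A quick back-of-envelope check of that last point, using only the figures quoted in the answer (so the numbers are approximations, not authoritative data):

```python
# Heat capacity per unit volume, using the figures quoted in the answer above.
cp_fresh, rho_fresh = 4.184, 1000.0   # kJ/(kg K), kg/m^3 for pure water
cp_sea, rho_sea = 3.993, 1037.0       # kJ/(kg K), kg/m^3 for ~3.5% salt water

vol_fresh = cp_fresh * rho_fresh      # kJ/(m^3 K)
vol_sea = cp_sea * rho_sea

print(f"pure water: {vol_fresh:.0f} kJ/(m^3 K)")
print(f"sea water:  {vol_sea:.0f} kJ/(m^3 K)")
print(f"relative difference: {100 * (vol_fresh - vol_sea) / vol_fresh:.1f} %")
```

The volumetric heat capacities differ by only about one percent, which is why equal volumes of fresh and salt water need almost the same amount of heat per degree of temperature rise.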
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9341358542442322, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/81466/list
## Return to Answer Revision 2 (added 117 characters in body); the earlier revision is identical except that it ends at "...for some choice of simple roots." The question itself seems too elementary for this site, since it just involves the standard axiomatic treatment of root systems as in Bourbaki, Groupes et algebres de Lie, VI.1.7. The question is really about an arbitrary reductive algebraic group (with nontrivial derived group) over an algebraically closed field, along with its Borel subgroups in natural bijection with systems of positive roots relative to a fixed maximal torus. Such a torus $T$ lies in exactly $|W|$ Borel subgroups, where $W=N_G(T)/T$ is the Weyl group. At this point the axiomatic theory takes over and provides straightforward criteria for a given "closed" set of roots to be the positive roots for some choice of simple roots: the set has to be disjoint from its negative and together with its negative exhaust all roots (Prop. 20, Cor. 1).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331355094909668, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/120290/list
## Return to Answer Revision 2 (deleted 1 character in body: "for the matrix entries" became "to the matrix entries"); the rest is unchanged from revision 1. Here is a more bare hands explanation. Let $\phi$ be the field automorphism of ${\rm SL}_n(q^2)$ that acts by applying $x \mapsto x^q$ to the matrix entries. Let $\gamma$ be the graph automorphism that maps matrices $A$ to their inverse-transpose $A^{- \mathrm{T}}$. Then ${\rm SL}_n(q)$ is the subgroup of ${\rm SL}_n(q^2)$ that is centralized by $\phi$, whereas the group ${\rm SU}_n(q^2)$ (which is confusingly often denoted by ${\rm SU}_n(q)$) that fixes the identity matrix as unitary form is the subgroup of ${\rm SL}_n(q^2)$ that is centralized by $\phi\gamma$. The automorphism $\gamma$ is outer for $n>2$, but when $n=2$ it is inner and acts in the same way as conjugation by the matrix `$\left( \begin{array}{rr}0&1\\ -1&0\end{array} \right)$`. It turns out in this case that $\phi$ and $\phi\gamma$ are conjugate in the automorphism group of ${\rm SL}_2(q^2)$ by (the projective image of) an element $g \in {\rm GL}_2(q^2)$, and hence that ${\rm SL}_2(q)$ is conjugate to ${\rm SU}_2(q^2)$ in ${\rm GL}_2(q^2)$. With a bit of calculation on the back of an envelope, we find that `$g = \left( \begin{array}{rr}a&b\\ c&d\end{array} \right)$`, where $b = -t^qa^q$ and $d= -t^qc^q$ for some field element $t$ with $t^{q+1} = -1$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9684198498725891, "perplexity_flag": "head"}
http://nrich.maths.org/2821/index?nomenu=1
## 'Multiplication Square' printed from http://nrich.maths.org/ ### Show menu Take a look at the multiplication square below: Pick any 2 by 2 square and add the numbers on each diagonal. For example, if you take: the numbers along one diagonal add up to $77$ ($32 + 45$) and the numbers along the other diagonal add up to $76$ ($36 + 40$). Try a few more examples. What do you notice? Can you show (prove) that this will always be true? Now pick any 3 by 3 square and add the numbers on each diagonal. For example, if you take: the numbers along one diagonal add up to $275$ ($72 + 91 + 112$) and the numbers along the other diagonal add up to $271$ ($84 + 91 + 96$). Try a few more examples. What do you notice this time? Can you show (prove) that this will always be true? Now pick any 4 by 4 square and add the numbers on each diagonal. For example, if you take: the numbers along one diagonal add up to $176$ ($24 + 36 + 50 + 66$) and the numbers along the other diagonal add up to $166$ ($33 + 40 + 45 + 48$). Try a few more examples. What do you notice now? Can you show (prove) that this will always be true? Can you predict what will happen if you pick a 5 by 5 square, a 6 by 6 square ... an n by n square, and add the numbers on each diagonal?
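If you want to test many examples at once before attempting a proof, the short Python sketch below builds a multiplication square and compares the two diagonal sums for every block of each size; the 12 by 12 grid size is an arbitrary choice.

```python
# Compare the two diagonal sums for every k-by-k block of a multiplication
# square.  N = 12 is an arbitrary choice; any size gives the same picture.
N = 12
table = [[(r + 1) * (c + 1) for c in range(N)] for r in range(N)]

for k in range(2, 6):
    differences = set()
    for r in range(N - k + 1):
        for c in range(N - k + 1):
            main = sum(table[r + i][c + i] for i in range(k))          # one diagonal
            anti = sum(table[r + i][c + k - 1 - i] for i in range(k))  # the other
            differences.add(main - anti)
    print(f"{k} by {k} blocks: set of differences = {sorted(differences)}")
```

Whatever grid size you pick, the printed set of differences for each block size is a good starting point for the proofs asked for above.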
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8315166234970093, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/146661/help-me-evaluate-limit-of-sequence/146682
# Help me evaluate limit of sequence I have this limit, and I have no idea how to approach it: $$\lim_{n \rightarrow + \infty } \left(\frac{n^3}{4n-7}\right)\left(\cos\left(\frac1n\right)-1\right)$$ It turns out to be of indeterminate form; how do I solve it? - As $n \to\infty$ we get a $"0\cdot\infty"$ kind of a limit to calculate. This can be done using L'Hopital's rule. – Amihai Zivan May 18 '12 at 10:58 Can't apply l'Hospital's rule because it is the limit of a sequence!!! If I solve it with Derive I get the result -1/8 – Carmine Paternoster May 18 '12 at 11:11 3 And $-\frac18$ is correct. – Brian M. Scott May 18 '12 at 11:18 ## 6 Answers $$\lim_{n\to\infty}\frac{\cos(1/n)-1}{\frac{4n-7}{n^3}}\;,$$ let $x=\frac{1}{n}$: $$\lim_{x\to0}\;\frac{\cos(x)-1}{4x^2-7x^3}$$ Applying l'Hospital's rule, $$\lim_{x\to0}\;\frac{-\sin(x)}{8x-21x^2}$$ Applying l'Hospital's rule again, $$\lim_{x\to0}\;\frac{-\cos(x)}{8-42x} =\frac{-1}{8}$$ - Thanks very much for your clear explanation – Carmine Paternoster May 18 '12 at 11:29 anytime anytime – Tomarinator May 18 '12 at 11:30 When you have an $\infty\cdot0$ indeterminate form, the standard trick is to convert it to an $\frac{\infty}{\infty}$ or $\frac00$ form by shifting one of the factors into the denominator. Here, for instance, you might try rewriting the limit as $$\lim_{n\to\infty}\frac{\cos(1/n)-1}{\frac{4n-7}{n^3}}\;,$$ since $\frac1{\cos(1/n)-1}$ doesn’t look like a very nice thing to have in your denominator. This is a genuine $\frac00$ form, so l’Hospital’s rule applies. (You may have to apply it more than once.) - Can't apply l'Hospital's rule because it is the limit of a sequence!!! – Carmine Paternoster May 18 '12 at 11:10 1 @Carmine: You certainly can: it’s a basic fact about sequences when they’re defined by nice differentiable functions. – Brian M. Scott May 18 '12 at 11:11 Use $\cos(x) -1 = -2\sin^2(x/2)$ and let $x=\frac{1}{n}$: $$\lim_{x\to0}\;\frac{-2\sin^2(x/2)}{4x^2-7x^3} = \lim_{x\to0}\;\frac{-\sin^2(x/2)}{x^2/4}\cdot\frac{1}{2(4-7x)} = -\lim_{x\to0}\;\left(\frac{\sin(x/2)}{x/2}\right)^2 \cdot\lim_{x\to0}\;\frac{1}{2(4-7x)} = \frac{-1}{8}$$ - $$\lim_{n\to\infty } \frac{n^3}{4n-7} \left(\cos\left(\frac1n\right)-1\right) = \lim_{n\to\infty} \frac{n}{4n-7}\cdot \frac{\cos\left(\frac1n\right)-1}{\frac1{n^2}}$$ Maybe you have memorized that $\lim\limits_{x\to 0} \frac{1-\cos x}{x^2}=\frac12$; if not, you can get this limit by applying l'Hospital's rule twice. For $n\to\infty$ you have $\frac1n\to 0$, hence $$\lim\limits_{n\to\infty} \frac{\cos\left(\frac1n\right)-1}{\frac1{n^2}} = -\frac12.$$ The other limit, $\lim\limits_{n\to\infty} \frac{n}{4n-7}$, should be easy. - Hint The proofs by l’Hospital and power series essentially use (second) derivatives. One can eliminate these advanced techniques and use only derivatives. Changing variables $\rm\: x = 1/n\:$ $$\rm \lim_{x\to 0}\: \frac{\cos(x)-1}{x^2}\:\frac{1}{4 - 7\:\!x}$$ has latter fraction $\to \dfrac{1}4\:$ and former $\to \dfrac{\cos''(0)}2 = -\dfrac{\cos(0)}2 = -\dfrac{1}2\:$ by the formula $$\rm f''(0) =\: \lim_{x\to 0}\:\frac{f(x) - 2\: f(0) + f(-x)}{x^2}\: \left[\:=\ 2\:\lim_{x\to 0}\frac{f(x)-f(0)}{x^2}\ \ if\ \ f(-x) = f(x) \right]$$ - If you have some background knowledge of power series, the following approach may be interesting. It uses the fact that a power series $\sum a_n(x-b)^n$ for $f(x)$ typically gives a very good indication of the behaviour of $f(x)$ near $x=b$. It is convenient, but not necessary, to let $x=1/n$.
Recall that the MacLaurin series for $\cos x$ is given by $$1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\cdots.$$ Thus $$\cos x-1=-\frac{x^2}{2!}+O(x^4).$$ Note that $$\frac{n^3}{4n-7}=\frac{1}{x^2}\frac{1}{4-7x}.$$ So our product is $$\frac{1}{4-7x}\left(-\frac{1}{2!}+O(x^2)\right).$$ Finally, let $x\to 0^+$. The term $\frac{1}{4-7x}$ approaches $\frac{1}{4}$ and the $O(x^2)$ term approaches $0$, so the limit is $-\frac18$. Added: We can also use the following early calculus idea, which is in effect not far from the method used by Prasad G. We are interested in $$\lim_{x\to 0}\frac{\cos x -1}{4x^2-7x^3}.$$ Multiply top and bottom by $\cos x+1$, noting that then the top becomes $\cos^2 x-1$, which is $-\sin^2 x$. So we want $$\lim_{x\to 0}\:\left(\frac{\sin x}{x}\right)^2\frac{-1}{(4-7x)(\cos x+1)}.$$ Let $x\to 0$, and use the fact that $\frac{\sin x}{x}\to 1$ as $x\to 0$; again the limit is $-\frac18$. -
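For anyone who wants to double-check the value $-\frac18$, here is a small SymPy sketch (assuming SymPy is available) that evaluates the limit symbolically and also samples the expression at a large $n$:

```python
# Symbolic and numerical check of the limit (assumes SymPy is installed).
import sympy as sp

n = sp.symbols('n', positive=True)
expr = (n**3 / (4*n - 7)) * (sp.cos(1/n) - 1)

print(sp.limit(expr, n, sp.oo))       # -1/8
print(expr.subs(n, 10**6).evalf())    # roughly -0.125
```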
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 16, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325485825538635, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/44023/does-adding-corefinements-to-a-grothendieck-pretopology-change-the-topos/44390
## Does adding “co”refinements to a Grothendieck pretopology change the topos? Suppose we have a Grothendieck pretopology $\tau$ on a category C with fibered products. Now define a new Grothendieck pretopology $\tau'$ consisting of all families of morphisms refinable by $\tau$-covers. That is, the new covers are the families `$\{V_\beta \to X\}$` such that there exists some $\tau$-cover `$\{U_\alpha \to X\}$` and a factorisation `$U_\alpha \to V_{\beta_\alpha} \to X$` for each $\alpha$. This new set of families is also a Grothendieck pretopology, and the question is: do they give the same topos? That is, is a presheaf a $\tau$-sheaf if and only if it is a $\tau'$-sheaf? Edit: I couldn't read the relevant page in the Elephant either, but Mike's answer led me to the saturation section of http://ncatlab.org/nlab/show/coverage after which I worked out how to prove it myself. If someone explains to me how to typeset diagrams, I'll write up the answer. - When you want to use set brackets {} you need to surround the display in backticks. – Harry Gindi Oct 29 2010 at 1:16 ## 4 Answers The answer is yes. David Roberts had the right idea—adding those new covering families gives you a new pretopology which generates the same Grothendieck topology—but not because it's a sieve completion, rather because there is an additional saturation condition in the definition of Grothendieck topology (in addition to saying that it consists of sieves) which essentially gives you this property. It's not hard to check that any presheaf which is a sheaf for your original pretopology must also be one for the new one you define. You can find it as C2.1.6 in the Elephant. Note what this does not say: it's not necessarily true that if you have just a pair of families with the same codomain, one of which corefines the other, that a sheaf for one of them is necessarily a sheaf for the other. The proof uses the assumption that the first covering family is part of a pretopology, and in particular can be pulled back along any morphism to another covering family, for which your presheaf is also a sheaf. - Ah yes - saturation. I was going to mention that at some point, but Google books wouldn't let me look at that page of the Elephant to check my intuition. – David Roberts Oct 29 2010 at 5:33 Edit again: this answer is wrong, see the comments. The new set of families (for each object $X$) is called the sieve generated by the existing covers of $X$. One term for a Grothendieck pretopology is a basis for a Grothendieck topology, and different bases can give rise to the same Grothendieck topology. All of them, and the topology they generate, have the same sheaves. See here for example. Edit: Actually it is proposition C.2.1.9 in Johnstone's Sketches of an Elephant (Google books ) - Hi David, thanks for your reply. From what I can tell, I would get the sieves if in my question I had written $V_{\beta_\alpha} \to U_\alpha \to \to X$ in the place of $U_\alpha \to V_{\beta_\alpha} \to X$. – anon Oct 28 2010 at 23:14 Sorry for the double arrow, that shouldn't be there.
– anon Oct 28 2010 at 23:15 Using the language of sieves, my question is equivalent to asking if the sieve on $X$ generated by $\{V_\beta \to X\}$ is equal to a sieve generated by some $\tau$-cover (possibly a different $\tau$-cover to $\{U_\alpha \to X\}$). – anon Oct 28 2010 at 23:17 For some context, the first sentence in Section 1 of Goodwillie and Lichtenbaum's paper on the $h$-topology claims that the answer to my question is yes, the way I read it, but I don't see why. – anon Oct 28 2010 at 23:20 Hmm, yes, you are right. I'll think about this some more. – David Roberts Oct 28 2010 at 23:34 I think that the topologies are the same: 1) Let $\widetilde{\mathscr{C} }$ be the topos of $\tau$-sheaves. Given a family $g_i: X_i\to X$, $i\in I$, the following are equivalent: a) $\cup_{i\in I} Image(g_i) = X$ in $\widetilde{\mathscr{C} }$ (to simplify notation, all $\mathscr{C}$ objects and situations are translated into $\widetilde{\mathscr{C}}$ by the Yoneda embedding and the associated sheaf functor). b) The natural morphism $\coprod_{i\in I} X_i \to X$ is an epi in $\widetilde{\mathscr{C} }$. c) The natural diagram $\coprod_{i,j\in I}X_{i,j}\rightrightarrows\coprod_{i\in I}X_{i}\to X$ in $\widetilde{\mathscr{C} }$ is a coequalizer (where $X_{i,j}:=X_i\times_X X_j$). d) For any $F\in \widetilde{\mathscr{C}}$ the natural diagram $F(X)\to \prod_{i\in I}F(X_i) \rightrightarrows \prod_{i,j} F(X_{i, j})$ is an equalizer. PROOF: Just observe that in $\widetilde{\mathscr{C} }$ any epi is a coequalizer, hence the coequalizer of its kernel pair, and that coproducts are disjoint, i.e. behave coherently (commute) with pullbacks. 2) The (old) $\tau$-coverings are also $\tau'$-coverings (consider the trivial factorization where the first morphism is the identity). We only have to prove that for any $F\in \widetilde{\mathscr{C}}$ and any $\tau'$-covering $g_i: X_i\to X$, $i\in I$, the diagram in (d) is exact (i.e. an equalizer diagram), or equivalently that (a) is true. But from the factorizations $U_\alpha \to V_{\beta_\alpha }\to X$ we get $\coprod_\alpha U_\alpha \to \coprod_\beta V_\beta \to X$, and this composition is an epi, hence $\coprod_\beta V_\beta \to X$ is an epi. Excuse me for taking your time if I am wrong. - I think you get the same sheaves if and only if your topos of sheaves can be expressed as sheaves on some singleton pretopology: If $V \stackrel{f}{\rightarrow} X$ has the property that there exists a covering family $$\left(U_\alpha \stackrel{i_\alpha}{\rightarrow}X\right)_\alpha$$ and for each $\alpha$ a map $\lambda_\alpha:U_\alpha \to V$ such that $f \circ \lambda_\alpha=i_\alpha$, that implies that $ay(f):ay(V) \to ay(X)$ is an epimorphism of representable sheaves, where $a$ is sheafification and $y$ is Yoneda. By Corollary 7, p. 144 of Sheaves in Geometry and Logic, this means that the sieve generated by the singleton $f$ is a covering sieve. So if anything, you still have AT LEAST as many sheaves as before. Conversely, if my topology can be generated by singletons, any singleton cover trivially satisfies your requirements. So, in summary, I think what you are describing is some sort of "singleton completion", which seems to be a way of making your topos locally-connected. - Note that anon isn't talking about a 'singleton completion', but taking general covering families.
I think I have an argument why the singleton pretopology may not have the same sheaves as the original pretopology, in the case that the original pretopology isn't superextensive, over at the nForum: math.ntnu.no/~stacey/Mathforge/nForum/… – David Roberts Oct 29 2010 at 1:00 I know he wasn't TRYING to take a singleton covering family, but the point is, if you have a family of maps $f_\beta$ which satisfy the requirements he asked, each map $f_\beta$ is already a singleton cover by my argument. – David Carchedi Oct 29 2010 at 1:06 But in that case, if C has an initial object, as you need pullbacks of covers to be covers, this could make the map 0 -> X a cover. Also, if C is extensive, then you'd want the singleton covers generated by {U_i -> X} to be the same as those generated by \coprod U_i -> X, which would exclude f_\beta. – David Roberts Oct 29 2010 at 1:20 How is $0 \to X$ a cover? You'd need a cover $U_\alpha$ of $X$ such that each $U_\alpha \to X$ factored through $0 \to X$. I'm just not seeing that. – David Carchedi Oct 29 2010 at 1:23 I shouldn't have used $X$ in $0 \to X$. Work with Top to keep things concrete. Let X be a space, covered by open sets $\{j_a:U_a \to X\}$, and let $Z \hookrightarrow X$ be a subspace disjoint from some $U_b$, and let $f_a = j_a$ for each $a$. $f_b$ can't be a cover, because if it were, $Z\times_X U_b = 0 \to Z$ should be a cover. I'm just saying that if C doesn't have lots of coproducts, the singleton pretopology generated by the given pretopology could be very different to the one outlined in the question. – David Roberts Oct 29 2010 at 2:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427009224891663, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/97961-prior-posterior-distributions.html
# Thread: 1. ## Prior and posterior distributions A geologist researching seismic activity in south-east Turkey is able to specify her prior beliefs regarding the parameter of the Poisson distribution. She tells us that $\theta$ can be modelled with a gamma distribution, i.e. $\theta\sim\Gamma(\alpha,\lambda)$, and so our prior distribution for $\theta$ is of the form $\pi(\theta)=\frac{\lambda^{\alpha}}{\Gamma(\alpha)}\theta^{\alpha-1}e^{-\lambda\theta}, \qquad \theta, \alpha, \lambda\geq0.$ Specifically, the geologist specifies that $\theta\sim\Gamma(6,2)$. a) Use the prior distribution for $\theta$ specified by the geologist, and the likelihood function $L(\theta|\underline{x})=\frac{e^{-2\theta}\theta^9}{4320}$, to obtain the posterior distribution for $\theta$ in light of this data: $\pi(\theta|\underline{x}=(6,3))$. b) What is the posterior mean for the number of seismic earth tremors per week? How has the mean changed in light of the data? I've gotten myself in a right tizzy with this. Now I know how to find the posterior, but with this extra data added in I'm not entirely sure what I'm meant to be doing. Any ideas?
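Since the Poisson likelihood and the gamma prior are conjugate, the posterior is again gamma: $\pi(\theta\mid\underline{x})\sim\Gamma(\alpha+\sum_i x_i,\ \lambda+n)$, which for the prior $\Gamma(6,2)$ and the data $\underline{x}=(6,3)$ gives $\Gamma(15,4)$, with posterior mean $15/4=3.75$ against a prior mean of $6/2=3$. Here is a minimal Python sketch (assuming NumPy and SciPy are available) that checks this conjugate update numerically:

```python
# Check the gamma-Poisson conjugate update numerically (assumes SciPy).
import numpy as np
from scipy import stats, integrate

alpha, lam = 6.0, 2.0          # prior Gamma(6, 2) with shape alpha, rate lambda
data = np.array([6, 3])        # weekly tremor counts

def unnorm_posterior(theta):
    prior = stats.gamma.pdf(theta, a=alpha, scale=1.0 / lam)
    like = np.prod(stats.poisson.pmf(data[:, None], theta), axis=0)
    return prior * like

theta = np.linspace(1e-6, 30, 20001)
post = unnorm_posterior(theta)
post /= integrate.trapezoid(post, theta)          # normalize numerically

post_mean = integrate.trapezoid(theta * post, theta)
print("numerical posterior mean:", round(post_mean, 4))          # ~3.75

# Closed form: Gamma(alpha + sum(data), lam + len(data)) = Gamma(15, 4)
print("conjugate posterior mean:", (alpha + data.sum()) / (lam + len(data)))
```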
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8745534420013428, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4271803
Physics Forums | | | | |------------------------------------------------------------------------------------------|----|--------| | View Poll Results: What do observed violation of Bell's inequality tell us about nature? | | | | Nature is non-local | 10 | 30.30% | | Anti-realism (quantum measurement results do not pre-exist) | 15 | 45.45% | | Other: Superdeterminism, backward causation, many worlds, etc. | 8 | 24.24% | | Voters: 33. You may not vote on this poll | | | Page 6 of 17 « First < 3 4 5 6 7 8 9 16 > Last » Recognitions: Gold Member ## What do violations of Bell's inequalities tell us about nature? Quote by ttn You think that, by saying there are no pre-existing values, we can consistently maintain locality....That is, you do not accept that Einstein/EPR validly argued "from locality to" pre-existing values. That is, you think that it is possible to explain the perfect correlations locally but without pre-existing values. This is precisely why I issued "ttn's challenge" in my first post in this thread: please display an actual concrete (if toy) model that explains the perfect correlations locally without relying on pre-existing values. This is the part that always confused me. What difference would there be between a local vs non-local non-realism? Maudlin notes this, I think, when he writes: The microscopic world, Bohr assured us, is at least unanschaulich (unvisualizable) or even non-existent. Unvisualizable we can deal with—a 10-dimensional space with compactified dimensions is, I suppose, unvisualizable but still clearly describable. Non-existent is a different matter. If the subatomic world is non-existent, then there is no ontological work to be done at all, since there is nothing to describe. Bohr sometimes sounds like this: there is a classical world, a world of laboratory equipment and middle-sized dry goods, but it is not composed of atoms or electrons or anything at all. All of the mathematical machinery that seems to be about atoms and electrons is just part of an uninterpreted apparatus designed to predict correlations among the behaviors of the classical objects. I take it that no one pretends anymore to understand this sort of gobbledegook, but a generation of physicists raised on it might well be inclined to consider a theory adequately understood if it provides a predictive apparatus for macroscopic events, and does not require that the apparatus itself be comprehensible in any way. If one takes this attitude, then the problem I have been trying to present will seem trivial. For there is a simple algorithm for associating certain clumped up wavefunctions with experimental situations: simply pretend that the wavefunction is defined on a configuration space, and pretend that there are atoms in a configuration, and read off the pretend configuration where the wavefunction is clumped up, and associate this with the state of the laboratory equipment in the obvious way. If there are no microscopic objects from which macroscopic objects are composed, then as long as the method works, there is nothing more to say. Needless to say, no one interested in the ontology of the world (such as a many-worlds theorist) can take this sort of instrumentalist approach. Can the world be only wavefunction? In Ch. 4 of "Many Worlds?: Everett, Quantum Theory, and Reality" So , if non-realism, then the issue of locality vs non-locality seems kind of pointless since there doesn't appear to be any ontological issues. 
I mean what ontological difference would there be between the local vs non-local version of non-realism? Anyway, that's how I understood it or I'm not getting it. As I posted previously, I think Gisin argues similarily here: What is surprising is that so many good physicists interpret the violation of Bell’s inequality as an argument against realism. Apparently their hope is to thus save locality, though I have no idea what locality of a non-real world could mean? It might be interesting to remember that no physicist before the advent of relativity interpreted the instantaneous action at a distance of Newton’s gravity as a sign of non-realism... Is realism compatible with true randomness? http://arxiv.org/pdf/1012.2536v1.pdf And even a Bayesian argument seems hard to swallow because as Timpson notes: We just do look at data and we just do update our probabilities in light of it; and it’s just a brute fact that those who do so do better in the world; and those who don’t, don’t. Those poor souls die out. But this move only invites restatement of the challenge: why do those who observe and update do better? To maintain that there is no answer to this question, that it is just a brute fact, is to concede the point. There is an explanatory gap. By contrast, if one maintains that the point of gathering data and updating is to track objective features of the world, to bring one’s judgements about what might be expected to happen into alignment with the extent to which facts actually do favour the outcomes in question, then the gap is closed. We can see in this case how someone who deploys the means will do better in achieving the ends: in coping with the world. This seems strong evidence in favour of some sort of objective view of probabilities and against a purely subjective view, hence against the quantum Bayesian... The form of the argument, rather, is that there exists a deep puzzle if the quantum Bayesian is right: it will forever remain mysterious why gathering data and updating according to the rules should help us get on in life. This mystery is dispelled if one allows that subjective probabilities should track objective features of the world. The existence of the means/ends explanatory gap is a significant theoretical cost to bear if one is to stick with purely subjective probabilities. This cost is one which many may not be willing to bear; and reasonably so, it seems. Quantum Bayesianism: A Study http://arxiv.org/pdf/0804.2047v1.pdf Recognitions: Gold Member Science Advisor Quote by ttn Luckily, truth is not decided by majority vote. So far -- since nobody has risen to answer my challenge -- all the results prove is that 12 people hold a view that they have no actual basis for. Or maybe yours is not a strong enough argument. I will point out: I am not aware of any Bohmian that would say that EPR was correct in believing: It is unreasonable to require that only those observables which can be simultaneously measured have reality. I.e. that counterfactual observables do have reality. So in my book, every Bohmian is an anti-realist. Quote by bohm2 This is the part that always confused me. What difference would there be between a local vs non-local non-realism? I certainly agree with you (and Maudlin) that -- if the rejection of "realism" means that there is no physical reality at all -- then the idea that there is still something meaningful for "locality" to mean is completely crazy. 
Clearly, if there's no physical reality, then it makes no sense to say that all the causal influences that propagate around from one physically real hunk of stuff to another move at or slower than 3 x 10^8 m/s. If there's no reality, then reality's neither local nor nonlocal because there's no reality! But the point is that there are very few people who actually seriously think there's no physical reality at all. (This would be solipsism, right? Note that even the arch-quantum-solipsist Chris Fuchs denies being a solipsist! Point being, very few people, perhaps nobody, would openly confess to thinking there's no physical reality at all.) And yet there are at least 12 people right here on this thread who say that Bell's theorem proves that realism is false! What gives? Well, those people simply don't mean by "realism" the claim that there's a physical world out there. They mean something much much much narrower, much subtler. They mean in particular something like: "there is a fact of the matter about what the outcome of a measurement was destined to be, before the measurement was even made, and indeed whether it is in fact made or not." That is, they mean, roughly, that there are "hidden variables" (not to be found in QM's wave functions) that determine how things are going to come out. So , if non-realism, then the issue of locality vs non-locality seems kind of pointless since there doesn't appear to be any ontological issues. Correct... if "non-realism" means solipsism. But if instead "non-realism" just means the denial of hidden variables / pre-existing values / counter-factual definiteness, then it indeed makes perfect sense. Of course, in the context of Bell's theorem, what really matters is just whether endorsing this (latter, non-insane) type of "non-realism" gives us a way of avoiding the unpalatable conclusion of non-locality. At least 12 people here think it does! And yet none of them have yet addressed the challenge: produce a local but non-realist model that accounts for the perfect correlations. (Note, even if somebody did this, they'd still technically need to show that you can *also* account for the *rest* of the QM predictions -- namely the predictions for what happens when the analyzers are *not* parallel -- before they could really be in a position to say that local non-realism is compatible with all the QM predictions. My challenge is thus quite "easy" -- it only pertains to a subset of the full QM predictions! And yet no takers... This of course just shows how *bad* non-realism is. If you are a non-realist, you can't even account for this perfect-correlations *subset* of the QM predictions locally! That's what EPR pointed out long ago...) Quote by DrChinese Or maybe yours is not a strong enough argument. I will point out: I am not aware of any Bohmian that would say that EPR was correct in believing: It is unreasonable to require that only those observables which can be simultaneously measured have reality. I.e. that counterfactual observables do have reality. So in my book, every Bohmian is an anti-realist. It depends on exactly what you mean by "realism". I'll say something about this later, in answer to audioloop's question. But what you, Dr C, are missing above is that when Podolsky said something was "unreasonable", what he actually meant (and absolutely should have said instead!) was: "inconsistent with locality". But I've explained this so many times to you over the years, without getting through, there's really no point even trying again. 
We should all be thinking of reality as fields and particles as excitations of the fields, instead of crippled and incoherent classical-like models. Classical-like concepts like time, space, 'physical stuff', realism... could well be emergent. Just my unprofessional view(backed by some of the great names in physics). In the same way that we can not even in principle predict the behavior of certain large collections of bodies from the behavior of just one constituent(e.g. a flock of birds), it seems equally impossible to predict the behavior of a large ensemble of particles from looking at just one electron or proton. Hence why it could be totally impossible to understand the reality of chairs and tables by looking at just quantum mechanical rules and axioms. The fundamental aspect of the emergent system is its capacity to be what it is while being completely unlike any other version of what it is. And we are just beginning to approach problems in this direction - we also have to embrace the emergence of life from non-life and consciousnesss from non-consciousness among other similar phenomena(like the possible emergence of a reality from a non-reality - these 3/life, consciousness and physical stuff/ account for all that can be observed in the universe). Emergence is an observational fact and sounds much less abusrd than many of the other ideas put forward here. PP. Since none of my conscious thoughts can at present be modelled and framed in purely classical/physical terms, shouldn't we also be proposing hidden variables for explaning the reality of the paragraph i wrote above? Quote by audioloop travis, do you believe in CFD ? Interesting question. The first thing I'd say is: who cares? If the topic is Bell's theorem, then it simply doesn't matter. CFD *follows* from locality in the same way that "realism" / hidden variables do. That is: the only way to locally (and, here crucially, non-conspiratorially) explain even the perfect correlations is with a "realistic" hidden-variable theory with pre-determined values for *all* possible measurements, i.e., a model with the CFD property. So... to whatever extent somebody thinks CFD needs to be assumed to then derive a Bell inequality, it doesn't provide any kind of "out" since CFD follows from locality. That is, the overall logic is still: locality --> X, and then X --> inequality. So whether X is just "realism" or "realism + CFD" or whatever, it simply doesn't make any difference to what the correct answer to this thread's poll is. So, having argued that it's irrelevant to the official subject of the thread, let me now actually answer the question. Do I believe in CFD? I'm actually not sure. Or: yes and no. Or: it depends on a really subtle point about what, exactly, CFD means. Let me try to explain. As I think everybody knows, my favorite extant quantum theory is the dBBB pilot-wave theory. So maybe we can just consider the question: does the pilot-wave theory exhibit the CFD property? To answer that, we have to be very careful. One's first thought is undoubtedly that, as a *deterministic* hidden variable theory, of course the pilot wave theory exhibits CFD: whatever the outcome is going to be, is determined by the initial conditions, so ... it exhibits CFD. Clear, right? On the other hand, I've already tried to make a point in this thread about how, although the pilot-wave theory assigns definite pre-existing values (that are then simply revealed in appropriate measurements) to particle positions, it does *not* do this in regard to spin. 
That is, the pilot-wave theory is in an important sense not "realistic" in regard to spin. And that starts to make it sound like, actually, at least in regard to the spin measurements that are the main subject of modern EPR-Bell discussions, perhaps the pilot-wave theory does *not*, after all, exhibit CFD. So, which is it? Actually both are true! The key point here is that, according to the pilot-wave theory, there will be many physically different ways of "measuring the same property". Here is the classic example that goes back to David Albert's classic book, "QM and Experience." Imagine a spin-1/2 particle whose wave function is in the "spin up along x" spin eigenstate. Now let's measure its spin along z. The point is, there are various ways of doing that. First, we might use a set of SG magnets that produce a field like B_z ~ B_0 + bz (i.e., a field in the +z direction that increases in the +z direction). Then it happens that if the particle starts in the upper half of its wave packet (upper here meaning w.r.t. the z-direction) it will come out the upper output port and be counted as "spin up along z"; whereas if it happens instead to start in the lower half of the wave packet it will come out the lower port and be counted as "spin down along z". So far so good. But notice that we could also have "measured the z-spin" using a SG device with fields like B_z ~ B_0 - bz (i.e., a field in the z-direction that *decreases* in the +z direction). Now, if the particle starts in the upper half of the packet it'll still come out of the upper port... *but now we'll call this "spin down along z"*. Whereas if it instead starts in the lower half of the packet it'll still come out of the lower port, but we'll now call this *spin up along z*. And if you follow that, you can see the point. Despite being fully deterministic, what the outcome of a "measurement of the z-spin" will be -- for the same exact initial state of the particle (including the "hidden variable"!) -- is not fixed. It depends on which *way* the measurement is carried out! Stepping back for a second, this all relates to the (rather weird) idea from ordinary QM that there is this a correspondence between experiments (that are usually thought of as "measuring some property" of something) and *operators*. So the point here is that, for the pilot-wave theory, this correspondence is actually many-to-one. That is, at least in some cases (spin being one of them), many physically distinct experiments all correspond to the same one operator (here, S_z). But (unsurprisingly) distinct experiments can have distinct results, even for the same input state. So... back finally to the original question... if what "CFD" means is that for each *operator*, there is some definite fact of the matter about what the outcome of an unperformed measurement would have been, then NO, the pilot-wave theory does *not* exhibit CFD. On the other hand, if "CFD" means that for each *specific experiment*, there is some definite fact of the matter about what the outcome would have been, then YES, of course -- the theory is deterministic, so of course there is a fact about how unperformed experiments would have come out had they been performed. 
This may seem like splitting hairs for no reason, but the fact is that all kinds of confusion has been caused by people just assuming -- wrongly, at least in so far as this particular candidate theory is concerned -- that it makes perfect sense to *identify* "physical properties" (that are revealed or made definite or whatever by appropriate measurements) with the corresponding QM operators. This is precisely what went wrong with all of the so-called "no hidden variable" theorems (Kochen-Specker, etc.). And it is also just the point that needs to be sorted out to understand whether the pilot-wave theory exhibits CFD or not. The answer, I guess, is: "it's complicated". That make any sense? The notion of 'particles' is oxymoronic. If microscopic entities obey Heisenberg’s uncertainty principle, as we know they do, one is forced to admit that the concept of “microscopic particle” is a self-contradictory concept. This is because if an entity obeys HUP, one cannot simultaneously determine its position and momentum and, as a consequence, one cannot determine, not even in principle, how the position of the entity will vary in time. Consequently, one cannot predict with certainty its future locations and it doesn't have the requisites of classical particles like exact position and momentum in spacetime. What is the reason why an entity of uncertain nature but evidently non-spatial should obey classical notions like locality at all times? ttn: regarding MWI, I am aware of the difficulties with the pure WF view, but what do you think of Wallace and Timpson's Space State time realism proposal? It seems David Wallace is the only one every MWI adherent refers to when asking the difficult questions. He just wrote a huge *** book on the Everettian interpretation and argues for solving the Born Rule problem with decision-theory. He argues that the ontological/preferred basis issue is solved by decoherence + emergence. Lastly he posits the Space State realism Recognitions: Gold Member Quote by ttn It depends on exactly what you mean by "realism". I think one of the easiest (for me) ways to understand "realism" as per pilot-wave is "contextual realism". Demystifier does a good job discussing this issue here when debating whether a particular paper discussed in that thread ruled out the pilot wave model: What their experiment demonstrates is that realism, if exists, must be not only nonlocal, but also contextual. Contextuality means that the value of the measured variable may change by the act of measurement. BM is both nonlocal and contextual, making it consistent with the predictions of standard QM as well as with their experiment. In fact, after Eq. (4), they discuss BM explicitly and explain why it is consistent with their results. Their "mistake" is their definition of "reality" as an assumption that all measurement outcomes are determined by pre-existing properties of particles independent of the measurement. This is actually the definition of non-contextual reality, not of reality in general. The general definition of reality is the assumption that some objective properties exist even when measurements are not performed. It does not mean that these properties cannot change by the physical act of measurement. In simpler terms, they do not show that Moon does not exist if nobody looks at it. They only show that Moon, if exists when nobody looks at it, must change its properties by looking at it. 
I also emphasize that their experiment only confirms a fact that was theoretically known for a long time: that QM is contextual. In this sense, they have not discovered something new about QM, but only confirmed something old. Non-local Realistic theories disproved http://www.physicsforums.com/showthread.php?t=167320 Since I hate writing stuff in my own words since others write it down so more eloquently the necessary contextuality present in the pilot-wave model is summarized in an easily understandible way (for me) here also: One of the basic ideas of Bohmian Mechanics is that position is the only basic observable to which all other observables of orthodox QM can be reduced. So, Bohmian Mechanics will qualify VD (value definiteness) as follows: “Not all observables defined in orthodox QM for a physical system are defined in Bohmian Mechanics, but those that are (i.e. only position) do have definite values at all times.” Both this modification of VD (value definiteness) and the rejection of NC (noncontextuality) immediately immunize Bohmian Mechanics against any no HV argument from the Kochen Specker Theorem. The Kochen-Specker Theorem http://plato.stanford.edu/entries/ko...ker/index.html So, while the KS theorem establishes a contradiction between VD + NC and QM, the qualification above immunizes pilot-wave/deBroglie/Bohmian mechanics from contradiction. Quote by nanosiborg I had been thinking that it would be pointless to make a local nonrealistic theory, since the question, following Einstein (and Bell) was if a local model with hidden variables can be compatible with QM? But a local nonrealistic (and necessarily nonviable because of explicit locality) theory could be used to illustrate that hidden variables, ie., the realism of LHV models, have nothing to do with LHV models' incompatibility with QM and experiment. Quote by ttn Well, you'd only convince the kind of person who voted (b) in the poll, if you somehow managed to show that *no* "local nonrealistic" model could match the quantum predictions. Just showcasing the silly local coin-flipping particles model doesn't do that. Yes, I see. Quote by ttn But I absolutely agree with the way you put it, about what the question is post-Einstein. Einstein already showed (in the EPR argument, or some less flubbed version of it -- people know that Podolsky wrote the paper without showing it to Einstein first and Einstein was pissed when he saw it, right?) that "realism"/LHV is the only way to locally explain the perfect correlations. Post-Einstein, the LHV program was the only viable hope for locality! And then Bell showed that this only viable hope won't work. So, *no* local theory will work. I'm happy to hear we're on the same page about that. But my point here is just that, really, the best way to convince somebody that "local non-realistic" theories aren't viable is to just run the proof that local theories aren't viable (full stop). But somehow this never actually works. People have this misconception in their heads that a "local non-realistic" theory can work, even though they can't produce an explicit example, and they just won't let go of it. Yes, I do think I'm following you on all this. That we're on the same page. Not sure when I changed from the "realism or locality has to go" way of thinking to the realization that it's all about the locality condition being incompatible with QM and experiment and that realism/hidden variables are actually irrelevant to that consideration. 
Quote by ttn Since it so perfectly captures the logic involved here, it's worth mentioning here the nice little paper by Tim Maudlin http://www.stat.physik.uni-potsdam.d...Bell_EPR-2.pdf where he introduces the phrase: "the fallacy of the unnecessary adjective". The idea is just that when somebody says "Bell proved that no local realist theory is viable", it is actually true -- but highly misleading since the extra adjective "realist" is totally superfluous. As Maudlin points out, you could also say "Bell proved that no local theory formulated in French is viable". It's true, he did! But that does not mean that we can avoid the spectre of nonlocality simply by re-formulating all our theories in English! Same with "realism". Yes, no "local realist" theory is viable. But anybody who thinks this means we can save locality by jettisoning realism, has been duped by the superfluous adjective fallacy. Yes, as I mentioned, I get this now, and feel like I've made progress in my understanding of Bell. I like the way Maudlin writes also. Thanks for the link. In the process of rereading it. Quote by nanosiborg I'd put it like this. Bell's formulation of locality, as it affects the general form of any model of any entanglement experiment designed to produce statistical dependence between the quantitative (data) attributes of spacelike separated paired detection events, refers to at least two things: 1) genuine relativistic causality, the independence of spacelike separated events, ie., that the result A doesn't depend on the setting b, and the result B doesn't depend on the setting a. 2) statistical independence, ie., that the result A doesn't alter the sample space for the result B, and vice versa. In other words, that the result at one end doesn't depend in any way on the result at the other end. Quote by ttn I don't understand what you mean here. I don't think I do either. I'm just fishing for any way to understand Bell's theorem that will allow me to retain the assumption that nature is evolving in accordance with the principle of local action. That nature is exclusively local. Because the assumption that nonlocality exists in nature is pretty heavy duty. Just want to make sure any possible nuances and subtleties have been dealt with. I've come to think that experimental loopholes and hidden variables ('realism') are unimportant. That it has to do solely with the explicit denotation of the locality assumption. So, I'm just looking for (imagining) possible hidden assumptions in the denotation of locality that might preclude nonlocality as the cause of Bell inequality violations. Quote by ttn For the usual case of two spin-entangled spin-1/2 particles, the sample space for Bob's measurement is just {+,-}. If the joint sample space is (+,-), (-,+), (+,+), (-,-), then a detection of, say, + at A does change the joint sample space from (+,-), (-,+), (+,+), (-,-) to (+,-), (+,+). But yes I see that the sample space at either end is always (+,-) no matter what. At least in real experiments. In the ideal, iff θ is either 0° or 90°, then a detection at one end would change the sample space at the other end. But the sample space of what's registered by the detectors isn't the sample space I was concerned about. There's also the sample space of what's transmitted by the filters, and the sample space ρ(λ) that's emitted by the source. It's how a detection might change ρ(λ) that I was concerned with. Quote by ttn This is certainly not affected by anything Alice or her particle do. 
So if you're somehow worried that the thing you call "2) statistical independence" might actually be violated, I don't think it is. But I don't think that even matters, since I don't see anything like this "2) ..." being in any way assumed in Bell's proof. But, basically, I just can't follow what you say here. I think that statistical independence is explicated in the codification of Bell's locality condition. Whether or not it's relevant to the interpretation of Bell's theorem I have no idea at the moment. The more I think about it, the more it just seems too simplistic, too pedestrian. Quote by nanosiborg The problem is that a Bell-like (general) local form necessarily violates 2 (an incompatibility that has nothing to do with locality), because Bell tests are designed to produce statistical (ie., outcome) dependence via the selection process (which proceeds via exclusively local channels, and produces the correlations it does because of the entangling process which also proceeds via exclusively local channels, and produces a relationship between the entangled particles via, eg., emission from a common source, interaction, 'zapping' with identical stimulii, etc.). Quote by ttn Huh??? Well, the premise might be wrong, maybe this particular inconsistency between experimental design and Bell locality isn't significant or relevant to Bell inequality violations, but I have to believe that you understand the statement. Quote by nanosiborg Ok, I don't think it has anything to do with Jarrett's idea that "Bell locality" = "genuine locality" + "completeness", but rather the way I put it above, in terms of an incompatibility between the statistical dependence designed into the experiments and the statistical independence expressed by Bell locality. Is this a possibility, or has Bell (and/or you) dealt with this somewhere? Quote by ttn The closest I can come to making sense of your worry here is something like this: "Bell assumes that stuff going on by Bob should be independent of stuff going on by Alice, but the experiments reveal correlations, so one of Bell's premises isn't reflected in the experiment." I'm sure I have that wrong and you should correct me. But on the off chance that that's right, I think it would be better to express it this way: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal". That is, it sounds like you are trying to make "something about how the experimental data should come out" into a *premise* of Bell's argument, instead of the *conclusion* of the argument. But it's not a premise, it's the conclusion. And the fact that the real data contradicts that conclusion doesn't invalidate his reasoning; it just shows that his *actual* premise (namely, locality!) is false. In a previous post I said something like that Bell locality places upper and lower boundaries on the correlations, and that QM predicted correlations lie, almost entirely, outside those boundaries Is the following quote what you're saying is a better way to say what you think I'm saying but is wrong?: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal." Or are you saying that that's the correct way of saying it? Or what? 
I think the way I'd phrase it is that Bell codified the assumption of locality in a way that denotes the independence (from each other) of paired events at the filters and detectors. Bell proved that models of quantum entanglement that incorporate Bell's locality condition cannot be compatible with QM. It is so far the case that models of quantum entanglement that incorporate Bell's locality condition are inconsistent with experimental results. I don't yet understand how/why it's concluded that nature is nonlocal. Quote by nanosiborg There's also the sample space of what's transmitted by the filters, and the sample space ρ(λ) that's emitted by the source. It's how a detection might change ρ(λ) that I was concerned with. Now you're questioning the "no conspiracy" assumption. It's true that you can avoid the conclusion of nonlocality by denying that the choice of measurement settings is independent of the state of the particle pair -- or equivalently by saying that ρ(λ) varies as the measurement settings vary. But there lies "superdeterminism", i.e., cosmic conspiracy theory. Is the following quote what you're saying is a better way to say what you think I'm saying but is wrong?: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal." Or are you saying that that's the correct way of saying it? Or what? That's the simple (and correct) way to express what I thought you were saying. I think the way I'd phrase it is that Bell codified the assumption of locality in a way that denotes the independence (from each other) of paired events at the filters and detectors. Bell proved that models of quantum entanglement that incorporate Bell's locality condition cannot be compatible with QM. It is so far the case that models of quantum entanglement that incorporate Bell's locality condition are inconsistent with experimental results. I don't yet understand how/why it's concluded that nature is nonlocal. Because if every possible local theory disagrees with experiment, then every possible local theory is FALSE. Quote by bohm2 So, while the KS theorem establishes a contradiction between VD + NC and QM, the qualification above immunizes pilot-wave/deBroglie/Bohmian mechanics from contradiction. Yes, that's right. Kochen-Specker rules out non-contextual hidden variable (VD) theories. The dBB pilot-wave theory is not a non-contextual hidden variable (VD) theory. And, of course, separately: Bell's theorem rules out local theories. The pilot-wave theory is not a local theory. People who voted for (b) in the poll evidently get these two theorems confused. They try to infer the conclusion of KS from Bell. Quote by Quantumental ttn: regarding MWI, I am aware of the difficulties with the pure WF view, but what do you think of Wallace and Timpson's Space State time realism proposal? I read it when it came out and haven't thought of it since. In short, meh. It seems David Wallace is the only one every MWI adherent refers to when asking the difficult questions. He just wrote a huge *** book on the Everettian interpretation and argues for solving the Born Rule problem with decision theory. He argues that the ontological/preferred basis issue is solved by decoherence + emergence. Lastly he posits the Space State realism. Haven't read DW's new book. Everything I've seen about the attempt to derive the Born rule from decision theory has been, to me, just ridiculous.
But I would like to see DW's latest take on it. Not sure if you intended this, but (what I would call) the "ontology issue" and the "preferred basis issue" are certainly not the same thing. Not sure what you meant exactly with the last almost-sentence. (Shades of ... "the castle AAARRRGGGG") In my experience, whenever things are philosophically murky, and people are stuck into one or more "camps", it sometimes helps to ask a technical question whose answer is independent of how you interpret things, but which might throw some light on those interpretations. That's what Bell basically did with his inequality. They may not have solved anything about the interpretation of quantum mechanics, but certainly afterwards, any interpretation has to understood in light of his theorem. Anyway, here's a technical question about Many-Worlds. Supposing that you have a wave function for the entire universe, $\Psi$. Is there some mathematical way to interpret it as a superposition, or mixture, of macroscopic "worlds"? Going the other way, from macroscopic to quantum, is certainly possible (although I'm not sure if it is unique--probably not). With every macroscopic object, you can associate a collection of wave packets for the particles making up the object, where the packet is highly peaked at the location of the macroscopic object. But going from a microscopic description in terms of individual particle descriptions to a macroscopic description in terms of objects is much more complicated. Certainly it's not computationally tractable, since a macroscopic object involves unimaginable numbers of particles, but I'm wondering if it is possible, conceptually. Quote by ttn Now you're questioning the "no conspiracy" assumption. It's true that you can avoid the conclusion of nonlocality by denying that the choice of measurement settings is independent of the state of the particle pair -- or equivalently by saying that ρ(λ) varies as the measurement settings vary. But there lies "superdeterminism", i.e., cosmic conspiracy theory. No I don't like any of that stuff. What I'm getting at has nothing to do with 'conspiracies'. At the outset, given a uniform λ distribution (is this what's called rotational invariance?) and the rapid and random varying of the a and b settings, then would the sample space for a or b be all λ values? Anyway, whatever the sample space for a or b (depending on the details of the local model), then given a detection at, say, A, associated with some a, then would the sample space for b be a reduced set of possible λ values? Quote by ttn That's the simple (and correct) way to express what I thought you were saying. If "therefore we conclude that nature is nonlocal" is omitted, then that's what I was saying. Quote by ttn Because if every possible local theory disagrees with experiment, then every possible local theory is FALSE. Ok, let's say that every possible local theory disagrees with experiment. It doesn't then follow that nature is nonlocal, unless it's proven that the local form (denoting causal independence of spacelike separated events) doesn't also codify something in addition to locality, some acausal sort of independence (such as statistical independence), which might act as the effective cause of the incompatibility between the local form and the experimental design, precluding nonlocality. Quote by stevendaryl Anyway, here's a technical question about Many-Worlds. Supposing that you have a wave function for the entire universe, $\Psi$. 
Is there some mathematical way to interpret it as a superposition, or mixture, of macroscopic "worlds"? Going the other way, from macroscopic to quantum, is certainly possible (although I'm not sure if it is unique--probably not). With every macroscopic object, you can associate a collection of wave packets for the particles making up the object, where the packet is highly peaked at the location of the macroscopic object. But going from a microscopic description in terms of individual particle descriptions to a macroscopic description in terms of objects is much more complicated. Certainly it's not computationally tractable, since a macroscopic object involves unimaginable numbers of particles, but I'm wondering if it is possible, conceptually. This is just the normal way that all MWI proponents already think about the theory. It's a theory of the whole universe, described by the universal wave function, obeying Schroedinger's equation at all times. (No collapse postulates or other funny business.) Decoherence gives rise to a coherent "branch" structure such that it's possible to think of each branch as a separate (or at least, independent) world. For more details, see any contemporary treatment of MWI, e.g., the David Wallace book that was mentioned earlier. (Incidentally, I just ordered myself a copy!) Quote by nanosiborg No I don't like any of that stuff. What I'm getting at has nothing to do with 'conspiracies'. Well, what you suggested was a violation of what is actually called the "no conspiracy" assumption. I'm sure you didn't *mean* to endorse a conspiracy theory... (See the scholarpedia entry on Bell's theorem for more details on this no conspiracy assumption.) If "therefore we conclude that nature is nonlocal" is omitted, then that's what I was saying. Well yeah, OK, but my point was kind of that, if I was understanding the first part (and now it sounds like I was?), then what actually follows logically is that nature is nonlocal. So I guess you should think about the reasoning some more. Ok, let's say that every possible local theory disagrees with experiment. It doesn't then follow that nature is nonlocal, unless it's proven that the local form (denoting causal independence of spacelike separated events) doesn't also codify something in addition to locality, some acausal sort of independence (such as statistical independence), which might act as the effective cause of the incompatibility between the local form and the experimental design, precluding nonlocality. What you wrote after "unless" is just a way of saying that, actually, it wasn't established that "every possible local theory disagrees with experiment". Can we at least agree that, if every possible local theory disagrees with experiment, then nature is nonlocal -- full stop?
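As a concrete footnote to the exchange above: the "certain limit on the correlations" is, in the standard two-setting form, the CHSH bound of 2 obeyed by every local model, while the quantum singlet-state prediction $E(a,b)=-\cos(a-b)$ reaches $2\sqrt{2}$. A minimal numerical sketch (the angles are the usual textbook choice, not anything specific to this thread):

```python
import numpy as np

# Quantum singlet-state correlation for analyzer angles a, b
E = lambda a, b: -np.cos(a - b)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))   # ~2.828..., exceeding the CHSH bound of 2 satisfied by any local model
```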
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519047737121582, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/49641/plurisubharmonic-sublevel-sets
## plurisubharmonic sublevel sets

Let $X$ be a complex manifold, and let $\Omega \subseteq {\bf C} \times X$ be defined by $\Omega = \{ (z,p) \in {\bf C} \times X : a(p) < Im z < - b(p) \}$ where $a$ and $b$ are plurisubharmonic functions on $X$ with $a + b < 0.$ Assume that $\Omega$ is a Stein manifold; is it true that $X$ is a Stein manifold? If, under the same assumptions, we suppose that $X$ is a locally Euclidean, countable-basis "complex manifold" that is not necessarily Hausdorff, can we conclude that $X$ must be Hausdorff? This question is related to the possibility of embedding a manifold with the holomorphic action of a real Lie group into some space where there is an action of the complexified group. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8937304615974426, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/251598/product-of-positive-definite-matrices
# Product of Positive Definite Matrices

Let $X,Y,Z$ be positive definite matrices such that $XYZ$ is Hermitian. How can we show that $XYZ$ is also a positive definite matrix? -

## 2 Answers

Since by hypothesis $XYZ$ is Hermitian, it is enough to show that its eigenvalues are positive. But the product of positive-definite matrices, even though not Hermitian in general, always has positive eigenvalues. See a proof in my answer to this question: Product of positve definite matrix and seminegative definite matrix -

Denote $H=XYZ$. Let $X^{1/2}$ be a positive definite square root of $X$ (it always exists -- think unitary diagonalization) and \begin{align} y &= X^{1/2}YX^{1/2},\\ z &= X^{-1/2}ZX^{-1/2},\\ h &= X^{-1/2}HX^{-1/2} = X^{1/2}YZX^{-1/2} = yz. \end{align} Then $y$ and $z$ are positive definite and $h$ is Hermitian. Also, $H$ is positive definite iff $h$ is positive definite. Now, let $y^{1/2}$ be a positive definite square root of $y$. Then $h$ is similar to $y^{-1/2}hy^{1/2}=y^{1/2}zy^{1/2}$, which is positive definite. Therefore $h$ and in turn $H$ are positive definite. -
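A quick numerical illustration of the second answer's argument (not a proof; the `random_pd` and `pd_sqrt` helpers below are ad hoc choices for the experiment, not part of the original answer): for random positive definite $X,Y,Z$ one can check the identity $h=yz$ and the fact that $yz$ is similar to the positive definite matrix $y^{1/2}zy^{1/2}$; when $H=XYZ$ happens to be Hermitian, this is exactly what forces $H$ to be positive definite.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pd(n):
    """Random Hermitian positive definite matrix: A A^H plus a small shift."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A @ A.conj().T + 0.1 * np.eye(n)

def pd_sqrt(M):
    """Positive definite square root via the spectral decomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.conj().T

n = 4
X, Y, Z = random_pd(n), random_pd(n), random_pd(n)
H = X @ Y @ Z

Xh = pd_sqrt(X)
Xhi = np.linalg.inv(Xh)
y = Xh @ Y @ Xh            # positive definite
z = Xhi @ Z @ Xhi          # positive definite
h = Xhi @ H @ Xhi

print(np.allclose(h, y @ z))                 # the identity h = y z

yh = pd_sqrt(y)
K = yh @ z @ yh                              # Hermitian positive definite
print(np.all(np.linalg.eigvalsh(K) > 0))     # True
print(np.allclose(np.sort(np.linalg.eigvals(y @ z).real),
                  np.sort(np.linalg.eigvalsh(K))))   # y z is similar to K
```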
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9089110493659973, "perplexity_flag": "head"}
http://mathoverflow.net/questions/76913/is-the-nc-torus-a-quantum-group/83343
## Is the nc torus a quantum group?

The non-commutative n-torus appears in many applications of non-commutative geometry. To stay in the setting $n=2$: it is a C$^\ast$-algebra generated by unitaries $u$ and $v$, satisfying $u v = e^{i \theta} v u$. It is a deformation of the 2-torus, which is a group. So my question is: besides viewing the nc torus as a 'non-commutative space', is it also a compact quantum group? That is, is there Hopf algebraic structure in it? -

## 3 Answers

The $C^*$-algebra versions are treated in this paper by Piotr Soltan: http://arxiv.org/abs/0904.3019 The abstract reads: We prove that some well known compact quantum spaces like quantum tori and some quantum two-spheres do not admit a compact quantum group structure. This is achieved by considering existence of traces, characters and nuclearity of the corresponding $\mathrm{C}^*$-algebras. -

I don't know about the $C^*$-algebra version, but I can tell you about the algebraic version (the algebra generated by $u$ and $v$, invertible, such that $uv = qvu$). It is not a Hopf algebra but a "braided group", that is, a Hopf algebra in some braided category (classical Hopf algebras being, in this parlance, "Hopf algebras in the category of vector spaces with the trivial twist"). Concretely, there is a map of algebras $A \to A \otimes A$ satisfying all the axioms you want, except that $A \otimes A$ is not made into an algebra in the way you think. If I were allowed a bit of self-advertising, I'd recommend §4 of http://arxiv.org/abs/0911.5287
Majid's book on quantum groups may have some formulae about the codiagonal in the quantum tori. -

Despite the negative result quoted by MTS, there have been some attempts to put a Hopf-like structure on the quantum torus. One of these attempts, which seems orthogonal to the one mentioned by Pierre in his answer, is via Hopfish algebras. To be short, Hopfish algebras (after Tang-Weinstein-Zhu) are unital algebras equipped with a coproduct, a counit and an antipode that are morphisms in the Morita category (they are bimodules, rather than actual algebra morphisms). The Hopfish structure on the quantum torus has been studied in detail in this paper. To be complete, let me emphasize the following point (taken from the above paper): It is important to note that, although the irrational rotation algebra may be viewed as a deformation of the algebra of functions on a 2-dimensional torus, our hopfish structure is not a deformation of the Hopf structure associated with the group structure on the torus. Rather, the classical limit of our hopfish structure is a second symplectic groupoid structure on $T^∗\mathbb{T}^2$ (...), whose quantization is the multiplication in the irrational rotation algebra. We thus seem to have a symplectic double groupoid which does not arise from a Poisson Lie group. -
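A small concrete model of the defining relation, for readers who want to compute with it: when $e^{i\theta}$ is replaced by a primitive $n$-th root of unity $q$, the clock and shift matrices give a finite-dimensional representation of $uv=qvu$ (for irrational $\theta$ no finite-dimensional representation exists, so this is only an analogue). The choice $n=3$ below is arbitrary.

```python
import numpy as np

n = 3
q = np.exp(2j * np.pi / n)                 # primitive n-th root of unity

U = np.diag([q**k for k in range(n)])      # "clock" matrix
V = np.roll(np.eye(n), 1, axis=0)          # "shift" matrix: V e_k = e_{k+1 mod n}

print(np.allclose(U @ V, q * (V @ U)))     # True: the relation u v = q v u
```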
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9264264702796936, "perplexity_flag": "head"}
http://mathoverflow.net/questions/29011?sort=votes
## Birkhoff ergodic theorem for dynamical systems driven by a Wiener process

At the risk of asking a stupid question, I have the following problem. Suppose I have a measure preserving dynamical system `$(X, \mathcal{F}, \mu, T_s)$`, where
• `$X$` is a set,
• $\mathcal{F}$ is a sigma-algebra on $X$,
• $\mu$ is a probability measure on $X$,
• $T_s:X \rightarrow X$ is a group of measure preserving transformations parametrized by $s \in \mathbb{R}$.
Suppose that this dynamical system is ergodic, so that for any $f \in L^1(\mu)$, `$\lim_{t\rightarrow \infty}\frac{1}{2t}\int_{-t}^t f(T_s x) ds = \int f(x)d\mu(x)$`. Now let $B_s$ be a real valued Wiener process such that $B_0 = 0$; then I can define the following process: `$\frac{1}{t}\int_{0}^t f(T_{B_s} x) ds$` Does anybody know how this process would behave as $t\rightarrow \infty$? Intuitively I would expect it to converge to a similar constant for a.e. realisation of the Brownian motion, but I can't find a convincing argument. - 5 See Theorem 3 at the end of this paper: ncbi.nlm.nih.gov/pmc/articles/PMC1063816 – Steve Huntsman Jun 22 2010 at 1:34

## 1 Answer

Not a stupid question, but I think the answer is no. The paper Random Ergodic Theorems with Universally Representative Sequences by Lacey, Petersen, Wierdl and Rudolph gives a counterexample in the case where the system is being driven by a simple symmetric random walk (based on an application of Strassen's functional law of the iterated logarithm). I'm pretty sure the same technique would give a counterexample here. The paper can be found online at: http://www.numdam.org/item?id=AIHPB_1994__30_3_353_0 -
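Not an argument either way, but the object in the question is easy to play with numerically. The toy model below (an irrational rotation flow on the circle, with arbitrary choices of parameters and discretisation) just computes $\frac1t\int_0^t f(T_{B_s}x)\,ds$ along a few independent Brownian paths; as the answer indicates, one should not read a convergence theorem into such experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy system: X = circle, T_s x = x + alpha*s (mod 1), f(x) = cos(2*pi*x),
# so the space average of f is 0.
alpha = np.sqrt(2.0)
f = lambda x: np.cos(2 * np.pi * x)

t_final, dt, x0 = 2000.0, 0.01, 0.3
n = int(t_final / dt)

for path in range(3):
    B = np.cumsum(rng.normal(scale=np.sqrt(dt), size=n))   # Brownian path on [0, t_final]
    avg = f((x0 + alpha * B) % 1.0).mean()                  # ~ (1/t) * int_0^t f(T_{B_s} x) ds
    print(f"path {path}: time average ~ {avg:+.4f}")
```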
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8960978388786316, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3906553
Physics Forums ## some lagrangian question Suppose we consider some coordinate as being a generalized one, like when we are using spherical coordinates; let us suppose that I chose theta and phi as generalized coordinates. After deriving the Lagrangian equation it turned out that the equation doesn't depend on phi. This means that the derivative of the Lagrangian with respect to phi is zero. Does this mean it is not a generalized coordinate? If not, what does it mean? And lastly, what's the difference if the case was that the equation doesn't depend on phi dot (the time derivative of phi)? Thanks in advance. Quote by M. next Suppose we consider some coordinate as being a generalized one, like when we are using spherical coordinates; let us suppose that I chose theta and phi as generalized coordinates. After deriving the Lagrangian equation it turned out that the equation doesn't depend on phi. This means that the derivative of the Lagrangian with respect to phi is zero. Does this mean it is not a generalized coordinate? If not, what does it mean? And lastly, what's the difference if the case was that the equation doesn't depend on phi dot (the time derivative of phi)? Thanks in advance. Let's say we have some Lagrangian $L(\theta, \phi, \dot{\theta}, \dot{\phi})$. Let's look at the equation of motion for $\phi$. Recall that the generalized conjugate momentum to the coordinate $\phi$ is $p_{\phi}=(\frac{\partial L}{\partial \dot{\phi}})$ and so our equation of motion becomes $\frac{d}{d t}(\frac{\partial L}{\partial \dot{\phi}}) -(\frac{\partial L}{\partial \phi}) = \frac{d}{d t} p_{\phi} - (\frac{\partial L}{\partial \phi})= 0$ If L does not depend on $\phi$, then $(\frac{\partial L}{\partial \phi})= 0$, and so $p_{\phi}$ is constant in time; it is a conserved quantity. As for what happens when L does not depend on $\dot{\phi}$, we can look at the same equation of motion. This would be saying the same thing as $p_{\phi} = 0$. As far as dynamics go, I think what it means is that there would be no relevant dynamics in the $\phi$ direction, and you would instead look at how things are changing in the $\theta$ direction. hope this helps, -James Phi is still a generalized coordinate, and when you're considering your final equations of motion you will need to consider the time-dependence of the phi coordinate in your answer. The fact that the derivative of the Lagrangian is zero means, as was mentioned above, that you have a conserved quantity. For example, consider the Lagrangian for a small planet orbiting a very large star: assuming that Newtonian gravity is valid, you will get a potential energy that will have no angular dependence at all. If you were free to choose whatever reference frame you wanted, then you would be foolish not to choose one in which one of the generalized coordinates is zero. However, you can imagine a potential that is slightly more complicated in which there is no "phi" dependence, but some "theta" dependence (like the gravitational potential energy of a galactic disk). In this case, the fact that dL/d(phi) = 0 means that angular momentum is conserved about the axis of symmetry. This does not reduce the effective dimensionality of the solution set though. ## some lagrangian question I apologize for the REAL delay in replying, but I thank you both.
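A small symbolic check of the statements above, using the spherical pendulum as a stand-in example (my choice, not the original poster's system): the Lagrangian depends on $\dot\phi$ but not on $\phi$, so $\partial L/\partial\phi=0$ and the conjugate momentum $p_\phi$ is conserved.

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')(t)
phi = sp.Function('phi')(t)

# Spherical pendulum in the generalized coordinates (theta, phi)
T = sp.Rational(1, 2) * m * l**2 * (theta.diff(t)**2 + sp.sin(theta)**2 * phi.diff(t)**2)
V = -m * g * l * sp.cos(theta)
L = T - V

print(sp.diff(L, phi))                    # 0: phi is a cyclic coordinate

p_phi = sp.diff(L, phi.diff(t))           # conjugate momentum
print(sp.simplify(p_phi))                 # m*l**2*sin(theta)**2*Derivative(phi(t), t)

# Euler-Lagrange equation for phi: d/dt(p_phi) - dL/dphi = 0, i.e. p_phi is constant
print(sp.Eq(sp.diff(p_phi, t) - sp.diff(L, phi), 0))
```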
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9479711651802063, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/170221-normal-vector.html
# Thread: 1. ## Normal vector

Given the curve $C$ in $\mathbb R^3$ parametrized by $r(t),$ find the normal vector at $r(1)=(3,1,3)$ if the osculating plane at that point is $3y-x=0$ and $r(1)\cdot N(1)>0$, with $r'(1)=(6,2,2).$ (I hope "osculating" is the right term here.) I hope you guys can help; I have an exam in March and I'm scared. I don't remember how to solve this, since I lost my calculus notebook. Thanks! 2. Hint : $\det [r(t)-r(1),r'(1),r''(1)]=k(3y-x)$ Fernando Revilla 3. Originally Posted by FernandoRevilla $\det [r(t)-r(1),r'(1),r''(1)]=k(3y-x)$ Okay, this may be related to the osculating plane, but I don't get the determinant and its components. Could you elaborate? Thanks! 4. Hi Fernando, is it possible that you may help me a bit more? I'll appreciate that! 5. $r(t)=(x(t),y(t),z(t))$ $r(1)=(x(1),y(1),z(1))$ $r'(t)=(x'(t),y'(t),z'(t))$ $r'(1)=(x'(1),y'(1),z'(1)).$
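One way to finish the computation (a sketch of my own, not necessarily the route intended by the hint): the normal $(-1,3,0)$ of the plane $3y-x=0$ points along the binormal, $r'(1)$ points along the tangent, so the principal normal is proportional to their cross product, with the sign fixed by the condition $r(1)\cdot N(1)>0$.

```python
import numpy as np

r1  = np.array([3.0, 1.0, 3.0])     # r(1)
rp1 = np.array([6.0, 2.0, 2.0])     # r'(1), tangent direction
B   = np.array([-1.0, 3.0, 0.0])    # normal of the osculating plane 3y - x = 0

N = np.cross(B, rp1)                # lies in the osculating plane, orthogonal to the tangent
if np.dot(r1, N) < 0:               # enforce r(1) . N(1) > 0
    N = -N
N = N / np.linalg.norm(N)

print(N)                            # proportional to (-3, -1, 10), i.e. (-3,-1,10)/sqrt(110)
print(np.dot(N, rp1), np.dot(N, B)) # both ~ 0
```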
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9154453277587891, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/tagged/combinatorics-on-words
Tagged Questions 1answer 181 views A property of periodic words Question is edited Perhaps this formulation is clearer. It is well known that if a power of a primitive (i.e. not a proper power) word $u$ contains two different occurrences of a … 4answers 384 views Subwords of cube-free binary words I'm currently working on subwords of cube-free binary words. A binary word is one composed of letters from a two-letter alphabet such as $\{0,1\}$. A word $y$ is a subword of $w$ … 1answer 387 views Analogues of the Knuth and Forgotten equivalences on permutations: have they been studied? Consider a totally ordered alphabet $A$ of $n$ letters. Let $W$ be the set of all words over $A$ which have no two letters equal. Then, for example, we can define the Knuth equival … 2answers 519 views Ubiquitous Zimin words Let $w$ be a word in letters $x_1,...,x_n$. A value of $w$ is any word of the form $w(u_1,...,u_n)$ where $u_1,...,u_n$ are words. For example, $abaaba$ is a value of $x^2$. A word … 3answers 648 views an operation on binary strings Recently, as part of some joint research, Tom Roby was led to a curious operation on strings of L's and R's which he calls "bounce-reading": We start by reading the string at the l … 1answer 277 views Notation for ends of a string I work now a lot with strings of characters and other finite sequences and found that I need many times a good notation for "cutting the end" a string. If $a$ is a finite sequence … 0answers 286 views Avoidable words Let $u(x_1,...,x_n)$ be a word, $k\in \mathbb{N}$. We say that $u$ is $k$-avoidable if there exists an infinite word in $k$ letters $\{a_1,...,a_k\}$ which does not contain values … 0answers 134 views Generalised de Bruijn Graph I have encountered sets of the following type, consisting of words over a finite aphabet $A$. If $S$ is such a set, then $S$ is finite, No word in $S$ is part of another element … 3answers 1k views Cube-free infinite binary words A word $y$ is a subword of $w$ if there exist words $x$ and $z$ (possibly empty) such that $w=xyz$. Thus, $01$ is a subword of $0110$, but $00$ is not a subword of $0110$. I'm in … 0answers 254 views Software for Combinatorial Algebra sought I am looking for software which helps me do straightforward tasks in combinatorial algebra. Let me give an example of what I mean by a straightforward task: I have two graded (ge …
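For anyone who wants to experiment with the cube-free questions listed above: the Thue-Morse word is the classic cube-free (indeed overlap-free) infinite binary word, and a finite prefix can be checked by brute force. A small self-contained sketch:

```python
def thue_morse(n):
    """First n letters of the Thue-Morse word (cube-free, in fact overlap-free)."""
    return ''.join(str(bin(i).count('1') % 2) for i in range(n))

def has_cube(s):
    """Brute-force test for a factor of the form xxx."""
    for i in range(len(s)):
        for L in range(1, (len(s) - i) // 3 + 1):
            if s[i:i+L] == s[i+L:i+2*L] == s[i+2*L:i+3*L]:
                return True
    return False

w = thue_morse(64)
print(w)
print(has_cube(w))   # False
```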
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9203417897224426, "perplexity_flag": "head"}
http://mathoverflow.net/questions/59007?sort=oldest
## Multiplicity one prime in the factorisation of p-N

I'm wondering if analytic number theorists can prove results which have the following flavor. Let $N$ be a large positive integer. Q: Can you always find a prime number $p$ in the interval $(N, 3N/2)$ for which there exists an odd prime $q$ which divides $p-N$ with multiplicity exactly one? If such a result can be found in the literature I would like to have a reference. I don't really have any idea where to start in order to prove such a result. I vaguely remember that every large enough even integer $N$ can be written as $p_1+p_2p_3$ where the $p_i$'s are prime numbers, which is not that far from what I'm asking for. - 1 You're asking for primes p in arithmetic progressions (say with modulus the square of q)? This is much easier than approximations to Goldbach, unless the intervals are very short. – Charles Matthews Mar 20 2011 at 22:57 Following on Charles' comment, let $q$ be the smallest odd prime not dividing $N$, then there should be lots of primes $p$, $N\lt p\le 3N/2$, $N\equiv p\pmod q$, $N\not\equiv p\pmod{q^2}$. Time to rummage through the literature on primes in arithmetic progressions. – Gerry Myerson Mar 20 2011 at 23:31 Thanks Gerry for the explanation! – Hugo Chapdelaine Mar 21 2011 at 3:24

## 2 Answers

The number of primes in $[N,3N/2]$ grows as $\frac{N}{\log N}$, while the number of powerful numbers in $[1,N/2]$ grows as $\sqrt{N}$, so pretty quickly you will find primes $p\in [N,3N/2]$ so that $p-N$ is not powerful, i.e. has a prime divisor which has multiplicity 1. - Also note that the asymptotic results needed come with effective constants so this will hold for all $N>c$ for some $c$ we can compute. – Gjergji Zaimi Mar 21 2011 at 0:36 In particular, combining Golomb 1970 with Rosser 1941 allows an explicit (and small) bound on allowable N. – Charles Mar 21 2011 at 0:39 @Gjergji Zaimi: Sorry, didn't see your comment until I added mine. Yes, same idea. – Charles Mar 21 2011 at 0:40 Thanks a lot Gjergji, I might need some effective constant at some point. – Hugo Chapdelaine Mar 21 2011 at 3:32

I cannot think of an exact reference but the result you are looking for can be obtained as follows: The number of primes $p\in(N,2N]$ such that $p-N$ is divisible by the square of a prime $q>\log N$ is $\ll N/\log^2N$ (this follows by any upper bound sieve). Also, the number of primes $p\in(N,2N]$ such that $p-N$ is composed only of prime numbers $\le\log N$ is at most the number of integers $m\le N$ which are $\log N$-smooth (i.e. have only prime factors $\le\log N$). The number of such integers is at most $N^{1-1/2\log\log N}\ll N/\log^2N$ (see for example Theorem 1, Chapter III.5, in Tenenbaum's book "Introduction to analytic and probabilistic number theory"). So $$|\{N < p\le2N:\exists q~{\rm prime}~{\rm with}~q>\log N~{\rm and}~q\|p-N\}|=\frac N{\log N}+O\Bigl(\frac N{\log^2N}\Bigr).$$ -
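The answers above say that suitable primes exist in abundance once $N$ is large; for small or moderate $N$ one can simply look. A hedged spot-check (the test values are arbitrary, and `primerange`/`factorint` are sympy conveniences):

```python
from sympy import primerange, factorint

def witness(N):
    """First prime p in (N, 3N/2] such that some odd prime divides p - N exactly once."""
    for p in primerange(N + 1, 3 * N // 2 + 1):
        for q, e in factorint(p - N).items():
            if q % 2 == 1 and e == 1:
                return p, q
    return None

for N in (10**4, 10**4 + 1, 10**5, 10**6 + 3):
    print(N, witness(N))
```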
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277337193489075, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/schrodinger-equation?page=2&sort=newest&pagesize=30
# Tagged Questions Partial differential equation which describes the time evolution of the wavefunction of a quantum system. It is one of the first and most fundamental equations of quantum mechanics. 2answers 271 views ### What does the general solution of the Schrodinger equation represent for the particle in a box problem? For the particle in an infinitely deep potential well, I have an intuitive picture of the separable solutions of the Schrodinger equation as being the wavefunctions for the different allowed energy ... 1answer 69 views ### Schrödinger operator with a potential defined implicitly let be the problem $$-\frac{d^{2}}{dx^{2}}y(x)+f(x)y(x)=E_{n}y(x)$$ however we have a problem, we do not know the potential but its inverse $$f^{-1}(x)=g(x)$$ we know $g(x)$ but not $f(x)$ ... 2answers 265 views ### Meaning of instantaneous probability densities in time dependent wavefunctions For a time dependent wavefunction, are the instantaneous probability densities meaningful? (The question applies for instances or more generally short lengths of time that are not multiples of the ... 2answers 87 views ### What is the spectrum of energies for the potential $a^{x}$? Given a certain potential $a^{x}$ with positive non-zero 'a' are there a discrete spectrum of energy state for the Schrodinger equation \frac{- \hbar ^{2}}{2m} ... 1answer 292 views ### Derivation of Bloch's theorem I'm having a problem following a derivation of Bloch's theorem, looking at a one dimensional lattice with $N$ nodes and spacing a, we impose periodic boundary conditions, meaning that the ... 1answer 303 views ### Solving time dependent Schrodinger equation in matrix form If we have a Hilbert space of $\mathbb{C}^3$ so that a wave function is a 3-component column vector $$\psi_t=(\psi_1(t),\psi_2(t),\psi_3(t))$$ With Hamiltonian $H$ given by H=\hbar\omega ... 0answers 168 views ### Force of a particles on a Potential Barrier [closed] A particle confined by a potential wall exerts some pressure on it. More specifically, suppose that the particle moves in this potential: V(x) ~=~\left\{ \begin{array}{lcc}\text{finite ... 1answer 1k views ### Bound States in a Double Delta Function Potential [closed] Let $V(x) = −u \delta(x) - v \delta(x − a)$ where $u, v > 0$ correspond to a potential with two $\delta$ wells. Let $v > u$. If $a$ is very large, there is certainly a bound state: the particle ... 1answer 141 views ### Inhomogenous schrodinger equation Please help me out in solving this inhomogeneous Schrodinger equation in Cylindrical co-ordinates [You may suggest if I have to go for mathematics]: \ddot R + \frac1r\dot ... 3answers 251 views ### Rationale for writing wave function as product of independent wave functions When solving Schrödinger's equation for a 3D quantum well with infinite barriers, my reference states that: \psi(x,y,z) = \psi(x)\psi(y)\psi(z) \quad\text{when}\quad V(x,y,z) = V(x) + V(y) + V(z) = ... 0answers 182 views ### Finding transcendental equation for the energy of a particle in delta potential well near infinite potential barrier [closed] I'm having trouble finding the transcendental equation for a particle in a delta potential settled near an infinite potential wall. The potential is given by V(x) = \begin{cases} \infty & x ... 1answer 70 views ### Why minibands are formed in superlattices? In a single, finite quantum well, there are energy levels defined by the eigenstates - the solutions of the Schroedinger's Equation. The corresponding wavefunctions leak to the barrier because of its ... 
2answers 336 views ### Schrödinger's equation, time reversal, negative energy and antimatter You know how there are no antiparticles for the Schrödinger equation, I've been pushing around the equation and have found a solution that seems to indicate there are - I've probably missed something ... 2answers 391 views ### Solving the Schrödinger equation for the double-slit experiment I'm not sure if this is the right place to ask a question about the Schrödinger equation, but I'll take my chances anyway. Basically, I would like to know how one can set up a potential function that ... 1answer 79 views ### What are relativistic and radiative effects (in quantum simulation)? I'm reading about Quantum Monte Carlo, and I see that some people are trying to calculate hydrogen and helium energies as accurately as possible. QMC with Green's function or Diffusion QMC seem to be ... 2answers 492 views ### Barrier in an infinite double well I am stuck on a QM homework problem. The setup is this: (To be clear, the potential in the left and rightmost regions is $0$ while the potential in the center region is $V_0$, and the wavefunction ... 1answer 357 views ### Schrödinger equation with complex potential In 1 dimension what is the solution of the Schrödinger equation with potential $$V(x) = V_r + i V_i$$ Potentials are constant. 1answer 85 views ### Why is amplitude of a wavefunction to propagate from $q$ to $q'$ governed by $e^{-\frac{i}{\hbar}HT}$ unitary operator? In the textbook Quantum Field Theory by A. Zee, it says: In quantum mechanics, the amplitude to propagate from a point $q_i$ to a point $q_f$ in time $T$ is governed by the unitary operator ... 2answers 428 views ### What math is needed to understand the Schrödinger equation? If I now see the Schrödinger equation, I just see a bunch of weird symbols, but I want to know what it actually means. So I'm taking a course of Linear Algebra and I'm planning on starting with PDE's ... 2answers 187 views ### How to write Schrodinger equation when a particle with some spin quantity and orbital angular momentum Quantum mechanics: Suppose that there is a particle with orbital angular momentum $|L|$. But the particle also has spin quantity $|S|$. The question is, how do I reflect this into Schrodinger ... 5answers 525 views ### Derivation of Schrodinger equation for a system with position dependent effective mass How to derive the Schrodinger equation for a system with position dependent effective mass? For example, I encountered this equation when I first studied semiconductor hetero-structures. All the books ... 2answers 152 views ### $\nabla$ and non-locality in simple relativistic model of quantum mechanics In Wavefunction in quantum mechanics and locality, wavefunction is constrained by $H = \sqrt{m^2 - \hbar^2 \nabla^2}$, and taylor-expanding $H$ results in: H = \dots = m\sqrt{1 - \hbar^2/m^2 ... 1answer 343 views ### Solution to Klein-Gordon equation always valid? We know that there is a relativistic version of Schrodinger equation called Klein-Gordon equation. However, it has some problems and due to these problems, there is Dirac equation that handles these ... 1answer 233 views ### Wavefunction in quantum mechanics and locality Every wavefunction of a form $\Psi(x)$ can be described as a superposition of multiple free particle solutions. We can see the following Fourier transform: \psi(x) = \int e^{ik\cdot x} \psi(k) dk ... 
1answer 160 views ### Complete set and Klein-Gordon equation In http://www.physics.ucdavis.edu/~cheng/teaching/230A-s07/rqm2_rev.pdf, it says that when there is some external potential, the Klein-Gordon equation is altered, and it says the following: The ... 1answer 304 views ### Electron Incident On A Finite Potential Barrier This is problem 2.8.3 from Miller's Quantum Mechanics For Scientists And Engineers. I'm getting stuck when I try to figure out the wave equation on the right-hand side of the barrier. The original ... 1answer 217 views ### Explanation of equation that shows a failed approach to relativize Schrodinger equation I'm reading the Wikipedia page for the Dirac equation: $\rho=\phi^*\phi\,$ ...... $J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$ with the conservation of probability ... 1answer 226 views ### How to obtain Dirac equation from Schrodinger equation and special relativity? I'm reading the Wikipedia page for the Dirac equation: The Dirac equation is superficially similar to the Schrödinger equation for a free massive particle: A) ... 1answer 82 views ### a positive potential as $x \rightarrow \infty$ let us suppose i can calculate the asymptotic of any potential $V(x)$ in one dimension , and that i manage to prove that $V(x) \ge 0$ as $x \rightarrow \infty$ could i conclude taht if or big ... 2answers 125 views ### Is there a time delay during tunnelling? A particle hitting a square potential barrier can tunnel through it to get to the other side and carry on. Is there a time delay in this process? 3answers 352 views ### Can we have discontinuous wavefunctions in the Infinite Square well? The energy eigenstates of the infinite square well problem look like the Fourier basis of L2 on the interval of the well. So then we should be able to for example make square waves that are an ... 1answer 329 views ### Finding $\psi(x,t)$ for a free particle starting from a Gaussian wave profile $\psi(x)$ Consider a free-particle with a Gaussian wavefunction, $$\psi(x)~=~\left(\frac{a}{\pi}\right)^{1/4}e^{-\frac12a x^2},$$ find $\psi(x,t)$. The wavefunction is already normalized, so the next thing to ... 1answer 321 views ### Even and Odd States of a 1D finite potential well Is it possible for a particle trapped in a 1D finite potential well to evolve from a even state to an odd state and vice-versa? Why? 1answer 318 views ### How does one solve the Schroedinger equation for a 2D, time-dependent harmonic potential? This is the Schroedinger equation with a particular 2D harmonic potential: \begin{multline}i\hbar\frac{\partial}{\partial t}\Psi(x_1,x_2,t) = \\ \Biggl[-\frac{\hbar^2}{2m}\nabla^2 + \frac{1}{2} ... 0answers 340 views ### Scattering on delta function potential Suppose a particle has energy $E>V(+/-\infty)=0$, then the solutions to the Schrodinger equation outside of the potential will be $\psi(x)=Ae^{i k x}+Be^{-i k x}$. How can one show or explain that ... 1answer 83 views ### Where can I find hamiltonians + lagrangians? Where would you say I can start learning about Hamiltonians, Lagrangians ... Jacobians? and the like? I was trying to read Ibach and Luth - Solid State Physics, and suddenly (suddenly a Hamiltonian ... 1answer 267 views ### The Hermiticity of the Laplacian (and other operators) Is the Laplacian operator, $\nabla^{2}$, a Hermitian operator? Alternatively: is the matrix representation of the Laplacian Hermitian? i.e. \langle \nabla^{2} x | y \rangle = \langle x | ... 4answers 399 views ### Which Schrodinger equation is correct? 
In the coordinate representation, in 1D, the wave function depends on space and time, $\Psi(x,t)$, accordingly the time dependent Schrodinger equation is H\Psi(x,t) = ... 1answer 251 views ### Is momentum conservation for the classical Schrödinger equation due to non-relativistic or due to some more exotic invariance? I had no problem appliying the Neothers theorem for translations to the non-relativistic Schrödinger equation \$\mathrm i\hbar\frac{\partial}{\partial t}\psi(\mathbf{r},t) \;=\; \left(- ... 2answers 108 views ### Time-dependence in LCAO I would like to study time-dependence (TD) in linear combinations of atomic orbitals (LCAO). The Hückel method enables quick and dirty determination of MOs for suitable systems (view link for ... 2answers 296 views ### Does String theory say that spacetime is not fundamental but should be considered an emergent phenomenon? Does String theory say that spacetime is not fundamental but should be considered an emergent phenomenon? If so, can quantum mechanics describe the universe at high energies where there is no ... 2answers 781 views ### Matrix Representations of Quantum States and Hamiltonians I am a high school student trying to teach himself quantum mechanics just for fun, and I am a bit confused. As a fun test of my programming/quantum mechanics skill, I decided to create a computer ... 2answers 203 views ### Many-worlds: how often is the split how many are the universes? (And how do you model this mathematically.) When I read descriptions of the many-worlds interpretation of quantum mechanics, they say things like "every possible outcome of every event defines or exists in its own history or world", but is this ... 2answers 124 views ### can we apply WKB method for curved space times let be the Hamiltonian of a surface $H= g_{a,b} p^{a}p^{b}$ (Einstein summation assumed) my question is if although the space time is curved then can we use the WKB approximation to get the quantum ... 4answers 132 views ### Why do we consider the evolution (usually in time) of a wave function? Why do we consider evolution of a wave function and why is the evolution parameter taken as time, in QM. If we look at a simple wave function $\psi(x,t) = e^{kx - \omega t}$, $x$ is a point in ... 1answer 340 views ### Bound states for sech-squared potential I'm working on an introductory qm project, hope somebody has the time to help me (despite the length of this post), it will be highly appreciated. My goal is to determine the bound states and their ... 2answers 1k views ### Is the Schrödinger equation derived or postulated? I'm an undergraduate mathematics student trying to understand some quantum mechanics, but I'm having a hard time understanding what is the status of the Schrödinger equation. In some places I've read ... 2answers 253 views ### Can we impose a boundary condition on the derivative of the wavefunction through the physical assumptions? Consider the Schrödinger equation for a particle in one dimension, where we have at least one boundary in the system (say the boundary is at $x=0$ and we are solving for $x>0$). Sometimes we want ... 1answer 108 views ### Apparent contradiction between calculations and intuition? I am rather confused because it would seem that mathematical conclusions I have drawn here goes against my physical intuition, though both aren't too reliable to begin with. We have a potential step ... 
1answer 381 views ### Interpretation of the Random Schrödinger Equation I should preface this by admitting that my physics background is rather weak so I beg you to keep that in mind in your responses. I work in mathematics (specifically probability theory) and a paper ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9028688669204712, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/95294/does-the-following-characterization-of-subgroups-of-gl-2-mathbbf-p-generali
## Does the following characterization of subgroups of $GL_2(\mathbb{F}_p)$ generalise?

Let $p$ be a prime number. By a Cartan subgroup of $GL_n(\mathbb{F}_p)$ I mean an absolutely semisimple maximal abelian subgroup. When $n=2$, it is well-known* that, for $G \subset GL_n(\mathbb{F}_p)$ of order prime to $p$, either $G$ is contained in a Cartan subgroup, or it is contained in the normalizer of a Cartan subgroup, or its image in $PGL_n(\mathbb{F}_p)$ is isomorphic to $A_4$, $A_5$, or $S_4$. I would like to know if there is a similar result for larger even values of $n$, where a subgroup of order prime to $p$ would either be contained in the normaliser of a Cartan, or its projective image would lie in a finite list of groups. *See e.g. section 2 of Serre's "Propriétés galoisiennes..." paper. - 1 Presumably we should read "A_4, A_5, or S_4" as "a finite subgroup of PGL_2(\mathbb C)". – Will Sawin Apr 26 2012 at 20:45 @Will: Actually no, I really do mean in $PGL_2(\mathbb{F}_p)$. Now you might ask for which $p$ this is possible. Restricting to odd $p$, $A_4$ and $S_4$ always embed in $PGL_2(\mathbb{F}_p)$, and $A_5$ embeds if and only if $p \equiv \pm 1$ mod 5. – Barinder Banwait Apr 26 2012 at 21:23 @Will: Your comment is in the right direction, though it omits the cyclic groups. This comment does reinforce my sense that higher values of `$n$` will get arbitrarily hard for list-making. – Jim Humphreys Apr 26 2012 at 21:34 @Barinder: I don't understand your definition of "Cartan subgroup", since Serre is considering both the split and non-split maximal tori in a finite group of Lie type (not just the split tori). Aside from that, I doubt very much that anything as simple as the case `$n=2$` will occur for larger `$n$`. The subgroup structure rapidly gets much more complicated. – Jim Humphreys Apr 26 2012 at 21:36 @Jim: Thanks for your comment. But does my definition of Cartan subgroup really force me into the split case? I didn't think that it did. – Barinder Banwait Apr 26 2012 at 22:13
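A trivial numeric companion to the congruence condition quoted in the comments (it only checks the order-divisibility direction, not the existence of an actual embedding): since $|PGL_2(\mathbb{F}_p)|=p(p-1)(p+1)$ and $|A_5|=60$, an embedding $A_5\hookrightarrow PGL_2(\mathbb{F}_p)$ requires $5\mid p(p^2-1)$, which for primes $p$ means $p=5$ or $p\equiv\pm1\pmod 5$.

```python
from sympy import primerange

for p in primerange(3, 60):
    order = p * (p - 1) * (p + 1)        # |PGL_2(F_p)|
    divisible = order % 5 == 0           # necessary for A_5 (order 60) to embed
    congruence = p % 5 in (0, 1, 4)      # p = 5 or p = +-1 (mod 5)
    assert divisible == congruence
    print(p, divisible)
```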
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171370267868042, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/9732/list
## Return to Answer 2 added 91 characters in body I don't think so (but I haven't checked this argument very thoroughly): First we claim that any such $f$ has degree two. Clearly the leading term of $f$ cannot be odd, so suppose by contradiction that $f$ has degree at least four. Pick a constant $R$ large enough so that in the region $D_1$ consisting of points satisfying $|x|, |y| \ge R$, there exists a constant $c$ such that $f(x, y) \ge c \text{ min}(|x|, |y|)^4$. (Edit: This isn't always possible, but I think it can be salvaged.) Then it's not hard to see that $\sum_{D_1} \frac{1}{f(x, y)}$ converges. But $D_2 = \mathbb{Z}^2 - D_1$ can be partitioned into $4R - 2$ not necessarily disjoint lines in which one of $x$ or $y$ is fixed. On any of these regions $f$ cannot be linear, so it either grows at least quadratically or is a constant; we can ignore the lines on which $f$ is constant. It follows that $\sum_L \frac{1}{f(x, y)}$ converges for any line $L$ on which $f$ is nonconstant, hence $\sum \frac{1}{f(x, y)}$ converges if we sum over every point of $\mathbb{Z}^2$ except the lines on which $f$ is constant. Since we have only thrown out finitely many of the values of $f$ in this sum, those values cannot contain every positive integer. But if $f$ is quadratic, it is a constant plus the sum of squares of two polynomials with rational coefficients and there are many integers not representable as the sum of squares of two rational numbers. 1 I don't think so (but I haven't checked this argument very thoroughly): First we claim that any such $f$ has degree two. Clearly the leading term of $f$ cannot be odd, so suppose by contradiction that $f$ has degree at least four. Pick a constant $R$ large enough so that in the region $D_1$ consisting of points satisfying $|x|, |y| \ge R$, there exists a constant $c$ such that $f(x, y) \ge c \text{ min}(|x|, |y|)^4$. Then it's not hard to see that $\sum_{D_1} \frac{1}{f(x, y)}$ converges. But $D_2 = \mathbb{Z}^2 - D_1$ can be partitioned into $4R - 2$ not necessarily disjoint lines in which one of $x$ or $y$ is fixed. On any of these regions $f$ cannot be linear, so it either grows at least quadratically or is a constant; we can ignore the lines on which $f$ is constant. It follows that $\sum_L \frac{1}{f(x, y)}$ converges for any line $L$ on which $f$ is nonconstant, hence $\sum \frac{1}{f(x, y)}$ converges if we sum over every point of $\mathbb{Z}^2$ except the lines on which $f$ is constant. Since we have only thrown out finitely many of the values of $f$ in this sum, those values cannot contain every positive integer. But if $f$ is quadratic, it is a constant plus the sum of squares of two polynomials with rational coefficients and there are many integers not representable as the sum of squares of two rational numbers.
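The last step of the argument uses the fact that many integers are not sums of two rational squares; an integer is a sum of two rational squares exactly when it is a sum of two integer squares, i.e. when every prime $\equiv 3 \pmod 4$ divides it to an even power. A quick enumeration of the failures (sympy's `factorint` is just a convenience here):

```python
from sympy import factorint

def sum_of_two_rational_squares(n):
    """True iff n > 0 is a sum of two rational (equivalently, integer) squares."""
    return all(e % 2 == 0 for p, e in factorint(n).items() if p % 4 == 3)

print([n for n in range(1, 40) if not sum_of_two_rational_squares(n)])
# [3, 6, 7, 11, 12, 14, 15, 19, 21, 22, 23, 24, 27, 28, 30, 31, 33, 35, 38, 39]
```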
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9582585692405701, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/152747-volume-slicing.html
# Thread: 1. ## Volume By Slicing The question reads as follows: A solid lies between planes perpendicular to the x-axis at x=1 and x=-1. The cross sections perpendicular to the x-axis are circular disks whose diameters run from the parabola y=x^2 to the parabola y=2-x^2. Find the volume of the solid. I tried doing this with the given integral $V = \int_a^b A(x)\, dx = \int_a^b \pi R(x)^2\, dx$. I've determined that the radius should equal 2. I don't know where to go from here to determine the limits of integration (the a and b values) and such. Any help is appreciated. 2. Originally Posted by dreamx20 The question reads as follows: A solid lies between planes perpendicular to the x-axis at x=1 and x=-1. The cross sections perpendicular to the x-axis are circular disks whose diameters run from the parabola y=x^2 to the parabola y=2-x^2. Find the volume of the solid. I tried doing this with the given integral $V = \int_a^b A(x)\, dx = \int_a^b \pi R(x)^2\, dx$. I've determined that the radius should equal 2. I don't know where to go from here to determine the limits of integration (the a and b values) and such. Any help is appreciated. 1. Draw a sketch of the situation. 2. The midpoints of the disks are placed on the line y = 1. Thus the radius of the disks can be calculated by $r = 1-x^2$ 3. The limits of integration are obviously given by the x-coordinates of the points of intersection between the two graphs. 4. I've got a volume of $V=\frac{16}{15} \pi \ vol.units$ 3. Since, as earboth says, the radius of each disk is $1- x^2$, the area is $\pi(1- x^2)^2= \pi(1- 2x^2+ x^4)$. The thickness of each disk is along the x-axis and so is "dx". The volume of a disk is "area times thickness", $\pi(1- 2x^2+ x^4)dx$. "Add" the thicknesses of all the disks. In the limit, that becomes the integral $\pi\int_{-1}^1 (1- 2x^2+ x^4)\, dx$.
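For completeness, here is the arithmetic for the last integral written out (not in the original thread; it just confirms earboth's value):

$$V=\pi\int_{-1}^{1}\left(1-2x^{2}+x^{4}\right)dx=\pi\left[x-\tfrac{2}{3}x^{3}+\tfrac{1}{5}x^{5}\right]_{-1}^{1}=\pi\left(2-\tfrac{4}{3}+\tfrac{2}{5}\right)=\tfrac{16}{15}\pi .$$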
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171987771987915, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/soft-question+mathematical-physics
# Tagged Questions 3answers 515 views ### Are Mathematical Physics and Occam's Razor compatible? Occam's Razor and mathematical beauty appear to be compatible when reviewing Michael Atiyah's video. But are the high levels of complexity associated with mathematical physics compatible with ... 0answers 51 views ### Reference Request: Classical Mechanics with Symplectic Reduction [closed] I am trying to find a supplement to appendix of Cushman & Bates' book on Global aspects of Classical Integrable Systems, that is less terse and explains mechanics with Lie groups (with dual of Lie ... 2answers 159 views ### How much pure math should a physics/microelectronics person know [duplicate] I do condensed matter physics modeling in my phd and I was struck up learning quite an amount of physics. But while having done lot of physics courses, I see that if I learn pure math I would ... 3answers 271 views ### Results of Statistical Mechanics first obtained by formal mathematical methods I have a question that seems natural in Physics and Mathematics mainly in Statistical Mechanics of Equilibrium. Results that are proven by formal mathematical methods that were already seem intuitive ... 0answers 58 views ### Is there a book that discusses General Relativity in terms of Modern Differential Geometry? [duplicate] All of the physics books that I've seen which discuss General Relativity do so in terms of coordinates - the tensor calculus - even though the naturally relevant entities are invariant under general ... 0answers 73 views ### Physics textbook for mathematicians [duplicate] Before this post gets marked as duplicate, I've checked book book recommendations among other posts but I don't think they really answer this fairly niche question. I am looking to compile a list of ... 0answers 188 views ### Apostol or Spivak for mathematical physics? [closed] I came across many recommendations for both of these books, but I'm not sure which one should I use to study calculus... I know most of the methods used in calculus and I use them frequently, but I'm ... 0answers 109 views ### Course advice for someone interested in strings and mathematical physics [closed] I'll be doing Introductory General Relativity and Graduate Quantum Mechanics II next semester. I still need to choose 2 (or maybe 3, but I don't want to overload too much) from the following: ... 2answers 234 views ### Intuition for Path Integrals and How to Evaluate Them I'm just starting to come across path integrals in quantum field theory, and want to get the right intuition for the them from the start. The amplitude for propagation from $x_a$ to $x_b$ is typically ... 0answers 192 views ### Interesting Math Topics Useful for Physics [closed] What are some interesting, but less popular, math topics that are useful for physics that can be self-studied? Specifically, topics that might ultimately be useful in high energy theory (even if it is ... 2answers 385 views ### How should a theoretical physicist study maths? [duplicate] Possible Duplicate: How should a physics student study mathematics? If some-one wants to do research in string theory for example, Would the Nakahara Topology, geometry and physics book and ... 2answers 194 views ### Electromagnetism for Mathematician I am trying to find a book on electromagnetism for mathematician (so it has to be rigorous). Preferably a book that extensively uses Stoke's theorem for Maxwell's equations (unlike other books that on ... 5answers 418 views ### Is physics rigorous in the mathematical sense? 
I am a student studying Mathematics with no prior knowledge of Physics whatsoever except for very simple equations. I would like to ask, due to my experience with Mathematics: Is there a set of ... 2answers 227 views ### Mathematically challenging areas in Quantum information theory and quantum cryptography I am a physics undergrad and thinking of exploring quantum information theory. I had a look at some books in my college library. What area in QIT, is the most mathematically challenging and rigorous? ... 2answers 343 views ### Book covering Topology required for physics and applications I am a physics undergrad, and interested to learn Topology so far as it has use in Physics. Currently I am trying to study Topological solitons but bogged down by some topological concepts. I am not ... 2answers 390 views ### Introduction to string theory I am in the last year of MSc. and would like to read string theory. I have the Zwiebach Book, but along with it what other advanced book can be followed, which can be a complimentary to Zwiebach. I ... 3answers 980 views ### What math do I need for mathematical physics? In what manner should I learn math? [closed] I'm a freshman undergraduate. I've got my sight on mathematical physics. I love math but I don't have the talent nor the inclination for purely abstract mathematics. I also love physics. The only ... 2answers 264 views ### Interesting topics to research in mathematical physics for undergraduates I'm planning on getting into research in mathematical physics and was wondering about interesting topics I can get into and possibly make some progress on. I'm particularity fond of abstract algebra ... 2answers 536 views ### Sources to learn about Greens functions For a physics major, what are the best books/references on Greens functions for self-studying? My mathematical background is on the level of Mathematical Methods in the physical sciences by Mary ... 10answers 652 views ### Readable books on advanced topics [closed] I realise that there are already a few questions looking for general book recommendations, but the motivation and type of book I'm looking for here is a little different, so I hope you can indulge me. ... 0answers 130 views ### Journals on mathematics similar to the American Journal of Physics and the Physics Teacher [closed] For the moderators: Please feel free to transfer this question to math.stackexchange if you find that it does not fit physics.stackexchange. It is known that American Journal of Physics and the ... 3answers 192 views ### Mathematical Physics Book Recommendation [duplicate] Possible Duplicate: Best books for mathematical background? I want to learn contemporary mathematical physics, so that, for example, I can read Witten's latest paper without checking other ... 3answers 1k views ### Use of advanced mathematics in astronomy, like topology, abstract algebra, or others I know that topology, abstract algebra, K-theory, Riemannian geometry and others, can be used in physics. Are some of these areas used in astronomy, and are some astronomical theories based on them? ... 2answers 173 views ### Physics talk with an emphasis on Mathematics [closed] I have to give a 10 minute physics talk that have to involve a fair bit of mathematics -- i.e. not just qualitative/handwaving material to some undergrads. I have wasted the last 3 hours looking for ... 0answers 33 views ### What is the importance of studying degeneration on $M_g$ Let $M_g$ be the moduli space of smooth curves of genus $g$. 
Let $\overline{M_g}$ be its compactification; the moduli space of stable curves of genus $g$. It seems to be important in physics to study ... 5answers 196 views ### Where do theta functions and canonical Green functions appear in physics In the beginning of Section 5 in his article, Wentworth mentions a result of Bost and proves it using the spin-1 bosonization formula. This result provides a link between theta functions, canonical ... 1answer 129 views ### Probablistic problems in physics? So I have an opportunity to do math research with a probabilist. I would like to propose to him a project that relates to physics, as it would be a good way to learn about another field. What are some ... 5answers 423 views ### What do theoretical physicists need from computer scientists? I recently co-authored a paper (not online yet unfortunately) with some chemists that essentially provided answers to the question, "What do chemists need from computer scientists?" This included the ... 0answers 65 views ### How to calculate 2D soft-body Physics [duplicate] Possible Duplicate: 2d soft body physics mathematics The definition of rigid body in Box2d is A chunk of matter that is so strong that the distance between any two bits of matter on ... 1answer 162 views ### Did classical applications of density functional theory precede its use as an electronic structure method? Density Functional Theory (DFT) is usually considered an electronic structure method, however a paper by Argaman and Makov highlights the applicability of the DFT formalism to classical systems, such ... 0answers 289 views ### errata for Morse & Feshbach - Methods of Theoretical Physics [closed] Anyone knows where I can find an errata (or any related material, such as solution sheets, etc) for this book? Thanks. Note: This is not a physics question, but this book is so popular among ... 1answer 426 views ### How to invent a theory? [closed] Is it possible to write down step-by-step instructions for inventing a new theory? I've been wondering if there exists some 'recipe' or proceedure for inventing a new theory. Presumably some ... 1answer 340 views ### What is the mathematical nature of space time quantization in string theory/super string theory? I don't know much about string theory, apart from it being a theory of everything which brings QM, QED and nuclear forces and gravity under one single roof. I am curious to know from a mathematical ... 10answers 2k views ### Physics for mathematicians How and from where does a mathematician learn physics from a mathematical stand point? I am reading the book by Spivak Elementary Mechanics from a mathematicians view point. The first couple of pages ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318345785140991, "perplexity_flag": "middle"}
http://physics.aps.org/story/print/v6/st22
# Focus: Why Now? Published November 16, 2000  |  Phys. Rev. Focus 6, 22 (2000)  |  DOI: 10.1103/PhysRevFocus.6.22

Cosmologists wonder why we happen to live in the era when the three major components of the Universe exist in similar quantities. One explanation involves only the most basic parameters from high energy physics.

For cosmologists the old Chinese curse, “May you live in interesting times,” is beginning to seem all too appropriate. According to the latest evidence, a mysterious “antigravity” force–the so-called dark energy–appears to be a major component of the cosmos. The dark energy density is thought to remain constant in time, while matter continues to be “diluted” as the Universe expands. That raises a question: Why do we happen to live in the “interesting times” when the density of dark energy and that of matter happen to be similar, when one is not overwhelmed by the other? That’s the riddle a team of theorists have tackled in the 14 November PRL, and they claim to have found a simple explanation involving only the most basic parameters from high energy physics.

The team, led by Nima Arkani-Hamed of the University of California at Berkeley, focuses on two key energy scales: the so-called Planck mass $M_{\rm Pl} \sim 10^{18}$ GeV, at which quantum gravity effects become important, and the electroweak mass $M_{\rm EW} \sim 10^{3}$ GeV, characteristic of the energy at which the electromagnetic and weak interactions become unified. They start by noting that these two mass scales can be combined to produce a roughly accurate value for the dark energy density: $(M_{\rm EW}^{2}/M_{\rm Pl})^{4}$. (Energy density has units of ${\rm GeV}^{4}$ in this system.) Using supersymmetry and other particle physics concepts, they sketch out some ideas for why these two fundamental parameters might be so easily related to the dark energy density.

They then point out another apparent coincidence: the similarity between the matter density of the Universe and a third important cosmological parameter, the energy density of heat radiation. Most of the matter in the Universe is thought to be dark matter, but despite its mysterious nature, cosmologists believe they understand in general terms how the density of all matter evolves as the Universe expands and cools. Using the century-old Stefan-Boltzmann law, which relates temperature to the corresponding amount of radiation, Arkani-Hamed and his colleagues estimated the temperature of the Universe at which the amounts of matter and radiation should coincide. Arkani-Hamed explains that the result is about $M_{\rm EW}^{2}/M_{\rm Pl}$, which works out to 10 K, close to the Universe’s current temperature. According to the Stefan-Boltzmann law, at that temperature the matter and radiation energy densities are $(M_{\rm EW}^{2}/M_{\rm Pl})^{4}$, exactly the formula the team found gives roughly the correct value for the dark energy density. In other words, we should now be living at a time when the energy density of radiation, dark matter, and dark energy should all coincide, just as the observations suggest.

The result is a bold assault on a deep mystery, says Joe Lykken, of the Fermi National Accelerator Laboratory near Chicago: “This apparent triple coincidence is certainly an important puzzle in modern cosmology, and if the $M_{\rm EW}^{2}/M_{\rm Pl}$ relation is true, it does indeed explain the coincidence in a deep way.” Lykken adds, however, that the rough derivations of Arkani-Hamed and his colleagues are based on toy models.
“The only way to prove their conjecture is to discover and verify some big, new, beyond-the-standard-model theory that has the desired properties.” Arkani-Hamed agrees, but adds, “These coincidences are puzzling, and it’s time to take them seriously–they may be giving us a clue towards such a theory.” –Robert Matthews Robert Matthews is science correspondent for the Sunday Telegraph, London, UK. ### Highlighted article #### New Perspective on Cosmic Coincidence Problems Nima Arkani-Hamed, Lawrence J. Hall, Christopher Kolda, and Hitoshi Murayama Published November 20, 2000
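As a quick back-of-the-envelope check of the numbers quoted in the story (added for illustration, in natural units where $1\ \text{eV} \approx 1.2\times 10^{4}\ \text{K}$):

$$\frac{M_{\rm EW}^{2}}{M_{\rm Pl}} \sim \frac{(10^{3}\ \text{GeV})^{2}}{10^{18}\ \text{GeV}} = 10^{-12}\ \text{GeV} = 10^{-3}\ \text{eV} \approx 10\ \text{K}, \qquad \left(\frac{M_{\rm EW}^{2}}{M_{\rm Pl}}\right)^{4} \sim 10^{-48}\ \text{GeV}^{4},$$

the latter being the "roughly accurate value for the dark energy density" the article refers to.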
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9241902828216553, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/5450/what-if-interaction-wipes-out-my-direct-effects-in-regression/5455
# What if interaction wipes out my direct effects in regression? In a regression, the interaction term wipes out both related direct effects. Do I drop the interaction or report the outcome? The interaction was not part of the original hypothesis. - 2 you could probably get a better answer if you provided more details about your experimental design, research question, and statistical model. – David Dec 13 '10 at 23:52 I have survey data, v1 and v2 predict the outcome, as I expected; however, the interaction between v1 (dichotomous) and v2 (5 groups) is not significant -- and (my question) it makes my v1 and v2 direct effects non-significant too. I can't find an example on reporting this in the literature. – Jen Dec 14 '10 at 0:03 If the v1:v2 interaction is not significant, do you need to have it included in the model? – Christopher Aden Dec 14 '10 at 1:40 – Glen Dec 14 '10 at 3:00 ## 3 Answers I think this one is tricky; as you hint, there's 'moral hazard' here: if you hadn't looked at the interaction at all, you'd be free and clear, but now that you have there is a suspicion of data-dredging if you drop it. The key is probably a change in the meaning of your effects when you go from the main-effects-only to the interaction model. What you get for the 'main effects' depends very much on how your treatments and contrasts are coded. In R, the default is treatment contrasts with the first factor levels (the ones with the first names in alphabetical order unless you have gone out of your way to code them differently) as the baseline levels. Say (for simplicity) that you have two levels, 'control' and 'trt', for each factor. Without the interaction, the meaning of the 'v1.trt' parameter (assuming treatment contrasts as is the default in R) is "average difference between 'v1.control' and 'v1.trt' group"; the meaning of the 'v2.trt' parameter is "average difference between 'v2.control' and 'v2.trt'". With the interaction, 'v1.trt' is the average difference between 'v1.control' and 'v1.trt' in the 'v2.control' group, and similarly 'v2.trt' is the average difference between v2 groups in the 'v1.control' group. Thus, if you have fairly small treatment effects in each of the control groups, but a large effect in the treatment groups, you could easily see what you're seeing. The only way I can see this happening without a significant interaction term, however, is if all the effects are fairly weak (so that what you really mean by "the effect disappeared" is that you went from p=0.06 to p=0.04, across the magic significance line). Another possibility is that you are 'using up too many degrees of freedom' -- that is, the parameter estimates don't actually change that much, but the residual error term is sufficiently inflated by having to estimate another 4 [ = (2-1)*(5-1)] parameters that your significant terms become non-significant. Again, I would only expect this with a small data set/relatively weak effects. One possible solution is to move to sum contrasts, although this is also delicate -- you have to be convinced that 'average effect' is meaningful in your case. The very best thing is to plot your data and to look at the coefficients and understand what's happening in terms of the estimated parameters. Hope that helps. - Thanks Ben! I'll let you know what works out. – Jen Dec 14 '10 at 4:24 1 There's no moral hazard. The calculation of the main effects with the interaction included is quite different from the calculation without it. 
You have to do the additive model to report the main effects and then include the interaction in a separate model anyway. You ignore the main effects in the model that includes the interaction because they're not really main effects, they're effects at specific levels of the other predictors (including the interaction). – John Jul 10 '12 at 14:08 Are you sure the variables have been appropriately expressed? Consider two independent variables $X_1$ and $X_2$. The problem statement asserts that you are getting a good fit in the form $$Y = \beta_0 + \beta_{12} X_1 X_2 + \epsilon$$ If there is some evidence that the variance of the residuals increases with $Y$, then a better model uses multiplicative error, of which one form is $$Y = \beta_0 + \left( \beta_{12} X_1 X_2 \right) \delta$$ This can be rewritten $$\log(Y - \beta_0) = \log(\beta_{12}) + \log(X_1) + \log(X_2) + \log(\delta);$$ that is, if you re-express your variables in the form $$\eqalign{ \eta =& \log(Y - \beta_0) \cr \xi_1 =& \log(X_1)\cr \xi_2 =& \log(X_2)\cr \zeta =& \log(\delta) \sim N(0, \sigma^2) }$$ then the model is linear and likely has homoscedastic residuals: $$\eta = \gamma_0 + \gamma_1 \xi_1 + \gamma_2 \xi_2 + \zeta,$$ and it may just so happen that $\gamma_1$ and $\gamma_2$ are both close to 1. The value of $\beta_0$ can be discovered through standard methods of exploratory data analysis or, sometimes, is indicated by the nature of the variable. (For instance, it might be a theoretical minimum value attainable by $Y$.) Alternatively, suppose $\beta_0$ is positive and sizable (within the context of the data) but $\sqrt{\beta_0}$ is inconsequentially small. Then the original fit can be re-expressed as $$Y = (\theta_1 + X_1) (\theta_2 + X_2) + \epsilon$$ where $\theta_1 \theta_2 = \beta_0$ and both $\theta_1$ and $\theta_2$ are small. Here, the missing cross terms $\theta_1 X_2$ and $\theta_2 X_1$ are presumed small enough to be subsumed within the error term $\epsilon$. Again, assuming a multiplicative error and taking logarithms gives a model with only direct effects and no interaction. This analysis shows how it is possible--even likely in some applications--to have a model in which the only effects appear to be interactions. This arises when the variables (independent, dependent, or both) are presented to you in an unsuitable form and their logarithms are a more effective target for modeling. The distributions of the variables and of the initial residuals provide the clues needed to determine whether this may be the case: skewed distributions of the variables and heteroscedasticity of the residuals (specifically, having variances roughly proportional to the predicted values) are the indicators. - Hmmm. This all seems plausible but more complex than my solution (the comments on the original question suggest that the predictors are both categorical). But as usual, the answer is "look at the data" (or the residuals). – Ben Bolker Dec 14 '10 at 15:34 @Ben I agree but I don't understand where the perception of "more complex" comes from, because analysis of univariate distributions and post-hoc analysis of residuals are essential in any regression exercise. The only extra work required here is to think about what these analyses mean. 
– whuber♦ Dec 14 '10 at 15:53 Perhaps by "more complex" I just mean "In my experience, I have seen the issues I referred to in my answer (contrast coding) arise more frequently than those you referred to (non-additivity)" -- but this is really a statement about the kinds of data/people I work with rather than about the world. – Ben Bolker Dec 14 '10 at 16:23 In a regular multiple regression with two quantitative predictor variables, including their interaction just means including their observation-wise product as an additional predictor variable: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 (X_1 \cdot X_2) = (b_0 + b_2 X_2) + (b_1 + b_3 X_2) X_1$ This typically introduces high multicollinearity since the product will strongly correlate with both original variables. With multicollinearity, individual parameter estimates depend strongly on which other variables are considered - like in your case. As a counter-measure, centering the variables often reduces multicollinearity when the interaction is considered. I'm not sure if this directly applies to your case since you seem to have categorical predictors but use the term "regression" instead of "ANOVA". Of course the latter case is essentially the same model, but only after choosing the contrast coding scheme as Ben explained. -
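As a small illustration of the centering remark in the last answer, here is a minimal Python/NumPy sketch (not from the original thread; the simulated means and sample size are arbitrary). It shows that the raw product term is strongly collinear with its factors, while the centered product is not:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(5.0, 1.0, 10_000)   # predictor with a nonzero mean
x2 = rng.normal(3.0, 1.0, 10_000)

# Raw interaction term: strongly correlated with x1 (and, similarly, with x2).
print(np.corrcoef(x1, x1 * x2)[0, 1])        # roughly 0.5 for these settings

# Center first: the product is now nearly uncorrelated with each predictor.
x1c, x2c = x1 - x1.mean(), x2 - x2.mean()
print(np.corrcoef(x1c, x1c * x2c)[0, 1])     # close to 0
```

With centered predictors, the "main effect" coefficients in the interaction model are evaluated at the mean of the other variable, which is usually closer to the averaged effect one has in mind.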
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9423273205757141, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/07/21/pullbacks-on-cohomology/?like=1&source=post_flair&_wpnonce=a34a077b61
# The Unapologetic Mathematician

## Pullbacks on Cohomology

We’ve seen that if $f:M\to N$ is a smooth map of manifolds then we can pull back differential forms, and that this pullback $f^*:\Omega(N)\to\Omega(M)$ is a degree-zero homomorphism of graded algebras. But now that we’ve seen that $\Omega(M)$ and $\Omega(N)$ are differential graded algebras, it would be nice if the pullback respected this structure as well. And luckily enough, it does!

Specifically, the pullback $f^*$ commutes with the exterior derivatives on $\Omega(M)$ and $\Omega(N)$, both of which are (somewhat unfortunately) written as $d$. If we temporarily write them as $d_M$ and $d_N$, then we can write our assertion as $f^*(d_N\omega)=d_M(f^*\omega)$ for all $k$-forms $\omega$ on $N$.

First, we show that this is true for a function $\phi\in\Omega^0(N)$. If we pick a test vector field $X\in\mathfrak{X}(M)$, then we can check

$\displaystyle\begin{aligned}\left[f^*(d\phi)\right](X)&=\left[d\phi\circ f\right](f_*(X))\\&=\left[f_*(X)\right]\phi\\&=X(\phi\circ f)\\&=\left[d(\phi\circ f)\right](X)\\&=\left[d(f^*\phi)\right](X)\end{aligned}$

For other $k$-forms it will make life easier to write out $\omega$ as a sum

$\displaystyle\omega=\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}$

Then we can write the left side of our assertion as

$\displaystyle\begin{aligned}f^*\left(d\left(\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\right)&=f^*\left(\sum\limits_Id\alpha_I\wedge dx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\\&=\sum\limits_If^*(d\alpha_I)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\\&=\sum\limits_Id(f^*\alpha_I)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\\&=\sum\limits_Id(\alpha_I\circ f)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\end{aligned}$

and the right side as

$\displaystyle\begin{aligned}d\left(f^*\left(\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\right)&=d\left(\sum\limits_I(\alpha_I\circ f)f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\right)\\&=d\left(\sum\limits_I(\alpha_I\circ f)d(f^*x^{i_1})\wedge\dots\wedge d(f^*x^{i_k})\right)\\&=\sum\limits_Id(\alpha_I\circ f)\wedge d(f^*x^{i_1})\wedge\dots\wedge d(f^*x^{i_k})\\&=\sum\limits_Id(\alpha_I\circ f)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\end{aligned}$

So these really are the same. The useful thing about the fact that pullbacks commute with the exterior derivative is that it makes pullbacks into a chain map between the chains of the $\Omega^k(N)$ and $\Omega^k(M)$. And then immediately we get homomorphisms $H^k(N)\to H^k(M)$, which we also write as $f^*$. If you want, you can walk the diagrams yourself to verify that a cohomology class in $H^k(N)$ is sent to a unique, well-defined cohomology class in $H^k(M)$, but it’d probably be more worth it to go back to read over the general proof that chain maps give homomorphisms on homology.

Posted by John Armstrong | Differential Topology, Topology

## 4 Comments »

1. [...] spaces are all contravariant functors on the category of smooth manifolds. We’ve even seen how it acts on smooth maps. All we really need to do is check that it plays nice with [...] Pingback by | July 23, 2011 | Reply

2. [...] where each term omits exactly one of the basic -forms. Since everything in sight — the differential operator and both integrals — is -linear, we can just use one of these terms. And so we can calculate the pullbacks: [...] Pingback by | August 18, 2011 | Reply

3. [...] the exterior derivative — gives us a chain complex.
Since pullbacks of differential forms commute with the exterior derivative, they define a chain map between two chain [...] Pingback by | December 2, 2011 | Reply 4. [...] The exterior derivative is a derivative, The exterior derivative is nilpotent, De Rham Cohomology, Pullbacks on Cohomology, De Rham cohomology is functorial, The Interior [...] Pingback by | August 21, 2012 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 28, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9314354062080383, "perplexity_flag": "head"}
http://scicomp.stackexchange.com/questions/3262/how-to-choose-a-method-for-solving-linear-equations
# How to choose a method for solving linear equations To my knowledge, there are 4 ways to solve a system of linear equations (correct me if there are more):

1. If the system matrix is a full-rank square matrix, you can use Cramer’s Rule;
2. Compute the inverse or the pseudoinverse of the system matrix;
3. Use matrix decomposition methods (Gaussian or Gauss-Jordan elimination is considered as LU decomposition);
4. Use iterative methods, such as the conjugate gradient method.

In fact, you almost never want to solve the equations by using Cramer's rule or computing the inverse or pseudoinverse, especially for high dimensional matrices, so the first question is when to use decomposition methods and iterative methods, respectively. I guess it depends on the size and properties of the system matrix. The second question is, to your knowledge, what kind of decomposition methods or iterative methods are most suitable for a certain system matrix in terms of numerical stability and efficiency. For example, the conjugate gradient method is used to solve equations where the matrix is symmetric and positive definite, although it can also be applied to any linear equations by converting $\mathbf{A}x=b$ to $\mathbf{A}^{\rm T}\mathbf{A}x=\mathbf{A}^{\rm T}b$. Also, for a positive definite matrix, you can use the Cholesky decomposition method to seek the solution. But I don't know when to choose the CG method and when to choose Cholesky decomposition. My feeling is that we'd better use the CG method for large matrices. For rectangular matrices, we can either use QR decomposition or SVD, but again I don't know how to choose one of them. For other matrices, I don't know how to choose the appropriate solver, such as for Hermitian/symmetric matrices, sparse matrices, banded matrices, etc. - – Paul♦ Sep 11 '12 at 16:37 Hi @Paul, thanks for your comments, is that thread only about sparse matrices or any matrix? – chaohuang Sep 11 '12 at 17:34 3 Your question has massive scope and may be a bit too broad for the Q&A format that we have here on the stackexchange... is there a particular class of matrix system that you are interested in? – Paul♦ Sep 11 '12 at 22:41 3 @chaohuang There are numerous books on this subject. This question is a bit like asking a medical doctor how they choose treatments "in general". If you want to ask a question that is not specific to a certain class of problems, you should put in the work to become familiar enough with the field to ask something precise. Otherwise, explain the specific problem that you are concerned with. – Jed Brown Sep 12 '12 at 2:52 2 From the FAQ: If you can imagine an entire book that answers your question, you’re asking too much. There are entire journals, and hundreds of books, associated with this question. – David Ketcheson Sep 12 '12 at 5:09 ## 2 Answers
As you rightly noted, option 1 and 2 are right out: Computing and applying the inverse matrix is a tremendously bad idea, since it is much more expensive and often numerically less stable than applying one of the other algorithms. That leaves you with the choice between direct and iterative methods. The first thing to consider is not the matrix $A$, but what you expect from the numerical solution $\tilde x$: 1. How accurate does it have to be? Does $\tilde x$ have to solve the system up to machine precision, or are you satisfied with $\tilde x$ satisfying (say) $\|\tilde x - x^*\| < 10^{-3}$, where $x^*$ is the exact solution? 2. How fast do you need it? The only relevant metric here is clock time on your machine - a method which scales perfectly on a huge cluster might not be the best choice if you don't have one of those, but you do have one of those shiny new Tesla cards. As there's no such thing as a free lunch, you usually have to decide on a trade-off between the two. After that, you start looking at the matrix $A$ (and your hardware) to decide on a good method (or rather, the method for which you can find a good implementation). (Note how I avoided writing "best" here...) The most relevant properties here are • The structure: Is $A$ symmetric? Is it dense or sparse? Banded? • The eigenvalues: Are they all positive (i.e., is $A$ positive definite)? Are they clustered? Do some of them have very small or very large magnitude? With this in mind, you then have to trawl the (huge) literature and evaluate the different methods you find for your specific problem. Here are some general remarks: • If you really need (close to) machine precision for your solution, or if your matrix is small (say, up to $1000$ rows), it is hard to beat direct methods, especially for dense systems (since in this case, every matrix multiplication will be $\mathcal{O}(n^2)$, and if you need a lot of iterations, this might not be far from the $\mathcal{O}(n^3)$ a direct method needs). Also, LU decomposition (with pivoting) works for any invertible matrix, as opposed to most iterative methods. (Of course, if $A$ is symmetric and positive definite, you'd use Cholesky.) This is also true for (large) sparse matrices if you don't run into memory problems: Sparse matrices in general do not have a sparse LU decomposition, and if the factors do not fit into (fast) memory, these methods becomes unusable. In addition, direct methods have been around for a long time, and very high quality software exists (e.g., UMFPACK, MUMPS, SuperLU for sparse matrices) which can automatically exploit the band structure of $A$. • If you need less accuracy, or cannot use direct methods, choose a Krylov method (e.g., CG if $A$ is symmetric positive definite, GMRES or BiCGStab if not) instead of a stationary method (such as Jacobi or Gauss-Seidel): These usually work much better, since their convergence is not determined by the spectral radius of $A$ but by (the square root) of the condition number and does not depend on the structure of the matrix. However, to get really good performance from a Krylov method, you need to choose a good preconditioner for your matrix - and that is more a craft than a science... • If you repeatedly need to solve linear systems with the same matrix and different right hand sides, direct methods can still be faster than iterative methods since you only need to compute the decomposition once. (This assumes sequential solution; if you have all the right hand sides at the same time, you can use block Krylov methods.) 
Of course, these are just very rough guidelines: For any of the above statements, there likely exists a matrix for which the converse is true... Since you asked for references in the comments, here are some textbooks and review papers to get you started. (Neither of these - nor the set - is comprehensive; this question is much too broad, and depends too much on your particular problem.) • Golub, van Loan: Matrix Computations (still the classical reference on matrix algorithms; does not explicitly treat sparse matrices; a bit terse) • Davis: Direct Methods for Sparse Linear Systems (a good introduction on decomposition methods for sparse matrices) • Duff: Direct Methods (review paper; more details on modern "multifrontal" direct methods for sparse matrices) • Saad: Iterative methods for sparse linear systems (the theory and - to a lesser extent - practice of Krylov methods) - Thanks man! Very helpful. – chaohuang Sep 16 '12 at 12:59 2 I like your analogy of the screwdriver! – Paul♦ Sep 16 '12 at 17:25 @chaohuang If this answered your question, you should accept it. (If it didn't, feel free to point out what is missing.) – Christian Clason Nov 29 '12 at 13:05 @ChristianClason accepted it. I was waiting and hoping someone could shed some light on the issue of rectangular matrices. Since it has been a long time, I guess there will never be such an answer :( – chaohuang Nov 29 '12 at 16:06 @chaohuang Thank you. If you're still interested in rectangular matrices, you should pose a (linked) question on "How to choose a method for solving overdetermined systems". – Christian Clason Nov 29 '12 at 16:10 show 2 more comments The decision tree in Section 4 of the relevant chapter in the NAG Library Manual answers (in part) some of your questions. - Nice! Thanks a lot! – chaohuang Jan 12 at 4:42
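To make the direct-versus-iterative distinction above concrete, here is a minimal Python/SciPy sketch (not part of the original answers; the 1D Laplacian is just a convenient symmetric positive definite test matrix) contrasting a sparse LU factorization with conjugate gradients on the same system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A standard SPD test case: the 1D Laplacian (tridiagonal, sparse).
n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Direct route: factor once, then solve; the factorization can be reused
# for many right-hand sides.
lu = spla.splu(A)
x_direct = lu.solve(b)

# Iterative route: conjugate gradients, appropriate since A is SPD.
x_cg, info = spla.cg(A, b)

print(np.linalg.norm(A @ x_direct - b))        # residual of the direct solve
print(np.linalg.norm(A @ x_cg - b), info)      # CG residual; info == 0 means it converged
```

With a preconditioner (for an SPD matrix, e.g. an incomplete Cholesky factorization) the CG iteration count drops sharply, which is the "craft" the answer alludes to.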
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9278546571731567, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/18328/what-is-a-single-word-that-describes-the-idea-of-the-second-time-derivative-of-e/18338
# What is a single word that describes the idea of the second time derivative of energy? I think about position, its time derivative, speed, and its second time derivative, acceleration. I would like to identify a single word that can be used as a handle for the second time derivative of energy (i.e., the time derivative of power). If there is a widely used term, I'd prefer to use it. If not, I'd like to get your suggestions as to what a term might be. Any ideas? - 3 "Power-up rate"? :) – Lagerbaer Dec 15 '11 at 22:57 ## 2 Answers Within power systems such as regional or national electricity grids, $\frac{\mathrm{d}^2E}{\mathrm{d}t^2}$ is called the slew rate: it's used to denote the rate of change of power demanded from, or supplied to, electricity grids. It's typically expressed as either MW/s or GW/h, these being two time scales of interest in balancing electricity grids. Inconveniently, you might find that (in some contexts) slew rate is also used to refer to rate of change of voltage, or of current, with respect to time (in units of V/s or A/s respectively). - The quantity itself, $\frac{\mathrm{d}^2E}{\mathrm{d}t^2}$, is not widely used (as a matter of fact I can't think of any equation in which it appears off the top of my head), so correspondingly there is no widely used term for it. You can just say "second time derivative of energy" or "rate of change of power" or some such thing and it will get the point across. - Thanks David, this explains why searching Google turned up nothing. If you are curious, we are considering the second time derivative of energy as it applies to the rate of change of metabolic rate incurred by different activities. – Sipp Dec 16 '11 at 14:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9582298994064331, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/131280-contour-integrals.html
# Thread: 1. ## contour integrals Compute the integral over $C$ of $\frac{1}{z}\,dz$, where $C$ is the unit circle centered at some point $z$, $|z| > 2$. 2. well, a unit circle has radius 1, not 2; try putting z into polar form and then do a contour integral 3. the center of the circle is at a point z, |z|>2; it's just saying the unit circle is shifted from the origin.. 4. would it be written then |z-2|>2 5. Originally Posted by stumped765 the center of the circle is at a point z, |z|>2; it's just saying the unit circle is shifted from the origin.. Then use something like $z_0$ to differentiate it from the variable $z$. The unit circle about $z_0$ with radius 1 is $|z-z_0|= 1$ and $z= z_0+ e^{i\theta}$ with $\theta$ going from 0 to $2\pi$. Then $dz= ie^{i\theta}d\theta$ and $\oint \frac{1}{z}dz= \int_0^{2\pi}\frac{ie^{i\theta}}{z_0+ e^{i\theta}}d\theta$. However, you should know that $\frac{1}{z}$ is analytic everywhere except where z= 0 which, with $|z_0|> 2$, means everywhere inside this contour. 6. Originally Posted by stumped765 Compute the integral over $C$ of $\frac{1}{z}\,dz$, where $C$ is the unit circle centered at some point $z$, $|z| > 2$. Hi. Here's something fun to check your knowledge about contour integrals: pin the center of the unit circle at the point 2i. Now, drop it straight through the singular point of $1/z$ at the origin until the center rests at -2i. Now, how does the value of the integral $\mathop\oint\limits_{C} \frac{1}{z}dz$ change as the circle falls?
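To spell out what HallsofIvy's closing remark implies (a standard consequence of Cauchy's theorem, stated here for completeness rather than in the original thread): since the center $z_0$ satisfies $|z_0| > 2$, the origin lies outside the closed unit disk bounded by $C$, so $\frac{1}{z}$ is analytic on and inside $C$ and

$$\oint_C \frac{1}{z}\,dz = 0 .$$

For the "falling circle" in the last post, presumably the intended answer is that the integral is $0$ while the circle does not enclose the origin, equals $2\pi i$ (for the counterclockwise orientation) once it does, and is undefined at the moment the circle passes through the origin.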
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9114346504211426, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/68660-binomial-distribution-problems.html
# Thread: 1. ## Binomial Distribution Problems I believe that all of these are binomial distribution problems, but I'm not really sure how to approach them. Any help would be appreciated greatly. 1. Estimate the probability that, in a group of five people, at least two of them have the same zodiacal sign. (There are 12 zodiacal signs; assume that each sign is equally likely for any person.) 2. 30% of the workers in a workforce are women. A company hires 100 workers of which 25 are women. What is the probability this (the hiring of 24 women or less) occurred by chance? 3. To ensure a high male/female ratio, the ruler of a mythical island decrees couples may keep having children until they have a girl. If the decree is followed, what will the male/female ratio be on the island? 2. Originally Posted by blondsk8rguy I believe that all of these are binomial distribution problems, but I'm not really sure how to approach them. Any help would be appreciated greatly. 1. Estimate the probability that, in a group of five people, at least two of them have the same zodiacal sign. (There are 12 zodiacal signs; assume that each sign is equally likely for any person.) Mr F says: First assume a particular zodiac sign (Aries, say) and calculate the probability that at least two in the group have that sign. Binomial where n = 5, p = 1/12. Now multiply that answer by 12 (why?). 2. 30% of the workers in a workforce are women. A company hires 100 workers of which 25 are women. What is the probability this (the hiring of 24 women or less) occurred by chance? Mr F says: Binomial where n = 100, p = 0.3. Calculate ${\color{red}\Pr(X \leq 25)}$. You could probably use the normal approximation if you don't have access to technology. 3. To ensure a high male/female ratio, the ruler of a mythical island decrees couples may keep having children until they have a girl. If the decree is followed, what will the male/female ratio be on the island? 3. Presumably there is no more than 1 girl per family. So you need to calculate the expected number of boys in a family. So use the geometric distribution (Geometric distribution - Wikipedia, the free encyclopedia - use a support of k = 0, 1, 2, 3, ....) to get the expected number of births until a girl is obtained. The answer may surprise you. 3. Thanks for your help. I didn't realize that my teacher had said that the problems could use geometric distributions. Here's my progress on each of the questions: 1. I used a binomcdf function with n=5 and p=1/12 ranging from 2 to 5 to account for combinations of 2, 3, 4, or all 5 people sharing the same sign. The multiplication by 12 is due to the possibility of any zodiac combination, and comes from nCr(12, 1), right? So I got about 70%, which seems somewhat high, but is still reasonable. 2. This problem seems really straightforward now.... I just performed a binomcdf function for n=100 and p=.3 from 0 to 25. This gave about 16%, which seems low but is probably reasonable. 3. The answer to this problem was really surprising.... When I found the expected value of the geometric distribution, I got 1. Does that mean that the ratio ends up being 1:1? There's also a second part to the first question, which involves estimating the probability that at least one of the five people has the same zodiacal sign as yours. For this, would I do a binomcdf operation for n=5 and p=1/12 from 1 to 5 and not multiply by 12? That gives about 35%, which seems to make sense. Thanks so much. 4. Originally Posted by blondsk8rguy Thanks for your help.
I didn't realize that my teacher had said that the problems could use geometric distributions. Here's my progress on each of the questions: 1. I used a binomcdf function with n=5 and p=1/12 ranging from 2 to 5 to account for combinations of 2, 3, 4, or all 5 people sharing the same sign. The multiplication by 12 is due to the possibility of any zodiac combination, and comes from nCr(12, 1), right? So I got about 70%, which seems somewhat high, but is still reasonable. 2. This problem seems really straightforward now.... I just performed a binomcdf function for n=100 and p=.3 from 0 to 25. This gave about 16%, which seems low but is probably reasonable. 3. The answer to this problem was really surprising.... When I found the expected value of the geometric distribution, I got 1. Does that mean that the ratio ends up being 1:1? There's also a second part to the first question, which involves estimating the probability that at least one of the five people has the same zodiacal sign as yours. For this, would I do a binomcdf operation for n=5 and p=1/12 from 1 to 5 and not multiply by 12? That gives about 35%, which seems to make sense. Thanks so much. Everything you say is correct. (For Q3 it's the expected ratio) 5. Originally Posted by blondsk8rguy I believe that all of these are binomial distribution problems, but I'm not really sure how to approach them. Any help would be appreciated greatly. 1. Estimate the probability that, in a group of five people, at least two of them have the same zodiacal sign. (There are 12 zodiacal signs; assume that each sign is equally likely for any person.) Mr F says: First assume a particular zodiac sign (Aries, say) and calculate the probability that at least two in the group have that sign. Binomial where n = 5, p = 1/12. Now multiply that answer by 12 (why?). 2. 30% of the workers in a workforce are women. A company hires 100 workers of which 25 are women. What is the probability this (the hiring of 24 women or less) occurred by chance? Mr F says: Binomial where n = 100, p = 0.3. Calculate ${\color{red}\Pr(X \leq 25)}$. You could probably use the normal approximation if you don't have access to technology. 3. To ensure a high male/female ratio, the ruler of a mythical island decrees couples may keep having children until they have a girl. If the decree is followed, what will the male/female ratio be on the island? 3. Presumably there is no more than 1 girl per family. So you need to calculate the expected number of boys in a family. So use the geometric distribution (Geometric distribution - Wikipedia, the free encyclopedia - use a support of k = 0, 1, 2, 3, ....) to get the expected number of births until a girl is obtained. The answer may surprise you. My answer to Q1 is very wrong. I don't have time now but will post a correct solution later. 6. Originally Posted by blondsk8rguy I believe that all of these are binomial distribution problems, but I'm not really sure how to approach them. Any help would be appreciated greatly. 1. Estimate the probability that, in a group of five people, at least two of them have the same zodiacal sign. (There are 12 zodiacal signs; assume that each sign is equally likely for any person.) [snip] Pr(at least two people in a group of five have the same zodiac sign) = 1 - Pr(no people in a group of five have the same zodiac sign).
Pr(no people in a group of five have the same zodiac sign) $= 1 \cdot \left(1 - \frac{1}{12} \right) \cdot \left(1 - \frac{2}{12} \right) \cdot \left(1 - \frac{3}{12} \right) \cdot \left(1 - \frac{4}{12} \right)$ $= \left(\frac{11}{12} \right) \cdot \left(\frac{10}{12} \right) \cdot \left(\frac{9}{12} \right) \cdot \left(\frac{8}{12} \right) = \, ....$ It is left as a simple exercise to explain why my original answer is wrong.
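For reference, finishing the arithmetic that is left off above (not in the original post): $$\frac{11 \cdot 10 \cdot 9 \cdot 8}{12^4} = \frac{7920}{20736} \approx 0.382, \qquad \text{so} \quad \Pr(\text{at least two share a sign}) \approx 1 - 0.382 \approx 0.618 .$$ Roughly speaking, the earlier "multiply by 12" approach overcounts because a group of five can contain two different signs that are each shared, which is why it gave a somewhat larger figure (about 70%).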
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.970259428024292, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/219943/approximation-of-stochastic-differential-equations?answertab=active
# Approximation of stochastic differential equations Consider the two following real Stochastic Differential Equations (SDE) starting from the same initial condition: $$dx_t = f(x_t)dt + \sigma dB_t$$ $$dy_t = f(y_t)g_{\epsilon}(y_t)dt + \sigma dB_t$$ where $f$ and $g_{\epsilon}$ are such that there exist strong solutions to both SDEs (typically local Lipschitz assumptions on the coefficients). We assume that $|1-g_{\epsilon}(y)|\leq \epsilon$ for all $y\in \mathbb{R}$. I want to prove the following convergence: for all finite time $T>0$, $$\lim_{\epsilon \to 0} \mathbb{E}\left[\sup_{0\leq t \leq T} |x_t-y_t|\right]=0$$ I would like to know how to prove it without a global Lipschitz assumption (think for instance that there may be some quadratic terms in $f$). Can anyone explain to me how to do it rigorously or point me to some article/book where it is already done? Thanks! - @Mellow: I think that the difference process $z_t = x_t - y_t$ (if $x$ and $y$ start at the same point) follows "almost" an ODE (no Brownian term) and it is stochastic only in the drift term; maybe a classical ODE method would do the trick. Best regards – TheBridge Oct 24 '12 at 14:44 @TheBridge: thanks. Of course studying $z_t$ and applying the Gronwall Lemma is the classical strategy. But here the problem is that you cannot bound the Lipschitz coefficient because it is only assumed locally Lipschitz... – mellow Oct 25 '12 at 7:22
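To make the "classical strategy" from the comments explicit (this is only a sketch of the easy part, added here for orientation; the localization difficulty the question is really about is untouched): on any event where both paths stay in a fixed ball of radius $R$ up to time $T$, and with $L_R$ a Lipschitz constant of $f$ on that ball,

$$|z_t| = \left|\int_0^t \big(f(x_s)-f(y_s)\big)\,ds + \int_0^t f(y_s)\big(1-g_{\epsilon}(y_s)\big)\,ds\right| \le \int_0^t L_R\,|z_s|\,ds + \epsilon \int_0^t |f(y_s)|\,ds,$$

so Gronwall's lemma gives $\sup_{t\le T}|z_t| \le \epsilon\, e^{L_R T} \int_0^T |f(y_s)|\,ds$ on that event. The remaining work, which is where the absence of a global Lipschitz or linear-growth bound bites, is to control the probability that either process leaves the ball before time $T$.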
http://math.stackexchange.com/questions/155495/solve-the-general-cubic-by-factoring?answertab=oldest
# Solve the general cubic by factoring

After learning Cardano's solution of the cubic, I decided to look at Ferrari's solution of the quartic (both articles on Wikipedia). At the end of the article on the quartic function was an alternate method to solve the quartic by factoring it into two quadratic terms - something like factoring the monic quartic $$x^4+ax^3+bx^2+cx+d=0$$ into $$(x^2+px+q)(x^2+rx+s).$$ They used Vieta's formulas (correct me if I am wrong) in order to acquire the "resolvent" cubic (I assume that is a cubic required to be solved for algebraic solutions of the quartic) - what they essentially did was to expand the two quadratic factors and set the resulting coefficients equal to the corresponding coefficients of the quartic. I tried to do this with the monic cubic, first attempting to use three linear factors, then attempting to use one linear and one quadratic factor. Below I have divided by $a$ for the first equation and expanded the second equation, setting coefficients equal. $$ax^3+bx^2+cx+d=0$$ $$(x+p)(x^2+qx+r)\;\longrightarrow \; p+q=\frac{b}{a}, \; r+pq=\frac{c}{a}, \; pr=\frac{d}{a}$$ Note that $p+q=0$ when $b=0$, which can be the case for all cubics after application of the Tschirnhaus transformation. However, I found that I could not solve this system - only piece it back together into a cubic. If there is a solution, how does one apply it? - You have to solve a cubic to do it. In other words, it will not give you a "nicer" way. – André Nicolas Jun 8 '12 at 7:02 Is there any simple explanation as to why it works for the quartic but not for the cubic? I mean, for the quartic, one can reduce the problem to a cubic. Why is it not possible to reduce the cubic into a quadratic? – inkyvoyd Jun 8 '12 at 7:04 1 I do not know of one. There is a complicated Galois Theory reason. Sort of connected with the fact that $3$ is prime. – André Nicolas Jun 8 '12 at 7:08 In that case, do you mean that one must "solve the cubic" to solve the cubic? Or, does this only apply for the roots of the polynomial? – inkyvoyd Jun 8 '12 at 7:27 It applies equally to expressing as a product of linear term and quadratic. The linear term gives an immediate root, and after we know the coefficients of the quadratic, the other two roots are easy. – André Nicolas Jun 8 '12 at 12:25 ## 1 Answer I don't understand the question. Cardano's method does solve the cubic by replacing it with a quadratic. First you make a substitution to bring it to the form $x^3-px-q=0$; then you make another substitution that looks like turning it into an equation of degree 6, but on closer inspection you find it's a quadratic in the cube of the variable. So you have replaced the cubic with that quadratic. - The question has nothing to do with Cardano's method. I was trying to factor the cubic into a linear and quadratic factor. André Nicolas explained to me that I could not reduce the problem that way, however. – inkyvoyd Jun 8 '12 at 15:56 Your exact words: "Why is it not possible to reduce the cubic into a quadratic?" My answer: it is possible to reduce the cubic to a quadratic, and that's exactly what Cardano does. True, it doesn't factor the cubic into a linear and a quadratic, but that wasn't what you asked for in the comment I quoted. – Gerry Myerson Jun 9 '12 at 1:10 I do realize what exactly was wrong with how I phrased my words. I guess I should've said something closer to "Why is it not possible to factor a cubic into a linear and quadratic factor?" – inkyvoyd Jun 9 '12 at 7:09
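For concreteness, the coefficient matching described in the question can be carried out symbolically. This SymPy sketch (an editorial addition, not a new solution method) reproduces the system and confirms the question's observation that eliminating the unknowns only pieces the original cubic back together.

```python
import sympy as sp

x, p, q, r, a, b, c, d = sp.symbols('x p q r a b c d')

# Expand the proposed factorization of the monic cubic x^3 + (b/a)x^2 + (c/a)x + d/a.
lhs = sp.expand((x + p) * (x**2 + q*x + r))
monic = x**3 + (b/a)*x**2 + (c/a)*x + (d/a)

# Match coefficients of x^2, x^1, x^0.
eqs = [sp.Eq(lhs.coeff(x, k), monic.coeff(x, k)) for k in range(3)]
print(eqs)   # [Eq(p*r, d/a), Eq(p*q + r, c/a), Eq(p + q, b/a)]

# Eliminate q and r; what is left is a cubic in p.
sol_q = sp.solve(eqs[2], q)[0]          # q = b/a - p
sol_r = sp.solve(eqs[0], r)[0]          # r = d/(a*p)
residual = eqs[1].lhs.subs({q: sol_q, r: sol_r}) - eqs[1].rhs
print(sp.expand(residual * a * p))      # -a*p**3 + b*p**2 - c*p + d: the original cubic at x = -p
```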
http://mathoverflow.net/questions/22203/unbiased-estimate-of-the-variance-of-an-unnormalised-weighted-mean/59403
## Unbiased estimate of the variance of an *unnormalised* weighted mean ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I have a follow-up question to this one: http://mathoverflow.net/questions/11803/unbiased-estimate-of-the-variance-of-a-weighted-mean Specifically, how do I generalise the result given here (and on Wikipedia) for the unbiased sample estimate of the variance of a weighted population to the case where the weights are not normalised to 1? (or equivalently are not in the standard simplex, as in the previous question's answer derivation) I'm not sure how much of the previous answer relied on the weights being in the unit simplex, but it's clear that the given answer contains denominator terms like $1 - \sum_i w_i^2$ which aren't going to be nice if $\sum_i w_i^2 > 1$! Maybe there's a simple ansatz for modification to unnormalized weights, but it's not obvious to me which to choose! Thanks! Andy - ## 1 Answer Hi, Rather long after your question, but it can be done directly in the same way Matus did it, or you can simply use the following: Matus assumed weights Wi which sum to 1. Suppose you have weights Ui, and write V1 = sum of the Ui, and V2 = sum of the Ui^2, consistent with the Wikipedia entry for weighted sample variance. Then we can put Wi = Ui/V1. Now, look at the factor 1 / (1 - sum(Wi^2)), replace the Wi with Ui/V1, multiply top and bottom lines by V1^2 and - voila! - you get V1^2 / { V1^2 - V2 } . However, like Matus, I'm wondering when you would ever use such a "weighted sample variance" - see my question as a response to the original post. I suspect there is much confusion over the different reasons for weighting. Kathy - Hi Kathy... thanks for the response, and sorry that mine has also taken a long time to get around to. For the particular problem that we had, I believe that we found a suitable solution some time ago by use of an "effective N", computed as $(\sum W_i)^2 / \sum W_i^2$. I'd have to have a bit of a think to see if that is equivalent to the substitutions that you propose. Some histogramming code that implements this scheme (and which produces reasonable-looking results) is here: projects.hepforge.org/rivet/trac/browser/trunk/… – Andy Buckley Jun 2 2011 at 7:14 I concur that there seem to be quite differing opinions on what weights are for. In my case, they come from samplers used in physics code: to generate adequate statistical coverage for regions of the sampling phase space which are physically suppressed, the sampled function is multiplied by an enhancement function. The raw distributions are then unphysical, so sampled points need to be down-weighted by the relevant enhancement factor when computing observables: this weight needs to be propagated into the calculation of uncertainties. Hope that clarifies a bit. Thanks again :) – Andy Buckley Jun 2 2011 at 7:19
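To make the algebra in this answer concrete, here is a small sketch of the resulting estimator in code (an editorial addition; the function name and the NumPy dependency are my own choices). It implements the unbiased weighted sample variance with the correction factor $V_1^2/(V_1^2-V_2)$ derived above.

```python
import numpy as np

def unbiased_weighted_variance(x, u):
    """Unbiased weighted sample variance for unnormalised weights u,
    using V1 = sum(u), V2 = sum(u**2) and the correction V1**2 / (V1**2 - V2)."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    V1, V2 = u.sum(), (u**2).sum()
    mean = (u * x).sum() / V1                       # weighted mean
    biased = (u * (x - mean)**2).sum() / V1         # biased (plug-in) variance
    # The "effective N" mentioned in the comments is V1**2 / V2.
    return biased * V1**2 / (V1**2 - V2)            # bias correction

# With all weights equal this reduces to the usual 1/(n-1) sample variance.
x = np.array([1.0, 2.0, 4.0, 7.0])
print(unbiased_weighted_variance(x, np.ones_like(x)), np.var(x, ddof=1))
```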
http://mathhelpforum.com/algebra/97957-counting-lattice-points.html
# Thread: 1. ## Counting lattice points.

Let n be a positive integer, and R be the region defined by the simultaneous conditions $x-y<n$, $x+y<n$ and $x>0$. In terms of n, how many lattice points are contained in R? If I didn't misunderstand the question, I got 0 points for n=1, 1 for n=2, 4 for n=3, 9 for n=4, 16 for n=5. How do I write this in terms of n? Vicky. 2. How about: $\left({n-1}\right)^2$? Should be straightforward to prove it. 3. Thank You!!!!!!
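A quick brute-force count supports the $(n-1)^2$ formula; the sketch below is just a sanity check (it assumes integer lattice points and the strict inequalities stated in the problem).

```python
def lattice_count(n):
    """Count integer points (x, y) with x - y < n, x + y < n and x > 0."""
    return sum(1
               for x in range(1, n)                 # x > 0 and x + y < n force 1 <= x <= n - 1
               for y in range(x - n + 1, n - x))    # x - n < y < n - x

print([lattice_count(n) for n in range(1, 7)])      # [0, 1, 4, 9, 16, 25] = (n-1)^2
```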
http://mathoverflow.net/revisions/46626/list
## Return to Question

(latest revision)

# Characterizations of a linear subspace associated with Fourier series

Let $c_0$ be the Banach space of doubly infinite sequences $$\lbrace a_n: -\infty\lt n\lt \infty,\ \lim_{|n|\to \infty} a_n=0 \rbrace.$$ Let $T$ be the space of $2\pi$-periodic functions integrable on $[0,2\pi]$. Let $$S=\lbrace \lbrace a_n\rbrace \in c_0: a_n=\hat{f}(n)\ \forall n \mbox{ for some function } f\in T\rbrace,$$ where $\hat{f}(n)$ denotes the $n$-th Fourier coefficient of $f$, i.e. $$\hat{f}(n)=\frac{1}{2\pi}\int_{-\pi}^\pi f(x)e^{-inx}\,dx.$$ When I was a graduate student, I was told that no characterizations of $S$ were known. Is this still true?
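As a small numerical aside (not part of the original question): the definition of $\hat f(n)$ can be checked directly, and for an integrable $f$ the coefficients do lie in $c_0$, by the Riemann-Lebesgue lemma. The sketch below approximates the integral with a Riemann sum for one sample choice of $f$ (my own choice, for illustration only).

```python
import numpy as np

def fourier_coefficient(f, n, num=200000):
    """Approximate (1/2pi) * integral_{-pi}^{pi} f(x) exp(-i n x) dx by a Riemann sum."""
    x = np.linspace(-np.pi, np.pi, num, endpoint=False)
    return np.mean(f(x) * np.exp(-1j * n * x))

f = lambda x: np.sign(x)                              # an integrable, even discontinuous, function
print([abs(fourier_coefficient(f, n)) for n in (1, 9, 99, 999)])
# for this f the moduli are 2/(pi*n) for odd n, consistent with a_n -> 0
```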
http://math.stackexchange.com/questions/228182/how-many-solutions-does-cos97x-x-have
# How many solutions does $\cos(97x)=x$ have?

How many solutions does $\cos(97x)=x$ have? I have plotted the function. However, I don't know how to solve the problem without a computer. Can anyone give a fast solution without a computer? - ## 2 Answers Let us concentrate on $[0,1]$ as suggested by Siminore. The period of $\cos(97x)$ is $\frac{2\pi}{97}$. On $[0,1]$, the function repeats itself $$\frac{1}{\frac{2\pi}{97}}\approx 15.43$$ times. On each period, the functions $\cos(97x),x$ meet twice, so you get at least $30$ meetings. Since it completes almost half of another period after its $15^{th}$ before $x$ goes over $1$, you can convince yourself that on the remaining $0.43$ of a period they will meet again, so we have $31$ meetings. On $[-1,0]$ the function will also repeat approximately $15.43$ times; however, this time it won't be enough to get another meeting, so there will only be $30$ on that part. There is then a total of $61$ solutions. For clarity, they meet $30$ times in $[0,15\times\frac{2\pi}{97}]$ and since $15\times\frac{2\pi}{97}\approx 0.97<1$ and $\cos(97\times 1)\approx -0.925$, the functions will meet once more in $[15\times\frac{2\pi}{97},1]$. On the other hand, after the $15^{th}$ period of $\cos(97x)$ (imagine it starts from $0$ and goes backward to $-1$), $x$ is about $-0.97$ and decreasing, while $\cos(97x)$ is at $1$ decreasing down to $\cos(-97)\approx -0.925$, not enough for another meeting - But this is not symmetric, and there is actually one less solution on the $x<0$ side. – Jonathan Nov 3 '12 at 16:11 @Jonathan you are right, I should have plotted both sides – Jean-Sébastien Nov 3 '12 at 16:14 Since $-1 \leq \cos (97x) \leq 1$, you can restrict your attention to the interval $[-1,1]$, and, even better, to $[0,1]$ by evenness. If you sketch the graph of $x \mapsto \cos (97x)$, you'll understand that you need to count the "bumps" of this function lying in the upper half-plane, and add 1 since the first "bump" is crossed by the $y$-axis. See the graph here. -
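The count of 61 can be cross-checked numerically by counting sign changes of $\cos(97x)-x$ on a fine grid of $[-1,1]$ (all solutions lie there since $|\cos|\le 1$). This is only a sanity check, with the grid resolution chosen by hand to be much finer than the spacing between roots.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2_000_001)     # roots are separated by far more than the grid step
h = np.cos(97 * x) - x
sign_changes = np.count_nonzero(np.sign(h[:-1]) != np.sign(h[1:]))
print(sign_changes)                        # prints 61, matching the accepted answer
```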
http://www.physicsforums.com/showthread.php?t=134996
## The true nature of length contraction

I stumbled on a book that seems to throw the concept of length contraction upside down to me. Maybe someone can help me here. All the books I've read to date, a popular example might be Elegant Universe, say that an object moving near the speed of light past an observer will appear squashed or contracted along its length. Green even had images in his book of a normal racecar at rest (as seen from the side) and one moving near light speed, which was the same exact image just squashed into a smaller size from left to right. Now I'm reading a book called Einstein's Universe by Nigel Calder. He talks about a spaceship passing the Earth from east to west at near light speed and viewing it from a telescope: "As you turn the telescope straight upwards, to try to see the spaceship at its moment of closest approach, you will see its tail facing you. In other words, instead of facing along its line of travel past the Earth, the spaceship appears to be turned to a point away from the Earth. Even at less extreme speeds, a passing spaceship will appear to be swivelled away from the Earth. You will see part of its tail when you would expect to see the ship from sideways-on. Again the reason is that the light entering a telescope pointing straight outwards from the Earth has been launched somewhat backwards from the spaceship, allowing for the aberration. Many accounts of relativity say, quite incorrectly, that a passing spaceship appears unnaturally squashed or contracted along its length. It DOES appear foreshortened but only in accordance with the entirely natural perspective of an object seen from an angle." I hope you can see my confusion. I'll also add that Calder's book was written in 1979 so it can possibly be outdated info. In previous chapters he also talks about seeing around corners as you approach light speed, which is another concept I am unfamiliar with.

Mentor Blog Entries: 1 There's nothing wrong with Calder's statement. Note that he is talking about the visual appearance of a rapidly moving object, which does not take into consideration that light from different parts of a huge object takes different times to reach your eye. Relativity says the measured length of a passing spaceship will be contracted--but those measurements assume that you've taken into account the travel time of the light involved. The apparent rotation of the spaceship is a famous effect called the Penrose-Terrell rotation. It's not really rotated, it just looks that way. Unfortunately Brian Greene was being a bit casual when he said that moving objects appear squashed. What he meant is that the moving object will be measured as being shorter. This sloppy terminology is common practice in popular books and is the source of some confusion. Good for you that you caught it!

Recognitions: Homework Help Science Advisor Quote by denni89627 I stumbled on a book that seems to throw the concept of length contraction upside down to me. Maybe someone can help me here. All the books I've read to date, a popular example might be Elegant Universe, say that an object moving near the speed of light past an observer will appear squashed or contracted along its length.
Green even had images in his book of a normal racecar at rest (as seen from the side) and one moving near light speed, which was the same exact image just squashed into a smaller size from left to right. Now I'm reading a book called Einstein's Universe by Nigel Calder. He talks about a spaceship passing the Earth from east to west at near light speed and viewing it from a telescope: "As you turn the telescope straight upwards, to try to see the spaceship at its moment of closest approach, you will see its tail facing you. In other words, instead of facing along its line of travel past the Earth, the spaceship appears to be turned to a point away from the Earth. Even at less extreme speeds, a passing spaceship will appear to be swivelled away from the Earth. You will see part of its tail when you would expect to see the ship from sideways-on. Again the reason is that the light entering a telescope pointing straight outwards from the Earth has been launched somewhat backwards from the spaceship, allowing for the aberration. Many accounts of relativity say, quite incorrectly, that a passing spaceship appears unnaturally squashed or contracted along its length. It DOES appear foreshortened but only in accordance with the entirely natural perspective of an object seen from an angle." I hope you can see my confusion. I'll also add that Calder's book was written in 1979 so it can possibly be outdaded info. In previous chapters he also talks about seeing around corners as you approach light speed, which is another concept I am unfamiliar with. Calder is right, Green wrong. That is why Green sells more books than whatsisname. I believe that Terril was the first to publish a detailed derivation of Calder's description. ## The true nature of length contraction Thanks for the reply. You're very clear and I understand what you're saying, but doesn't that contradict the last sentance in the quote from Calder's book? From what I gather he's saying the contraction is ONLY a product of the angle the ship is seen from, not the measurable length. I guess both would have to be accounted for but I don't see why he would leave the latter out. Also, before I get slammed for making things up, Greene may have said "measured" instead of "appeared" when talking about contraction. I lent the book to a friend so I can't confirm. I always thought of it as an appearance though, whether it was presented to me incorrectly or not. Thanks for clearing that up. Dennis Recognitions: Science Advisor Quote by Meir Achuz Calder is right, Green wrong. That is why Green sells more books than whatsisname. I believe that James Trefil was the first to publish a detailed derivation of Calder's description. Calder is wrong in his last two sentences when he says: Many accounts of relativity say, quite incorrectly, that a passing spaceship appears unnaturally squashed or contracted along its length. It DOES appear foreshortened but only in accordance with the entirely natural perspective of an object seen from an angle. Length contraction is not a trick of perspective, it is what's left once you account for delays due to light propogation, or measure the object's length using purely local measurements (for instance, you could use Einstein's original notion of a network of rulers and synchronized clocks, and measure the position of the front and back at a given time by noting the marks on the ruler that each were passing when clocks next to those marks both read the same time). 
At worst, Greene is only guilty of a sloppy use of language, but then it is common to use the word "observed" for what someone measures in their own coordinate system, not what they actually see using light-signals. Mentor Blog Entries: 1 Quote by denni89627 Thanks for the reply. You're very clear and I understand what you're saying, but doesn't that contradict the last sentance in the quote from Calder's book? From what I gather he's saying the contraction is ONLY a product of the angle the ship is seen from, not the measurable length. I guess both would have to be accounted for but I don't see why he would leave the latter out. I don't have Calder's book, but if he said that length contraction is only a product of the angle the ship is seen from that would be laughably wrong. In that quoted passage, it seems clear that Calder is talking about the visual appearance of moving objects, not their actual--and quite real--relativistic contraction. If he's sophisticated enough to be aware of Penrose-Terrell rotation, it would be pretty amazing if he "forgot" to mention plain old--and quite real--length contraction. As JesseM said, length contraction is not a trick of perception. If Calder is stating that, he's wrong. (But I don't deduce that from that quote.) Is that the only mention of length contraction that he makes? Quote by denni89627 All the books I've read to date, a popular example might be Elegant Universe, say that an object moving near the speed of light past an observer will appear squashed or contracted along its length. Brian Green even had images in his book of a normal racecar at rest (as seen from the side) and one moving near light speed, which was the same exact image just squashed into a smaller size from left to right. Well, Brian Greene knows what he's talking about, this much is certain. There are few who present abstract ideas as well as he does. That said, if you yourself accelerated up to 0.866c inertial, you'd see all planets whizzing by you squished to an ellipsoid, 50% as long as they exist in their own proper frame. Is the contraction real? Indeed. If you flew straight into a planet, you wouldn't touch it until you met its surface, and that surface is 50% contracted. It's not as though the contraction is an illusion, and you'd strike the planet before getting to it, such as say at the location where the surface would be if it were spherical. The contraction is real, but you see it only if it is moving wrt you. Technically, Brian Greene's got it right. Quote by denni89627 Now I'm reading a book called Einstein's Universe by Nigel Calder. He talks about a spaceship passing the Earth from east to west at near light speed and viewing it from a telescope: "As you turn the telescope straight upwards, to try to see the spaceship at its moment of closest approach, you will see its tail facing you. In other words, instead of facing along its line of travel past the Earth, the spaceship appears to be turned to a point away from the Earth. Even at less extreme speeds, a passing spaceship will appear to be swivelled away from the Earth. You will see part of its tail when you would expect to see the ship from sideways-on. Again the reason is that the light entering a telescope pointing straight outwards from the Earth has been launched somewhat backwards from the spaceship, allowing for the aberration. Indeed, Penrose and Terrell revealed more about the effects of high speed than even Einstein imagined, however these effects are geometric abberation. 
I haven't studied this in the past in any detail, however it doesn't change the fact that Greene is also correct. Greene was just focusing his attention on relativistic effects, and not the optical effects. Here's a couple links wrt relativistic effects ... http://math.ucr.edu/home/baez/physic...spaceship.html http://www.fourmilab.ch/cship/lorentz.html Quote by denni89627 Many accounts of relativity say, quite incorrectly, that a passing spaceship appears unnaturally squashed or contracted along its length. It DOES appear foreshortened but only in accordance with the entirely natural perspective of an object seen from an angle." I hope you can see my confusion. I'll also add that Calder's book was written in 1979 so it can possibly be outdaded info. In previous chapters he also talks about seeing around corners as you approach light speed, which is another concept I am unfamiliar with. Well, I don't have any idea what Nigel is talking about there. As stated, the statement is incorrect from everything I've ever learned or read on the subject. The vessel is length contracted plain and simple. However, there is more to it than a length contraction. The vessel is also rotated in spacetime. This may be what Nigel is trying to say? Hermann Minkowski showed that if the time axis is considered as a complex spatial axis, we then have a 4-space vice a 3-space plus time. In Minkowski space, the spaceship frame is rotated wrt your frame as a stationary observer. However, we cannot see this rotation readily, as it would go somewhat hidden from us at casual glance. The spaceship is contracted in length per you, but not per it. It's analogous to viewing an 8 inch pencil from the side. Rotate the pencil, and the pencil appears shorter. Now you'd of course know and be able to tell that the pencil is rotated in 3-space and does not really change in length, because we see depth. However, when high velocity produces this rotation, we cannot see the complex spatial axis (ie time axis) since we don't perceive time the same ways as space. We don't see the depth into the temporal dimension. So the spaceship appears contracted and not rotated (neglecting abberation). However if the vessel had 10 windows with a clock in each window, all clocks in sync per the onboard passengers, you as the stationary observer would see those clock readouts displaying different times asynchronously. They would not be in sync per you, even though they are in sync in the vessel itself. This would be the proof of the frame rotation, and explains very elogantly why a vessel can contract per an observer while never change in its proper length per itself. This is Lorentz Symmetry. Quote by denni89627 I stumbled on a book that seems to throw the concept of length contraction upside down to me. Maybe someone can help me here. All the books I've read to date, a popular example might be Elegant Universe, say that an object moving near the speed of light past an observer will appear squashed or contracted along its length. Green even had images in his book of a normal racecar at rest (as seen from the side) and one moving near light speed, which was the same exact image just squashed into a smaller size from left to right. Now I'm reading a book called Einstein's Universe by Nigel Calder. He talks about a spaceship passing the Earth from east to west at near light speed and viewing it from a telescope: "As you turn the telescope straight upwards, to try to see the spaceship at its moment of closest approach, you will see its tail facing you. 
In other words, instead of facing along its line of travel past the Earth, the spaceship appears to be turned to a point away from the Earth. Even at less extreme speeds, a passing spaceship will appear to be swivelled away from the Earth. You will see part of its tail when you would expect to see the ship from sideways-on. Again the reason is that the light entering a telescope pointing straight outwards from the Earth has been launched somewhat backwards from the spaceship, allowing for the aberration. Many accounts of relativity say, quite incorrectly, that a passing spaceship appears unnaturally squashed or contracted along its length. It DOES appear foreshortened but only in accordance with the entirely natural perspective of an object seen from an angle." I hope you can see my confusion. I'll also add that Calder's book was written in 1979 so it can possibly be outdaded info. In previous chapters he also talks about seeing around corners as you approach light speed, which is another concept I am unfamiliar with. have please a critical look at Physics, abstract physics/0507016 on arxiv Mentor Blog Entries: 1 Quote by pess5 Well, Brian Greene knows what he's talking about, this much is certain. There are few who present abstract ideas as well as he does. That said, if you yourself accelerated up to 0.866c inertial, you'd see all planets whizzing by you squished to an ellipsoid, 50% as long as they exist in their own proper frame. Is the contraction real? Indeed. If you flew straight into a planet, you wouldn't touch it until you met its surface, and that surface is 50% contracted. It's not as though the contraction is an illusion, and you'd strike the planet before getting to it, such as say at the location where the surface would be if it were spherical. The contraction is real, but you see it only if it is moving wrt you. Technically, Brian Greene's got it right. It is very difficult to avoid using the terms "see" and "appear" when describing Lorentz contraction (I do it myself ), but those terms can cause some confusion. I think we are talking past each other a bit. There are two different effects being discussed: (1) Real relativistic length contraction of rapidly moving objects. (2) The visual appearance of rapidly moving objects. Number 1, the relativistic Lorentz contraction, is by far the most important and is discussed in just about every book on relativity. Unfortunately, sometimes it is described as "rapidly moving objects appear contracted along their direction of motion", which may lead some to conclude that it is just appearance and not real, just an optical illusion. (Like how a pencil in a half full glass of water appears bent at the interface, but is in reality perfectly straight.) Lorentz contraction is not an optical illusion. Number 2 is a subtle point about how rapidly moving object would appear if photographed (by a really high-speed camera) or viewed as they sped by. Oddly, it turns out that under many conditions you will not see the Lorentz contraction; instead you see the object rotated. This is an optical illusion, referred to as the Penrose-Terrell effect (after the two folks who independently figured it out in 1959). (This has nothing to do with rotation in spacetime.) Most popular books don't bring it up. But apparently there are exceptions! Brian Greene was obviously talking about effect #1. If he used the word "appear", that is unfortunate. I forgive him! 
Calder, at least in that quoted passage, was obviously talking about the much less important effect #2. Quote by Doc Al I don't have Calder's book, but if he said that length contraction is only a product of the angle the ship is seen from that would be laughably wrong. In that quoted passage, it seems clear that Calder is talking about the visual appearance of moving objects, not their actual--and quite real--relativistic contraction. If he's sophisticated enough to be aware of Penrose-Terrell rotation, it would be pretty amazing if he "forgot" to mention plain old--and quite real--length contraction. As JesseM said, length contraction is not a trick of perception. If Calder is stating that, he's wrong. (But I don't deduce that from that quote.) Is that the only mention of length contraction that he makes? Unfortunately that is the only mention of length contraction in Calder's book. (I still have a couple chapters left but at a glance it doesn't look promising.) I don't know why he would leave the subject out but it appears he did. I'm even thinking he's just plain wrong, primarily due to his use of the word "only" in the last sentance from the quote. It was still a really good book for laymen and I enjoyed it much. Picked it up for a buck at a second hand store in Brooklyn. It may be out of print but it's a fun read if you can find it. No math, just cool stuff to think about. Mentor Blog Entries: 1 Quote by denni89627 Unfortunately that is the only mention of length contraction in Calder's book. (I still have a couple chapters left but at a glance it doesn't look promising.) I don't know why he would leave the subject out but it appears he did. I'm even thinking he's just plain wrong, primarily due to his use of the word "only" in the last sentance from the quote. If that's the only mention of length contraction, then he's done a grave disservice to his readers. But, strictly speaking, I have no problem with his statement: "Many accounts of relativity say, quite incorrectly, that a passing spaceship appears unnaturally squashed or contracted along its length. It DOES appear foreshortened but only in accordance with the entirely natural perspective of an object seen from an angle." Assuming that he meant the word "appears" in the same sense that I discussed above. (He must mean that or why in the world would he have mentioned the apparent rotation of the object!) But if he doesn't contrast this statement of appearances, with a clear discussion of real relativistic length contraction--he should be shot! It was still a really good book for laymen and I enjoyed it much. Picked it up for a buck at a second hand store in Brooklyn. It may be out of print but it's a fun read if you can find it. No math, just cool stuff to think about. It's still in print. (It was reissued in 2005--in celebration of 100 years of relativity.) You got me curious--I just reserved it from the library. It will take a week to get to me, but I'll give it a quick skim when I get it. Recognitions: Science Advisor Quote by Doc Al If that's the only mention of length contraction, then he's done a grave disservice to his readers. But, strictly speaking, I have no problem with his statement:"Many accounts of relativity say, quite incorrectly, that a passing spaceship appears unnaturally squashed or contracted along its length. 
It DOES appear foreshortened but only in accordance with the entirely natural perspective of an object seen from an angle."Assuming that he meant the word "appears" in the same sense that I discussed above. (He must mean that or why in the world would he have mentioned the apparent rotation of the object!) But if he doesn't contrast this statement of appearances, with a clear discussion of real relativistic length contraction--he should be shot! I agree that the use of "appears" makes the statement slightly more justifiable, but I'd still say the statment "but only in accordance with with the entirely natural perspective of an object seen from an angle" is false. Correct me if I'm wrong, but an object flying by you at a very high proportion of c would appear both weirdly distorted thanks to the Penrose-Terrell effect, but also squashed significantly in its direction of motion, and at least some of the visual squashing would be due to genuine length contraction in your frame. Another way of saying this is that if you were looking at the light signals from an object moving at a high fraction of c in a purely Newtonian universe (assume you're in the rest frame of the ether so that all light signals still travel at c in your frame), you'd probably still see some distortions similar to the Penrose-Terrell effect, but you wouldn't see the same degree of visual squashing that you would in a relativistic universe. Mentor Blog Entries: 1 Quote by JesseM Correct me if I'm wrong, but an object flying by you at a very high proportion of c would appear both weirdly distorted thanks to the Penrose-Terrell effect, but also squashed significantly in its direction of motion, and at least some of the visual squashing would be due to genuine length contraction in your frame. I admit that I'm a bit rusty on the details, but I think the answer is no. Terrell's 1959 paper on this was even titled "Invisibility of the Lorentz Contraction". Blog Entries: 47 Recognitions: Gold Member Homework Help Science Advisor Quote by Doc Al Quote by denni89627 It was still a really good book for laymen and I enjoyed it much. Picked it up for a buck at a second hand store in Brooklyn. It may be out of print but it's a fun read if you can find it. No math, just cool stuff to think about. It's still in print. (It was reissued in 2005--in celebration of 100 years of relativity.) You got me curious--I just reserved it from the library. It will take a week to get to me, but I'll give it a quick skim when I get it. Concerning Einstein's Universe by Nigel Calder... the PBS video with Peter Ustinov (which my dad suggested I should watch when it was first shown on PBS) plus the book (which, by chance, my uncle gave to me) gave me my first glimpse of relativity... As a video and pop-book, it inspired me to seek out successively more advanced books to learn more about relativity... eventually steering the course of my education. The video is now available on DVD http://store.corinthfilms.com/produc...productID=2467 and here is the book http://www.amazon.com/Einsteins-Univ.../dp/0517385708. (Don't buy up all of the DVDs... I haven't ordered mine yet. ) Quote by Doc Al It's still in print. (It was reissued in 2005--in celebration of 100 years of relativity.) You got me curious--I just reserved it from the library. It will take a week to get to me, but I'll give it a quick skim when I get it. Good, I'm sure you'll enjoy the book regardless. In my edition it's chapter 14 : The Universal Correction, where all this is discussed. 
Quote by JesseM I agree that the use of "appears" makes the statement slightly more justifiable, but I'd still say the statment "but only in accordance with with the entirely natural perspective of an object seen from an angle" is false. This is what I have a problem with too. To me it's as if he's dismissing length contraction. Maybe this guy is a friggin genius and wants the reader to figure out that truth in the universe is relative too. From an aberration reference frame the statement is true, but from a relativistic one it is false. Checkmate! Recognitions: Science Advisor Quote by Doc Al I admit that I'm a bit rusty on the details, but I think the answer is no. Terrell's 1959 paper on this was even titled "Invisibility of the Lorentz Contraction". Interesting, I hadn't known that, thanks. Googling that paper title, I found the abstract here: It is shown that, if the apparent directions of objects are plotted as points on a sphere surrounding the observer, the Lorentz transformation corresponds to a conformal transformation on the surface of this sphere. Thus, for sufficiently small subtended solid angle, an object will appear—optically—the same shape to all observers. A sphere will photograph with precisely the same circular outline whether stationary or in motion with respect to the camera. An object of less symmetry than a sphere, such as a meter stick, will appear, when in rapid motion with respect to an observer, to have undergone rotation, not contraction. The extent of this rotation is given by the aberration angle ($\theta-\theta'$), in which $\theta$ is the angle at which the object is seen by the observer and $\theta'$ is the angle at which the object would be seen by another observer at the same point stationary with respect to the object. Observers photographing the meter stick simultaneously from the same position will obtain precisely the same picture, except for a change in scale given by the Doppler shift ratio, irrespective of their velocity relative to the meter stick. Even if methods of measuring distance, such as stereoscopic photography, are used, the Lorentz contraction will not be visible, although correction for the finite velocity of light will reveal it to be present. I assume it's still true, though, that a moving object's shape in a relativistic universe will look different than it would in the Newtonian scenario I imagined above? Perhaps in a Newtonian universe, the effect of light from different parts of the object taking different times to reach you would be to stretch the image in the direction of motion, and the Lorentz contraction is in some sense cancelling that out? Mentor Blog Entries: 1 I'll have to review the paper (I'm sure I have it in my pile at home) but I think you are on the right track. The light travel time stretches out the image just enough to "cancel" the Lorentz contraction.
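As a small numeric aside on the 0.866c figure quoted earlier in the thread: the Lorentz factor can be computed directly, and at that speed measured lengths are indeed cut roughly in half. This sketch (not from the thread) just evaluates $\gamma = 1/\sqrt{1-v^2/c^2}$ for a few speeds.

```python
from math import sqrt

def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / sqrt(1.0 - beta**2)

for beta in (0.5, 0.866, 0.99):
    print(f"v = {beta}c: gamma = {gamma(beta):.3f}, "
          f"measured length = {100 / gamma(beta):.1f}% of proper length")
```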
http://math.stackexchange.com/questions/277235/two-identical-point-charges-cant-collide/278355
# Two identical point charges can't collide

I've convinced myself intuitively that if you place two massless classical particles with the same charge in $\mathbb{R}^n$, with arbitrary initial velocities and (distinct) positions, they will never collide. However, I'm having a heck of a time trying to prove it, and would appreciate some help. Formally, consider $q_1, q_2: \mathbb{R} \rightarrow \mathbb{R}^n$ satisfying $$\ddot{q_i} = \frac{1}{\|q_i - q_j\|^3} (q_i - q_j)$$ with $q_1(0) \neq q_2(0)$. The claim is that $q_1(t) \neq q_2(t)$ for all $t > 0$. So my questions are (i) is this true? (ii) what happens if we replace the exponent 3 in the denominator with say $\alpha > 0$? N.B. The question's already a bit long, but I'd be happy to post my thoughts so far. Edit: All the answers were very helpful, thanks so much everyone! - 4 Can you show that your system of trajectories has some conserved quantity, like energy, and that this constraint enforces a minimum distance between the two masses? Also it might be easier to work in the centre-of-mass frame. – Eckhard Jan 13 at 22:21 1 Eckhard's approach sounds like the way to go. The worst-case scenario I can imagine is if you fired both particles directly at each other with enormous kinetic energy. As they near each other, that KE gets traded for potential. Actually occupying the same position would imply infinite potential energy, right? – AndrewG Jan 13 at 22:24 1 @uncookedfalcon if you wish to analyze the problem in arbitrary dimension, the field is no longer inverse-square dependent. – Jorge Campos Jan 13 at 23:26 1 @uncookedfalcon I must apologize, it seems that I was terribly confused. Haskell has cleared things up nicely. – AndrewG Jan 14 at 1:41 1 yeah no worries! – uncookedfalcon Jan 14 at 1:45 3 Answers First let me address the second question. Notice that the electric field is no longer inverse-square dependent in dimensions other than $3$. The fundamental equation here is $\nabla\cdot \mathbf E=\rho$ (the divergence of the field is the charge density.) In three dimensions the field caused by a point particle will indeed be a radial field with magnitude $E(r)=q/4\pi r^2$. In other dimensions the field caused by a point-particle at the origin is still radial but one has $$\mathrm{vol}(S^{n-1})E(r)=\int_{S^{n-1}}\mathbf{E}\cdot d\mathbf{s} =\int_{\text{n-ball}} \,\nabla\cdot \mathbf{E}\,d^{n}x =\int_{\text{n-ball}}\rho\, d^{n}x= q,$$ after using Gauss's theorem. Then, if you want to consider the field in $\mathbb{R}^n$, its norm is $$E(r)=\frac{\Gamma(n/2)}{2 \pi^{n/2}}\frac{q}{r^{n-1}}.$$ Notice that you still get energy conservation. Assuming $1<n\neq 2$, the potential energy goes as $r^{-(n-2)}$, whereas for $n=2$ the potential goes as $\ln(r)$. Since the charges are initially at different positions, the initial potential energy $U_i$ is finite. Since $T_i+U_i=T_f+U_f$ and the kinetic energy is always positive, you need an infinite initial kinetic energy to make them collide, which is impossible. Then the charges never collide. This answers the first question too, for arbitrary dimension. - Thanks so much Jorge! This was super helpful :) – uncookedfalcon Jan 14 at 1:15 One technicality/question - to apply the analysis of $E$ you provide to the situation at hand, naively I want to use the frame of reference centered at one of the particles, so that the other is now subject to a field on $\mathbb{R}^n - 0$ given by $E$. If this is indeed what you had in mind, is it clear that this particle-centered frame of reference is inertial?
Thanks! – uncookedfalcon Jan 14 at 1:21 1 @uncookedfalcon Yes, but if you don't like that frame, you can replace $\mathbf{r}$ everywhere by $\mathbf q_1-\mathbf q_2$. The same analysis caries over to this "more general" frame. Regarding your question, both frames are inertial (because setting the force to zero, you get a particle that follows a stright line with constant velocity as the solution to the differential equation). Does that answer your question about frames? – Jorge Campos Jan 14 at 14:58 That absolutely does. Thanks so much! – uncookedfalcon Jan 14 at 19:01 @uncookedfalcon: The frame centered at any of the particles cannot be inertial. This is because with respect to the inertial center-of-mass frame, each particle is undergoing acceleration due to the mutual repulsion between them. – Haskell Curry Jan 14 at 19:11 show 2 more comments By describing all motion with respect to the center-of-mass frame, we can restrict our attention to $\mathbb{R}^{2}$ only. In what follows, $\mathbf{q}_{1},\mathbf{q}_{2}: \mathbb{R} \to \mathbb{R}^{n}$ denote the displacement functions of two particles with respect to the center-of-mass frame, where the center-of-mass is fixed at the origin of $\mathbb{R}^{n}$. For central-force motion involving only two particles, the trajectories $\mathbf{q}_{1}$ and $\mathbf{q}_{2}$ are seen to lie strictly within a $2$-dimensional subspace $\Pi$ of $\mathbb{R}^{n}$. If the affine vectors ${\dot{\mathbf{q}}_{1}}(0)$ and ${\dot{\mathbf{q}}_{2}}(0)$ are oriented such that they do not simultaneously point toward/away from the origin, then $\Pi$ is uniquely determined. What I have done above is to choose an isometry $T \in \mathbf{O}(n,\mathbb{R})$ in order to obtain $$T[\Pi] \subseteq \mathbb{R}^{2} \times \underbrace{\{ 0 \} \times \cdots \times \{ 0 \}}_{\text{$ n - 2 $ times}}.$$ This allows us to shift our focus to $\mathbb{R}^{2}$. Clearly, the chosen isometrically-linear coordinate transformation does not affect the physics that is being described by the equations of motion specified by the OP above. With this in mind, note that for $\alpha = 3$, what we have is basically the well-studied Coulomb Collision Problem. Depending on the orientation of the affine vectors ${\dot{\mathbf{q}}_{1}}(0)$ and ${\dot{\mathbf{q}}_{2}}(0)$, the trajectories lie in • non-intersecting hyperbolas or • non-intersecting segments of a single straight line. I find it rather interesting that the derivation of the Rutherford Scattering Formula in atomic physics relies upon this fact. For $\alpha \in \mathbb{R}_{> 0} \setminus \{ 3 \}$ in general, we no longer have a nice description of the trajectories involved. However, one can easily use an energy-conservation argument to prove that trajectories cannot collide, and this is precisely what Jorge has described in his solution. - Stupid question re: "the trajectories lie strictly in a 2-dimensional plane of $\mathbb{R}^n$ containing the initial velocity vectors" - I'm interpreting this is saying $q_i(t) \in \langle \dot{q_i}(0) \rangle$ for all $t$ (in the case $\dot{q_i}(0)$ are linearly independent)...for $t = 0$ why can't I simply pick $q_i(0)$ to be outside of this span? – uncookedfalcon Jan 14 at 0:35 1 @uncookedfalcon: I should have mentioned ‘with respect to the center-of-mass frame’. If you analyze the motion with respect to some other frame, then it is clear that the trajectories will not lie in a single $2$-dimensional plane. :) – Haskell Curry Jan 14 at 1:22 perfect! 
that does the trick :p – uncookedfalcon Jan 14 at 1:25 This is as good a place as any to mention how you can derive conservation of energy from scratch, starting with nothing but a differential equation for $\ddot q_i$... as long as your force term is conservative, that is, it is the negative gradient of a scalar "potential energy" function. (I'll deal only with a one-particle system, but you can handle multiple particles simply by packing in all the position variables $q_1, q_2, \ldots, q_m$ into a single vector in $\mathbb R^{mn}$.) Consider $\ddot q=f(q)$ where $f$ is conservative, i.e. $f(q) = -\frac{\mathrm d}{\mathrm dq} U(q)$ for some scalar-valued potential $U$. Introduce a momentum variable $p=\dot q$ so that $\dot p = f(q).$ Observe that $p = \frac{\mathrm d}{\mathrm dp} T(p)$ where $T(p) = \frac12\lVert p\rVert^2$. Define the energy function $H(q,p) = U(q) + T(p)$, and observe that $$\dot H(q,p) = \frac{\partial H}{\partial q}\cdot\dot q + \frac{\partial H}{\partial p}\cdot\dot p = -\dot p\cdot\dot q + \dot q\cdot\dot p = 0,$$ so $H$ is constant over time for any solution. For your problem, your $U(q_1,q_2)$ will depend only on $\lVert q_1-q_2\rVert$, and you'll want to check whether $H$ is infinite in a colliding configuration. I think you need $\alpha>1$ for that to happen. - Thanks writing this out! However, I still can't see how to apply the case of particles moving in a conservative field to the case at hand. The only way I could see to do it would be to take the frame of reference of say $q_1$, but I somehow doubt that this frame is inertial. Do you have any thoughts on this? (this is identical to my comment on Jorge's answer) – uncookedfalcon Jan 14 at 4:13 @uncookedfalcon: Suppose $U(q_1,q_2)=a \lVert q_1-q_2\rVert^b$ for some unknown $a$ and $b$. Compute $-\frac{\partial}{\partial q_1}U(q_1,q_2)$ and compare with your desired force $(q_1-q_2)/\lVert q_1-q_2\rVert^\alpha$. – Rahul Narain Jan 14 at 5:26
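To complement the energy argument, here is a small numerical sketch (my own construction, not from the answers): integrating the stated equations of motion for two like charges fired directly at each other, the separation stays bounded away from zero, and the conserved quantity $H=\tfrac12(\lVert\dot q_1\rVert^2+\lVert\dot q_2\rVert^2)+1/\lVert q_1-q_2\rVert$ is approximately constant along the numerical solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state):
    """state = (q1, q2, v1, v2) flattened in 2D, with q1'' = (q1 - q2)/|q1 - q2|^3 and q2'' = -q1''."""
    q1, q2, v1, v2 = state.reshape(4, 2)
    d = q1 - q2
    acc = d / np.linalg.norm(d)**3
    return np.concatenate([v1, v2, acc, -acc])

# Fire the charges straight at each other with a large closing speed.
state0 = np.concatenate([[-5.0, 0.0], [5.0, 0.0], [3.0, 0.0], [-3.0, 0.0]])
sol = solve_ivp(rhs, (0.0, 4.0), state0, rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 4.0, 4001)
q1, q2, v1, v2 = sol.sol(t).reshape(4, 2, -1)
sep = np.linalg.norm(q1 - q2, axis=0)
H = 0.5 * ((v1**2).sum(axis=0) + (v2**2).sum(axis=0)) + 1.0 / sep
print("minimum separation:", sep.min())    # stays well above zero (about 1/H here)
print("energy drift:", H.max() - H.min())  # tiny, consistent with conservation of H
```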
http://mathhelpforum.com/advanced-math-topics/56241-finding-norm-scalar-product-sin-ax-cos-bx-vector-space-c-0-pi-print.html
# Finding the norm and scalar product of sin(aX),cos(bX) in the vector space C(0,Pi)

• October 28th 2008, 01:23 PM partickrock Finding the norm and scalar product of sin(aX),cos(bX) in the vector space C(0,Pi) Hi, I'm trying to calculate the norm and scalar product of sin ax and cos bx in the vector space C(0,Pi). I think the solution might be to use a projection, but I'm not entirely sure how to do it. • October 31st 2008, 12:32 AM CaptainBlack Quote: Originally Posted by partickrock Hi, I'm trying to calculate the norm and scalar product of sin ax and cos bx in the vector space C(0,Pi). I think the solution might be to use a projection, but I'm not entirely sure how to do it. A vector space as such does not have a norm or inner (scalar) product. The usual inner product on the space of continuous real functions on a closed interval $[a,b]$ is: $\langle f,g \rangle =\int_a^b f(x)g(x) \ dx$ CB
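Using that inner product on $C[0,\pi]$, the requested quantities reduce to elementary integrals. The SymPy sketch below (an editorial addition) evaluates $\langle \sin(ax),\cos(bx)\rangle$ and $\lVert\sin(ax)\rVert$ symbolically, under the assumption that $a$ and $b$ are positive integers to keep the closed forms clean.

```python
import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', positive=True, integer=True)

inner = sp.integrate(sp.sin(a*x) * sp.cos(b*x), (x, 0, sp.pi))
norm_sin = sp.sqrt(sp.integrate(sp.sin(a*x)**2, (x, 0, sp.pi)))

# Mathematically: the inner product is 0 when a and b have the same parity,
# and 2a/(a**2 - b**2) otherwise; the norm of sin(ax) is sqrt(pi/2).
print(sp.simplify(inner))
print(sp.simplify(norm_sin))
```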
http://mathoverflow.net/questions/5036/spectra-of-c-algebras/5058
## Spectra of $C^*$ algebras ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Gelfand-Naimark structure theorem for $C^*$ algebras gives a canonical isometric * isomorphism between any commutative unital $C^*$ algebra $A$ and the algebra of continuous complex-valued functions on $A$^. This is the spectrum (or structure space) of $A$, i.e. the non-zero multiplicative linear continuous functionals with the topology of pointwise convergence (alias weak*), which is compact and hausdorff. Apart from the easy case $A = C(X)$, with $X$ compact hausdorff, for which $A$^ is $X$ itself, there are a lot of non trivial and not immediately visible examples of spectra, for example: If $X$ loc. compact hausdorff $A = C_b(X)$ (continuous and bounded functions with uniform topology) is a $C^*$ algebra. If X is non compact then A^ cannot be $X$ and is in fact $\beta X$, the Stone-Cech compactification of $X$. If $X$ is loc. compact hausdorff and you take $C_0(X)$, then you get another compactification of $X$. If instead you simply take $C(X)$ for $X$ compact non-hausdorff you get a natural "hausdorfization" of $X$. I'm particularly interested in the existence of other constructions which can be described by gelfand theory as above. I mean to associate functorially to each space (in an appropriate subcategory of Top, maybe not full) a $C^*$ algebra and then to look at its spectra. A related question: what are the spectra of $L^\infty(R)$, and similar algebras (maybe $L^\infty(G)$, G loc. compact group with haar measure)? - ## 5 Answers The spectrum of $L^\infty(R)$ is the hyperstonean space associated with the measurable space R. More information can be found in Takesaki's Theory of Operator Algebras I, Chapter III, Section 1, available here: http://gen.lib.rus.ec/get?md5=7F0A9F06741272684D62426E348670B1 - 3 Yes, but in a sense, this is just a way of getting out of difficulty by naming a hard-to-understand object, right? I am not saying this is useless or that one can't say a lot about this space, but it doesn't really give you a handle on what the spectrum is. (Though, as Bill Clinton famously said, it depends on what the meaning of the word “is” is.) – Harald Hanche-Olsen Nov 11 2009 at 15:55 1 Well, you can say a lot about this space. Bounded functions on it are in bijective correspondence with equivalence classes of bounded functions on the original measurable space; clopen sets are in bijective correspondence with equivalence classes of measurable sets etc. etc. etc. From the viewpoint of topology this space is weird because it is extremally disconnected, therefore to understand what this space really is you need to use quite different methods from what you are used to. – Dmitri Pavlov Nov 11 2009 at 17:23 1 After all, the category of commutative von Neumann algebras is contravariantly equivalent to the category of hyperstonean spaces and hyperstonean maps between them, and this category is equivalent to the category of measurable spaces and measurable maps between them. Thus understanding hyperstonean spaces is the same thing as understanding measure theory. There is a complete classification of measurable spaces: Every measurable space is a coproduct of points and real lines. Thus the spectrum of $L^\infty(R)$ is the only non-trivial interesting example of a measurable space. – Dmitri Pavlov Nov 11 2009 at 17:27 Hmm. I'll add hyperstonean spaces to the list of things I want to learn more about some day. Thanks. 
– Harald Hanche-Olsen Nov 11 2009 at 18:38

For $L^\infty(X)$, the spectrum is the Stone space of the algebra of measurable sets mod null sets. This is because a character is determined by what it does on characteristic functions, because their span is dense.

Your question (especially the first part) is a bit vague, but I'll shoot: A very nice example is provided by Carleson's corona theorem, stating that the unit disk is dense in the spectrum of the Hardy space $H^\infty$ (the bounded holomorphic functions on the unit disk). As for the spectra of $L^\infty$, I don't think you can ever come up with a concrete example of a character on this space. You actually need the axiom of choice to prove that the spectrum is nonempty. Likewise with the points of the spectrum of $H^\infty$ outside the unit disk.

I believe you could describe the spectrum of the algebra of translation invariant bounded operators on $L^2(R^n)$, which is isomorphic to $L^\infty$. – Gian Maria Dall'Ara Nov 11 2009 at 15:21

That isomorphism provides the most concrete representation of the algebra you mention, via multiplication of the Fourier transform. I don't see how this helps in describing the spectrum, though of course maybe it's just my ignorance showing. – Harald Hanche-Olsen Nov 11 2009 at 15:47

I think, if $X$ is locally compact and Hausdorff, then the spectrum of $C_0(X)$ is just $X$. You can get the one point compactification of $X$ by looking at the spectrum of the unitisation of $C_0(X)$. This is the vector space $C_0(X) \oplus \mathbb{C}$ with the unique $C^*$-norm. (Just embed it into $C_b(X)$ for example). The spectrum of $C_b(X)$ will in general be very large: I don't know any "nice" way of describing it.

I really meant to add a unit to $C_0(X)$, otherwise it is not a unital $C^*$ algebra and you cannot expect a compact spectrum, and hence the compactification of $X$ – Gian Maria Dall'Ara Nov 11 2009 at 13:32

The Gelfand representation also works for non-unital commutative $C^*$-algebras. In this case, it establishes a category equivalence to the category of locally compact Hausdorff spaces with proper maps (implemented by $C_0(\cdot)$ and the spectrum). Hence Matthew's comment: the spectrum of $C_0(X)$ is just $X$.
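For orientation, the Gelfand transform that the answers above rely on is the standard map sending an element of a commutative unital $C^*$ algebra to a continuous function on its spectrum:

$$\hat{a}(\varphi) = \varphi(a), \qquad a \in A,\ \varphi \in \hat{A},$$

and for $A = C(X)$ with $X$ compact Hausdorff every character is evaluation at a point, $\varphi_x(f) = f(x)$, so $x \mapsto \varphi_x$ identifies $X$ with $\hat{A}$.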
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9218333959579468, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/121454/how-can-we-determine-if-every-matrix-of-mathbbr2-times-2-can-be-written/121718
# How can we determine if every matrix of $\mathbb{R}^{2 \times 2}$ can be written as a linear combination of specific $A, B$ matrices

We have these two matrices: $$K = \left(\begin{matrix} 2 & 1 \\ 8 & 7\end{matrix}\right), \quad L = \left(\begin{matrix} 2 & 1 \\ 2 & 7 \end{matrix} \right)$$ We have been asked if every matrix of $\mathbb{R}^{2 \times 2}$ can be written as a linear combination of the $K$ and $L$ matrices. This means that the set $\{K,L\}$ is a basis of $\mathbb{R}^{2 \times 2}$, right?

I've thought of this: For the $K$ and $L$ matrices to be a basis of $\mathbb{R}^{2 \times 2}$ they must be linearly independent, is that correct? Let $a,b$ be numbers in $\mathbb{R}$ with $a \cdot K + b \cdot L = 0$, where $0$ is the $\left(\begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix}\right)$ matrix. So: $$\begin{array}{rcrcl} 2a &+& 2b &=& 0 \\ a &+& b &=& 0 \\ 8a &+& 2b &=& 0 \\ 7a &+& 7b &=& 0 \end{array}$$ (Is the solution set of this system the empty set?) How can I think of that? Thank you!

1 First, notice that the solution set is not empty, but rather $(a,b)=(0,0)$, which proves that $K$ and $L$ are independent. Still, that does not prove that they form a basis for $\mathbb R^{2 \times 2}$. – Théophile Mar 17 '12 at 20:04

Now, as for a basis, consider that every $2 \times 2$ matrix has 4 independent variables. Is it possible to cover all possibilities using only 2 matrices? (No, since $2 < 4$.) – Théophile Mar 17 '12 at 20:06

1 What you are proving is that $K$ and $L$ are linearly independent, which they indeed are (so $a = b = 0$ is the only solution). However, for certain vectors to form a basis of a vector space, you need more than independence. Can any matrix be written as a linear combination of these two matrices? – TMM Mar 17 '12 at 20:17

As the above comment says, you need $4$ matrices (provided they are linearly independent). – Daniel Montealegre Mar 17 '12 at 20:18

@Théophile: Yes, that's right, $(a,b)=(0,0)$! Why are the variables of a 2x2 matrix independent? :S So we would need to have 4 matrices, right? Because with the given 2 we cannot create the other 2 variables, right? – Chris Mar 17 '12 at 20:32

## 3 Answers

Hint: The dimension of ${\sf M}_2({\mathbb R})$ is $4.$ So the basis has to have how many matrices?!

You mean that dim$\mathbb{R}^{2 \times 2}$ = 4, right? It must have 4 matrices! Sorry, I got confused with subspaces :S – Chris Mar 17 '12 at 20:36

Right. ${\sf M}_2(\mathbb{R})$ is the space of all $2\times 2$ matrices with entries from $\mathbb{R}.$ – user2468 Mar 17 '12 at 20:37

Thank you, J.D.! – Chris Mar 17 '12 at 20:47

Instead of writing a $2\times 2$ matrix as $$\begin{pmatrix} a & b \\ c & d\end{pmatrix},$$ write it unconventionally as $(a,b,c,d)$. Now do you see that the vector space of $2\times 2$ matrices, with the usual addition, is $4$-dimensional?

Yes, thank you Andre! I will try to "see" it that way! – Chris Mar 17 '12 at 22:01

What about noting that $\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0\end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}$ forms a basis of $\mathbb{R}^{2\times 2}$, and since all bases of a vector space have the same number of elements and you only have 2 elements, these cannot span.
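A quick numerical check of the argument above (a minimal sketch; it assumes NumPy is available and simply flattens each matrix into a length-4 vector, as the answers suggest):

```python
import numpy as np

# Flatten each 2x2 matrix into a length-4 vector, as suggested in the answers.
K = np.array([[2, 1], [8, 7]], dtype=float).ravel()
L = np.array([[2, 1], [2, 7]], dtype=float).ravel()

M = np.vstack([K, L])              # 2 x 4 matrix whose rows are the flattened matrices
rank = np.linalg.matrix_rank(M)

print(rank)                        # 2 -> K and L are linearly independent
print(rank == 4)                   # False -> two matrices cannot span the 4-dimensional space
```

The rank 2 confirms independence, while spanning $\mathbb{R}^{2\times 2}$ would require rank 4.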
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945441722869873, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/298661/saying-a-in-b-in-category-theory/298681
# Saying $a \in b$ in category theory

Suppose I have a category $C$ of sets, and $a,b \in C$. How can I express, in the language of category theory, that $a \in b$? (To clarify: the objects of $C$ are actually sets, and I want to express that $a$ actually is a member of $b$, in the underlying set theory.)

I am aware of the construction of an "element of a set" as a morphism $f : 1 \rightarrow b$. But I don't know how to say of an object $a$ that it is an element of $b$. One idea would be to form the coslice category $1 \downarrow C$ (the category of morphisms $g : 1 \rightarrow x$ in $C$) and assume we have an isomorphism $f : (1 \downarrow C) \rightarrow C$ which does the right thing. As a newbie, I'm not positive that works. But even if it does, it has the disadvantage that we need to introduce the functor $f$. Is there a better way?

You should make your question more precise. 1) Which properties of $\in$ do you demand? (Of course it shouldn't be an arbitrary relation) 2) What is a category of sets? Do you mean a concrete category? Or a (full) subcategory of the category of sets? 3) Motivation, see Berci's comment below :). – Martin Brandenburg Feb 9 at 11:18

3 Why would you want to talk about $a\in b$? Else your idea is correct. Unfortunately not so many categories will satisfy that $f$ is an isomorphism/equivalence of categories. – Berci Feb 9 at 11:19

Martin: Is my clarification satisfactory? Berci: I assume there's some philosophical reason why I might not? E.g., that part of the point of thinking of the sets as a category is to forget many of their features? Would you like to say anything about this? As far as $f$ not being an isomorphism, do you mean that it won't be except in weird corner cases? I'm getting a sense that might be true from looking at the morphisms in the coslice category. – Nick Thomas Feb 9 at 11:33

2 @NickThomas $a \in b$ is not expressible in category theory, for the simple reason that the category cannot distinguish between sets that have the same cardinality. – Zhen Lin Feb 9 at 11:36

1 @NickThomas, Sec 2.3 (pp.34-38) of Lawvere & Rosebrugh's "Sets for Mathematics" shows how to express inclusion and membership categorically in terms of morphisms (as hinted by Martin in his answer). – alancalvitti Feb 11 at 22:57

## 3 Answers

Any "categorical definition" should be invariant under equivalences of categories. If $p$ is any set with only one element $\star$, then $X \mapsto X \cdot p$ is an auto-equivalence of $\mathsf{Set}$. Here, we have $X \cdot p := \coprod_{x \in X} p := \bigcup_{x \in X} p \times \{x\} = \{(\star,x) : x \in X\}$. If $X$ is empty, then $X \cdot p$ is also empty. But the elements of $X \cdot p$ are never empty. Thus, if $\emptyset \in X$, then $\emptyset \cdot p = \emptyset \notin X \cdot p$. Therefore, there is no categorical definition of $\in$. This is not really a coincidence or even a defect of category theory. Instead, category theory systematically replaces membership by morphisms. See also ETCS.

Well, it only shows that category equivalence is not in friendship with the 'real' membership relation. – Berci Feb 9 at 12:42

@Berci: Actually, this example is an isomorphism of categories! – Hurkyl Feb 9 at 14:19

Hurkyl: It is not an isomorphism. As I've said, there is no set in the image which contains $\emptyset$. @Berci: What do you mean by "only"? Can you imagine any reasonable "categorical definition" which is not invariant under equivalences?
– Martin Brandenburg Feb 9 at 14:25

+1 Your example is very nice @MartinBrandenburg. Is this original to you or could you perhaps point to some reference which describes this and other proofs of "evil" mathematical definitions? – magma Feb 10 at 18:08

If your category has a terminal object and power objects, then for each object $X$ there is a distinguished element $\{ X \} : 1 \to \mathcal{P} X$. If you had a monomorphism $\mathcal{P} X \to Y$, then one might opt to use it to interpret $\mathcal{P}X$ as a subobject of $Y$. In this context, it would make sense to call $X$ an element of $Y$. But it only makes sense in this context.

We could extend it to subobjects $Z \subseteq Y$ and say that $X \in Z$ if and only if our chosen morphism $\mathcal{P} X \to Y$ factors through a representative of the subobject $Z$. This might be more meaningful if you equipped your category with extra structure: a distinguished class of monomorphisms that you call "$\subseteq$". Presumably, you'd want the objects together with all of the distinguished morphisms to form a poset, and probably other features. With this structure in place, we might now define $X \in Z$ to mean that there exists a commutative diagram $$\begin{matrix} 1 & \to & Z \\ \downarrow & & {\small|}\!\cap \\ \mathcal{P}X &\subseteq& Y \end{matrix}$$ where the left arrow is $\{ X \}$.

That said, it would be a very unusual situation for any of this to be of use. Other notions of element find much more utility, such as:

• The notion of a (global) element $1 \to X$
• The notion of a generalized element: any morphism at all with codomain $X$
• Restrictions of which objects can be used as the domain are sometimes useful
• The relation $\in$ related to power objects

@Martin: But if $X \in Z$, then setting $Y = Z \cup \mathcal{P}X$ will give a diagram. – Hurkyl Feb 9 at 14:51

However, if one insists on expressing $\in$, it is possible (and sometimes it is indeed done), in exactly the same manner as you described: an element of an object $x$ could be defined as a morphism $1\to x$, just as in $\Bbb{Set}$. But in general, what would this common source object "$1$" be? The terminal object? For example in the category of groups, these $1\to G$ homomorphisms are all trivial. On the other hand, the elements of $G$ are represented by $\Bbb Z\to G$ homomorphisms (identifying the element as the image of $1\in\Bbb Z$). In general universal algebra, the free algebra on $1$ generator will play the role of the common source "$1$". This way formulas like $u\in x$ are expressible; however, as Martin commented below, while $x$ is an object, $u$ is another kind of entity (an arrow to $x$). For the desired interpretation where $u$ is also an object, a canonical (or at least fixed) functor $f:C\to (1\downarrow C)$ is indeed needed, with the prescribed object $1$ (but it usually makes no sense).

2 The OP wants $x \in y$ for two objects of $C$. This is not possible with the definition, where an element is a morphism from the terminal object. – Martin Brandenburg Feb 9 at 12:07

Hmm.. I see. Then the isomorphism $f:(1\downarrow C)\to C$ is indeed needed.. Well, I thought about it as a more general question. – Berci Feb 9 at 12:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344990849494934, "perplexity_flag": "head"}
http://mathoverflow.net/questions/105246/undergraduate-topology/105258
## Undergraduate Topology

I am developing an introductory topology course for undergraduates, and I am wondering what topics to cover. At my institution, real analysis is not a prerequisite for the course, so it is more than likely that the intended audience has not been exposed to this material. Does anyone have any suggestions?

7 Welcome to MO! Since this question does not have one right answer, but rather asks for a list of suggestions or varying opinions, it should be asked in Community Wiki mode. To achieve this, please 'edit' the question (button below the text of the question), tick the appropriate box and save this edit. (I also flagged for moderators to do this in case you do not see the request in time, as this has implications for all answers, or should you have difficulty doing this.) – quid Aug 22 at 16:23

## 7 Answers

I've found that doing low-dimensional manifold topology is very appealing to undergraduates. I used the "Topology Now!" text by Messer and Straffin and, while the text isn't perfect, the approach was wonderfully successful.

1 I second the suggestion to focus on low-dimensional topology. Some particularly fun parts of low-dimensional topology for a first course are: classification of surfaces (including Part 1 of Conway et al "Symmetry of Things"); some baby knot theory (Reidemeister moves, quandle invariants like the number of three-colorings of the knot diagram, and also some invariants that are not diagram-based). It is also worth telling the students without proof that intuition from low dimensions often fails as you move higher up. Exotic smooth structures and non-smoothable manifolds come to mind. – Theo Johnson-Freyd Aug 22 at 17:57

I second Aeryk's suggestion to focus on low-dimensional topology. More generally, I think that algebraic topology can be more exciting than point-set topology. That said, when I was an undergraduate, I do remember being quite excited about the Bourbaki program and point-set definitions and so on. I remember one "first course in topology" that alternated days: low-dimensional topology on even days and point-set on odd. Except the point-set portion began with set theory, cardinal and ordinal numbers, and the axiom of choice; then moved on to metric spaces; and only then introduced point-set topology. I basically think that to motivate the point-set definitions, you had better start with metric spaces.

If on the other hand you focus more on broadly-defined algebraic topology, then in addition to the low-dimensional topology of manifolds (surfaces, knots, etc.), another good topic is the Brouwer fixed-point theorem as an application of the fundamental group functor on pointed spaces. Perhaps, if you are very ambitious, you can prove that 2d TQFT = commutative Frobenius algebra, and talk more generally about cobordism equivalence. Oh, and especially given the recent sad news, be sure to include a little Morse theory and Outside In.

2 +1 for "to motivate the point-set definitions, you had better start with metric spaces." (I'm not saying you should do point-set topology, but that if you do it, metric spaces should come first.)
– Andreas Blass Aug 22 at 22:57

In the 1970s I developed an undergraduate course on knots (the source book was by Crowell and Fox) to replace general topology and homology, as it was very easy for students to understand the point of the course, there were interesting relations with group theory, and lots of specific calculations and other things to do. The course was eventually taken over by others, and resulted in a book, Knots and Surfaces, by N.D. Gilbert and T. Porter, which had good reviews. For me, it led to giving popular talks on "How mathematics gets into Knots", and eventually to the exhibition you can see on the web site for the Centre for the Popularisation of Mathematics. In these talks I could also talk about mathematics, including, for example, the importance of analogy in mathematics. This led to one boy at a talk for children, some aged 12, asking: "Are there infinitely many prime knots?" Wow! So giving this undergraduate course has led to all sorts of fun and rewarding things! My copper pentoil knot used with string to demonstrate the ideas of the fundamental group has also travelled to many countries, see for example the pdf of a William J. Spencer Lecture in Kansas, April, 2012.

Ronald, you don't get enough credit for your very inventive and original attempts to reform a basic course in topology. You should get more. – Andrew L Nov 18 at 5:07

I think you could do a lot worse than to focus on modern applications by using the texts of Edelsbrunner and Harer and/or Zomorodian as touchstones and an avenue towards current work in topological data analysis. These books are self-contained treatments that focus on Morse theory and homology over $\mathbb{Z}/2\mathbb{Z}$, and there are a lot of materials and software available for a course to be built from.

As a resource for low-dimensional topology, I would suggest The Shape of Space by Jeffrey Weeks. It covers how to build compact two-manifolds and some three-manifolds. Exercises include playing tic-tac-toe on surfaces and forming sums of surfaces. It then moves on to some three-dimensional topology with a heavy focus on attempting to visualize the spaces.

We give a Geometry and Topology course at Macquarie for students with no real analysis, using notes written by a colleague. Here is the homepage for the course, so you can get an idea of what we do.

If the intent is to provide breadth, then many of the suggestions others have made are quite appealing, especially if it is made clear what branches of topology are being introduced and what a student should do outside of class to develop depth in any or all of the branches. If the intent is to provide depth, there are likely several texts out there, one for each branch, with suggestions. I remember covering Munkres's first course in topology starting with chapter 2; even though we skipped over the set theory and foundations, I was intrigued enough by them to study set theory and foundations while in graduate school. Although the class did not go all the way through the book that first semester, we got exposed to quite a bit, and I developed more of a taste for formalism from that class than from any other that I took as an undergraduate. If the intent is to provide both depth and breadth, I suggest part of it be run as a student seminar. A later topology course I took had me present Sard's theorem; if nothing else came from that course, I at least know how to prepare to explain Sard's theorem for my next opportunity. Gerhard "And This Was Decades Ago" Paseman, 2012.08.22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9620208144187927, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/29314-probability-help-please.html
# Thread:

1. ## Probability help please

Hey guys, I need some help on this problem please ---

Verify the following extension of the addition rule a) by an appropriate Venn diagram and b) by a formal argument using the axioms of probability and the propositions in the chapter. $P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(A \cap C) - P(B \cap C) + P(A \cap B \cap C)$

I don't understand the last part which says: $+ P(A \cap B \cap C)$, because they are now adding some of the same elements twice. But the question does say VERIFY and not prove... Thanks in advance.

2. $A \cap B \cap C$ is the intersection between the three sets A, B and C. The following Venn diagram may be of help:

3. Originally Posted by colby2152: $A \cap B \cap C$ is the intersection between the three sets A, B and C. The following Venn diagram may be of help:

Ah, I get it now! After we removed the intersections of A and B, A and C, etc., we have removed an essential part of the circles, namely where they all intersect, and we have to add that part again. Thanks Colby!
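As a sanity check of the three-set addition rule discussed above, here is a small brute-force sketch (illustrative only: it assumes a finite sample space with equally likely outcomes, so that $P(E) = |E|/|\Omega|$):

```python
import random

random.seed(0)

# A small finite sample space with equally likely outcomes.
Omega = set(range(20))
A = {w for w in Omega if random.random() < 0.5}
B = {w for w in Omega if random.random() < 0.5}
C = {w for w in Omega if random.random() < 0.5}

def P(E):
    return len(E) / len(Omega)

lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(A & C) - P(B & C)
       + P(A & B & C))

print(lhs, rhs)            # the two values agree
assert abs(lhs - rhs) < 1e-12
```

The triple intersection is added three times by $P(A)+P(B)+P(C)$ and subtracted three times by the pairwise terms, so it must be added back once, which is exactly the $+ P(A \cap B \cap C)$ term.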
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436126947402954, "perplexity_flag": "head"}
http://psychology.wikia.com/wiki/Artificial_neural_network?diff=prev&oldid=106851
Artificial neural network

An artificial neural network (ANN) or commonly just neural network (NN) is an interconnected group of artificial neurons that uses a mathematical model or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network. (The term "neural network" can also mean biological-type systems.) In more practical terms neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

Background

There is no precise agreed definition among researchers as to what a neural network is, but most would agree that it involves a network of simple processing elements (neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters.
The original inspiration for the technique was from examination of the central nervous system and the neurons (and their axons, dendrites and synapses) which constitute one of its most significant information processing elements (see Neuroscience). In a neural network model, simple nodes (called variously "neurons", "neurodes", "PEs" ("processing elements") or "units") are connected together to form a network of nodes — hence the term "neural network." While a neural network does not have to be adaptive per se, its practical use comes with algorithms designed to alter the strength (weights) of the connections in the network to produce a desired signal flow. These networks are also similar to the biological neural networks in the sense that functions are performed collectively and in parallel by the units, rather than there being a clear delineation of subtasks to which various units are assigned (see also connectionism). Currently, the term ANN tends to refer mostly to neural network models employed in statistics and artificial intelligence. Neural network models designed with emulation of the central nervous system (CNS) in mind are a subject of theoretical neuroscience. In modern software implementations of artificial neural networks the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing. In some of these systems neural networks, or parts of neural networks (such as artificial neurons) are used as components in larger systems that combine both adaptive and non-adaptive elements. While the more general approach of such adaptive systems is more suitable for real-world problem solving, it has far less to do with the traditional artificial intelligence connectionist models. What they do however have in common is the principle of non-linear, distributed, parallel and local processing and adaptation. Models Neural network models in artificial intelligence are usually referred to as artificial neural networks (ANNs); these are essentially simple mathematical models defining a function $f : X \rightarrow Y$. Each type of ANN model corresponds to a class of such functions. The network in artificial neural network The word network in the term 'artificial neural network' arises because the function $f(x)$ is defined as a composition of other functions $g_i(x)$, which can further be defined as a composition of other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between variables. A widely used type of composition is the nonlinear weighted sum, where $f (x) = K \left(\sum_i w_i g_i(x)\right)$, where $K$ is some predefined function, such as the hyperbolic tangent. It will be convenient for the following to refer to a collection of functions $g_i$ as simply a vector $g = (g_1, g_2, \ldots, g_n)$. This figure depicts such a decomposition of $f$, with dependencies between variables indicated by arrows. These can be interpreted in two ways. The first view is the functional view: the input $x$ is transformed into a 3-dimensional vector $h$, which is then transformed into a 2-dimensional vector $g$, which is finally transformed into $f$. This view is most commonly encountered in the context of optimization. The second view is the probabilistic view: the random variable $F = f(G)$ depends upon the random variable $G = g(H)$, which depends upon $H=h(X)$, which depends upon the random variable $X$. 
This view is most commonly encountered in the context of graphical models. The two views are largely equivalent. In either case, for this particular network architecture, the components of individual layers are independent of each other (e.g., the components of $g$ are independent of each other given their input $h$). This naturally enables a degree of parallelism in the implementation. Networks such as the previous one are commonly called feedforward, because their graph is a directed acyclic graph. Networks with cycles are commonly called recurrent. Such networks are commonly depicted in the manner shown at the top of the figure above, where $f$ is shown as being dependent upon itself. However, there is an implied temporal dependence which is not shown. What this actually means in practice is that the value of $f$ at some point in time $t$ depends upon the values of $f$ at zero or at one or more other points in time. The graphical model at the bottom of the figure illustrates the case: the value of $f$ at time $t$ only depends upon its last value. Models such as these, which have no dependencies in the future, are called causal models. See also: graphical models Learning However interesting such functions may be in themselves, what has attracted the most interest in neural networks is the possibility of learning, which in practice means the following: Given a specific task to solve, and a class of functions $F$, learning means using a set of observations, in order to find $f^* \in F$ which solves the task in an optimal sense. This entails defining a cost function $C : F \rightarrow \mathbb{R}$ such that, for the optimal solution $f^*$, $C(f^*) \leq C(f)$ $\forall f \in F$ (no solution has a cost less than the cost of the optimal solution). The cost function $C$ is an important concept in learning, as it is a measure of how far away we are from an optimal solution to the problem that we want to solve. Learning algorithms search through the solution space in order to find a function that has the smallest possible cost. For applications where the solution is dependent on some data, the cost must necessarily be a function of the observations, otherwise we would not be modelling anything related to the data. It is frequently defined as a statistic to which only approximations can be made. As a simple example consider the problem of finding the model $f$ which minimizes $C=E\left[|f(x) - y|^2\right]$, for data pairs $(x,y)$ drawn from some distribution $\mathcal{D}$. In practical situations we would only have $N$ samples from $\mathcal{D}$ and thus, for the above example, we would only minimize $\hat{C}=\frac{1}{N}\sum_{i=1}^N |f(x_i)-y_i|^2$. Thus, the cost is minimized over a sample of the data rather than the true data distribution. When $N \rightarrow \infty$ some form of online learning must be used, where the cost is partially minimized as each new example is seen. While online learning is often used when $\mathcal{D}$ is fixed, it is most useful in the case where the distribution changes slowly over time. In neural network methods, some form of online learning is frequently also used for finite datasets. 
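As a concrete illustration of minimising the sample cost $\hat{C}$ rather than the true cost $C$, here is a minimal sketch (the linear model family and the synthetic data distribution are illustrative assumptions, and NumPy is assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw N sample pairs (x, y) from a data distribution D: here y = 2x + small noise.
N = 1000
x = rng.normal(size=N)
y = 2.0 * x + 0.1 * rng.normal(size=N)

def empirical_cost(f):
    """C_hat = (1/N) * sum |f(x_i) - y_i|^2, the sample version of E[|f(x) - y|^2]."""
    return np.mean((f(x) - y) ** 2)

# Compare two candidate models from the class {f(x) = a*x}.
print(empirical_cost(lambda t: 1.0 * t))   # cost of a poor candidate (about 1.0)
print(empirical_cost(lambda t: 2.0 * t))   # cost near the minimum (about 0.01)
```

An online-learning variant would instead update the candidate model a little after each new pair $(x_i, y_i)$ is seen, rather than evaluating the cost over the whole sample at once.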
See also: Optimization (mathematics), Statistical Estimation, Machine Learning Choosing a cost function While it is possible to arbitrarily define some ad hoc cost function, frequently a particular cost will be used either because it has desirable properties (such as convexity) or because it arises naturally from a particular formulation of the problem (i.e., In a probabilistic formulation the posterior probability of the model can be used as an inverse cost). Ultimately, the cost function will depend on the task we wish to perform. The three main categories of learning tasks are overviewed below. Learning paradigms There are three major learning paradigms, each corresponding to a particular abstract learning task. These are supervised learning, unsupervised learning and reinforcement learning. Usually any given type of network architecture can be employed in any of those tasks. Supervised learning In supervised learning, we are given a set of example pairs $(x, y), x \in X, y \in Y$ and the aim is to find a function f in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data and it implicitly contains prior knowledge about the problem domain. A commonly used cost is the mean-squared error which tries to minimise the average error between the network's output, f(x), and the target value y over all the example pairs. When one tries to minimise this cost using gradient descent for the class of neural networks called Multi-Layer Perceptrons, one obtains the well-known backpropagation algorithm for training neural networks. Tasks that fall within the paradigm of supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). The supervised learning paradigm is also applicable to sequential data (e.g., for speech and gesture recognition). Unsupervised learning In unsupervised learning we are given some data $x$, and the cost function to be minimised can be any function of the data $x$ and the network's output, $f$. The cost function is dependent on the task (what we are trying to model) and our a priori assumptions (the implicit properties of our model, its parameters and the observed variables). As a trivial example, consider the model $f(x) = a$, where $a$ is a constant and the cost $C=(E[x] - f(x))^2$. Minimising this cost will give us a value of $a$ that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: For example in compression it could be related to the mutual information between x and y. In statistical modelling, it could be related to the posterior probability of the model given the data. (Note that in both of those examples those quantities would be maximised rather than minimised) Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering. Reinforcement learning In reinforcement learning, data $x$ is usually not given, but generated by an agent's interactions with the environment. At each point in time $t$, the agent performs an action $y_t$ and the environment generates an observation $x_t$ and an instantaneous cost $c_t$, according to some (usually unknown) dynamics. 
The aim is to discover a policy for selecting actions that minimises some measure of a long-term cost, i.e. the expected cumulative cost. The environment's dynamics and the long-term cost for each policy are usually unknown, but can be estimated. More formally, the environment is modeled as a Markov decision process (MDP) with states ${s_1,...,s_n} \in S$ and actions ${a_1,...,a_m} \in A$ with the following probability distributions: the instantaneous cost distribution $P(c_t|s_t)$, the observation distribution $P(x_t|s_t)$ and the transition $P(s_{t+1}|s_t, a_t)$, while a policy is defined as conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the policy that minimises the cost, i.e. the MC for which the cost is minimal. ANNs are frequently used in reinforcement learning as part of the overall algorithm. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks. See also: dynamic programming, stochastic control Learning algorithms Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimises the cost criterion. There are numerous algorithms available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation. Most of the algorithms used in training artificial neural networks are employing some form of gradient descent. This is done by simply taking the derivative of the cost function with respect to the network parameters and then changing those parameters in a gradient-related direction. Evolutionary methods, simulated annealing, and Expectation-maximization and non-parametric methods are among other commonly used methods for training neural networks. See also machine learning. Employing artificial neural networks Perhaps the greatest advantage of ANNs is their ability to be used as an arbitrary function approximation mechanism which 'learns' from observed data. However, using them is not so straightforward and a relatively good understanding of the underlying theory is essential. • Choice of model: This will depend on the data representation and the application. Overly complex models tend to lead to problems with learning. • Learning algorithm: There are numerous tradeoffs between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular fixed dataset. However selecting and tuning an algorithm for training on unseen data requires a significant amount of experimentation. • Robustness: If the model, cost function and learning algorithm are selected appropriately the resulting ANN can be extremely robust. With the correct implementation ANNs can be used naturally in online learning and large dataset applications. Their simple implementation and the existence of mostly local dependencies exhibited in the structure allows for fast, parallel implementations in hardware. Applications The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical. 
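As a toy illustration of inferring a function from observed data with the gradient-descent training described earlier, here is a minimal sketch for the simplest possible "network", a single linear unit with weight $w$ and bias $b$ (the synthetic data, learning rate and iteration count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy observations generated from y = 3x - 1 plus noise.
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x - 1.0 + 0.05 * rng.normal(size=200)

w, b = 0.0, 0.0      # network parameters
lr = 0.1             # step size in the gradient-related direction

for _ in range(500):
    y_hat = w * x + b
    # Derivatives of the cost C = mean((y_hat - y)^2) with respect to w and b.
    grad_w = 2.0 * np.mean((y_hat - y) * x)
    grad_b = 2.0 * np.mean(y_hat - y)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # close to 3.0 and -1.0
```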
Real life applications The tasks to which artificial neural networks are applied tend to fall within the following broad categories: • Function approximation, or regression analysis, including time series prediction and modeling. • Classification, including pattern and sequence recognition, novelty detection and sequential decision making. • Data processing, including filtering, clustering, blind source separation and compression. Application areas include system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering. Neural network software Main article: Neural network software Neural network software is used to simulate, research, develop and apply artificial neural networks, biological neural networks and in some cases a wider array of adaptive systems. Types of neural networks Feedforward neural network The feedforward neural networks are the first and arguably simplest type of artificial neural networks devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network. Single-layer perceptron The earliest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. In this way it can be considered the simplest kind of feed-forward network. The sum of the products of the weights and the inputs is calculated in each node, and if the value is above some threshold (typically 0) the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1). Neurons with this kind of activation function are also called McCulloch-Pitts neurons or threshold neurons. In the literature the term perceptron often refers to networks consisting of just one of these units. They were described by Warren McCulloch and Walter Pitts in the 1940s. A perceptron can be created using any values for the activated and deactivated states as long as the threshold value lies between the two. Most perceptrons have outputs of 1 or -1 with a threshold of 0 and there is some evidence that such networks can be trained more quickly than networks created from nodes with different activation and deactivation values. Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent. Single-unit perceptrons are only capable of learning linearly separable patterns; in 1969 in a famous monograph entitled Perceptrons Marvin Minsky and Seymour Papert showed that it was impossible for a single-layer perceptron network to learn an XOR function. They conjectured (incorrectly) that a similar result would hold for a multi-layer perceptron network. 
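As an illustration of the delta-rule training just described, here is a minimal sketch for a single threshold unit on the linearly separable AND problem (the ±1 coding, learning rate and epoch count are illustrative choices):

```python
import numpy as np

# Inputs and +1/-1 targets for AND, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1], dtype=float)

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                              # a few passes over the data suffice
    for x_i, t_i in zip(X, t):
        y_i = 1.0 if x_i @ w + b > 0 else -1.0   # threshold activation
        w += lr * (t_i - y_i) * x_i              # delta-rule adjustment of the weights
        b += lr * (t_i - y_i)

print([1.0 if x_i @ w + b > 0 else -1.0 for x_i in X])   # matches the AND targets
```

Running the same loop with XOR targets never settles on a correct set of weights, in line with the limitation discussed above.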
Although a single threshold unit is quite limited in its computational power, it has been shown that networks of parallel threshold units can approximate any continuous function from a compact interval of the real numbers into the interval [-1,1]. This very recent result can be found in [Auer, Burgsteiner, Maass: The p-delta learning rule for parallel perceptrons, 2001 (state Jan 2003: submitted for publication)]. A single-layer neural network can compute a continuous output instead of a step function. A common choice is the so-called logistic function: $y = \frac{1}{1+e^{-x}}$ With this choice, the single-layer network is identical to the logistic regression model, widely used in statistical modelling. The logistic function is also known as the sigmoid function. It has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated: $y' = y(1-y)$ Multi-layer perceptron This class of networks consists of multiple layers of computational units, usually interconnected in a feed-forward way. Each neuron in one layer has directed connections to the neurons of the subsequent layer. In many applications the units of these networks apply a sigmoid function as an activation function. The universal approximation theorem for neural networks states that every continuous function that maps intervals of real numbers to some output interval of real numbers can be approximated arbitrarily closely by a multi-layer perceptron with just one hidden layer. This result holds only for restricted classes of activation functions, e.g. for the sigmoidal functions. Multi-layer networks use a variety of learning techniques, the most popular being back-propagation. Here the output values are compared with the correct answer to compute the value of some predefined error-function. By various techniques the error is then fed back through the network. Using this information, the algorithm adjusts the weights of each connection in order to reduce the value of the error function by some small amount. After repeating this process for a sufficiently large number of training cycles the network will usually converge to some state where the error of the calculations is small. In this case one says that the network has learned a certain target function. To adjust weights properly one applies a general method for non-linear optimization tasks called gradient descent. For this, the derivative of the error function with respect to the network weights is calculated and the weights are then changed such that the error decreases (thus going downhill on the surface of the error function). For this reason back-propagation can only be applied on networks with differentiable activation functions. In general the problem of teaching a network to perform well, even on samples that were not used as training samples, is a quite subtle issue that requires additional techniques. This is especially important for cases where only very limited numbers of training samples are available. The danger is that the network overfits the training data and fails to capture the true statistical process generating the data. Computational learning theory is concerned with training classifiers on a limited amount of data. In the context of neural networks a simple heuristic, called early stopping, often ensures that the network will generalize well to examples not in the training set.
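Since the back-propagation procedure described above is easiest to see in code, here is a minimal sketch of a one-hidden-layer sigmoid network trained on XOR (the hidden-layer size, learning rate, iteration count and random initialisation are illustrative choices, and convergence can vary with the seed):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data: the classic task a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units and one sigmoid output unit.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradients, using the derivative y' = y(1 - y).
    d_out = (y - t) * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates of weights and biases.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

print(y.round(2).ravel())   # typically close to [0, 1, 1, 0] after training
```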
Other typical problems of the back-propagation algorithm are the speed of convergence and the possibility of ending up in a local minimum of the error function. Today there are practical solutions that make back-propagation in multi-layer perceptrons the solution of choice for many machine learning tasks. ADALINE ADALINE (Adaptive Linear Neuron, later called Adaptive Linear Element) was developed by Professor Bernard Widrow and his graduate student Ted Hoff at Stanford University in 1960. It is based on the McCulloch-Pitts model. It consists of a weight, a bias and a summation function. Operation: $y_i=wx_i+b$ Its adaptation is defined through a cost function (error metric) of the residual $e_i=d_i-(b+wx_i)$ where $d_i$ is the desired output. With the MSE error metric $E=\frac{1}{2N}\sum_{i=1}^N e_i^2$ the adapted weight and bias become: $b=\frac{\sum_i x_i^2\sum_i d_i - \sum_i x_i \sum_i x_i d_i}{N(\sum_i(x_i - \bar x)^2)}$ and $w=\frac{\sum_i(x_i - \bar x)(d_i - \bar d)}{\sum_i(x_i - \bar x)^2}$ While this makes the Adaline capable of simple linear regression, it has limited practical use. There is an extension of the Adaline, called the Multiple Adaline (MADALINE), that consists of two or more adalines serially connected. Radial basis function (RBF) Main article: Radial basis function Radial Basis Functions are powerful techniques for interpolation in multidimensional space. An RBF is a function which has a distance criterion with respect to a centre built into it. Radial basis functions have been applied in the area of neural networks where they may be used as a replacement for the sigmoidal hidden layer transfer characteristic in multi-layer perceptrons. RBF networks have two layers of processing: in the first, input is mapped onto each RBF in the 'hidden' layer. The RBF chosen is usually a Gaussian. In regression problems the output layer is then a linear combination of hidden layer values representing mean predicted output. The interpretation of this output layer value is the same as a regression model in statistics. In classification problems the output layer is typically a sigmoid function of a linear combination of hidden layer values, representing a posterior probability. Performance in both cases is often improved by shrinkage techniques, known as ridge regression in classical statistics and known to correspond to a prior belief in small parameter values (and therefore smooth output functions) in a Bayesian framework. RBF networks have the advantage of not suffering from local minima in the same way as multi-layer perceptrons. This is because the only parameters that are adjusted in the learning process are the linear mapping from hidden layer to output layer. Linearity ensures that the error surface is quadratic and therefore has a single easily found minimum. In regression problems this can be found in one matrix operation. In classification problems the fixed non-linearity introduced by the sigmoid output function is most efficiently dealt with using iterated reweighted least squares. RBF networks have the disadvantage of requiring good coverage of the input space by radial basis functions. RBF centres are determined with reference to the distribution of the input data, but without reference to the prediction task. As a result, representational resources may be wasted on areas of the input space that are irrelevant to the learning task.
A common solution is to associate each data point with its own centre, although this can make the linear system to be solved in the final layer rather large, and requires shrinkage techniques to avoid overfitting. Associating each input datum with an RBF leads naturally to kernel methods such as Support Vector Machines and Gaussian Processes (the RBF is the kernel function). All three approaches use a non-linear kernel function to project the input data into a space where the learning problem can be solved using a linear model. Like Gaussian Processes, and unlike SVMs, RBF networks are typically trained in a Maximum Likelihood framework by maximizing the probability (minimizing the error) of the data under the model. SVMs take a different approach to avoiding overfitting by maximizing instead a margin. RBF networks are outperformed in most classification applications by SVMs. In regression applications they can be competitive when the dimensionality of the input space is relatively small. Kohonen self-organizing network The self-organizing map (SOM) invented by Teuvo Kohonen uses a form of unsupervised learning. A set of artificial neurons learn to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and the SOM will attempt to preserve these. Recurrent network Contrary to feedforward networks, recurrent neural networks (RNs) are models with bi-directional data flow. While a feedforward network propagates data linearly from input to output, RNs also propagate data from later processing stages to earlier stages. Simple recurrent network A simple recurrent network (SRN) is a variation on the multi-layer perceptron, sometimes called an "Elman network" due to its invention by Jeff Elman. A three-layer network is used, with the addition of a set of "context units" in the input layer. There are connections from the middle (hidden) layer to these context units fixed with a weight of one. At each time step, the input is propagated in a standard feed-forward fashion, and then a learning rule (usually back-propagation) is applied. The fixed back connections result in the context units always maintaining a copy of the previous values of the hidden units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform such tasks as sequence-prediction that are beyond the power of a standard multi-layer perceptron. In a fully recurrent network, every neuron receives inputs from every other neuron in the network. These networks are not arranged in layers. Usually only a subset of the neurons receive external inputs in addition to the inputs from all the other neurons, and another disjunct subset of neurons report their output externally as well as sending it to all the neurons. These distinctive inputs and outputs perform the function of the input and output layers of a feed-forward or simple recurrent network, and also join all the other neurons in the recurrent processing. Hopfield network The Hopfield network is a recurrent neural network in which all connections are symmetric. Invented by John Hopfield in 1982, this network guarantees that its dynamics will converge. If the connections are trained using Hebbian learning then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration. 
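As a concrete illustration of the Hopfield construction just described — Hebbian weights, symmetric connections with no self-coupling, and convergent asynchronous dynamics — here is a minimal sketch. The stored patterns, network size, and update schedule are arbitrary illustrative choices, not anything fixed by the text.

```python
import random

def train_hopfield(patterns):
    # Hebbian learning: symmetric weights, zero diagonal (no self-connections).
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=200):
    # Asynchronous sign-threshold updates; with symmetric weights these converge.
    n = len(state)
    state = list(state)
    for _ in range(steps):
        i = random.randrange(n)
        h = sum(w[i][j] * state[j] for j in range(n))
        state[i] = 1 if h >= 0 else -1
    return state

patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]  # stored +/-1 memories
w = train_hopfield(patterns)
noisy = [1, -1, 1, -1, 1, 1]  # corrupted version of the first pattern
print(recall(w, noisy))       # typically recovers the first stored pattern
```

Because the weights are symmetric and the diagonal is zero, each asynchronous update can only decrease (or leave unchanged) an energy function of the state, which is the sense in which the dynamics are guaranteed to converge.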
Stochastic neural networks A stochastic neural network differs from a regular neural network in the fact that it introduces random variations into the network. In a probabilistic view of neural networks, such random variations can be viewed as a form of statistical sampling, such as Monte Carlo sampling. Boltzmann machine The Boltzmann machine can be thought of as a noisy Hopfield network. Invented by Geoff Hinton and Terry Sejnowski in 1985, the Boltzmann machine is important because it is one of the first neural networks to demonstrate learning of latent variables (hidden units). Boltzmann machine learning was at first slow to simulate, but the contrastive divergence algorithm of Geoff Hinton (circa 2000) allows models such as Boltzmann machines and products of experts to be trained much faster. Modular neural networks Biological studies showed that the human brain functions not as a single massive network, but as a collection of small networks. This realisation gave birth to the concept of modular neural networks, in which several small networks cooperate or compete to solve problems. Committee of machines A committee of machines (CoM) is a collection of different neural networks that together "vote" on a given example. This generally gives a much better result compared to other neural network models. In fact in many cases, starting with the same architecture and training but using different initial random weights gives vastly different networks. A CoM tends to stabilize the result. The CoM is similar to the general machine learning bagging method, except that the necessary variety of machines in the committee is obtained by training from different random starting weights rather than training on different randomly selected subsets of the training data. Associative Neural Network (ASNN) The ASNN is an extension of the committee of machines that goes beyond a simple/weighted average of different models. ASNN represents a combination of an ensemble of feed-forward neural networks and the k-nearest neighbour technique (kNN). It uses the correlation between ensemble responses as a measure of distance amid the analysed cases for the kNN. This corrects the bias of the neural network ensemble. An associative neural network has a memory that can coincide with the training set. If new data becomes available, the network instantly improves its predictive ability and provides data approximation (self-learn the data) without a need to retrain the ensemble. Another important feature of ASNN is the possibility to interpret neural network results by analysis of correlations between data cases in the space of models. The method is demonstrated at www.vcclab.org, where you can either use it online or download it. Other types of networks These special networks do not fit in any of the previous categories. Holographic associative memory Holographic associative memory represents a family of analog, correlation-based, associative, stimulus-response memories, where information is mapped onto the phase orientation of complex numbers operating. These models exhibit some remarkable characteristics such as generalization, pattern recognition with instanteneously changeable attention, and ability to retrieve very small patterns. Instantaneously trained networks Instantaneously trained neural networks (ITNNs) are also called "Kak networks" after their inventor Subhash Kak. They were inspired by the phenomenon of short-term learning that seems to occur instantaneously. 
In these networks the weights of the hidden and the output layers are mapped directly from the training vector data. Ordinarily, they work on binary data, but versions for continuous data that require small additional processing are also available. Spiking neural networks Spiking (or pulsed) neural networks (SNNs) are models which explicitly take into account the timing of inputs. The network input and output are usually represented as series of spikes (delta function or more complex shapes). SNNs have an advantage of being able to continuously process information. They are often implemented as recurrent networks. Networks of spiking neurons -- and the temporal correlations of neural assemblies in such networks -- have been used to model figure/ground separation and region linking in the visual system (see e.g. Reitboeck et.al.in Haken and Stadler: Synergetics of the Brain. Berlin, 1989). Gerstner and Kistler have a freely-available online textbook on Spiking Neuron Models. Spiking neural networks with axonal conduction delays exhibit polychronization, and hence could have a potentially unlimited memory capacity. In June 2005 IBM announced construction of a Blue Gene supercomputer dedicated to the simulation of a large recurrent spiking neural network [1]. Dynamic neural networks Dynamic neural networks not only deal with nonlinear multivariate behaviour, but also include (learning of) time-dependent behaviour such as various transient phenomena and delay effects. Meijer has a Ph.D. thesis online where regular feedforward perception networks are generalized with differential equations, using variable time step algorithms for learning in the time domain and including algorithms for learning in the frequency domain (in that case linearized around a set of static bias points). Cascading neural networks Cascade-Correlation is an architecture and supervised learning algorithm developed by Scott Fahlman. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network. Neuro-fuzzy networks A neuro-fuzzy network is a fuzzy inference system in the body of an artificial neural network. Depending on the FIS type, there are several layers that simulate the processes involved in a fuzzy inference like fuzzification, inference, aggregation and defuzzification. Embedding an FIS in a general structure of an ANN has the benefit of using available ANN training methods to find the parameters of a fuzzy system. Theoretical properties Capacity Artificial neural network models have a property called 'capacity', which roughly corresponds to their ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity. Convergence Nothing can be said in general about convergence since it depends on a number of factors. 
Firstly, there may exist many local minima. This depends on the cost function and the model. Secondly, the optimisation method used might not be guaranteed to converge when far away from a local minimum. Thirdly, for a very large amount of data or parameters, some methods become impractical. In general, it has been found that theoretical guarantees regarding convergence are not always a very reliable guide to practical application. Generalisation and statistics In applications where the goal is to create a system that generalises well in unseen examples, the problem of overtraining has emerged. This arises in overcomplex or overspecified systems when the capacity of the network significantly exceeds the needed free parameters. There are two schools of thought for avoiding this problem: The first is to use cross-validation and similar techniques to check for the presence of overtraining and optimally select hyperparameters such as to minimise the generalisation error. The second is to use some form of regularisation. This is a concept that emerges naturally in a probabilistic (Bayesian) framework, where the regularisation can be performed by putting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimise over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error in unseen data due to overfitting. Supervised neural networks that use an MSE cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of the output of the network, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified. By assigning a softmax activation function on the output layer of the neural network (or a softmax component in a component-based neural network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is very useful in classification as it gives a certainty measure on classifications. The softmax activation function: $y_i=\frac{e^{x_i}}{\sum_{j=1}^c e^{x_j}}$ Dynamical properties 1. REDIRECT Template:Expert-subject Various techniques originally developed for studying disordered magnetic systems (spin glasses) have been successfully applied to simple neural network architectures, such as the perceptron. Influential work by E. Gardner and B. Derrida has revealed many interesting properties about perceptrons with real-valued synaptic weights, while later work by W. Krauth and M. Mezard has extended these principles to binary-valued synapses. Patents • Arima, et al., U.S. Patent 5,293,457 ,"Neural network integrated circuit device having self-organizing function". March 8, 1994. Bibliography • Abdi, H., Valentin, D., Edelman, B.E. (1999). Neural Networks. Thousand Oaks: Sage. • Bar-Yam, Yaneer (2003). Dynamics of Complex Systems, Chapter 2. • Bar-Yam, Yaneer (2003). Dynamics of Complex Systems, Chapter 3. • Bar-Yam, Yaneer (2005). Making Things Work. Please see Chapter 3 • Bhagat, P.M. (2005) Pattern Recognition in Industry, Elsevier. ISBN 0-08-044538-1 • Bishop, C.M. (1995) Neural Networks for Pattern Recognition, Oxford: Oxford University Press. 
ISBN 0-19-853849-9 (hardback) or ISBN 0-19-853864-2 (paperback) • Duda, R.O., Hart, P.E., Stork, D.G. (2001) Pattern classification (2nd edition), Wiley, ISBN 0-471-05669-3 • Gurney, K. (1997) An Introduction to Neural Networks London: Routledge. ISBN 1-85728-673-1 (hardback) or ISBN 1-85728-503-4 (paperback) • Haykin, S. (1999) Neural Networks: A Comprehensive Foundation, Prentice Hall, ISBN 0-13-273350-1 • Fahlman, S, Lebiere, C (1991). The Cascade-Correlation Learning Architecture, created for National Science Foundation, Contract Number EET-8716324, and Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976 under Contract F33615-87-C-1499. electronic version • Hertz, J., Palmer, R.G., Krogh. A.S. (1990) Introduction to the theory of neural computation, Perseus Books. ISBN 0-201-51560-1 • Lawrence, Jeanette (1994) Introduction to Neural Networks, California Scientific Software Press. ISBN 1-883157-00-5 • Masters, Timothy (1994) Signal and Image Processing with Neural Networks, John Wiley & Sons, Inc. ISBN 0-471-04963-8 • Ripley, Brian D. (1996) Pattern Recognition and Neural Networks, Cambridge • Smith, Murray (1993) Neural Networks for Statistical Modeling, Van Nostrand Reinhold, ISBN 0-442-01310-8 • Wasserman, Philip (1993) Advanced Methods in Neural Computing, Van Nostrand Reinhold, ISBN 0-442-00461-3
http://mathoverflow.net/questions/70790?sort=votes
## The NP version of Matiyasevich’s theorem ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) By Matiyasevich, for every recursively enumerable set $A$ of natural numbers there exists a polynomial $f(x_1,...,x_n)$ with integer coefficients such that for every $p\ge 0$, $f(x_1,...,x_n)=p$ has integer solutions if and only if $p\in A$. Now suppose that $A$ is a set of natural numbers with membership problem in $NP$. Is there a polynomial $f$ with integer coefficients such that $f(x_1,...,x_n)=p$ has integer solutions if and only if $p\in A$ and there exists a solution with $||x_i||\le Cp^s$ for some fixed $s, C$, where $||x_i||$ is the length of $x_i$ in binary (i.e. $\sim \log |x_i|$)? Clearly the converse is true: if such a polynomial exists, then the membership problem for $A$ is in NP. - Suppose that, for every instance of p, there was a uniform certificate xbar where the number of components (dimension) of xbar was bounded by a constant. Then it seems to me that the TM that ran the verifier on xbar could be rewritten as your desired polynomial. I do not know how to argue that the certificates have to be uniform in (dimensional) length, although it is clear that the index n is bounded by a polynomial in the bitlength of p. Gerhard "Ask Me About System Design" Paseman, 2011.07.19 – Gerhard Paseman Jul 20 2011 at 2:54 I should say "every instance p of A". Please forgive this and other typos in the previous comment. Gerhard "Ask Me About System Design" Paseman, 2011.07.19 – Gerhard Paseman Jul 20 2011 at 2:56 4 @Gerhard: It cannot be that simple. The conversion from TM to a Diophantine equation is complicated and - at least in Matiyasevich's proof - seems to require exponential slow down. But I may be wrong of course. The proof uses some properties of Pell equations. I wonder if anybody looked at the proof from the complexity point of view. – Mark Sapir Jul 20 2011 at 3:23 1 I vaguely recall that someone did look into this from the computational point of view. The results were not great. I'll see if I can dig this up... – François G. Dorais♦ Jul 20 2011 at 3:49 4 This question seems seems to have been first posed by Adleman and Manders in 1975, and it is closely connected with unsolved problems in complexity theory; the following paper includes a review of the state of the art in 2003: C. Pollett, On the Bounded Version of Hilbert's Tenth Problem. Archive for Mathematical Logic. Vol. 42. No. 5. 2003. pp. 469--488. You can find a copy on the author's homepage at cs.sjsu.edu/faculty/pollett/papers – Ali Enayat Jul 20 2011 at 13:47 show 1 more comment ## 2 Answers I don’t know about the particular form of the polynomial you are using, but in general, it is a well-known open problem whether every NP set can be represented by a Diophantine equation with a polynomial bound on the length of the solutions. Adleman and Manders proved that the set `$\{\langle a,b,c\rangle\in\mathbb N^3:(\exists x,y\in\mathbb N)(ax^2+by=c)\}$` is NP-complete, hence the answer is positive iff the class of such representable sets is closed under polynomial-time reductions, but it’s not clear whether the latter is actually true or not. See the introduction of Pollett for an overview of some known partial results. - Emil, is it known if they are closed under any smaller complexity class like $\mathsf{NC^0}$? – Kaveh Jul 20 2011 at 14:03 2 How do you define uniform $\mathrm{NC}^0$, anyway? 
As mentioned in Pollett’s paper, all NP-sets are bounded Diophantine if all coNLOGTIME-sets are, so the closure under any class of at least this complexity (which is quite small) is equivalent to the full problem. – Emil Jeřábek Jul 20 2011 at 14:17 I see. (I didn't pay attention that 𝖣𝖫𝖮𝖦𝖳𝖨𝖬𝖤 uniformity is not good for 𝖭𝖢𝟢.) – Kaveh Jul 20 2011 at 14:51 @Emil: Thank you! I accept this answer because it came earlier than François' . – Mark Sapir Jul 20 2011 at 16:20 You are welcome. – Emil Jeřábek Jul 20 2011 at 16:28

I think this is still an open problem. The idea of a Non-Deterministic Diophantine Machine (NDDM) was introduced by Adleman and Manders. In their paper Diophantine Complexity, they conjecture that the class of problems recognizable in polynomial time by a NDDM are precisely the problems in NP. However, they only prove the following:

1. If A is accepted on a NDDM within time $T$, then A is accepted on a NDTM within time $T^2$.
2. If A is accepted on a NDTM within time $T$, then A is accepted on a NDDM within time $2^{10T^2}$.

They also show that if R0 is the problem of determining whether all even bits of a natural number are zero, then R0 is recognized in polynomial time by a NDDM if and only if all NP problems are recognized in polynomial time by a NDDM.

PS: Technically speaking, a NDDM is not exactly of the type you ask for in your question. However, one recovers the form you desire using Putnam's trick: the equations $P(x,x_1,\ldots,x_n) = 0$ and $x = x(1 - P(x,x_1,\ldots,x_n)^2)$ have exactly the same solutions.

- I had missed Emil's answer while typing this. – François G. Dorais♦ Jul 20 2011 at 14:37 @François: Thank you! – Mark Sapir Jul 20 2011 at 16:20
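The easy direction stated at the end of the question — that a polynomial bound on the length of the solutions immediately puts membership in NP — amounts to a simple certificate check. Here is a schematic sketch of such a verifier; the particular polynomial f, the constants C and s, and the encoding of the certificate are placeholder assumptions, not the objects discussed above.

```python
def bit_length(x):
    # Length of x in binary, i.e. roughly log|x|, with a floor of 1 for x = 0.
    return max(1, abs(x).bit_length())

def verify(p, certificate, f, C=10, s=2):
    """Check a purported NP certificate: integers (x_1, ..., x_n) with
    f(x_1, ..., x_n) == p and every ||x_i|| <= C * ||p||**s."""
    bound = C * bit_length(p) ** s
    if any(bit_length(x) > bound for x in certificate):
        return False
    return f(*certificate) == p

# Placeholder polynomial f(x, y) = x^2 + y^2, just to show the shape of the check;
# it is not the polynomial asked about in the question.
f = lambda x, y: x * x + y * y
print(verify(25, (3, 4), f))   # True: 3^2 + 4^2 = 25 and the witnesses are short
print(verify(25, (3, 5), f))   # False: the equation fails
```

The point is simply that this check runs in time polynomial in the bit length of p and of the certificate, which is exactly what an NP verifier is allowed to do; the hard direction is whether every NP set admits such a representation.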
http://math.stackexchange.com/questions/292633/regularity-up-to-the-boundary/299641
# Regularity up to the boundary Let $L$ be a second order linear elliptic differential operator on an open bounded subset $U\subset \mathbb R^n$, with smooth uniformly bounded coefficients. Suppose the boundary of $U$ is $C^\infty$. Suppose $f\in C_c^\infty(U)$ ($f$ is smooth and has compact support in $U$). Must there exist a solution $u$ to the PDE $Lu = f$, $u|_{\partial U} = 0$ such that $u$ extends to be $C^\infty$ on the closure $\bar U$? - I guess the ellipticity assumption is uniform too? – user53153 Feb 2 at 7:41 Yeah, the coefficients are uniformly elliptic on $U$. – user15464 Feb 2 at 13:35 1 How about $L=-\Delta+\lambda I$ (meaning $Lu=-\Delta u +\lambda u$, for $\lambda \in \mathbb{R}$) and $f=0$. The only $\lambda$ for which there is a solution are the Laplacian's eigenvalues, which form a countable set. On the other hand if a solution exists then it's a weak solution so regularity gives the $C^\infty$ extension up to the boundary. – Jose27 Feb 2 at 18:10 ## 1 Answer Yes, this is the Schauder existence theory (see, for example, Gilbarg and Trudinger, Section 6.3 or so, or if that is not readily available, the Wikipedia article http://en.wikipedia.org/wiki/Schauder_estimates has a good summary). Applying it once will give you $C^{2,\alpha}$ estimates. Then you subsequently apply the theory to the first derivatives, recognizing that differentiating the equation gives you 2nd order elliptic operator in the first derivative, and so on for higher order terms. - Can we necessarily find a solution in $C_c^\infty(U)$? – user15464 Feb 10 at 23:08 The solution will probably not be compactly supported in $U$, but will take on 0 along $\partial U$. In fact, it's probably not possible for $u$ to be compactly supported inside $U$, due to the unique continuation properties of solutions to elliptic PDEs. – Ray Yang Feb 10 at 23:18 I take that last sentence back. It's possible for $u$ to be compactly supported, but it would not be the general case - the unique continuation theorems I was thinking of do not apply when there is a source term. In general, think of what happens if $f$ is your favorite bump function, and if the domain $U$ is really large - then as you get far away from the support of $f$, you would expect $u$ to resemble an appropriate multiple of the fundamental solution .... – Ray Yang Feb 11 at 19:27
http://motls.blogspot.com/2012/07/diphoton-higgs-enhancement-as-proof-of.html?m=1
The Reference Frame Our stringy Universe from a conservative viewpoint Friday, July 20, 2012 Diphoton Higgs enhancement as a proof of naturalness The first hep-ph preprint today is a paper I've known about since the July 4th Higgsfest and I've been looking forward to see it. The title is 2:1 for Naturalness at the LHC? The score "2:1" has a double meaning: it either refers to a soccer match in which the home team won (Plzeň defeated Rustavi of Georgia 3-to-1 last night, however); or it refers to the 100% excess of the Higgs decays to two photons. The authors, Nima Arkani-Hamed, Kfir Blum, Raffaele Tito D'Agnolo, and JiJi Fan (who will be referred to as Nima et al. because I don't know any of the co-authors, as far as I know) propose a connection between a priori very different features or possible features of Nature: 1. naturalness, essentially the opposite thing to the "anthropic principle" – one of the most conceptual principles we know in contemporary particle physics that may still be wrong (it says that dimensionless parameters shouldn't be surprisingly tiny unless their small or vanishing value is justified by a valid argument, ideally an enhanced symmetry) 2. seemingly elevated diphoton branching ratio of the July 4th $$126\GeV$$ Higgs boson, one of the boring yet distracting 2+ sigma anomalies and the only slight deviation of the observed God particle from the Standard Model predictions that has survived so far and that may be talked about The probability that the Higgs boson decays to two photons (also known as the branching ratio) was observed by ATLAS+CMS to be about 1.8 times higher than the Standard Model prediction. Because the measurements of the precise branching ratios require lots more data than the very discovery that there is a new particle, these branching ratios have a large error margin and the 80% excess is therefore just a 2+ sigma effect at this moment. But it could have profound consequences, Nima et al. argue. As I said, if their arguments are right, it's exactly the type of a connection that must please every physicist. One finds a litmus test, a previously irrelevant technicality – in this case the enhancement of the diphoton branching ratio – that is actually inseparably connected with something we really care about, almost religiously, and something that seems to decide about the soul of science: naturalness. How does their argument work? And is it right? A brief history of Nima Before I try to offer my answer, I can't resist to recall some history about Nima and naturalness. Yes, after those years of interactions with Nima and listening to his talks, I could perhaps be employed as a historian of science focusing on the relationship of Nima Arkani-Hamed and naturalness. ;-) But I will simplify it a bit; let's hope that it won't be completely wrong. One of the things about Nima has been his diversity of ideas and interests and the sheer size of the ensemble of models he has co-fathered or nearly co-fathered. There are several gods whose discovery would mean that Nima would deserve to share a Nobel prize; there are also several antigods, devils, and atheists' holy grails whose discovery would probably earn him a Nobel prize, too. 
He's famous for models with the huge gaps between the masses as well as small gaps between the masses; large extra dimensions and no extra dimensions at all; models with huge numbers of additional particle species and models with almost no new particles; enthusiastic garden-variety supersymmetric models as well as passionate feelings that SUSY looks much less powerful than a decade ago; and so on, and so on. But one open question has become a defining sign of physics from Nima's viewpoint. It's nothing else than the anthropic reasoning. In the recent decade, Nima has repeatedly emphasized that it's a crossing that may send physics research of the future into vastly different directions. Needless to say, Nima, the ultimate opportunist (greetings, Nima!), has been a double spy in this cold war, too. :-) He's co-written various papers that proposed natural solutions to the hierarchy problem – models that implied that the Higgs boson should be light without fine-tuning. But he's been also involved in the opposite business. Together with Savas Dimopoulos, they gave rise to the split supersymmetry, a culminating work in the pro-anthropic research by these men. One could say that they decided to construct the most sensible particle physics model with the new assumption that one may leave the lightness of the Higgs to the anthropic selection. Still, with this change in the paradigm, it's sensible to consider supersymmetry and reproduce the successes of low-energy supersymmetry such as gauge coupling unification and a dark matter candidate. It can be done. In the resulting model, most superpartners are very heavy and only some of them are kept light. By the way, split SUSY predicted a Higgs boson between $$120$$ and $$150\GeV$$, compatible with the July 4th discovery. But at least one of its co-fathers is on a new mission whose goal is nothing else than the massacre of split SUSY. Nima has always viewed the "naturalness vs unnaturalness" conflict to be very sharp, binary, and black-and-white. And the fresh paper they wrote fits into this philosophy perfectly. I almost have the feeling that the most general philosophy and storyline of the new paper has been decided for 8 years and they just recently added some technical details. ;-) Needless to say, I would also be happy if we could learn a clear black-or-white answer to the question whether Nature respects naturalness when it makes the Higgs boson light. However, I am ready for the answer that the answer isn't black-or-white. The question whether the lightness of the Higgs is natural is somewhat vague and non-rigorous and such questions often have unclear, grey answers. The Planck-Higgs gap could be partially covered by dynamical mechanisms and partially accounted for by the anthropic selection; it could also be explained or co-explained by completely new ideas that can't be easily classified as anthropic or non-anthropic ones. They want to "nearly classify" particle physics models that increase the diphoton Higgs branching ratio, i.e. the probability that the Higgs decays to $$H\to \gamma\gamma$$, but that doesn't enhance the $$H\to ZZ$$ branching ratio. The ratio of the two branching ratios should increase 1.5-2.0 times relatively to the Standard Model. They decide it can't be done by modifying the tree-level coupling of the Higgs field, something I wouldn't even discuss as a possibility because in the minimal i.e. Standard Model, these couplings are completely determined by the measured masses. 
In their scheme, it follows that the affirmative action favoring the diphoton decays has to come from loop corrections. There have to be new loop contributions – which also means new particle running in the loops. (I have some worries that even this first step could have loopholes – new tree-level exchange of new matter such as new $$W'$$ bosons could also make an impact but I am ready to believe that light enough particles of this kind have been excluded.) But loop contributions are naturally small. To make them large, you must have large values of the interaction coupling constants that appear in almost all the vertices in the loop. It seems simplest to add a fermionic loop. When it comes to the identity of the fermion, they decide that it is essentially a "new vector-like lepton species", a lepton with left-right-symmetric interactions whose mass shouldn't be far from $$100-200\GeV$$. Note that Hooper and Buckley attribute the effect to stop squarks which are scalars so Nima et al. have nothing to say about these things. The point is that if the required higher diphoton branching ratio forces you to add a new particle such as the vector-like lepton, it has additional consequences. The new particle or new particles will contribute to the running of the Higgs quartic coupling $$\lambda$$, the running that I have discussed in the article about the Higgs instability. The running will have the form\[ \ddfrac{\lambda}{\ln\mu} = -C\cdot {\mathcal N} y^4. \] I have included the possibility of several, $${\mathcal N}$$ new species. Unless I am an idiot, the fourth power of the Yukawa coupling $$y$$ comes from the four vertices of a box (square) diagram inserted between four external lines of the God particle. The Yukawa coupling $$y$$ has to be large for the new lepton to substantially influence the diphoton branching ratio. It follows that the right hand side of the equation above is large, too. It means that the quartic coupling goes negative, the theory becomes unstable, and a new fix is needed. The scale at which the new fix is vital may be described as the "cutoff scale" of the theory with the single new lepton only – or, more generally, the cutoff scale of a theory with several new fermion species. The cutoff scale comes quickly, below $$10\TeV$$ or so, even if we try to delay it as much as possible. If we try to delay it, it is desirable to be satisfied with a smaller diphoton enhancement: the doubling of the diphoton branching ratio makes a breakdown below $$1\TeV$$ inevitable. So we should better be satisfied with the enhancement by a factor of $$1.5$$. And if we fix this factor and if we want to delay the breakdown, it is a good idea to make the new lepton as light as possible, e.g. $$100-150\GeV$$. But even if we do it in this way, the theory inevitably breaks down below $$10\TeV$$, they say, which is not too far from the Higgs mass scale. In this sense, the naturalness – something more robust fixing the problems coming from the lightness of the Higgs – is guaranteed by the enhanced branching ratio. One may formulate the same proposition in the opposite way: all theories that only add extra light fermions to the Standard Model (which already includes the God particle) inevitably predict that the large diphoton branching ratio enhancement will disappear. It's a cute argument and equivalence, especially if it is true. Concerning the last paragraph, the proposition is not as strong (or "audacious and obviously wrong") as you may think. 
The theories discussed in the previous paragraph don't really include the full-fledged supersymmetric models because those models have new bosons such as stop squarks, too. So there's really no contradiction with papers such as Hooper-Buckley who claim to have achieved the diphoton enhancement by stop squarks or by other methods. However, the paper shows that split SUSY would be excluded if the diphoton excess survived. I personally would never say that "split SUSY" and "an unnatural theory" are the same thing, however (split SUSY is just the minimal theory obeying a certain complicated list of requirements that depend on the status of particle physics in 2004, i.e. on social sciences, and I just find it extremely unlikely that the right theory of Nature may be obtained by exactly this minimal-in-2004 definition), and the observation by Nima et al. doesn't seem to affect most unnatural theories because they contain new light bosons, too. In fact, it's natural for unnatural theories to contain new light bosons. ;-) [A wrong paragraph was here and it was removed.] The LHC experiments could decide about the big questions in these confrontations of ideas about naturalness earlier than the men of theory, however. 5 comments: 1. Dude are you doped Dear Lubos, the stops are scalars, and thus have nothing to do with the argument of Nima and friends, who only discussed fermions contributions. You may want to fix sentences such as "depend on new fermion species -- stop squarks --..." Like seriously? 2. Thanks, very good point. 3. Mitchell I just ran across 't Hooft's original paper in which he introduced naturalness: http://bit.ly/PgqYLJ 4. Cool, still a kind of post-golden-era paper. But it's enough to read it roughly, or the abstract, to see that the details shouldn't have been that influential. I introduce naturalness... at every scale - great. Then I claim that for theories to be natural, one must have lots of other QCD/technicolor strong sectors and compositeness all over the place - completely wrong. For a late 1970s paper, it's pretty remarkable to completely overlook SUSY and SUSY isn't really the only mechanism/symmetry that may help to protect naturalness. Only when I see the paper in this form and imagine it was the bread-and-butter of the education of many people, I understand why so many people have been so obsessed by adding strongly interacting sectors and compositeness everywhere - something I would always find pretty much stupid and unmotivated (and that is excluded up to huge scales by the LHC today): an influential paper was identifying this ugly technological monstrosity with naturalness! ;-/ 5. Dilaton Ha ha Lumo, the history of Nima is as much fun to read as listining to a Nima talk :-D... Where would these "new vector-like lepton species" come from? I mean, are they embeded in a more fundamental high energy scale theory somehow? And if true, would Nima's new model give clues about how to resolve additional issues the SM refuses to explain? Who is Lumo? Luboš Motl Pilsen, Czech Republic View my complete profile ← by date
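For readers who want a rough feel for how quickly the running equation quoted above drives the quartic coupling negative, here is a deliberately crude numerical sketch. It holds the loop coefficient, the number of new species, and the Yukawa fixed and integrates the one-loop equation analytically; every number below is a made-up illustrative input, not a value taken from the paper by Nima et al.

```python
import math

# Crude estimate of the instability scale implied by
# d(lambda)/d(ln mu) = -C * N * y**4 with everything held constant.
# All numbers are illustrative assumptions, not values from the paper.
C = 0.02      # loop coefficient (assumed)
N = 2         # number of new vector-like species (assumed)
y = 1.0       # Yukawa large enough to matter for the diphoton rate (assumed)
lam0 = 0.13   # quartic coupling at the reference scale (assumed)
mu0 = 125.0   # reference scale in GeV (assumed)

# With constant y, lambda(mu) = lam0 - C*N*y**4 * ln(mu/mu0), which crosses zero at:
mu_star = mu0 * math.exp(lam0 / (C * N * y**4))
print(f"lambda turns negative near {mu_star / 1000:.1f} TeV")
```

Whatever the precise inputs, a Yukawa large enough to visibly enhance the diphoton rate pushes the crossing scale down to within roughly an order of magnitude of the weak scale, which is the qualitative content of the sub-10 TeV cutoff quoted above.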
http://mathhelpforum.com/algebra/66545-basic-division-simplification.html
# Thread:

1. ## Basic division simplification

I have been given a logarithmic equation to simplify. I have simplified it down a couple of stages and am left with the below equation: (3a)/((a^2)/(25)) Is it possible to remove the 'a' from the equation, or at least one of them, perhaps the lower one? I have tried quite a few ways, but have not been successful. Thanks for any help.

2. Originally Posted by Ratiocinator
I have been given a logarithmic equation to simplify. I have simplified it down a couple of stages and am left with the below equation: (3a)/((a^2)/(25)) Is it possible to remove the 'a' from the equation, or at least one of them, perhaps the lower one? I have tried quite a few ways, but have not been successful. Thanks for any help.
A happy and successful New Year!

$\dfrac{3a}{\frac{a^2}{25}} = \dfrac{25\cdot 3a}{a^2} = \dfrac{75}{a}$

Btw: There isn't any equation ...

3. Originally Posted by earboth
A happy and successful New Year!

$\dfrac{3a}{\frac{a^2}{25}} = \dfrac{25\cdot 3a}{a^2} = \dfrac{75}{a}$

Btw: There isn't any equation ...
Thank you very much, I appreciate your help. I perhaps should bow my head in shame; I do need to spend much more time on the very basics!

4. Originally Posted by earboth
A happy and successful New Year!
Thank you. And, of course, the same to you.
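For double-checking simplifications like this one mechanically, a computer algebra system is handy. Here is a small sketch using SymPy (assuming SymPy is installed; the symbol name is arbitrary):

```python
from sympy import symbols, simplify

a = symbols('a', nonzero=True)  # a = 0 would make the original expression undefined
expr = (3 * a) / (a**2 / 25)
print(simplify(expr))  # 75/a
```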
http://mathoverflow.net/questions/70186/more-questions-about-log-structures/70193
## More questions about log structures ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I now have some more questions regarding the role of log structures in moduli problems (you can assume that the moduli problem is the compactification of $n$-marked genus $g$ smooth projective curves for simplicity): 1. It seems that one of the mantras of the subject is that outside of the boundary, the objects have a unique log-structure. In terms of the example moduli problem I gave, that $n$-marked smooth projective curves of genus $g$ have unique log-structures. In what sense is this true? It doesn't seem literally true to me. Surely they must mean that they have unique log-structures such that they satisfy some property, right? If you can enlighten me about the essence of this mantra, please do! 2. One of the strengths of log-structures, evidently, is that in the degenerations, they give a unique deformation. So in the example, if we had a stable $n$-marked curve of genus $g$ with a log-structure, there there would be a unique way to extend it to a complete DVR. My question is: what is the virtue of log-structures as opposed to deformation data? Why not instead of a log-structure attached to each (possibly semi-stable) curve, just add some data that will say how it deforms over a complete DVR? Would it be fair to say that log-structures is the natural way to encode this deformation data? Or perhaps there is an extra virtue? I'm confused about this. Any help would be much appreciated. I've zigzagging between various texts about log-structures, and it is still difficult to get the gist of how to think about them! P.S. I put this question under community wiki also, but I wasn't sure this time that it was merited. If you have objections, let me know. - 1. log stuructures with some universal property is unique. In section 2 of the following paper, the existence of special log structure is proved in a more general setting. Martin Olsson, Universal log structures on semi-stable varieties. Tohoku Math Journal 55 (2003) 397--438 – Naturalmap Jul 13 2011 at 1:02 1 Community Wiki is mostly meant for questions where criteria for determining correctness or satisfaction are hazy, and you seek a community-sorted list of answers. – S. Carnahan♦ Jul 13 2011 at 1:28 ## 2 Answers 1. if $f: X \rightarrow S$ is a proper, log smooth, integral and vertical morphism with semistable geometric fibers, then there is a special log structure on $f' : X' \rightarrow S'$ with same underlying scheme. This log structure is minimal: $X$ is fibered product of $X'$ and $S$ over $S'$. Look at section 2 of Martin Olsson, Universal log structures on semi-stable varieties. Tohoku Math Journal 55 (2003) 397--438 For example, let $X= \textrm{Spec} k[x,y]/xy$ and $S = \textrm{Spec} k$. Basic log structures on $X \rightarrow S$ will be given by monoids: $\mathbb{N}^2 \rightarrow k[x,y]/xy$, $(1,0) \mapsto x$ and $(0,1) \mapsto y$. For $S$, $\mathbb{N} \rightarrow k$, $1 \mapsto 0$. You can put other log structures using different monoids on $X$ and $S$, but as long as it satisfies log smothness and so on, it is pull back from this basic log structure. By the way, does anyone know how to draw a commutative diagram on mathoverflow? - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. 
If you're compactifying a moduli space by choosing degenerations, you typically assign the trivial log structure to the uncompactified space. Given a point $x$, the size of the characteristic $M_{X,x}/\alpha^{-1}\mathscr{O}_{X,x}^\times$ roughly describes how degenerate the object over $x$ is, and in a place where the log structure is trivial, the characteristic is the trivial monoid. When someone says that the log structures on the moduli space of marked curves and the tautological curve over it are unique, that is relative to some condition that needs to be specified, e.g., being an essentially semistable morphism. If that condition is assumed, then the log structure is unique. In the case of marked curves, the locus of schematically smooth curves is then given the trivial log structure. I don't know what you mean by unique deformation. The tangent and jet spaces of a smooth compactified moduli space are just as big on the boundary as they are elsewhere. -
http://planetmath.org/Discriminant
# Summary. The discriminant of a given polynomial is a number, calculated from the coefficients of that polynomial, that vanishes if and only if that polynomial has one or more multiple roots. Using the discriminant we can test for the presence of multiple roots, without having to actually calculate the roots of the polynomial in question. There are other ways to do this of course; one can look at the formal derivative of the polynomial (it will be coprime to the original polynomial if and only if that original had no multiple roots). But the discriminant turns out to be valuable in a number of other contexts. For example, we will see that the discriminant of $X^{2}+bX+c$ is $b^{2}-4c$; the quadratic formula states that the roots are $-b/2\pm\sqrt{b^{2}-4c}/2$, so that the discriminant also determines whether the roots of this polynomial are real or not. In higher degrees, its role is more complicated. There are other uses of the word “discriminant” that are closely related to this one. If $\mathbb{Q}(\alpha)$ is a number field, then the discriminant of $\mathbb{Q}(\alpha)$ is the discriminant of the minimal polynomial of $\alpha$. For more general extensions of number fields, one must use a different definition of discriminant generalizing this one. If we have an elliptic curve over the rational numbers defined by the equation $y^{2}=x^{3}+Ax+B$, then its modular discriminant is the discriminant of the cubic polynomial on the right-hand side. For more on both these facts, see [1] on number fields and [2] on elliptic curves. # Definition. The discriminant of order $n\in\mathbb{N}$ is the polynomial, denoted here 11 The discriminant of a polynomial $p$ is oftentimes also denoted as “$\mathop{\rm disc}(p)$” by $\delta^{{(n)}}=\delta^{{(n)}}(a_{1},\ldots,a_{n})$, characterized by the following relation: $\delta^{{(n)}}(s_{1},s_{2},\ldots,s_{n})=\prod_{{i=1}}^{n}\prod_{{j=i+1}}^{n}(% x_{i}-x_{j})^{2},$ (1) where $s_{k}=s_{k}(x_{1},\ldots,x_{n}),\quad k=1,\ldots,n$ is the $k^{{\text{th}}}$ elementary symmetric polynomial. The above relation is a defining one, because the right-hand side of (1) is, evidently, a symmetric polynomial, and because the algebra of symmetric polynomials is freely generated by the basic symmetric polynomials, i.e. every symmetric polynomial arises in a unique fashion as a polynomial of $s^{1},\ldots,s^{n}$. ###### Proposition 1. The discriminant $d$ of a polynomial may be expressed with the resultant $R$ of the polynomial and its first derivative: $d\;=\;(-1)^{{\frac{n(n-1)}{2}}}R/a_{n}$ ###### Proposition 2. Up to sign, the discriminant is given by the determinant of a $2n\!-\!1$ square matrix with columns 1 to $n\!-\!1$ formed by shifting the sequence  $1,\,a_{1},\,\ldots,\,a_{n}$,  and columns $n$ to $2n\!-\!1$ formed by shifting the sequence  $n,\,(n\!-\!1)a_{1},\;\ldots,\;a_{{n-1}}$,  i.e. $\delta^{{(n)}}=\left|\begin{array}[]{ccccccccc}1&0&\ldots&0&n&0&\ldots&0&0\\ a_{1}&1&\ldots&0&(n-1)\,a_{1}&n&\ldots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ a_{{n-2}}&a_{{n-3}}&\ldots&1&2\,a_{{n-2}}&3\,a_{{n-3}}&\ldots&n&0\\ a_{{n-1}}&a_{{n-2}}&\ldots&a_{1}&a_{{n-1}}&2\,a_{{n-2}}&\ldots&(n-1)a_{1}&n\\ a_{{n}}&a_{{n-1}}&\ldots&a_{2}&0&a_{{n-1}}&\ldots&(n-2)a_{2}&(n-1)a_{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\ldots&a_{{n-1}}&0&0&\ldots&a_{{n-1}}&2\,a_{{n-2}}\\ 0&0&\ldots&a_{n}&0&0&\ldots&0&a_{{n-1}}\end{array}\right|$ (2) # Multiple root test. 
Let $\mathbb{K}$ be a field, let $x$ denote an indeterminate, and let $p=x^{n}+a_{1}x^{{n-1}}+\ldots+a_{{n-1}}x+a_{n},\quad a_{i}\in\mathbb{K}$ be a monic polynomial over $\mathbb{K}$. We define $\delta[p]$, the discriminant of $p$, by setting $\delta[p]=\delta^{{(n)}}\left(a_{1},\ldots,a_{n}\right).$ The discriminant of a non-monic polynomial is defined homogenizing the above definition, i.e by setting $\delta[ap]=a^{{2n-2}}\delta[p],\quad a\in\mathbb{K}.$ ###### Proposition 3. The discriminant vanishes if and only if $p$ has multiple roots in its splitting field. ###### Proof. It isn’t hard to show that a polynomial has multiple roots if and only if that polynomial and its derivative share a common root. The desired conclusion now follows by observing that the determinant formula in equation (2) gives the resolvent of a polynomial and its derivative. This resolvent vanishes if and only if the polynomial in question has a multiple root. ∎ # Some Examples. Here are the first few discriminants. $\displaystyle\delta^{{(1)}}$ $\displaystyle=1$ $\displaystyle\delta^{{(2)}}$ $\displaystyle=a_{1}^{2}-4\,a_{2}$ $\displaystyle\delta^{{(3)}}$ $\displaystyle=18\,a_{1}a_{2}a_{3}+a_{1}^{2}a_{2}^{2}-4\,a_{2}^{3}-4\,a_{1}^{3}% a_{3}-27a_{3}^{2}$ $\displaystyle\delta^{{(4)}}$ $\displaystyle=a_{1}^{2}a_{2}^{2}a_{3}^{2}-4\,a_{2}^{3}a_{3}^{2}-4\,a_{1}^{3}a_% {3}^{3}+18\,a_{1}a_{2}a_{3}^{3}-27\,a_{3}^{4}$ $\displaystyle-4\,a_{1}^{2}a_{2}^{3}a_{4}+16\,a_{2}^{4}a_{4}+18\,a_{1}^{3}a_{2}% a_{3}a_{4}-80\,a_{1}a_{2}^{2}a_{3}a_{4}$ $\displaystyle-6\,a_{1}^{2}a_{3}^{2}a_{4}+144\,a_{2}a_{3}^{2}a_{4}-27\,a_{1}^{4% }a_{4}^{2}+144\,a_{1}^{2}a_{2}a_{4}^{2}$ $\displaystyle-128\,a_{2}^{2}a_{4}^{2}-192\,a_{1}a_{3}a_{4}^{2}+256\,a_{4}^{3}$ Here is the matrix used to calculate $\delta^{{(4)}}$: $\delta^{{(4)}}=\left|\begin{array}[]{ccccccc}1&0&0&4&0&0&0\\ a_{1}&1&0&3a_{1}&4&0&0\\ a_{2}&a_{1}&1&2a_{2}&3a_{1}&4&0\\ a_{3}&a_{2}&a_{1}&a_{3}&2a_{2}&3a_{1}&4\\ a_{4}&a_{3}&a_{2}&0&a_{3}&2a_{2}&3a_{1}\\ 0&a_{4}&a_{3}&0&0&a_{3}&2a_{2}\\ 0&0&a_{4}&0&0&0&a_{3}\\ \end{array}\right|$ # References • 1 Daniel A. Marcus, Number Fields, Springer, New York. • 2 Joseph H. Silverman, The Arithmetic of Elliptic Curves. Springer-Verlag, New York, 1986. See also the bibliography for number theory and the bibliography for algebraic geometry. Type of Math Object: Definition Major Section: Reference Groups audience: ## Mathematics Subject Classification 12E05 Polynomials (irreducibility, etc.) ## Recent Activity May 17 new image: sinx_approx.png by jeremyboden new image: approximation_to_sinx by jeremyboden new image: approximation_to_sinx by jeremyboden new question: Solving the word problem for isomorphic groups by mairiwalker new image: LineDiagrams.jpg by m759 new image: ProjPoints.jpg by m759 new image: AbstrExample3.jpg by m759 new image: four-diamond_figure.jpg by m759 May 16 new problem: Curve fitting using the Exchange Algorithm. by jeremyboden new question: Undirected graphs and their Chromatic Number by Serchinnho ## Info Owner: rspuzio Added: 2002-03-07 - 14:56 Author(s): rspuzio disc by Wkbj79 ✓ ## Versions (v17) by rspuzio 2013-03-22
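If one wants to check the low-order discriminants listed in the Examples section above, or the resultant relation of Proposition 1 in the monic case, a computer algebra system will reproduce them directly. A small SymPy sketch, assuming SymPy is available:

```python
from sympy import symbols, discriminant, resultant, diff, expand

x, a1, a2, a3 = symbols('x a1 a2 a3')

p2 = x**2 + a1*x + a2
p3 = x**3 + a1*x**2 + a2*x + a3

print(discriminant(p2, x))  # a1**2 - 4*a2, i.e. delta^(2) above
print(discriminant(p3, x))  # the delta^(3) expression above

# Monic case of Proposition 1: disc(p) = (-1)^(n(n-1)/2) * Res(p, p')
n = 3
lhs = discriminant(p3, x)
rhs = (-1)**(n * (n - 1) // 2) * resultant(p3, diff(p3, x), x)
print(expand(lhs - rhs))  # 0
```

The same call also gives a quick multiple-root test in practice: the discriminant of $x^{2}-2x+1$ is $0$ (a double root), while that of $x^{2}-3x+2$ is $1$ (distinct roots).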
http://physics.stackexchange.com/questions/tagged/tensor-calculus+mathematics
# Tagged Questions

### Invariants of a tensor [closed]
I asked (almost) the same question in the math exchange. I'm teaching a course, and I need a simple and intuitive proof that the invariants of a matrix ($3\times3$, but it doesn't matter) can be ...

### What is the mathematical formulation for buckling?
Argument: Buckling is an engineering concept that can only be applied to thin columns with compressive loading. (Is it possible to) Prove the above sentence right or wrong with mathematical ...

### What is the covariant derivative in mathematician's language?
In mathematics, we talk about tangent vectors and cotangent vectors on a manifold at each point, and vector fields and cotangent vector fields (also known as differential one-forms). When we talk ...
http://mathhelpforum.com/advanced-statistics/108379-e-x_k-how-can-we-prove.html
# Thread:

1. ## E(∑ |X_k|) < ∞ ? How can we prove it?

Let |X_k| ≥ 0 be a sequence of random variables. If we are given that $\sum_{k=1}^{\infty} |X_k| < \infty$ and $E\left(\sum_{k=1}^{+\infty} |X_k|\right) < \infty$, does this imply that $E\left(\sum_{k=1}^{n} |X_k|\right) < \infty$?

An infinite sum is by definition the limit of the sequence of partial sums and intuition seems to suggest that the above is true, but how can we prove it rigorously?

Note: the approach of my book starts with the following axioms for expectation
1. X ≥ 0 => E(X) ≥ 0
2. E(cX + dY) = c E(X) + d E(Y)
3. E(1) = 1
4. If X_1 < X_2 < ... < X_n and lim X_n(ω) = X(ω), then lim E(X_n) = E(X) [same as monotone convergence theorem]

2. Originally Posted by kingwinner
Let |X_k| ≥ 0 be a sequence of random variables. If we are given that $\sum_{k=1}^{\infty} |X_k| < \infty$ and $E\left(\sum_{k=1}^{+\infty} |X_k|\right) < \infty$, does this imply that $E\left(\sum_{k=1}^{n} |X_k|\right) < \infty$?

An infinite sum is by definition the limit of the sequence of partial sums and intuition seems to suggest that the above is true, but how can we prove it rigorously?
Since |X_n| >= 0, the sequence of partial sums is increasing. The result follows by the monotone convergence theorem.

3. No need of monotone convergence theorem here. Simply say $E[\sum_{k=1}^n |X_k|]\leq E[\sum_{k=1}^\infty |X_k|]<\infty$, where the first inequality is your first axiom (more explicitly, if $X\leq Y$ then $Y-X\geq 0$ hence $E[Y-X]\geq 0$ by 1., i.e. $E[X]\leq E[Y]$, called monotonicity of the expectation)

4. Originally Posted by Laurent
No need of monotone convergence theorem here. Simply say $E[\sum_{k=1}^n |X_k|]\leq E[\sum_{k=1}^\infty |X_k|]<\infty$, where the first inequality is your first axiom (more explicitly, if $X\leq Y$ then $Y-X\geq 0$ hence $E[Y-X]\geq 0$ by 1., i.e. $E[X]\leq E[Y]$, called monotonicity of the expectation)
Hi, I follow your second point, but I don't understand the step before it. Why is it true that $\sum_{k=1}^n |X_k|\leq \sum_{k=1}^\infty |X_k|$? Here the right side is really defined as a LIMIT... (the result SEEMS obvious here, but how can we JUSTIFY it? i.e. how can we show that the left side is less than or equal to the LIMIT of the right side?) Thanks for explaining!

5. Originally Posted by kingwinner
Hi, I follow your second point, but I don't understand the step before it. Why is it true that $\sum_{k=1}^n |X_k|\leq \sum_{k=1}^\infty |X_k|$? Here the right side is really defined as a LIMIT... (the result SEEMS obvious here, but how can we JUSTIFY it? i.e. how can we show that the left side is less than or equal to the LIMIT of the right side?) Thanks for explaining!
Since each r.v. is nonnegative, the infinite sum |X_(n+1)| + |X_(n+2)| + ... is itself nonnegative. (This is because the partial sums of this series are increasing and nonnegative, and so their supremum cannot be negative.) Therefore |X_1| + |X_2| + ... + |X_n| <= (|X_1| + |X_2| + ... + |X_n|) + (|X_(n+1)| + |X_(n+2)| + ...) = |X_1| + |X_2| + ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9036626219749451, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/8295/einsteins-box-unclear-about-bohrs-retort
# Einstein's box - unclear about Bohr's retort

I was reading a book on the history of Quantum Mechanics and I got intrigued by the gedankenexperiment proposed by Einstein to Bohr at the 6th Solvay conference in 1930. For context, the thought experiment is a failed attempt by Einstein to disprove Heisenberg's Uncertainty Principle.

Einstein considers a box (called Einstein's box; see figure) containing electromagnetic radiation and a clock which controls the opening of a shutter which covers a hole made in one of the walls of the box. The shutter uncovers the hole for a time Δt which can be chosen arbitrarily. During the opening, we are to suppose that a photon, from among those inside the box, escapes through the hole. In this way a wave of limited spatial extension has been created, following the explanation given above. In order to challenge the indeterminacy relation between time and energy, it is necessary to find a way to determine with adequate precision the energy that the photon has brought with it. At this point, Einstein turns to his celebrated relation between mass and energy of special relativity: $E = mc^2$. From this it follows that knowledge of the mass of an object provides a precise indication about its energy. --source

Bohr's response was quite surprising: there was uncertainty in the time because the clock changed position in a gravitational field and thus its rate could not be measured precisely.

Bohr showed that [...] the box would have to be suspended on a spring in the middle of a gravitational field. [...] After the release of a photon, weights could be added to the box to restore it to its original position and this would allow us to determine the weight. [...] The inevitable uncertainty of the position of the box translates into an uncertainty in the position of the pointer and of the determination of weight and therefore of energy. On the other hand, since the system is immersed in a gravitational field which varies with the position, according to the principle of equivalence the uncertainty in the position of the clock implies an uncertainty with respect to its measurement of time and therefore of the value of the interval Δt.

Question: How can Bohr invoke a General Relativity concept when Quantum Mechanics is notoriously incompatible with it? Shouldn't HUP hold up with only the support of (relativistic) quantum mechanics?

Clarifying a bit what my doubt is/was: I thought that HUP was intrinsic to QM, a principle derived from operator non-commutativity. QM shouldn't need GR concepts to be self-consistent. In other words - if GR did not exist, relativistic QM would be a perfectly happy theory. I was surprised that this is not the case.

- It seems he uses only the principle of equivalence, not Einstein's field equations. So there shouldn't be a problem with the fact that they didn't have a quantum gravity theory, no? – MBN Apr 9 '11 at 1:14
As far as I see it, it uses the fact that, with a spatially varying gravitational field, the time $\Delta t$ depends on the spatial position because clocks behave differently according to GR. By 1930 GR was already an experimentally proven theory. – Sklivvz♦ Apr 9 '11 at 1:17
your question is built around the statement that GR and QM are "notoriously incompatible". This is not the case at least at the level of Bohr's answer which invokes only time-dilation.
So while one might not have a clear formulation of QFT in a curved spacetime, phenomena such as time-dilation, red-shift and others have been well tested with many "quantum" systems with no contradictions. Problems will arise when considering quantum systems which can appreciably affect the background gravitational field - as in the case of a neutron star or black hole, but this is not the case here. – user346 Apr 9 '11 at 4:12
5 It's one thing for two theories of nature to be incompatible with one another, but nature can't be incompatible with itself. Bohr was just clever enough to use the features of nature that were well understood and sufficient to resolve the apparent conundrum that Einstein created. The irony was that it used the equivalence principle, the very thought child of Einstein himself! – Raskolnikov Apr 9 '11 at 9:12
@ras: you have misunderstood the argument, which was about whether Quantum theory is self-consistent. The argument was never that Nature is not self-consistent. – Sklivvz♦ Jul 24 '11 at 0:08

## 2 Answers

Bohr realized that the weight of the device is measured by the displacement of a scale in spacetime. The clock's new position in the gravity field of the Earth, or any other mass, will change the clock rate by gravitational time dilation as measured from some distant point where the experimenter is located. The temporal metric term for a spherical gravity field is $1~-~2GM/rc^2$, where a displacement by some $\delta r$ means the change in the metric term is $\simeq~(GM/c^2r^2)\delta r$. Hence the clock's time interval $T$ is measured to change by a factor $$T~\rightarrow~T\sqrt{1~-~2GM\delta r/(c^2r^2)}~\simeq~T(1~-~GM\delta r/(c^2r^2)),$$ so the clock appears to tick slower. This changes the time span the clock keeps the door on the box open to release a photon. Assume that the uncertainty in the momentum is given by $\Delta p~\simeq~\hbar/\Delta r~<~Tg\Delta m$, where $g~=~GM/r^2$. Similarly the uncertainty in time is found as $\Delta T~=~(Tg/c^2)\delta r$. From this $\Delta T~>~\hbar/(\Delta m\,c^2)$ is obtained and the Heisenberg uncertainty relation $\Delta T\Delta E~>~\hbar$. This demands a Fourier transformation between position and momentum, as well as time and energy.

This argument by Bohr is one of those things I find myself re-reading; it is, in my opinion, one of those spectacularly brilliant events in physics.

This holds in some part at the quantum level with gravity, even if we do not fully understand quantum gravity. Consider the clock in Einstein's box as a black hole with mass $m$. The quantum periodicity of this black hole is given by some multiple of Planck masses. For a black hole of integer number $n$ of Planck masses the time it takes a photon to travel across the event horizon is $t~\sim~Gm/c^3~=~nT_p$, which are considered as the time intervals of the clock. The uncertainty in time the door to the box remains open is $$\Delta T~\simeq~Tg/c(\delta r~-~GM/c^2),$$ as measured by a distant observer. Similarly the change in the energy is given by $E_2/E_1~=~\sqrt{(1~-~2M/r_1)/(1~-~2M/r_2)}$, which gives an energy uncertainty of $$\Delta E~\simeq~(\hbar/T_1)g/c^2(\delta r~-~GM/c^2)^{-1}.$$ Consequently the Heisenberg uncertainty principle still holds, $\Delta E\Delta T~\simeq~\hbar$. Thus general relativity beyond the Newtonian limit preserves the Heisenberg uncertainty principle.
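The algebra in the weighing argument above can be checked symbolically. The sketch below treats the order-of-magnitude relations as equalities and uses the answer's own symbols ($\Delta T = Tg\,\delta r/c^2$, $\Delta m \sim \hbar/(Tg\,\delta r)$); it is my own illustration, confirming only that the two uncertainties combine to give $\hbar$, and is not part of Bohr's original argument.

```python
import sympy as sp

T, g, dr, hbar, c = sp.symbols('T g delta_r hbar c', positive=True)

dT = T * g * dr / c**2       # clock-rate uncertainty induced by the position uncertainty delta_r
dm = hbar / (T * g * dr)     # mass uncertainty from Delta p ~ hbar/delta_r < T*g*Delta m
dE = dm * c**2               # corresponding energy uncertainty via E = m c^2

print(sp.simplify(dT * dE))  # prints: hbar
```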
It is interesting to note that in the Newtonian limit this leads to a spread of frequencies $\Delta\omega~\simeq~\sqrt{c^5/G\hbar}$, which is the Planck frequency. The uncertainty $\Delta E~\simeq~\hbar/\Delta t$ does have a funny consequence: if the energy $\Delta E$ is larger than the Planck mass there is the occurrence of an event horizon. The horizon has a radius $R~\simeq~2G\Delta E/c^4$, which is the uncertainty in the radial position $R~=~\Delta r$ associated with the energy fluctuation. Putting this together with the Planckian uncertainty in the Einstein box we then have $$\Delta r\Delta t~\simeq~\frac{2G\hbar}{c^4}~=~{\ell}^2_{Planck}/c.$$ So this argument can be pushed to understand the nature of noncommutative coordinates in quantum gravity.

- I forgot to mention it, but your answer is actually very interesting. – Sklivvz♦ Jul 24 '11 at 0:07

How can Bohr invoke a General Relativity concept when Quantum Mechanics is notoriously incompatible with it?

You may have misheard, Sklivvz. General relativity is perfectly compatible with quantum mechanics. If you want the full and completely accurate theory that answers questions that depend on both GR and QM, in any regime, it is called string theory. But obviously, you don't need the sophisticated cannon of string theory to answer these Bohr-Einstein questions. String theory is only needed when the distances are as short as the Planck length, $10^{-35}$ meters, or energies are huge, and so on. Whenever you deal with ordinary distance scales, semiclassical GR is enough - a simple quantization of general relativity where one simply neglects all loops and other effects that are insanely small. And indeed, string theory does confirm (and any other hypothetical consistent theory would confirm) that those effects are small, suppressed by extra powers of $G$, $h$, or $1/c$.

And in this Bohr-Einstein case, you don't even need semiclassical general relativity. You don't really need to quantize GR at all. This is just about simple quantum mechanics in a pre-existing spacetime, and Bohr's correct answer to Einstein is just a simple comment about the spacetime geometry. The extreme phenomena that make it hard to unify QM and GR surely play no detectable role in this experiment. They don't even play much role in "quantum relativistic" phenomena such as the Hawking radiation: all of their macroscopic properties may be calculated with a huge accuracy.

Shouldn't HUP hold up with only the support of (relativistic) quantum mechanics?

Nope. The Heisenberg uncertainty principle is a principle that holds for all phenomena in the Universe. Moreover, it's a bit confusing why you wrote "only" in the context of relativistic quantum mechanics - relativistic quantum mechanics is the most universally valid framework to describe the reality because it includes both the quantum and relativistic "refinements" of physics (assuming that we do the relativistic quantum physics right - with quantum field theory and/or string theory). Einstein, in his claim that he could violate the uncertainty principle, used gravity, so it's not surprising that the error in Einstein's argument - one that Bohr has pointed out - has something to do with gravity, too. Because we talk about the uncertainty principle, you surely didn't want to say that we should be able to describe it purely in non-quantum language.
If you wanted to say that non-relativistic quantum mechanics should be enough to prove Einstein wrong, then it's not true because photons used in the experiment are "quantum relativistic" particles. In particular, the mass of a photon that he wants to measure is $m=E/c^2 = hf/c^2$. Because photons and electromagnetic waves in any description are produced at a finite frequency $f$, we cannot let $c$ go to infinity because the change of the mass $m=hf/c^2$ that Einstein proposed to measure (by a scale) would vanish, so he couldn't determine the change of the mass - and he wanted to calculate the change of the energy from the mass, so he wouldn't be able to determine the energy, either.

So Einstein's strategy to show that $E,t$ may be determined simultaneously uses effects that depend on the finiteness of both $h$ and $c$, so he is using relativistic quantum phenomena. To get the right answer or the right predictions of what will happen and what accuracy can be achieved, he should do so consistently and take into account all other relevant phenomena "of the same order" that also depend both on relativity and the quanta. The time dilation, as pointed out by Bohr, is one such effect that Einstein neglected, and if it is included, not surprisingly, HUP gets confirmed again.

Such proofs are somewhat redundant in theories that we know. Whenever we construct a quantum theory, whether the gravity is described relativistically or not, with fields or not, the uncertainty principle is automatically incorporated into the theory - by the canonical commutators - so it is never possible to find a measurement for which the theory predicts that HUP fails. This conclusion is safer than details of Bohr's particular "loophole" - I am not going to claim that Bohr's observation is the only (or the main) effect that Einstein neglected. There are probably many more.

- Hi Lubos, I agree with you, but I think you misread my question. I think that the Bohr retort is perfectly valid in the grander scheme of things. I just found it funny that it used a non-QM concept to support a QM thesis. – Sklivvz♦ Apr 9 '11 at 9:28
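To get a feel for the numbers behind the $m = hf/c^2$ paragraph, here is a rough back-of-the-envelope sketch (the specific frequency and all code are my own illustration, not from the answer): for a single visible-light photon, the mass change Einstein's scale would have to resolve is of order $10^{-36}$ kg.

```python
h = 6.626e-34       # Planck constant, J*s
c = 2.998e8         # speed of light, m/s
g = 9.81            # local gravitational acceleration, m/s^2

f = 5e14            # a visible-light photon, roughly 600 nm
m = h * f / c**2    # mass equivalent of the emitted photon's energy
print(m)            # ~3.7e-36 kg
print(m * g)        # corresponding change in weight, ~3.6e-35 N
```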
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496944546699524, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/208909-mvt.html
# Thread: 1. ## MVT Suppose f:[a,b]->R satisfies the following properties: 1. f is continuous on [a,b] and continuously differentiable on (a,b) 2. f(a) and f(b) are positive numbers 3. f(x)=0 for some x in (a,b) Prove that there exists some c in (a,b) such that f'(c)=0. I understand what the problem is asking and can draw a descriptive picture, but I'm having trouble putting it in general terms. Since f(a) and f(b) are both positive numbers, and there is some x such that f(x)=0, then the slope from f(a) to f(x) must be negative, and the slope from f(x) to f(b) must be positive. Since the function is continuous and continuously differentiable, then the derivative function must also be continuous. Can you show me how to prove that f'(c)=0? 2. ## Re: MVT Originally Posted by lovesmath Suppose f:[a,b]->R satisfies the following properties: 1. f is continuous on [a,b] and continuously differentiable on (a,b) 2. f(a) and f(b) are positive numbers 3. f(x)=0 for some x in (a,b) Prove that there exists some c in (a,b) such that f'(c)=0. The fact that $f$ is continuously differentiable means that $f'$ is continuous. Let $f(d)=0$ where $a<d<b$ $(\exists p\in (a,d))\left[f'(p)=\frac{f(d)-f(a)}{d-a}<0\right]$ Why? Can you finish? Find $f'(q)>0$. 3. ## Re: MVT Why isn't f'(p)=(f(d)-f(a))/(d-a)? Isn't that the definition of derivative? It would have to be less than zero since a<d, which means it has a negative slope. The same would be true for f'(q) except that it would be positive because d<b. So, f'(q)=((f(b)-f(d))/(b-d). 4. ## Re: MVT Originally Posted by lovesmath Why isn't f'(p)=(f(d)-f(a))/(d-a)? Isn't that the definition of derivative? It would have to be less than zero since a<d, which means it has a negative slope. The same would be true for f'(q) except that it would be positive because d<b. So, f'(q)=((f(b)-f(d))/(b-d). $f'(p)$ comes from the mean value theorem as does $f'(q)~.$ Use the intermediate value theorem on $f'(p)<0<f'(q)~.$
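For completeness, here is one way the two hints assemble into a finished argument (a standard write-up of the idea in the replies, not a quote from the thread); $d$ denotes the interior point with $f(d)=0$.

```latex
Since $f(a)>0$ and $f(d)=0$, the Mean Value Theorem on $[a,d]$ gives some $p\in(a,d)$ with
\[ f'(p)=\frac{f(d)-f(a)}{d-a}=\frac{-f(a)}{d-a}<0. \]
Likewise, on $[d,b]$ there is some $q\in(d,b)$ with
\[ f'(q)=\frac{f(b)-f(d)}{b-d}=\frac{f(b)}{b-d}>0. \]
Because $f$ is continuously differentiable, $f'$ is continuous on $[p,q]$, so the
Intermediate Value Theorem applied to $f'$ yields a point $c\in(p,q)\subseteq(a,b)$
with $f'(c)=0$.
```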
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9625860452651978, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/87649-double-integrals-problem.html
# Thread:

1. ## Double integrals problem

Well, my problem isn't with the double integral bit, it's just with the integration once I've switched the limits et cetera. The original problem is: $\int_{0}^{1}\int_{\sqrt{y}}^{1}2x^5y\cos{(xy^2)}dx dy$ ...and I'm told to solve it by changing the order of integration. I get: $\int_{0}^{1}\int_{0}^{x^2}2x^5y\cos{(xy^2)}dydx$ I know this forum's not here to check my work, but whether my new limits are correct or not, I still don't know how to integrate $2x^5y\cos{(xy^2)}$ with respect to y, taking x as a constant. Can anyone help?

2. Originally Posted by chella182
Well, my problem isn't with the double integral bit, it's just with the integration once I've switched the limits et cetera. The original problem is: $\int_{0}^{1}\int_{\sqrt{y}}^{1}2x^5y\cos{(xy^2)}dx dy$ ...and I'm told to solve it by changing the order of integration. I get: $\int_{0}^{1}\int_{0}^{x^2}2x^5y\cos{(xy^2)}dydx$ I know this forum's not here to check my work, but whether my new limits are correct or not, I still don't know how to integrate $2x^5y\cos{(xy^2)}$ with respect to y, taking x as a constant. Can anyone help?
I didn't check your change of order of integration but use this sub to evaluate the integral $u=xy^2 \implies du=2xydy$ and we get $\int_{0}^{1}\int_{0}^{x^2}2x^5y\cos{(xy^2)}dydx$ $\int_{0}^{1}\int_{0}^{x^5} x^4\cos(u)\,du\,dx$

3. Originally Posted by chella182
Well, my problem isn't with the double integral bit, it's just with the integration once I've switched the limits et cetera. The original problem is: $\int_{0}^{1}\int_{\sqrt{y}}^{1}2x^5y\cos{(xy^2)}dx dy$ ...and I'm told to solve it by changing the order of integration. I get: $\int_{0}^{1}\int_{0}^{x^2}2x^5y\cos{(xy^2)}dydx$ I know this forum's not here to check my work, but whether my new limits are correct or not, I still don't know how to integrate $2x^5y\cos{(xy^2)}$ with respect to y, taking x as a constant. Can anyone help?
Your reordering of the integration is correct. Proceeding from there, let $u=y^2$. Thus $du = 2y\,dy$ You now have $\int_0^1\int_0^{x^2}x^5\cos(xu)\,du = \frac{1}{x}\cdot x^5\sin(xu) = \left[x^4\sin(xy^2)\right]_0^{x^2} = x^4\sin(x^5)$ So we have $\int_0^1 x^4\sin(x^5)\,dx$ Let $u=x^5$ and $du=5x^4\,dx$. Now you have: $\frac{1}{5}\int_0^1\sin(u)\,du = -\frac{1}{5}\cos(u) = \left[-\frac{1}{5}\cos(x^5)\right]_0^1 = \boxed{\frac{1}{5}-\frac{1}{5}\cos(1)}$

4. I totally don't understand the second line of your working, sorry where does the $2y$ disappear to? And the $x^4$ in the second integral? And I don't get where the $\frac{1}{x}$ has come from

5. $2y$ is the derivative of $y^{2}$. All he did was a u-substitution.

6. No, I know that, where does it disappear to though?

7. Into the $du$. It never "disappears", it's just part of his new variables $u$ and $du$. And the $\frac{1}{x}$ is part of the integration of $cos(xy)$. The antiderivative of $cos(xy)$ where $x$ is treated as a constant is $\frac{sin(xy)}{x}$.

8. So because the $2y$ appears in the $du$ it's not in the problem anymore? :S I'm well confused. I get where the $\frac{1}{x}$ comes from now though, cheers.

9. It's just u-substitution. Just replace the $y^{2}$ with a $u$ and $2ydy$ with a $du$. Don't let the fact that it's a double integral throw you off. It's just a straightforward u-substitution.

10. I know I just... don't get it :S urgh. I'll go see my lecturer tomorrow if I get chance.

11. Let's look at a single integral: $\int 2ycos(y^{2})\,\,dy$ Let $u=y^{2}$ and $du=2ydy$.
Now just substitute: $\int cos(u)\,\,du$ Now just integrate: $\int cos(u)\,\,du = sin(u)$ Now just resubstitute for $u=y^{2}$: $sin(y^{2})$. Take the derivative to clarify: $\frac{d}{dy}sin(y^{2})=cos(y^{2})\cdot \frac{d}{dy}y^{2} = 2ycos(y^{2})$ All you're doing is substituting one type of variable for another. The actual function isn't changed, just the way it's being represented. 12. Ohhh right, I get it. All you needed to do to explain was say $dy=\frac{du}{2y}$ so the $2y$'s cancel 13. It hasn't disappeared. $du=2ydy$ I'm just representing $2ydy$ as $du$ so I can have an easier integral to evaluate. And yes, you can think of it that way if you want. Whatever clicks with you. 14. It has cancelled out therefore it's disappeared in my mind. I do understand the whole thing, all I didn't get was why that disappeared 'cause no one was telling me where it'd gone. Although this does seem rather more complicated than examples we've done in lectures :S
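As a quick numerical cross-check of the thread's final answer (the code below is my own sketch, not part of the thread), both orders of integration agree with the closed form $\tfrac{1}{5}-\tfrac{1}{5}\cos(1)\approx 0.0919$.

```python
import numpy as np
from scipy.integrate import dblquad

integrand = lambda xx, yy: 2 * xx**5 * yy * np.cos(xx * yy**2)

# Original order: outer y in [0, 1], inner x in [sqrt(y), 1]
val_orig, _ = dblquad(lambda x, y: integrand(x, y), 0, 1,
                      lambda y: np.sqrt(y), lambda y: 1)

# Reversed order: outer x in [0, 1], inner y in [0, x^2]
val_swap, _ = dblquad(lambda y, x: integrand(x, y), 0, 1,
                      lambda x: 0, lambda x: x**2)

closed_form = (1 - np.cos(1)) / 5
print(val_orig, val_swap, closed_form)   # all ~0.0919
```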
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 52, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960809051990509, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/45320/why-does-an-object-with-higher-speed-gain-more-relativistic-mass
# Why does an object with higher speed gain more (relativistic) mass?

Today, in my high school physics class, we had an introductory class on electromagnetism. My teacher explained at some point that an object with a very high speed (he said it started to get somewhat clearly noticeable when travelling at 10% of the speed of light) will gain mass, and that that's the reason why you can't go faster than light.

One of my classmates then asked, why is this so? Why does an object with higher speed gain more mass? This of course is a logical question, since it is not very intuitive that a higher speed leads to a higher mass. My teacher (to my surprise) responded saying that it is a meaningless question, we don't know why, in the same way we don't know why the universe was created and those kinds of philosophical questions. I, being interested in physics, couldn't believe this; I was sure that what he said wasn't true. So after a while of thinking I responded saying: Can't we describe it with Einstein's $E=mc^2?$ If an object gains speed, it gains more (kinetic) energy. With this equality we see that the more energy an object gets, the more massive it becomes. He then replied saying that this formula is used for different cases, whereupon he gave a vague explanation as to when it is used. He gave me an example to show what I said was incorrect; when a car goes from $10\ m/s$ to $40\ m/s$, according to what I said we would see a big increase in mass, and we don't (this sounded logical to me).

So here I am, with the following questions:
• Why does an object with a higher speed have more mass (than the same object with a smaller speed)?
• When is $E=mc^2$ used and why is my argument incorrect in explaining this phenomenon?

- I am wondering if you could explain it with work. Work is a change in kinetic energy and $E_{k}=\frac{mv^2}{2}$, so once you start to max out on a velocity due to the limit that light places on us, if you did more work you would just increase the mass? – user24048 May 6 at 3:02

## 2 Answers

In fact you are more or less correct. I assume the increase in mass mentioned is that described in special relativity. The example given by your teacher is incorrect. As the speeds of 10 m/s and 40 m/s are hardly relativistic, we can for now assume $E=mc^2$. Increasing the kinetic energy by $\frac{1}{2}mv^2$ thus increases the mass by $$\frac{\frac{1}{2}mv^2}{c^2}=\frac{1}{2}m\frac{v^2}{c^2}$$ This, in fact, is INCREDIBLY small, due to the hugeness of $c$.

Now back to why the mass of an object increases. According to special relativity, mass and energy are in fact equivalent. Although not related by $E=mc^2$ (actually $E=\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}$), the equivalence means that an increase in the velocity of an object will, yes, increase its kinetic energy and thus its mass.

- Oh really? Wow, after he said what he said I actually even rebutted his car argument saying that $c$ is very big so it would be unnoticeable, but then I just gave up. – user14445 Nov 28 '12 at 16:08
I find your physics teacher quite entertaining and misleading. Bad for you :P – namehere Nov 28 '12 at 16:10
I'd be careful with claims that the instructor is wrong. Certainly at everyday velocities the Lorentz factor is vanishingly small, but the theory is clear: all the mindblowing relativistic effects are present in some tiny amount at any non-zero relative velocity. – dmckee♦ Nov 28 '12 at 18:03
@dmckee What do you mean, how does this correspond with my teacher's claims?
– user14445 Nov 28 '12 at 18:27
@user14445 The change in "relativistic mass" happens at any non-zero speed. However, it is suppressed by factors of $v^2/c^2$ so it is very small at everyday speeds. If I find time later I may write a brief note on the origin of this effect, as both the answers you have here simply repeat the statement that it happens. – dmckee♦ Nov 28 '12 at 18:31

If you look at an object at rest, and then you look at the object at some constant speed $\vec{v}\neq0$, the special theory of relativity tells you how things change. There is an invariant (i.e. non-changing) mass which we call the rest mass $m_0$, and there is a "relativistic" mass $m$ which changes.

You have a static particle near you; do some measurements and the mass that you will obtain is $m_0$. Now, set the particle into movement in a straight line with constant velocity $\vec{v}$ and measure the mass $m$. You will find that the following is true: $$m=\gamma m_0$$ where $\gamma\equiv\gamma(v)$, the Lorentz factor, is a function of the speed $v$ of the object, $$\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$$ and $c$ is the speed of light in the vacuum. You can see that this mass $m$, in the limit $v\to c$, becomes infinite, thus making it impossible for the object to be moved.

However, I think it is more correct to think of $m$ as an inertia, in the Newtonian sense (ignoring the vectorial character of force and acceleration) $$F=ma\Rightarrow a =\frac{F}{m}$$ Now fix the force $F$. For a heavy object, $a$ will be smaller than for a light object, thus we can interpret $m$ as the number that tells us how easy it is to move that particle. In this sense, we see how $m$ increases with $v$, and thus it is harder to move the particle the faster it goes.

The relativistic energy is given by $$E^2=p^2c^2+m^2c^4$$ where $p$ is the momentum of the particle. If you have your particle at rest ($p\propto v =0$) then it is true that $$E=mc^2$$ -
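To put numbers on both answers (the figures below are my own illustration, not from either answer): the Lorentz factor barely differs from 1 at car speeds, but at $0.1c$ the mass increase of a one-tonne object is already a few kilograms.

```python
import math

c = 2.998e8  # speed of light, m/s

def gamma_minus_1(v):
    # gamma - 1, computed in a form that avoids cancellation for small v
    b2 = (v / c) ** 2
    s = math.sqrt(1.0 - b2)
    return b2 / (s * (1.0 + s))

m0 = 1000.0  # rest mass of a ~1 tonne car, kg
for v in (10.0, 40.0, 0.1 * c):
    dm = gamma_minus_1(v) * m0
    print(f"v = {v:.3g} m/s  ->  mass increase ~ {dm:.2e} kg")
# roughly 5.6e-13 kg, 8.9e-12 kg and 5.0 kg respectively
```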
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9652705192565918, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/4021/st-petersburg-lottery-pricing-short-investing-horizons/4038
# St Petersburg lottery pricing & short investing horizons

I am a statistician (no solid background in finance). Please forward me to a book/chapter/paper to resolve the following general question.

Suppose we have a stock with the following monthly return distribution: P(R=1%)=0.999, P(R=10000%)=0.001. The mean monthly return is about 11%, which is very good. Still, for typical investors with a short horizon (say, 1-2 years), the probability to get anything over 1% per month is very small, so the stock is not as attractive as the average return would imply. That means the price of such a stock should go down until it reaches a more acceptable profile P(R=3%)=0.99; this stock will tend to have a lower P/E because of its uncomfortable return distribution.

Now suppose there is a thousand such stocks, and their returns are independent. In this case, taken as a group, they have a good return profile (return jumps are no longer rare), so the group should not be punished with a lower P/E.

So, should such a stock be priced individually (based on its return profile), or in a group of similar stocks?

- Thanks for your thoughtful answers, guys. Now, let us replace "stock" with "asset" or "strategy" and consider Nassim Taleb's strategy of constantly buying puts on the whole market. The strategy has about the same profile of bleeding money almost all the time, except rare market crashes when it profits a lot. Taleb's logic (I guess) is that the market "pays" for the psychological discomfort brought by such a strategy and by the long horizon. It seems logical, but how can we quantify it? Is this particular strategy's risk diversifiable or not? – Tim Aug 29 '12 at 11:41
1 Hi Tim -- Taleb's reasoning is actually slightly different. He calls these impactful outliers "black swans". His option strategies take advantage of the fact that stock returns are usually assumed to be log-normally distributed, meaning such outliers are ignored in pricing options, and so they're too cheap. Therefore, it's a form of "model arbitrage". To quantify it, you could examine option prices under non-normal distributions accounting for excess kurtosis and skew. It is partially diversifiable since there are instruments correlated with the strategy -- so some risk can be mitigated. – jlowin Aug 29 '12 at 12:24
This is useful, but I seek understanding on a somewhat higher level. My formulation was not exactly what I wanted to ask. I will try to reformulate in a separate question. – Tim Aug 29 '12 at 13:24
Ok! I'll look out for your question. There are many people here who are excellent teachers and I'm sure someone will be able to help. – jlowin Aug 29 '12 at 13:35

## 2 Answers

What a great question -- it touches on many issues at the core of quantitative finance. This answer might be a lot more than you bargained for, but it's too interesting to pass up.

References

Mostly, this subject falls somewhere at the intersection of these three highly-interrelated topics: risk-neutral valuation, rational pricing and the fundamental theorem of asset pricing. In short: the prices investors are willing to pay depend on perceived risks. So, how can we decide what risks determine prices? As your question illustrates, risk-neutral pricing is one of the most difficult areas to grasp, as it often defies intuition. As such, it is rather hard to find papers that aren't full of math or that require little knowledge of finance.
However, I think these three may fit the bill:
• Risk Neutral Valuation: A Gentle Introduction (Part 1) -- This appropriately-titled but lengthy piece uses a situation similar (but not the same! see below) to the one you describe to motivate its analysis.
• What is... a Free Lunch? -- A very quick, largely non-academic introduction to arbitrage-free pricing and the fundamental theorem of asset pricing. Its brevity may make it a tempting starting place, but it might be better to read it second, as it is more applied than the first reference. Note also that the words "risk-neutral" don't appear anywhere in it despite forming the crux of its argument; remember I said these topics are quite related.
• Risk-Neutral Probabilities Explained -- This one is probably the most complete treatment, and starts in a more traditional manner using Arrow securities. In disclosure, I wasn't familiar with this one before answering your question, but I found it paired with my first choice on the Wikipedia page for rational pricing, and after reading it thought it would be appropriate.

Some Background

Now, if you're still reading -- or still care -- I'll try to flesh out why these fields are related to your question. You hit the nail on the head: people have subjective judgements of value, based on risk aversion or their own reads of outcome probabilities. Moreover, these opinions are swayed by context (as you set up with a portfolio, or a short horizon). That's hardly the basis for an objective pricing framework, and yet we know that risk impacts price. How can we make claims about pricing if we can't even agree on where to start?

Our jumping-off point is the idea that a security's price is the expectation of its future value. To a single investor, that value is conditional on a set of subjectively-determined probabilities. The motivation for rational or risk-neutral pricing is to find the set of probabilities under which any investor becomes indifferent to risky outcomes (hence, "risk-neutral"). While that may sound like replacing one problem with another one, there are actually very few ways to do that consistently across all securities in the market, meaning without allowing arbitrage opportunities to arise. More explicitly, a risk-neutral probability measure makes the expected return on an asset equal to the prevailing risk-free rate. As long as a market is "complete", the risk-neutral probability measure can be derived for all assets at once, resulting in a unified -- and objective -- pricing framework. The lightbulb really went off in 1979, when Cox, Ross and Rubinstein first used these ideas to price options. Note -- I don't expect to have convinced anyone with these few paragraphs; please see the attached references for a complete explanation!

Now, the truth is that your question is somewhat boring, in an asset pricing sense, because you have specified the complete return distribution of the security in question. (In fact, I would argue that in many ways, the role of a quant comes down to estimating return distributions.) The fundamental theorem tells us that the asset price must therefore be its (objective) expected value, or arbitrage opportunities would result. Things become much more interesting if we didn't know the probabilities, or the future prices, or both -- and that's the reason risk-neutral pricing was developed.

A more direct answer, kind of...
So, though risk-neutral probabilities are derived from the many securities in a market, I don't consider that the same as your question's "price as a portfolio" option. It is rare to find a security that prices completely in isolation; interest rates or other underliers usually play some role. Generally, as long as those components are known to be arbitrage-free, there is no need to go through the risk-neutral exercise when pricing something that derives from them. Moreover, I interpret your question as using the portfolio as a means to alter perceived risk; you could just as easily have increased the horizon and not invoked other securities. So to be clear, I'm treating your question as asking "how do different perceived risks impact security prices" rather than "are securities priced in isolation or in a portfolio context?"

So the answer might actually be (c) none of the above: securities are priced individually, but in risk-neutral market contexts. For all intents and purposes, I believe this resolves to your "individual" pricing, but perhaps for a different reason than you expected.

...and a Little MPT

Finally, as a statistician you may be interested in how to quantify the subjective aspects of your question. This comes down to the increasingly-misnamed "modern portfolio theory", introduced in 1952 and for which Harry Markowitz won a Nobel Prize. Note that while these concepts are very important, I'd be surprised if you found a professional using them in practice today.

Let's generalize your question by saying that these securities, call them $S$, have a Bernoulli distribution with probability $p$. If we consider multiple holding periods then they would be binomially distributed, but since that will come out to a linear scale I disregard it here. One security's expected return is therefore $p$ with variance $p(1-p)$. Compare that to holding a portfolio of $n$ such securities, allocating $\frac{1}{n}$ of your capital to each. The expected portfolio return is $$E\left[ \sum_{i=1}^n\frac{1}{n}S_i\right] = E[S] = p$$ and its variance is $$Var\left[\sum_{i=1}^n\frac{1}{n}S_i\right] = \frac{Var[S]}{n} = \frac{p(1-p)}{n}.$$ This is diversification at work! Same expected return, but lower volatility.

You ask: if this new portfolio is obviously better, shouldn't I pay more for it? And traders immediately start lining up to sell it to you in a particularly expensive illustration of the law of no-arbitrage.

- Thanks a lot! I will have to read the papers; the main problem to me is understanding the logic of valuation / risk, and your excellent answer helps a lot. – Tim Aug 29 '12 at 11:22
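A quick Monte Carlo of the question's own setup makes the diversification point concrete (the simulation below is my own sketch, not part of either answer): a single stock and a 1000-stock equal-weighted basket have the same ~11% expected monthly return, but the basket's volatility is roughly $\sqrt{1000}\approx 32$ times smaller.

```python
import numpy as np

rng = np.random.default_rng(0)
p, r_small, r_big = 0.001, 0.01, 100.0      # P(R=1%)=0.999, P(R=10000%)=0.001
n_stocks, n_sims = 1000, 1_000_000

# One such stock held on its own
single = np.where(rng.random(n_sims) < p, r_big, r_small)

# Equal-weighted basket: k of the 1000 independent stocks jump, k ~ Binomial(1000, p)
k = rng.binomial(n_stocks, p, size=n_sims)
basket = (k * r_big + (n_stocks - k) * r_small) / n_stocks

for name, x in (("single stock", single), ("1000-stock basket", basket)):
    print(f"{name:18s}  mean {x.mean():.3f}   std {x.std():.3f}")
# Both means are ~0.110; the single stock's std is ~3.16, the basket's ~0.10.
```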
See for instance Malkiel and Xu (2006): http://www.utdallas.edu/~yexiaoxu/IVOT_H.PDF. Then again others have found that stocks with high idiosyncratic volatility actually have lower expected returns, as in Ang, Hodrick, Xing and Zhang (2006). - Thank you! Will read the papers. – Tim Aug 29 '12 at 11:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9570682048797607, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2010/06/09/equicontinuity-convergence-in-measure-and-convergence-in-mean/?like=1&source=post_flair&_wpnonce=9a71b83933
The Unapologetic Mathematician

Equicontinuity, Convergence in Measure, and Convergence in Mean

First off we want to introduce another notion of continuity for set functions. We recall that a set function $\nu$ on a class $\mathcal{E}$ is continuous from above at $\emptyset$ if for every decreasing sequence of sets $E_n\in\mathcal{E}$ with $\lim_nE_n=\emptyset$ we have $\lim_n\nu(E_n)=0$. If $\{\nu_m\}$ is a sequence of set functions, then we say the sequence is "equicontinuous from above at $\emptyset$" if for every sequence $\{E_n\}\subseteq\mathcal{E}$ decreasing to $\emptyset$ and for every $\epsilon>0$ there is some number $N$ so that if $n\geq N$ we have $\lvert\nu_m(E_n)\rvert<\epsilon$ for every $m$. It seems to me, at least, that this could also be called "uniformly continuous from above at $\emptyset$", but I suppose equicontinuous is standard.

Anyway, now we can characterize exactly how convergence in mean and measure differ from each other: a sequence $\{f_n\}$ of integrable functions converges in the mean to an integrable function $f$ if and only if $\{f_n\}$ converges in measure to $f$ and the indefinite integrals $\nu_n$ of $\lvert f_n\rvert$ are uniformly absolutely continuous and equicontinuous from above at $\emptyset$.

We've already shown that convergence in mean implies convergence in measure, and we've shown that convergence in mean implies uniform absolute continuity of the indefinite integrals. All we need to show in the first direction is that if $\{f_n\}$ converges in mean to $f$, then the indefinite integrals $\{\nu_n\}$ are equicontinuous from above at $\emptyset$.

For every $\epsilon>0$ we can find an $N$ so that for $n\geq N$ we have $\lVert f_n-f\rVert_1<\frac{\epsilon}{2}$. The indefinite integral of a nonnegative a.e. function is real-valued, countably additive, and nonnegative, and thus is a measure. Thus, like any measure, it's continuous from above at $\emptyset$. And so for every sequence $\{E_m\}$ of measurable sets decreasing to $\emptyset$ there is some $M$ so that for $m\geq M$ we find

$\displaystyle\begin{aligned}\int\limits_{E_m}\lvert f_n-f\rvert\,d\mu&<\frac{\epsilon}{2}\\\int\limits_{E_m}\lvert f\rvert\,d\mu&<\frac{\epsilon}{2}\end{aligned}$

the first for all $n$ from $1$ to $N$. Then if $m\geq M$ we have

$\displaystyle\lvert\nu_n(E_m)\rvert=\int\limits_{E_m}\lvert f_n\rvert\,d\mu\leq\int\limits_{E_m}\lvert f_n-f\rvert\,d\mu+\int\limits_{E_m}\lvert f\rvert\,d\mu<\epsilon$

for every positive $n$. We control the first term in the middle by the mean convergence of $\{f_n\}$ for $n\geq N$ and by the continuity from above of $\int_E\lvert f_n-f\rvert\,d\mu$ for $n\leq N$. And so the $\nu_n$ are equicontinuous from above at $\emptyset$.

Now we turn to the sufficiency of the conditions: assume that $\{f_n\}$ converges in measure to $f$, and that the sequence $\{\nu_n\}$ of indefinite integrals is both uniformly absolutely continuous and equicontinuous from above at $\emptyset$. We will show that $\{f_n\}$ converges in mean to $f$.

We've shown that $N(f_n)=\{x\in X\vert f_n(x)\neq0\}$ is $\sigma$-finite, and so the countable union $E_0=\bigcup\limits_{n=1}^\infty N(f_n)$ of all the points where any of the $f_n$ are nonzero is again $\sigma$-finite. If $\{E_n\}$ is an increasing sequence of measurable sets with $\lim_nE_n=E_0$, then the differences $F_n=E_0\setminus E_n$ form a decreasing sequence $\{F_n\}$ converging to $\emptyset$.
Equicontinuity then implies that for every $\delta>0$ there is some $k$ so that $\nu_n(F_k)<\frac{\delta}{2}$, and thus

$\displaystyle\int\limits_{F_k}\lvert f_m-f_n\rvert\,d\mu\leq\int\limits_{F_k}\lvert f_m\rvert\,d\mu+\int\limits_{F_k}\lvert f_n\rvert\,d\mu=\nu_m(F_k)+\nu_n(F_k)\leq\frac{\delta}{2}+\frac{\delta}{2}=\delta$

For any fixed $\epsilon>0$ we define

$\displaystyle G_{mn}=\left\{x\in X\big\vert\lvert f_m-f_n\rvert\geq\epsilon\right\}$

and it follows that

$\displaystyle\begin{aligned}\int\limits_{E_k}\lvert f_m-f_n\rvert\,d\mu&\leq\int\limits_{E_k\cap G_{mn}}\lvert f_m-f_n\rvert\,d\mu+\int\limits_{E_k\setminus G_{mn}}\lvert f_m-f_n\rvert\,d\mu\\&\leq\int\limits_{E_k\cap G_{mn}}\lvert f_m-f_n\rvert\,d\mu+\epsilon\mu(E_k)\end{aligned}$

By convergence in measure and uniform absolute continuity we can make the integral over $E_k\cap G_{mn}$ arbitrarily small by choosing $m$ and $n$ sufficiently large. We deduce that

$\displaystyle\limsup\limits_{m,n\to\infty}\int\limits_{E_k}\lvert f_m-f_n\rvert\,d\mu\leq\epsilon\mu(E_k)$

and, since $\epsilon>0$ was arbitrary, we conclude

$\displaystyle\lim\limits_{m,n\to\infty}\int\limits_{E_k}\lvert f_m-f_n\rvert\,d\mu=0$

Now we can see that

$\displaystyle\int\lvert f_m-f_n\rvert\,d\mu=\int\limits_{E_0}\lvert f_m-f_n\rvert\,d\mu=\int\limits_{E_k}\lvert f_m-f_n\rvert\,d\mu+\int\limits_{F_k}\lvert f_m-f_n\rvert\,d\mu$

and thus

$\displaystyle\limsup\limits_{m,n\to\infty}\int\lvert f_m-f_n\rvert\,d\mu<\delta$

and, since $\delta>0$ is arbitrary,

$\displaystyle\lim\limits_{m,n\to\infty}\int\lvert f_m-f_n\rvert\,d\mu=0$

That is, the sequence $\{f_n\}$ is Cauchy in the mean. But we know that the $L^1$ norm is complete, and so $\{f_n\}$ converges in the mean to some function $g$. But this convergence in mean implied convergence in measure, and so $\{f_n\}$ converges in measure to $g$, and thus $f=g$ almost everywhere.

Posted by John Armstrong | Analysis, Measure Theory
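As a sanity check on why the extra hypotheses on the $\nu_n$ are genuinely needed, here is a standard example (my addition, not from the post) of a sequence that converges in measure but not in mean.

```latex
On $[0,1]$ with Lebesgue measure $\mu$, let $f_n = n\,\chi_{[0,1/n]}$ and $f = 0$. For any
$\epsilon>0$ we have $\mu\{x : \lvert f_n(x)-f(x)\rvert \ge \epsilon\} \le 1/n \to 0$, so
$f_n \to f$ in measure, and yet
\[ \int \lvert f_n - f\rvert\,d\mu = n\cdot\tfrac{1}{n} = 1 \not\to 0, \]
so there is no convergence in mean. Accordingly, the indefinite integrals $\nu_n$ of
$\lvert f_n\rvert$ fail to be uniformly absolutely continuous: $\nu_n([0,1/n]) = 1$ even
though $\mu([0,1/n]) \to 0$.
```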
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 88, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363145232200623, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/92634-find-equation-tangent-line-curve-print.html
Find the equation of the tangent line to the curve.

• June 12th 2009, 08:21 AM
mant1s
Find the equation of the tangent line to the curve.
Hi guys,
Could someone take a look at my work.. im lost.
Problem:
Code:

Find the equation of the tangent line to the curve y = 4 tan x at the point ( pi/4 , 4). The equation of this tangent line can be written in the form y = mx+b

my work..
Code:

using y-y1 = m(x-x1)
y - 4tan(x) = 4tan(x - (pi/4))
y - 4tan(x) = 4tan(x) - (pi tan(x))
y = 8tan(x) - pi tan(x)
m = 8tan(x)
b = pi tan(x)

Where did I go wrong? How do I get the equation from the point and the tangent line?
• June 12th 2009, 08:43 AM
Rachel.F
http://img38.imageshack.us/img38/2411/73889329.th.jpg
is it the answer?
• June 12th 2009, 08:45 AM
Chris L T521
Quote:

Originally Posted by mant1s
Hi guys,
Could someone take a look at my work.. im lost.
Problem:
Code:

Find the equation of the tangent line to the curve y = 4 tan x at the point ( pi/4 , 4). The equation of this tangent line can be written in the form y = mx+b

my work..
Code:

using y-y1 = m(x-x1)
y - 4tan(x) = 4tan(x - (pi/4))
y - 4tan(x) = 4tan(x) - (pi tan(x))
y = 8tan(x) - pi tan(x)
m = 8tan(x)
b = pi tan(x)

Where did I go wrong? How do I get the equation from the point and the tangent line?

The function is $y=4\tan x$. To find the slope for any x value, differentiate it. We now see that $y^{\prime}=4\sec^2x$. Since we're looking for the eqn of a tangent at $\left(\tfrac{\pi}{4},4\right)$, we want to find the slope of the function at $\frac{\pi}{4}$. So it follows that $y^{\prime}\!\left(\tfrac{\pi}{4}\right)=4\sec^2\left(\tfrac{\pi}{4}\right)=4\left(\sqrt{2}\right)^2=4\cdot2=8$. This is your slope in your tangent line equation. So using the point-slope equation, we have $y-4=8\left(x-\tfrac{\pi}{4}\right)\implies y=8x-2\pi+4$

Does this make sense?
• June 12th 2009, 08:57 AM
mant1s
Rachel and Chris,
You guys rock! It makes sense to me now. I wasn't differentiating and it was screwing me up royally.
Thanks for your time!
-M
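A quick symbolic check of the accepted answer (the snippet is my own sketch, not from the thread) reproduces both the slope and the tangent line.

```python
import sympy as sp

x = sp.symbols('x')
f = 4 * sp.tan(x)

m = sp.diff(f, x).subs(x, sp.pi / 4)          # slope: 4*sec^2(pi/4) = 8
tangent = sp.expand(m * (x - sp.pi / 4) + 4)  # point-slope form through (pi/4, 4)

print(m)        # 8
print(tangent)  # 8*x - 2*pi + 4 (term order may vary)
```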
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367304444313049, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/319431/how-was-the-isoperimetric-inequality-formulated
# How was the isoperimetric inequality formulated?

I'm trying to understand how the isoperimetric inequality came into existence. It seems like finding the region which yields maximum area when enclosed by a curve of fixed length is an old problem. Queen Dido seems to have figured out the solution a long time ago: that the region should be a circle. I'm currently reading about Steiner's method of characterizing how certain regions cannot yield maximum area.

My question, however, is how the following inequality was formulated: $4\pi A\leq L^2$ where $A$ is the area of the region which we will be considering and $L$ is the length of the curve which encloses it.

I understand we are trying to find a relationship between the area of the region and the length of the curve. I expect the area to be a function of the length perhaps ($A \sim f(L)$, where $\sim$ represents some relation). In this case it happens that we have $A \leq \frac{1}{4\pi}L^2$. My question is simply -- why?

EDIT Given a region $\mathbb{L}$ with $A=Area(\mathbb{L})$ and a curve with a fixed length of $L$, why do we have that $4\pi A\leq L^2$?

- If you believe that the optimal curve is a circle then surely you can figure out the relationship between its area and circumference yourself. – Rahul Narain Mar 3 at 13:10
Are you saying that inequality is purely based on the circle? If so, what you're saying doesn't make sense as the isoperimetric inequality only says that the region is a circle if $A=\frac{1}{4\pi}L^2$ – Adeeb Mar 3 at 13:23
2 For fixed length the circle has the maximum area $\iff$ for fixed length no curve has more area than the circle $\iff$ for fixed length no curve has area more than $\frac1{4\pi}L^2$ $\iff$ $A\le\frac1{4\pi}L^2$. Of course, to prove the premise that the circle has the maximum area, you have the whole rest of the proof of the isoperimetric inequality. – Rahul Narain Mar 3 at 20:19
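Spelling out the arithmetic behind the constant in the last comment (a short worked note, not part of the question or comments):

```latex
A circle of circumference $L$ has radius $r = \frac{L}{2\pi}$ and hence area
\[ A_{\text{circle}} = \pi r^2 = \pi\left(\frac{L}{2\pi}\right)^2 = \frac{L^2}{4\pi}. \]
So if the circle maximizes area among all closed curves of length $L$, then every region
enclosed by such a curve satisfies $A \le \frac{L^2}{4\pi}$, i.e. $4\pi A \le L^2$, with
equality exactly when the curve is a circle.
```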
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9656329154968262, "perplexity_flag": "head"}
http://mathoverflow.net/questions/981?sort=oldest
## Rational maps with all critical points fixed

What can be said about rational self-maps of $\mathbb P^1$ for which all critical points are also fixed points?

If all but one of the fixed points are critical, there is a characterization in http://arxiv.org/abs/math/0411604v1 (see Corollary 1 and the discussion just after the statement). Still assuming that all critical points are fixed: Is it possible to bound the degree of the rational map if all but two of the fixed points are critical? I think that the answer is probably no, but I would really love to hear the contrary.

Motivation. The question is motivated by a rather specific problem I like to think about from time to time. It concerns the classification of some special arrangements of lines on the projective plane. More specifically, I would like to classify arrangements of $3d$ lines (or rather hyperplanes through the origin of $\mathbb C^3$) invariant by degree $d$ homogeneous polynomial vector fields on $\mathbb C^3$. Given one arrangement like that one can produce a degree $d$ rational map having all its critical points fixed.

- 1 Do you want almost all the critical points to be fixed, or almost all the fixed points to be critical? In your title and first question you ask for the first, but in the arxiv link and the third paragraph you ask for the latter. – David Speyer Nov 9 2009 at 2:11
I am interested in maps having all their critical points fixed. In the Arxiv link the paragraph following the statement of the Corollary discusses exactly this. In the third paragraph I am implicitly assuming that all the critical points are fixed. Thanks. – jvp Nov 9 2009 at 2:37

## 4 Answers

Consider $$z \mapsto \frac{(n-2) z^n - n z}{n z^{n-1} - (n-2)}.$$ This has $n+1$ fixed points, at $0$, $\infty$, and the $(n-1)$-st roots of $-1$. The only critical points are the roots of $-1$, each of which is ramified of index $3$. So this is a map with all critical points fixed, and all fixed points but two critical.

I am tempted to leave it at that. But, being a nice person, I will explain how I found this. Moreover, I will show that this is (up to conjugacy) the only degree $n$ map with $n+1$ distinct fixed points, two of which are not critical and the rest of which are critical with multiplicity $2$ (ramification index $3$.)

We can take the two noncritical fixed points to be $0$ and $\infty$. So our map is of the form $$z \mapsto z - z p/q,$$ where $p$ and $q$ are of degree $n-1$ and relatively prime. The $n-1$ fixed points other than $0$ and $\infty$ are the roots of $p$. The derivative of this map is $$\frac{q^2 - zqp' + zpq' - pq}{q^2}.$$ The condition that the fixed points other than $0$ and $\infty$ be critical means $p$ divides the numerator. So $p$ divides $q (q-zp')$ and, as $p$ and $q$ are relatively prime, we conclude that $q - z p' = kp$. Checking degrees, $k$ has degree $0$ and is thus a constant, to be determined later.

Now, we want to impose the stronger condition that every zero of $p$ be doubly a critical point, so the numerator is $\ell p^2$ for some constant $\ell$. Plugging in $q = kp + z p'$, and simplifying $$\ell p^2 = p \left( k(k-1) p + 2 k z p' + z^2 p'' \right).$$ Cancelling $p$ from both sides, $$\ell p = k(k-1) p + 2 k z p' + z^2 p''.$$ Plugging in $z=0$, and noting that $p(0) \neq 0$, we get $\ell = k(k-1)$. So $$2 kz p' + z^2 p'' =0.$$ The solution to this differential equation is $p = C z^{1-2k} + D$.
But we know that $p$ has degree $n-1$, so $1-2k=n-1$ and $k = -(n-2)/2$. Taking the simplest choices $C=D=1$ and plugging back in gives the above solution. All other solutions are related to this one by rescaling the variable $z$.

-

I've been thinking about this a little. I would guess that, for any sufficiently large $n$, there is a finite, nonzero number of rational maps of degree $n$ such that all of the critical points are fixed. Here is my heuristic argument.

Fix a partition of $2n-2$ into $n+1$ parts: $2n-2 = \lambda_1 + \lambda_2 + \cdots + \lambda_{n+1}$. For any $n+1$ points $z_1$, $z_2$, ..., $z_{n+1}$ on $\mathbb{CP}^1$, there are finitely many degree $n$ covers of $\mathbb{CP}^1$ which are ramified over the $z_i$, with the ramified point over $z_i$ being ramified of index $\lambda_i+1$, and no other ramification. (You just need to choose which $\lambda_i$ sheets will be permuted by the monodromy around $z_i$.) Some of these covers will be disconnected, but all the connected ones will have genus $0$ by the Riemann-Hurwitz formula.

FIRST NONRIGOROUS STEP: I expect that, for most choices of the $\lambda_i$, there will be a nonzero number of connected covers. Let D be the number of these connected covers.

Now, in each of these connected covers, the covering curve has genus $0$, and is thus isomorphic to $\mathbb{CP}^1$. Let $w_i$ be the ramified preimage of $z_i$. The $n+1$ points $w_i$ give us a point in $M_{0,n+1}$, and the points $z_i$ give another point of $M_{0,n+1}$. Plotting the pairs $((w_1, w_2, \ldots, w_n), (z_1, z_2, \ldots, z_n))$ gives us a subvariety of $M_{0, n+1} \times M_{0,n+1}$ of dimension equal to that of $M_{0,n+1}$; the projection onto the second factor is generically $D$ to $1$. Let's call this subvariety $X$. Your goal is to understand the intersection of $X$ with the diagonal.

Now, here is the VERY NONRIGOROUS STEP. $X$ has dimension $(n+1)-3=n-2$. So does the diagonal. Our ambient space, $M_{0, n+1} \times M_{0,n+1}$, has dimension $2n-4$. In the absence of any other information, the intersection is probably finite and nonempty. :-)

I expect we may be able to extend all of these ideas to work with subvarieties of the compactification $\overline{M}_{0,n+1}$. That would be good because then we could hope to compute the cohomology class of $X$, and show that it cannot miss the diagonal. Filling in the gaps here sounds like a really nice problem. Unfortunately, I have too many nice problems, but I wish you luck.

-
Hmmm. There is something wrong with my heuristic above. The same argument suggests that there should be a degree $n$ map from $P^1$ to itself with simple branching only, and all $2n-2$ branched points fixed. But, of course, a degree $n$ map fixes at most $n+1$ points. Presumably, understanding the solution would mean understanding the obstructions in the paper of Douady and Hubbard referenced by Agol. – David Speyer Nov 9 2009 at 16:15

David Speyer's answer is right on the money. Douady and Hubbard proved that a map of the sphere to itself whose postcritical set is finite has a unique "uniformization" as a rational map (up to conjugation by Möbius transformations). For a given degree, then, there are finitely many rational maps with all critical points fixed, since these are postcritically finite.
I haven't thought about the second part of the question about whether one can bound the degree if there are only two non-critical fixed points. It might be possible to determine this from the branching data and the Lefschetz fixed-point formula.

-
Thanks for the reference. – jvp Nov 9 2009 at 14:58

I realize that the question is about rational functions whose critical points are all fixed points, but it might be useful to note that polynomials with this property are called conservative polynomials. They have been studied by Tischler and Pakovich. In particular, Tischler proved that there are $\binom{2d-2}{d-1}$ normalized conservative polynomials of degree $d$, where a polynomial $C$ is normalized if it is monic and satisfies $C(0)=0$. An example of a normalized conservative polynomial is $x^d + \frac{d}{d-1}x$.

Here are the references for Tischler's and Pakovich's articles.

• David Tischler, Critical points and values of complex polynomials, Journal of Complexity 5 (1989), 438-456, MR1028906.
• Fedor Pakovich, Conservative polynomials and yet another action of ${\rm Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ on plane trees, Journal de Théorie des Nombres de Bordeaux 20 (2008), 205-218, MR2434164.

-
Thanks for the references. I will take a look. – jvp Jul 9 2011 at 2:01
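As an editorial sanity check (not part of the original thread), the explicit degree-$n$ example in the first answer can be verified numerically. The sketch below assumes NumPy and takes $n=5$; it computes the critical points as the roots of the numerator of $f'$ and prints how far each one is from being fixed (the residuals are tiny, limited only by the accuracy of the repeated roots):

```python
import numpy as np

n = 5
P = np.polynomial.Polynomial
num = P([0, -n] + [0] * (n - 2) + [n - 2])   # (n-2) z^n - n z
den = P([-(n - 2)] + [0] * (n - 2) + [n])    # n z^{n-1} - (n-2)
f = lambda z: num(z) / den(z)

# critical points are the roots of the numerator of f' = (num'*den - num*den')/den^2
crit = (num.deriv() * den - num * den.deriv()).roots()
for c in crit:
    print(np.round(c, 4), abs(f(c) - c))     # |f(c) - c| is ~1e-8 or smaller at every critical point
```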
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 103, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9355655908584595, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/199102-convergence.html
# Thread: Convergence

1. ## Convergence

I have attached the question. I have expanded $(1+h)^n$ using the binomial theorem, and for the question after it I assume they want me to use the very same theorem to prove the convergence. So this is what I have done:

$(n+1)^{1/n}=1+h \rightarrow n+1=(1+h)^n=1+nh+\frac{n!}{2!(n-2)!}h^2+\cdots+h^n$

I'm not sure how to prove convergence from then on, but I am guessing the Sandwich Rule would need to be used here at one point. Any hints/tips?

2. ## Re: Convergence

Let $h_n$ be such that $(n+1)^{1/n}=1+h_n$. From what you wrote, $1+nh_n+n(n-1)/2\cdot h_n^2<1+n$, so...

3. ## Re: Convergence

I understand how you derived the above inequality, however I'm not sure where to go from there - I'm not sure how you apply it to derive any convergence.

4. ## Re: Convergence

Can you show that $h_n<f(n)$ for some decreasing function $f$? You don't have to solve the quadratic inequality $nh_n+n(n-1)/2\cdot h_n^2<n$ for $h_n$; just find some function $f$.
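For the record, here is one way to finish the argument the replies are hinting at (this paragraph is an editorial addition, not from the thread): all terms of the expansion are positive, so $1+nh_n+\frac{n(n-1)}{2}h_n^2 \le (1+h_n)^n = n+1$, hence $\frac{n(n-1)}{2}h_n^2 \le n$ and $$0 < h_n \le \sqrt{\frac{2}{n-1}} \qquad (n \ge 2).$$ The right-hand side is a decreasing function tending to $0$, so the Sandwich Rule gives $h_n \to 0$, i.e. $(n+1)^{1/n} = 1 + h_n \to 1$.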
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393336772918701, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/258132/orthonormality-proof-on-outer-product-of-svd-unitary-matrices-columns
# Orthonormality proof on outer product of SVD unitary matrices' columns

Suppose I have the two unitary matrices from the SVD of an $m \times n$ matrix ($U$, $V^*$) and I form a set of new matrices by taking $u_iv_i^H$ (each of which is an $m \times n$ matrix). Assuming $r = \min(m, n)$ and my set is $X_1, X_2, \ldots, X_r$, how can I show that this is an orthonormal set? Is there some property of unitary matrices that I've forgotten? Their rows and columns form an orthonormal basis of $\mathbb{C}^n$, but what of the product of two orthonormal vectors?

Furthermore, how do I generalize the coordinates of the original matrix using this orthonormal basis of the vector space?

-
What are $X_k$, the dyads $u_kv_k^*$? If so, what do you mean by orthonormal (for matrices)? – copper.hat Dec 13 '12 at 19:29
By orthonormal matrices, I mean the set is orthogonal and the matrix norm (generalization of the vector norm) is 1. – PatternMatching Dec 13 '12 at 19:43
1 I understand the definition of orthonormal. I was asking what inner product you are using for two matrices? One definition is $\langle A, B \rangle = \operatorname{tr} A^*B$, and with this inner product the $X_k$ are orthonormal. – copper.hat Dec 13 '12 at 20:08
Sorry - yes, the inner product would be as you have specified above. – PatternMatching Dec 13 '12 at 20:17

## 1 Answer

If you let $X_{ij} = u_i v_j^*$, and define the inner product $\langle A, B \rangle = \operatorname{tr} A^*B$, then you have $\langle X_{ij}, X_{ab} \rangle = \delta_{ia} \delta_{jb}$, from which it follows that the $X_{ij}$ form an orthonormal basis (there are $mn$ of them). The collection of matrices you have above is a subset of this collection, so they are orthonormal (but not a basis).

-
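A quick numerical illustration of the answer (my own sketch, assuming NumPy; not part of the original exchange): build the SVD of a random complex $m \times n$ matrix, form the dyads $X_{ij}=u_iv_j^*$, and evaluate $\langle X_{ij},X_{ab}\rangle=\operatorname{tr}(X_{ij}^*X_{ab})$ for a few index pairs. It comes out $1$ when $(i,j)=(a,b)$ and $0$ otherwise.

```python
import numpy as np

m, n = 5, 3
A = np.random.randn(m, n) + 1j * np.random.randn(m, n)
U, s, Vh = np.linalg.svd(A)                   # columns of U are u_i, rows of Vh are v_j^*

def dyad(i, j):
    return np.outer(U[:, i], Vh[j, :])        # the m x n matrix u_i v_j^*

inner = lambda X, Y: np.trace(X.conj().T @ Y)

for i, j, a, b in [(0, 0, 0, 0), (2, 1, 2, 1), (0, 0, 1, 0), (2, 1, 0, 2)]:
    print((i, j), (a, b), np.round(inner(dyad(i, j), dyad(a, b)), 10))
```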
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9069189429283142, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3837817
## Proper summation notation

Hi. Is it correct of me to say that I want to carry out the sum $$\sum_i{v_iw_i}$$ where $i\in\{x,y,z\}$? Or is it most correct to say that $i=\{x,y,z\}$?

Best regards, Niles.

If you have the sum $$v_x w_x + v_y w_y + v_z w_z$$ then you want $i \in \{ x,y,z \}$, which says: sum over every element of the set $\{x,y,z \}$. If you wrote $$\sum_{i=\{x,y,z \}} v_i w_i$$ what you really just wrote is $$v_{ \{x,y,z \}} w_{ \{x,y,z \}}$$ which is strange because it's not a sum, and because indices are unlikely (but might be) sets of variables.

Thanks, that is also what I thought was the case. I see the "$i=\{x,y,z\}$" version in all sorts of books.

Best wishes, Niles.

While one can interpret that, it would make more sense if you associated an index set with your label set when you need to do this. So instead of $\{x,y,z\}$, just introduce the bijection $\{x,y,z\} = \{1,2,3\}$ where the $i$th component of one set maps to the $i$th of the other. This is just my opinion, but the reason is mostly conventional, because it's easier for everyone with a simple mathematics background to understand and causes less confusion.

Thanks for the help, that is kind of everybody.

Best, Niles.
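The same distinction shows up if you write the sum in code: the index ranges over the label set rather than being equal to it. A throwaway Python illustration of my own (not from the thread):

```python
v = {'x': 1.0, 'y': 2.0, 'z': 3.0}
w = {'x': 4.0, 'y': 5.0, 'z': 6.0}

# sum of v_i * w_i for i in {x, y, z}
print(sum(v[i] * w[i] for i in {'x', 'y', 'z'}))   # 32.0
```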
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9242878556251526, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/39284/how-many-different-ways-can-you-distribute-5-apples-and-8-oranges-among-six-chil
# How many different ways can you distribute 5 apples and 8 oranges among six children?

How many different ways can you distribute 5 apples and 8 oranges among six children if every child must receive at least one piece of fruit? If there were a way to solve this using Pólya-Redfield that would be great, but I cannot figure out the group elements.

-

## 1 Answer

I am too lazy to calculate the numbers of elements with $k$ cycles in $S_8$, but if you do that yourself a solution could work as follows. (I will use this version of Redfield-Polya and $[n]$ shall denote $\{1,\dots,n\}$.)

Let us take $X = [13]$, the set of fruits, where $G= S_5 \times S_8$ acts on $X$ such that the first five apples and the latter eight oranges are indistinguishable. Then $$K_n = |[n]^X/G|$$ is the number of ways to distribute these apples and oranges among six distinguishable children. And $$N_n = K_n -n\cdot K_{n-1}$$ is the number of ways to distribute these apples and oranges among six distinguishable children such that every child must receive at least one piece of fruit.

Now by the Theorem $$K_n = \frac{1}{|G|} \sum_{g\in G} n^{c(g)} = \frac{1}{5!\cdot 8!} \left(\sum_{g\in S_5} n^{c(g)}\right)\left(\sum_{g\in S_8} n^{c(g)}\right) = \frac{1}{5!\cdot 8!} \left(\sum_{i\in [2]} d_i n^{i}\right) \left(\sum_{i\in [4]} e_i n^{i}\right),$$ where $c(g)$ is the number of cycles of $g$, $d_i$ the number of permutations of $S_5$ with exactly $i$ cycles and $e_i$ the number of permutations of $S_8$ with exactly $i$ cycles. The number that we are looking for in the end is $N_6$.

-
1 Wow, just what I was asking; however, I am having a difficult time determining the summations at the end, and does n=6? – poyla fan May 16 '11 at 2:24
I added what you are looking for in the end. If you have further problems with how I manipulated the sums, I might write it down in more clarity later but encourage you to try to figure it out by yourself. If you have problems specifically figuring out $$\sum_{i\in[4]} e_in^i,$$ I hope there might be an easier way to find that but I couldn't come up with it yet. I can give you $$\sum_{i\in[2]}d_in^i = 1 + 84 n + 35 n^2$$ as a reference though, as this is a lot easier to calculate. – Peter Patzt May 16 '11 at 10:45
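Independently of the cycle-index bookkeeping, the number asked for is small enough to check by direct enumeration. Here is a sketch of my own (not from the thread) that counts, by brute force, the distributions of 5 identical apples and 8 identical oranges among 6 distinguishable children in which every child gets at least one piece of fruit; it can be used to cross-check whatever value of $N_6$ the formula above produces:

```python
from itertools import product

def compositions(total, parts):
    # all ways of handing out `total` identical items to `parts` distinguishable children
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

apples = list(compositions(5, 6))
oranges = list(compositions(8, 6))

count = sum(1 for a, o in product(apples, oranges)
            if all(x + y > 0 for x, y in zip(a, o)))
print(count)   # distributions in which every child receives at least one fruit
```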
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9606980085372925, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3932667
## How to calculate the submerged height of a cylinder

A hollow cylinder (but not open-ended) is allowed to float on a liquid. Can anyone please help me calculate the height (h) to which it submerges? I want to get a relationship between height and cylinder weight. I can calculate the volume of the submerged part, but I have no idea how to derive a formula relating weight and height.

The formula you have derived for the volume of the submerged part should have an h in it - that's the unknown you want to find, right? So the volume submerged is a function of height, V=f(h). Presumably you also know the mass of the cylinder M and the density ρ of the water? By Archimedes: the cylinder floats when M=ρV ... solve for h.

I think you misunderstood what I said. Sorry for not being descriptive. Since I know the weight of the cylinder, I can calculate the volume of the water displaced. But it doesn't contain 'h' in it. I am struggling to get 'h' into this volume, i.e. I want to know f(h).

Show me your formula for the volume of the submerged part of the cylinder.

$V_{water}\,\rho_{water}\,g = m_{cylinder}\,g$, so $V_{water} = m_{cylinder}/\rho_{water}$.

Oh I see what you mean ... That is only the volume of water that has the same mass as the cylinder - which is only the volume of cylinder submerged if the cylinder floats. That's the condition you need to find h. You need to also find the volume in terms of h. The submerged volume of the cylinder is AL (the area of the cylinder end that is under water multiplied by the length of the cylinder). The area will depend on R (radius of the cylinder) and h. You can work that out by calculus or just look it up. Then you put the volume of the cylinder found this way equal to the volume of water you already have.

Yeah, that's where I am stuck at. $V = L\int A(h)\,dh$. I was not able to find A(h) so far. Anyway I'll have a look again to see whether I can find any relationship. I'll post what I have done so far a little later.

I couldn't get much further... If $l$ is the length of the cylinder,

$y\sin\theta=2r-x$, so $y=\frac{2r-x}{\sin\theta}$,

$Area=2\,y\cos\theta\,l =2\,\frac{2r-x}{\sin\theta}\cos\theta\,l =2\,(2r-x)\cot\theta\,l$,

$Volume=\int_{0}^{\pi/2} 2(2r-x)\cot\theta\,l\,d\theta$.

How can I get a relationship between $x$ and $\theta$? Please let me know whether my approach is correct and what I should do from here on.

I think I found the answer. It is simpler than I thought :) Circular Segment Equation (9) is the solution :)

Yep - that's the one ... you should have done the integral in rectangular coordinates. But you should also check the proportion of the cylinder that is submerged ... which is an easier calculation.

Thanks, Simon. I'll look into that and will post what I find.
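To make the ending concrete: with the standard circular-segment area $A(h) = r^2\cos^{-1}\!\left(\frac{r-h}{r}\right) - (r-h)\sqrt{2rh-h^2}$ (presumably the "Equation (9)" referred to above), the floating condition $\rho_{water}\,L\,A(h) = m_{cylinder}$ can be solved for the submerged depth $h$ by bisection, since $A$ is increasing in $h$. The sketch below is my own; the radius, length, mass and density are made-up illustrative numbers, not values from the thread.

```python
import math

def segment_area(h, r):
    # area of the circular segment of depth h (0 <= h <= 2r) in a circle of radius r
    return r * r * math.acos((r - h) / r) - (r - h) * math.sqrt(2 * r * h - h * h)

def submerged_depth(mass, rho, r, length):
    # solve rho * length * segment_area(h, r) = mass for h by bisection
    target = mass / (rho * length)
    lo, hi = 0.0, 2.0 * r
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if segment_area(mid, r) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# example: r = 0.10 m, L = 1.0 m, m = 10 kg, fresh water 1000 kg/m^3
print(submerged_depth(10.0, 1000.0, 0.10, 1.0))   # submerged depth in metres
```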
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359889626502991, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/67484/list
## Return to Answer

Revision 2 (added 113 characters in body):

Infinite exponent partition relations are inconsistent with the axiom of choice, so in ZFC, this phenomenon does not exist, but nevertheless, in the context of $ZF+\neg AC$ there is a robust theory. See for example Andres Caicedo's discussion, this Kleinberg article, and the items in this Google search.

Revision 1:

Infinite exponent partition relations are inconsistent with the axiom of choice, so in ZFC, this phenomenon does not exist, but nevertheless, in the context of $ZF+\neg AC$ there is a robust theory. See for example this article, and the items in this Google search.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8606529831886292, "perplexity_flag": "head"}
http://mathoverflow.net/questions/51715/coproducts-of-complete-boolean-algebras
## Coproducts of complete Boolean algebras

Does the category of complete Boolean algebras have binary coproducts?

Note that this category does not have countable coproducts. Indeed, the coproduct of countably many copies of the four element complete Boolean algebra would be the free complete Boolean algebra on countably many generators, and such an object does not exist.

-
3 By Stone duality, the category of complete Boolean algebras is dually equivalent to the category of so-called Stonean spaces, i.e. compact, Hausdorff, extremally disconnected, topological spaces. The question then becomes whether the latter category has binary products. Products of compact spaces are compact, and products of Hausdorff spaces are Hausdorff. But binary products of extremally disconnected spaces need not be extremally disconnected. – Chris Heunen Jan 10 2011 at 23:57
@Chris: This does not prove anything. Not every forgetful functor has to preserve products. – Martin Brandenburg Jan 11 2011 at 0:05
Martin, I know, that's why I only added it as a comment. I just thought that it might lead to a counterexample. – Chris Heunen Jan 11 2011 at 9:24
@Martin: But an equivalence preserves products. – Andrej Bauer Jan 11 2011 at 17:59
2 ... and such an object does not exist assuming AC (dx.doi.org/10.1007/BF02757883) – Adam Jan 11 2011 at 18:44

## 1 Answer

Chris Heunen's comment under the OP can be turned into a proof.

Suppose the category of compact Hausdorff extremally disconnected spaces has binary products. Let $X \times Y$ denote the product in that category. If $|X|$ denotes the underlying set, then of course the canonical map $$|X \times Y| \to |X| \times |Y|$$ is an isomorphism, because $|X| \cong \hom(\ast, X)$ where $\ast$ is the one-point space, i.e., the underlying set functor is representable and representables preserve products.

Chris observes that the ordinary product space $X \times_{Top} Y$ of two compact Hausdorff extremally disconnected spaces need not be extremally disconnected. However, under our supposition we would have a continuous comparison map $$X \times Y \to X \times_{Top} Y$$ in $Top$ which is a bijection at the level of the underlying sets. Being a continuous bijection between compact Hausdorff spaces, it is a homeomorphism, and this contradicts Chris's observation.

-
Nice! (and some more to fill up characters) – David Roberts Jan 12 2011 at 2:42
Indeed, a nice proof! – Martin Brandenburg Jan 12 2011 at 7:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8956615328788757, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4084380
## Homogeneous equation of a line given 2 homogeneous points

I'm reviewing Projective Geometry. This is an exercise in 2D homogeneous points and lines. It is not a homework assignment - I'm way too old for that.

Given two points p1 (X1,Y1,W1) and p2 (X2,Y2,W2), find the equation of the line that passes through them (aX+bY+cW=0). (See http://vision.stanford.edu/~birch/projective/node4.html, "Similarly, given two points p1 and p2, the equation...", and http://vision.stanford.edu/~birch/pr...ve/node16.html, Representing the Plucker Equations.)

The solution by means of linear algebra is u = p1 x p2 (cross product) = (Y1W2-Y2W1, W1X2-X1W2, X1Y2-Y1X2). I have worked out how to obtain that by calculating a determinant. However, I should be able to get the same result by using elementary algebra and the basic line equation aX + bY + cW = 0, but somewhere I take a wrong turn. Can someone provide the steps?

You should get the same answer within a global multiplying factor--i.e. a homogeneous multiplier (which is why it's called homogeneous coordinates).

I don't see what the problem is. You've found the homogeneous representation for the line, which is $u$. The corresponding equation for the line is, for any third point P = (X, Y, W), that $u \cdot P = 0$, is it not?

As I stated, I want to find the same result using elementary algebra and not matrix operations such as determinants. It may be a full page of lines, so you may want to do it by hand and attach an image of the page. As good as I am in algebra, I'm doing something fundamentally wrong.

I start with two equations: L1: aX1 + bY1 + cW1 = 0 and L2: aX2 + bY2 + cW2 = 0. (And maybe that's the wrong starting point.) Because the points are on a line I can set them equal to each other, collect terms and then solve for a, then b, and finally c. But I don't get the answer of the determinant method.

a(X1-X2) + b(Y1-Y2) + c(W1-W2) = 0
a = [-b(Y1-Y2) - c(W1-W2)]/(X1-X2)

Now take this value for a and plug it into L1, then solve for b, etc.

Because of homogeneity, you should have a degree of freedom to choose one of either $a,b,c$ and set one to a convenient number. The usual choice would be $c=1$. This should give you the third equation needed to make the system solvable. Otherwise, you have 2 equations and 3 unknowns, and that's kinda silly.

My question stands. If it's silly, don't reply.

I pointed out you need a third equation to pin down the values of all three variables $a,b,c$. If that doesn't fix your problem, then please elaborate what your "wrong turn" is. I do not think solving this problem for you will be productive. You haven't even really described what hangup you're having in working out the algebra and solving the system by hand.
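For reference, the cross-product recipe quoted in the question is easy to check numerically, and whatever the elementary-algebra route produces must agree with it up to the overall scale factor Muphrid mentions. A small sketch of my own (assuming NumPy; the two points are arbitrary illustrative values):

```python
import numpy as np

p1 = np.array([1.0, 2.0, 1.0])    # homogeneous point (X1, Y1, W1)
p2 = np.array([4.0, -1.0, 2.0])   # homogeneous point (X2, Y2, W2)

u = np.cross(p1, p2)              # line coefficients (a, b, c), defined only up to scale
print(u, np.dot(u, p1), np.dot(u, p2))   # both dot products are 0: the line passes through both points
```

Eliminating $a$ and then $b$ from $aX_1+bY_1+cW_1=0$ and $aX_2+bY_2+cW_2=0$ gives the same triple up to a common factor, which is exactly the free choice (e.g. $c=1$) discussed in the replies.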
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255431294441223, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/74804/software-for-some-universal-algebra-issues/111962
# Software for some universal algebra issues

I am looking for some mathematical software that can help me with a very common task in the realm of universal algebra (as far as I know, programs like prover9/mace4 and uacalc do not help with this issue).

The input of this software should be two different finite algebras, each of them given through the tables of its operations. Thus, particular cases are considering two finite groups, two Boolean algebras, etc. The answer (in case the computer search finds something) of this software should be an equation that can distinguish these two finite algebras, in the sense that it is valid in one of them while not valid in the other algebra. The idea is that the software automatically searches for an equation in the corresponding algebraic signature such that it is valid in one of these algebras but not valid in the other one.

For instance, if we consider the two finite algebras to be the additive groups of $\mathbb{Z}/2$ and $\mathbb{Z}/3$, an equation that is valid in $\mathbb{Z}/2$ but not in $\mathbb{Z}/3$ is $x + x = y + y$; hence $x+x = y+y$ is an example of the kind of answer I am looking for. On the other hand, there is no such equation to distinguish the $2$ element Boolean algebra from the $4$ element Boolean one; hence, in this case, the computer search will run forever without providing an answer.

In case anyone knows a software to do the previous task, let me point out that I am also interested in getting answers bounding the number of variables appearing in the equations. Here I refer to getting answers of the following kind: with equations that only use $3$ variables there is no way to distinguish these two finite algebras, with equations that only use $4$ variables there is no way to distinguish these two finite algebras, and with equations that only use $5$ variables we can distinguish them using this explicit equation (the one given by the software as the answer).

-
1 – William DeMeo Feb 22 '12 at 4:21
Correction to my comment above: You don't want to assume A and B have the same type! (but you do want to assume they are on the same set) Also, my comment doesn't answer your question, but the post below, by Ralph Freese (user25460), does. – William DeMeo Feb 23 '12 at 2:51

## 1 Answer

As William says, the UACalc can do this. After you have input your algebras using the File -> New menu and then added the operations using the Add button, the Task menu has the option "B in V(A)". This tests if B satisfies all the equations (identities) of A. It tells you if it does; if not, it gives an equation of A that fails in B, together with the substitution in B witnessing the failure.

But note that by a deep result of Kozik (SIAM J Computing, 38 (2009), 2443-2467) this problem is 2-EXPTIME complete, so the above will only work on pretty small algebras.

-
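The running example from the question can be verified mechanically. A small sketch of my own (plain Python, not UACalc) that checks the identity $x+x=y+y$ in the additive groups $\mathbb{Z}/2$ and $\mathbb{Z}/3$ by trying every substitution:

```python
from itertools import product

def holds(n, lhs, rhs):
    # does the identity hold in (Z/n, +) under every assignment of its variables?
    return all(lhs(x, y) % n == rhs(x, y) % n for x, y in product(range(n), repeat=2))

lhs = lambda x, y: x + x
rhs = lambda x, y: y + y

print(holds(2, lhs, rhs))   # True:  x + x = y + y is valid in Z/2 (both sides are 0)
print(holds(3, lhs, rhs))   # False: x = 0, y = 1 gives 0 on the left and 2 on the right in Z/3
```

A search over all terms of bounded size in the given signature can be layered on top of the same check, which is the variable-bounded style of answer asked for at the end of the question; as the answer notes, though, the general problem is 2-EXPTIME complete, so this only scales to small algebras and short equations.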
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202675223350525, "perplexity_flag": "head"}
http://luckytoilet.wordpress.com/
# Lucky's Notes Notes on math, coding, and other stuff ## How to Write your own Minesweeper AI December 23, 2012 A while ago, I wrote a minesweeper AI. I intended to publish a writeup, but due to university and life and exams, I never got around to writing it. But having just finished my Fall term, I have some time to write a decent overview of what I did. Short 30 second video of the AI in action here: ### How to Play Minesweeper If you’re an experienced minesweeper player, you can probably skip this section. Otherwise, I’ll just give a quick overview of some basic strategies that we can use to solve an easy minesweeper game. We start with a 10×10 Beginner’s grid, and click on a square in the middle: We can quickly identify some of the mines. When the number 1 has exactly one empty square around it, then we know there’s a mine there. Let’s go ahead and mark the mines: Now the next strategy: if a 1 has a mine around it, then we know that all the other squares around the 1 cannot be mines. So let’s go ahead and click on the squares that we know are not mines: Keep doing this. In this case, it turns out that these two simple strategies are enough to solve the Beginner’s grid: ### Roadmap to an AI All this seems easy enough. Here’s what we’ll need to do: 1. Read the board. If we use a screenshot function, we can get a bitmap of all the pixels on the board. We just need to ‘read’ the numbers on the screen. Luckily for us, the numbers tend to have different colors: 1 is blue, 2 is green, 3 is red, and so on. 2. Compute.  Run the calculations, figure out where the mines are. Enough said. 3. Click the board. This step is easy. In Java, we can use the Robot class in the standard library to send mouse clicks to the screen. ### Reading the Field There’s not a whole lot to this step, so I’m going to skim over it quickly. At the beginning of the run, while we have a completely empty grid, we invoke a calibration routine – which takes a screenshot and looks for something that looks like a Minesweeper grid. Using heuristics, it determines the location of the grid, the size of a grid square, the dimensions of the board, and things like that. Now that we know where the squares are, if we want to read a square, we crop a small section of the screenshot and pass it to a detection routine, which looks at a few pixels and figures out what’s in the square. A few complications came up in the detection routine: • The color for the number 1 is very close to the color of an unopened square: both are a dark-blue color. To separate them apart, I compared the ‘variance’ of the patch from the average color for the patch. • The color for 3 is identical to that for 7. Here, I used a simple edge-detection heuristic. ### Straightforward Algorithm The trivially straightforward algorithm is actually good enough to solve the beginner and intermediate versions of the game a good percent of the time. Occasionally, if we’re lucky, it even manages to solve an advanced grid! When humans play minesweeper, we compete for the fastest possible time to solve a grid of minesweeper. So it doesn’t matter if we lose 20 games for every game we win: only the wins count. This is clearly a silly metric when we’re a robot that can click as fast as we want to. Instead, we’ll challenge ourselves with a more interesting metric: Win as many games as possible. Consider the following scenario: Using the straightforward method, we seem to be stuck. 
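Here is roughly what the two single-square rules described above look like in code. This is a simplified sketch of my own in Python (the actual bot is written in Java, as mentioned above); `grid[r][c]` holds an opened number, `'?'` for an unopened square, or `'F'` for a flagged mine:

```python
def neighbours(grid, r, c):
    rows, cols = len(grid), len(grid[0])
    return [(i, j) for i in range(r - 1, r + 2) for j in range(c - 1, c + 2)
            if (i, j) != (r, c) and 0 <= i < rows and 0 <= j < cols]

def basic_moves(grid):
    """One pass of the two single-square rules: returns squares to flag and squares to open."""
    to_flag, to_open = set(), set()
    for r, row in enumerate(grid):
        for c, val in enumerate(row):
            if not isinstance(val, int) or val == 0:
                continue
            unknown = [p for p in neighbours(grid, r, c) if grid[p[0]][p[1]] == '?']
            flagged = [p for p in neighbours(grid, r, c) if grid[p[0]][p[1]] == 'F']
            if unknown and len(unknown) + len(flagged) == val:
                to_flag.update(unknown)          # every unopened neighbour must be a mine
            if len(flagged) == val:
                to_open.update(unknown)          # the number is already satisfied: the rest are safe
    return to_flag, to_open
```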
Up until now, whenever we mark a square as having a mine or safe, we’ve only had to look at a single 3×3 chunk at a time. This strategy fails us here: the trick is to employ a multisquare algorithm – look at multiple different squares at once. From the lower 2, we know that one of the two circled squares has a mine, while the other doesn’t. We just don’t know which one has the mine: Although this doesn’t tell us anything right now, we can combine this information with the next 2: we can deduce that the two yellowed squares are empty: Let’s click them to be sure. And voilà. They’re empty. The rest of the puzzle can be solved easily, after we’ve made the deduction that those two squares were empty. ### The Tank Solver Algorithm It’s difficult to make the computer think deductively like we just did. But there is a way to achieve the same results, without deductive thinking. The idea for the Tank algorithm is to enumerate all possible configurations of mines for a position, and see what’s in common between these configurations. In the example, there are two possible configurations: You can check for yourself that no other configuration could work here. We’ve deduced that the one square with a cross must contain a mine, and the three squares shaded white below must not contain a mine: This works even better than human deduction! We always try to apply the simple algorithm first, and only if that gets us stuck, then we bring in the Tank algorithm. To implement the Tank algorithm, we first make a list of border tiles: all the tiles we aren’t sure about but have some partial information. Now we have a list of $T$  border tiles. If we’re considering every possible configuration, there are $2^T$ of them. With backtracking, this number is cut down enough for this algorithm to be practical, but we can make one important optimization. The optimization is segregating the border tiles into several disjoint regions: If you look carefully, whatever happens in the green area has no effect on what happens in the pink area – we can effectively consider them separately. How much of a speedup do we get? In this case, the green region has 10 tiles, the pink has 7. Taken together, we need to search through $2^{17}$ combinations. With segregation, we only have $2^{10} + 2^7$: about a 100x speedup. Practically, the optimization brought the algorithm from stopping for several seconds (sometimes minutes) to think, to giving the solution instantly. ### Probability: Making the Best Guess Are we done now? Can our AI dutifully solve any minesweeper grid we throw at it, with 100% accuracy? Unsurprisingly, no: One of the two squares has a mine. It could be in either, with equal probability. No matter how cleverly we program our AI, we can’t do better than a 50-50 guess. Sorry. The Tank solver fails here, no surprise. Under exactly what circumstances does the Tank algorithm fail? If it failed, it means that for every border tile, there exists some configuration that this tile has a mine, and some configuration that this tile is empty. Otherwise the Tank solver would have ‘solved’ this particular tile. In other words, if it failed, we are forced to guess. But before we put in a random guess, we can do some more analysis, just to make sure that we’re making the best guess we could make. Try this. What do we do here: From the 3 in the middle, we know that three of them are mines, as marked. But marking mines doesn’t give us any new information about the grid: in order to gain information, we have to uncover some square. 
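Before finishing this example, here is the heart of that enumeration in sketch form (my own simplified Python, not the project's Java). `border` is the list of border tiles and `constraints` pairs each revealed number with the border tiles next to it; the backtracking keeps every assignment of mines that satisfies all the numbers. Tiles whose per-configuration count is 0 are provably safe, tiles whose count equals the number of configurations are provably mines, and the counts in between are exactly what the probability step in the next section uses.

```python
def tank(border, constraints):
    """Enumerate all mine layouts on the border tiles consistent with the revealed numbers."""

    def consistent(assign):
        for number, cells in constraints:
            mines = sum(assign.get(c, 0) for c in cells)
            unknown = sum(1 for c in cells if c not in assign)
            if mines > number or mines + unknown < number:
                return False                      # this partial layout already violates a number
        return True

    solutions = []

    def rec(i, assign):
        if not consistent(assign):
            return
        if i == len(border):
            solutions.append(dict(assign))
            return
        for bit in (0, 1):                        # 0 = no mine, 1 = mine
            assign[border[i]] = bit
            rec(i + 1, assign)
            del assign[border[i]]

    rec(0, {})
    counts = {c: sum(s[c] for s in solutions) for c in border}
    return solutions, counts
```

The region-segregation optimization described above then just means running `tank` separately on each disjoint group of border tiles.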
Out of the 13 possible squares to uncover, it’s not at all clear which one is the best. The Tank solver finds 11 possible configurations. Here they are: Each of these 11 configurations should be equally likely to be the actual position – so we can assign each square a probability that it contains a mine, by counting how many (of the 11) configurations does it contain a mine: Our best guess would be to click on any of the squares marked ‘2’: in all these cases, we stand an 82% chance of being correct! ### Two Endgame Tactics Up until now, we haven’t utilized this guy: The mine counter. Normally, this information isn’t of too much use for us, but in many endgame cases it saves us from guessing. For example: Here, we would have a 50-50 guess, where two possibilities are equally likely. But what if the mine counter reads 1? The 2-mine configuration is eliminated, leaving just one possibility left. We can safely open the three tiles on the perimeter. Now on to our final tactic. So far we have assumed that we only have information on a tile if there’s a number next to it. For the most part, that’s true. If you pick a tile in some distant unexplored corner, who knows if there’s a mine there? Exceptions can arise in the endgame: The mine counter reads 2. Each of the two circled regions gives us a 50-50 chance – and the Tank algorithm stops here. Of course, the middle square is safe! To modify the algorithm to solve these cases, when there aren’t that many tiles left, do the recursion on all the remaining tiles, not just the border tiles. The two tricks here have the shared property that they rely on the mine counter. Reading the mine counter, however, is a non-trivial task that I won’t attempt; instead, the program is coded in with the total number of mines in the grid, and keeps track of the mines left internally. ### Conclusion, Results, and Source Code At this point, I’m convinced that there isn’t much more we could do to improve the win rate. The algorithm uses every last piece of information available, and only fails when it’s provably certain that guessing is needed. How well does it work? We’ll use the success rate for the advanced grid as a benchmark. • The naïve algorithm could not solve it, unless we get very lucky. • Tank Solver with probabilistic guessing solves it about 20% of the time. • Adding the two endgame tricks bumps it up to a 50% success rate. Here’s proof: I’m done for now; the source code for the project is available on Github if anyone is inclined to look at it / tinker with it: https://github.com/luckytoilet/MSolver 46 Comments | Programming | Tagged: ai, algorithm, java, minesweeper | Permalink Posted by luckytoilet ## Notes on the partial fraction decomposition: why it always works June 13, 2012 If you’ve taken any intro to Calculus class, you’re probably familiar with partial fraction decomposition. In case you’re not, the idea is that you’re given some rational function with an awful denominator that you want to integrate, like: $\frac{4x-2}{(x-2)(x+4)}$ And you break it up into smaller, simpler fractions: $\frac{1}{x-2} +\frac{3}{x+4}$ This is the idea. If we get into the details, it gets fairly ugly — in a typical calculus textbook, you’ll find a plethora of rules regarding what to do in all sorts of cases: what to do when there are repeated linear factors, quadratic factors, repeated quadratic factors, and so on. 
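As a concrete instance of the standard recipe (this worked line is an editorial addition, not from the original post), the decomposition quoted at the top comes from writing $\frac{4x-2}{(x-2)(x+4)} = \frac{A}{x-2}+\frac{B}{x+4}$, clearing denominators to get $4x-2 = A(x+4)+B(x-2)$, and substituting the roots of the denominator: $x=2$ gives $6=6A$, so $A=1$, and $x=-4$ gives $-18=-6B$, so $B=3$.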
Since the textbooks generously cover this for us, we’ll assume that we know what to do with a rational polynomial with some polynomial as the numerator, and some number of linear or quadratic factors in the denominator. We can do partial fraction decomposition on this. If we like, we could integrate it too. I’m talking about anything of this form: $\frac{P(x)}{((ax+b)(cx+d) \cdots)((ex^2+fx+g)(hx^2+ix+j) \cdots)}$ Although we won’t prove this, this seems fairly believable. We’ll assume that once we get a fraction into this form, we’re done and we can let existing partial fraction methods take care of the rest. ### Can Partial Fractions Fail? What if we have a polynomial greater than a quadratic in the denominator? So let’s say: $\frac{1}{x^3+1}$ Fortunately, here the denominator can be factored, giving us a form we can deal with: $\frac{1}{(x+1)(x^2-x+1)}$ But we were lucky that time. After all, not all polynomials can be factored, right? What if we have this: $\frac{1}{x^3+5}$ We can’t factor this. What can we do? It turns out that this isn’t a huge problem. We never required the coefficients of the factors to be integers! Although the factorization is awkward, it can still be factored: $\frac{1}{(x + 5^{1/3})(x^2-5^{1/3}x+5^{2/3})}$ Other than making the next step somewhat algebraically tedious, this decomposition is perfectly valid. The coefficients need not be integers, or even be expressed with radicals. As long as every coefficient is real, partial fraction decomposition will work fine. ### Universality of Partial Fractions The logical next question would be, can all radical functions be written in the previous partial fraction decomposition-suitable form? Looking through my calculus textbooks, none seemed to provide a proof of this — and failing to find a proof on the internet, I’ll give the proof here. We need to prove that any polynomial that might appear in the denominator of a rational function, say $Q(x)$, can be broken down into linear or quadratic factors with real coefficients. In order to prove this, we’ll need the following two theorems: • Fundamental Theorem of Algebra — any polynomial of degree n can be written as a product of n linear complex factors: $Q(x) = (x-z_1) (x-z_2) \cdots (x-z_n)$ • Complex Conjugate Root Theorem — if some complex number $a + bi$ is a root of some polynomial with real coefficients, then its conjugate $a-bi$ is also a root. Starting with the denominator polynomial $Q(x)$, we break it down using the Fundamental Theorem of Algebra into complex factors. Of these factors, some will be real, while others will be complex. Consider the complex factors of $Q(x)$. By the complex conjugate root theorem, for every complex factor we have, its conjugate is also a factor. Hence we can take all of the complex factors and pair them up with their conjugates. Why? If we multiply a complex root by its complex conjugate root: $(x-z)(x-\bar{z})$ — we always end up with a quadratic with real coefficients. (you can check this for yourself if you want) Before, we were left with real linear factors and pairs of complex factors. The pairs of complex factors multiply to form quadratic polynomials with real coefficients, so we are done. At least in theory — partial fraction decomposition always works. The problem is just that we relied on the Fundamental Theorem of Algebra to hand us the roots of our polynomial. Often, these roots aren’t simple integers or radicals — often they can’t really be expressed exactly at all. 
So we should say — partial fraction decomposition always works, if you’re fine with having infinitely long decimals in the decomposed product. 1 Comment | Mathematics | Tagged: algebra, calculus, complex numbers, theorem proof | Permalink Posted by luckytoilet ## Minimum quadrilateral inscribed in a square May 6, 2012 A problem that I’ve seen lately reduces to the following problem: We have a square, and we put a point on each side of the square. Then we connect the four points to create a quadrilateral. How can we make this quadrilateral have the smallest possible perimeter? Intuitively, you may believe that this natural, obvious configuration should produce the least perimeter: ### Attempt with Calculus How can we prove that this indeed gives us the smallest possible perimeter? A first attempt might be to give variables to the side lengths, and somehow find the minimum perimeter using algebra and calculus tools. So there are four independent points — let’s parameterize them with four variables, and assume the side length of the square is 1: Then we want to minimize this expression: $\sqrt{a^2+(1-d)^2} + \sqrt{b^2+(1-a)^2}+ \sqrt{c^2+(1-b)^2}+ \sqrt{d^2+(1-c)^2}$ At this point, it isn’t clear how to proceed — there doesn’t seem to be any way to minimize this expression of four variables. ### Proof by Net We’ll have to try something different. It’s hard to make sense of anything when there are four independent variables. Instead, if we expand things out a bit, things start to become more manageable: What we did was reflect the square three times, and each time the square is reflected, the inscribed quadrilateral goes with it. By taking only the relevant parts of the quadrilateral, we get the green path. Now we might have a solution. If we had a different green path, can we reverse the steps and get the original quadrilateral back? Basically, the following requirements have to be met: • The path has to cross all three of the internal lines BC, BA, and DA. • The path’s position on the bottom-most line, DC must be the same when reflected onto the top-most line DC. With these requirements in mind, the shortest green path that satisfies these requirements is a straight line connecting a point on the bottom left to its reflected point on the top right: Our intuition at the start was well-founded. Now notice that this isn’t the only possible shortest path. If we move the entire green line to the left or right, we get a different path of the same length! For instance, the degenerate ‘quadrilateral’ formed by connecting two opposite corners has the same perimeter as the one we get by connecting the midpoints. Neat, huh? 1 Comment | Mathematics | Tagged: geometry, optimization, quadrilateral, square | Permalink Posted by luckytoilet ## A CMOQR Problem and why not to Trust Brute Force March 6, 2012 Recently I was invited to compete in the CMOQR – a qualifier contest for the Canadian Math Olympiad. The contest consisted of eight problems, and contestants were allowed about a week’s time to submit written solutions via email. After a few days, I was able to solve all of the problems except one — the second part of the seventh problem: Seven people participate in a tournament, in which each pair of players play one game, and one player is declared the winner and the other the loser. A triplet ABC is considered cyclic if A beats B, B beats C, and C beats A. Can you always separate the seven players into two rooms, so that neither room contains a cyclic triplet? 
(Note: the first half of the problem asked the same question for six people — and it’s not too difficult to prove that no matter what, we can put them into two rooms so that neither the first nor the second room contains a cyclic triplet.) But what happens when we add another person? Can we still put four people in one room, and three people in the other, so that neither rooms contain a cyclic triplet? There are two possibilities here: • One, it’s always possible. No matter what combinations of wins and losses have occurred, we can always separate them into two rooms in such a way. To prove this, we’ll need to systematically consider all possible combinations, and one by one, verify that the statement is possible for each of the cases. • Two, it’s not always possible. Then there is some counterexample — some combination of wins and losses so that no matter how we separate them, one of the rooms has a cyclic triplet. This is easier to prove: provided that we have the counterexample, we just have to verify that indeed, this case is a counterexample to the statement. But there’s a problem. Which of the cases does the solution fall into? That is, should we look for a quick solution by counterexample, or look for some mathematical invariant that no counterexample can exist? ### Brute Force? It would be really helpful if we knew the counterexample, or knew for sure what the counterexample was. What if we wrote a computer program to check all the cases? After all, there are only 7 people in the problem, and 7 choose 2 or 21 games played. Then since each game is either won by one player or the other, there are only 2^21 combinations overall (although that does count some duplicates). And 2^21 is slightly over two million cases to check — completely within the bounds of brute force. So I coded up a possibility-checker. Generate all 2^21 possible arrangements, then for each one, check all possible ways to separate them into two rooms. If it turns out that no matter how we arrange them, a cyclic triplet persists, then display the counterexample. Simple. I ran the program. It quickly cycled through every possible arrangement, three seconds later exiting without producing a counterexample. Alright. So there’s no counterexample. I would have to find some nice mathematical invariant, showing that no matter what, there is always some way to group the players so that neither room has a cyclic triplet. But no such invariant came. I tried several things, but in each attempt couldn’t quite show that the statement held for every case. I knew that there was no counterexample, but I couldn’t prove it. But why? There must be some tricky way to show that no counterexample existed; whatever it was, I couldn’t find it. ### Brute Force poorly implemented Reluctantly, as the deadline came and passed, I submitted my set of solutions without solving the problem. When the solutions came out a week later, the solution to this problem did not contain any tricky way to disprove the counterexample. Instead, what I found was this: Let $A_0 \ldots A_6$ be seven players. Let $A_i$ beat $A_j$ when the difference $i-j \equiv 1,2,4 \mod 7$. Huh? A counterexample, really? Let’s look at it. Everything is symmetric — we can ‘cycle’ the players around without changing anything. Also, if we take four players, two of them are consecutive. Let them be $A_0$ and $A_1$. At this point everything falls into place: in any subset of four players, three of them are cyclic. But wait … my program had not found any counterexamples! 
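That tournament is easy to check independently with a few lines (a sketch of my own, separate from the contest program discussed here). It builds the mod-7 tournament and confirms that every way of splitting the seven players into two rooms leaves a cyclic triplet in one of them:

```python
from itertools import combinations

beats = lambda i, j: (i - j) % 7 in (1, 2, 4)    # the tournament from the official solution

def has_cyclic_triplet(room):
    return any((beats(a, b) and beats(b, c) and beats(c, a)) or
               (beats(a, c) and beats(c, b) and beats(b, a))
               for a, b, c in combinations(room, 3))

players = set(range(7))
separation_exists = any(
    not has_cyclic_triplet(room) and not has_cyclic_triplet(tuple(players - set(room)))
    for size in range(8) for room in combinations(players, size)
)
print(separation_exists)   # False: every split of the seven players puts a cyclic triplet in some room
```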
And right here is a counterexample! The culprit was obvious (the reader may have foreseen this by now) — of course, there had to be a problem with my program. Running my code through a debugger, I found a logic error in the routine converting binary numbers to array configurations, meaning that not all possible configurations were tried. As a result, the counterexample slipped through the hole. After fixing the code, the program found not one, but a total of 7520 (although not necessarily distinct) counterexamples. Most of them had no elegant structure, but the solution’s configuration was among them. For the interested, here is the fixed code. ### When to Start Over? It is true that the program could have been better written, better debugged. But how could you know whether a counterexample existed and your program didn’t find it, or if no counterexample existed at all? In hindsight, it seems that writing the brute force program made me worse off than if I hadn’t written it at all. After the program ran without finding a single counterexample, I was confident that no counterexample existed, and set out about proving that, instead of looking for counterexamples or symmetry. When you are stuck on such a math problem — that is, after making a bit of progress you get stuck — it might be profitable to start over. More often than I would like, I prove a series of neat things, without being able to prove the desired result. Then a look at the solutions manual reveals that a very short solution — one or two steps — lay in the opposite direction. I’ll put an end to my philosophical musings of the day. Fortunately, the cutoff for the CMOQR was low enough that even without solving every single problem, I was still able to meet the cutoff. 2 Comments | Mathematics, Programming | Tagged: brute force, c, cmoqr, combinatorics, solved problem | Permalink Posted by luckytoilet ## A trivial inequality, and how to express its solution in the most cryptic way imaginable February 19, 2012 Solutions to olympiad problems are seldom written with clarity in mind — just look at forum posts in the Art of Problem Solving. The author makes jumps and skips a bunch of steps, expecting the reader to fill in the gaps. Usually this is not much of a problem — the missing steps become obvious when you sit down and think about what’s going on with a pencil and some paper. But sometimes, this is not the case. ### The problem One of the worst examples I’ve seen comes in the book Inequalities, A Mathematical Olympiad Approach. By all means, this is an excellent book. Anyways, here’s one of its easier problems — and you’re expected to solve it using the triangle inequality: Prove that for all real numbers a and b, $||a|-|b|| \leq |a-b|$ ### Attempt 1: Intuitive solution It isn’t clear how the triangle inequality fits. If I weren’t required to use the triangle inequality, I might be tempted to do an intuitive, case-by-case argument. Let’s visualize the absolute value of $a-b$ as the difference between the two numbers on a number line. Now we compare this distance $|a-b|$ with the distance after you take the absolute value of both of them, $||a|-|b||$. If one of the numbers is positive and the other negative, we clearly have a smaller distance if we ‘reflect’ the negative one over. Of course, if they’re both positive, or they’re both negative, then nothing happens and the distances remain equal. There, a simple, fairly clear argument. Now let’s see what the book says. 
### The book’s solution Flip to the end of the book, and find Consider $|a|=|a-b+b|$ and $|b|=|b-a+a|$, and apply the triangle inequality. Huh. Perhaps if you are better versed than I am in the art of solving inequalities, you’ll understand what this solution is saying. But I, of course, had no idea. Maybe try the substitution they suggest. I only see one place to possibly substitute $|a|$ for anything — and substituting gives $||a-b+b|-|b-a+a||$. Now what? I don’t think I did it right — this doesn’t make any sense. To be fair, I cheated a little bit in the first attempt: I didn’t use the triangle inequality. Fair enough — let’s solve it with the triangle inequality then and come back to see if the solution makes any sense now. ### Attempt 2: Triangle inequality solution A standard corollary to the triangle inequality of two variables is the following: $|a|-|b| \leq |a-b|$ Combine this with the two variables switched around: $|b|-|a| \leq |b-a| = |a-b|$ Combine the two inequalities and we get the desired $||a|-|b|| \leq |a-b|$ Now let’s look at the solution again. Does it make sense? No, at no point here  did we do any $|a-b+b|$ substitution. Clearly the authors were thinking of a different solution that happened to also use the triangle inequality. Whatever it was, I had no idea what the solution meant. ### The book’s solution, decrypted Out of ideas and hardly apt to let the issue rest, I consulted help online at a math forum. And look — it turns out that my solution was without a doubt the same solution as the book’s intended solution! What the author meant was this: considering that $|a| = |a-b+b|$, we have $|a| \leq |a-b|+|b|$ from the triangle inequality. Then, moving the $|b|$ over we get $|a|-|b| \leq |a-b|$. After that, the steps I took above are left to the reader. Perhaps I’m a bit thick-headed, but your solution can’t possibly be very clear if a reader has the exact same solution yet can’t even recognize your solution as the same solution. Come to think of it, if I couldn’t even recognize the solution, what chance is there of anybody being able to follow the solution — especially if they’re new to inequalities? Almost every one of the one-sentence phrasings of this solution I could think of would be clearer and less puzzling than the solution the book gives me. 1 Comment | Mathematics | Tagged: inequalities, olympiad | Permalink Posted by luckytoilet ## Fix for Digsby’s Facebook authentication error and broken Facebook support January 26, 2012 To all Digsby users (ignore this post if you don’t use Digsby): If you use Digsby with Facebook, you might have noticed that things behave strangely — the program pops up a window looking like this when it tries to connect to Facebook: Then after you give it your credentials, Digsby still thinks you’re not logged in, and so on. If you found this page via a google search, there’s a simple hack / workaround you can use to patch up this problem. Basically, instead of using the Facebook protocol to connect, we let Digsby use the Jabber protocol as a ‘proxy’ to connect to Facebook: 1. Go to Digsby -> My Accounts and in the Add Accounts section at the top, select the Jabber icon. 2. You should get a window that looks like this: 4. Remove the facebook account from Digsby At this point, you’re done: Digsby should give you no more problems about Facebook. Warning: the following is unnecessary and experimental! It might screw up the entire Digsby installation, forcing you to reinstall! 
However, you can replace the Jabber icon with the Facebook one (this is for purely cosmetic purposes): 1. Go to C:\Program Files (x86)\Digsby\res\skins\default\serviceicons (that’s the default installation path on my machine, yours may be different) 2. Delete jabber.png, duplicate facebook.png, and rename it jabber.png 3. Restart Digsby There you have it — hack accomplished: 40 Comments | Uncategorized | Tagged: bug, digsby, facebook, hack, workaround | Permalink Posted by luckytoilet ## Understanding Harmonic Conjugates (sort of) January 7, 2012 For many people (for me at least), the Harmonic Conjugate is a difficult concept to understand. I didn’t really get it the first time I saw it, at Mathcamp. Let’s take the definition of the harmonic conjugate: AB and CD are harmonic conjugates if this equation holds: $\frac{AC}{BC} = \frac{AD}{BD}$ If you’re like me, you’re thinking along the lines of “But why? Why is this defined this way? Why would we spend so much time proving things about this weird concept? What’s the point, what’s the use?” Even now, I can’t really give you an intuitive explanation of why this equality is so important. On the other hand, I could certainly come up with a few problems in which the concept of the harmonic conjugate turns to be useful. ### Apollonius and Fleeing Ships Apollonius’s problem was this: you are in control of a ship (point A on diagram), and you are in pursuit of another ship (point B). The other ship is fleeing in a straight line in some direction: Your speed is (obviously) faster than the speed of the other ship: say they’re going at 30 km/h and you’re going at 50 km/h. Additionally, your ship is required to go in a straight line. In which direction should you set off in order to intercept the fleeing ship? ### Solution with Harmonic Conjugates The first step of the solution is to construct harmonic conjugates CD so that their ratio is the ratio of your speed to the other ship’s speed (we’ll prove later that this is actually possible; assume we can do this for now): $\frac{AC}{BC} = \frac{AD}{BD} = \frac{5}{3}$ Next, draw a circle with diameter CD: There is a point where the ray from B (their ship) intersects this circle. Now go to this point immediately, in a straight line: the ship will be there. ### The Proof In order to prove that this works, we’ll need to take a step back and look at how we constructed the points C and D. The solution turns out to be evident directly from the construction of the harmonic conjugates. Again, let’s assume our desired ratio is 5/3. Starting with the points A and B, the first step is constructing some point P so that: $\frac{AP}{BP} = \frac{5}{3}$ This is fairly easy to do. Draw a circle of radius 5 around A, and draw a circle of radius 3 around B — the intersection P of these two circles forms the correct ratio. (if the circles don’t intersect, just scale everything up and try again) Next, dropping the internal and external angle bisectors of the new triangle gives the harmonic conjugates C and D: Why angle bisectors? From the angle bisector theorems (which I won’t prove here): $\frac{AP}{BP} = \frac{AC}{BC} = \frac{5}{3}$ $\frac{AP}{BP} = \frac{AD}{BD} = \frac{5}{3}$ Combining the two proves that C and D are indeed harmonic conjugates to AB. As a corollary, notice that because of angle bisecting, the angle CPD is always a right angle — hence, the locus of all points P forms a circle with diameter CD. 
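(A quick numeric check of the construction. The coordinates below are my own choice for illustration, not from the post: A and B are placed 4 units apart, C and D are taken on line AB in the 5 : 3 ratio internally and externally, and P is the intersection of the circle of radius 5 about A with the circle of radius 3 about B.)

```python
import math

A, B = (0.0, 0.0), (4.0, 0.0)

# C divides AB internally in the ratio 5:3, D divides it externally in the ratio 5:3.
C = (4.0 * 5 / (5 + 3), 0.0)                # (2.5, 0)
D = ((5 * 4.0 - 3 * 0.0) / (5 - 3), 0.0)    # (10, 0)

# P lies on the circle of radius 5 around A and the circle of radius 3 around B.
P = (4.0, 3.0)                              # 4^2 + 3^2 = 25 and 0^2 + 3^2 = 9

def dist(X, Y):
    return math.hypot(X[0] - Y[0], X[1] - Y[1])

print(dist(A, C) / dist(B, C), dist(A, D) / dist(B, D))   # both 5/3
print(dist(A, P) / dist(B, P))                            # also 5/3

# Angle CPD should be a right angle: the dot product of P->C and P->D is zero.
PC = (C[0] - P[0], C[1] - P[1])
PD = (D[0] - P[0], D[1] - P[1])
print(PC[0] * PD[0] + PC[1] * PD[1])                      # 0.0
```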
Returning to the ship problem: each point P on this circle satisfies $\frac{AP}{BP} = \frac{5}{3}$, which is exactly the ratio of your speed to the fleeing ship's speed. So if both ships head in a straight line for the point P where the fleeing ship's ray meets the circle, they cover distances in the ratio 5 : 3 at speeds in the ratio 5 : 3, and therefore arrive at P at the same time. Leave a Comment » | Mathematics | Tagged: apollonius, geometry, harmonic conjugates | Permalink Posted by luckytoilet
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 47, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9251386523246765, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/36847/list
## Return to Answer

Maybe I'm underestimating your problem, but it seems Mikael above is right. In your example you define $q:f_1\to f_2$, so if it's a kernel of some other map $r$, then $r$ must have $f_2$ for domain. $q$ can't possibly be a kernel of $p$, as the composition $pq$ does not make sense. Categorically: http://en.wikipedia.org/wiki/Kernel_(category_theory). Given a map $f: X \to Y$, a kernel is another map $k:K \to X$ satisfying the usual universal property. Now, if $X$ and $Y$ are complexes you have a criterion to check whether $k$ is a kernel: checking the components $k^n$ (but you already must have a chain map $k$ to begin with).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9022296667098999, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/46628-differentiating-integral.html
# Thread:

1. ## Differentiating an integral

Hi, I am unsure how to differentiate an integral. The specific problem I am working on at the moment is: $\frac{\partial }{\partial T} \int_t^T F(t;s) ds$ s is a dummy variable used to represent T. The solution to this is $F(t;T)$ ...but I am unsure why we would get this. Thanks in advance. Peter

2. That's the first fundamental theorem of calculus: Fundamental theorem of calculus - Wikipedia, the free encyclopedia If there's anything you should ever remember from calculus, it's this theorem.

3. Hi ! Originally Posted by peterpan Hi, I am unsure how to differentiate an integral. The specific problem I am working on at the moment is: $\frac{\partial }{\partial T}\int_t^T F(t;s) ds$ s is a dummy variable used to represent T. The solution to this is $F(t;T)$ ...but I am unsure why we would get this. Thanks in advance. Peter

Let $\tilde{F}(s)=F(t;s)$. Let $\mathcal{F}$ be an antiderivative of $\tilde{F}$. By definition of the integral (I think you call it "fundamental theorem of calculus"), we have: $\int_t^T \tilde{F}(s) ~ds=\mathcal{F}(T)-\mathcal{F}(t)$.

Now, $\frac{\partial}{\partial T} \int_t^T \tilde{F}(s) ~ds=\frac{\partial}{\partial T} ~ \left(\mathcal{F}(T)-\mathcal{F}(t)\right)$

Assuming that t is not a function of T, we can thus consider that $\mathcal{F}(t)$ is a constant with respect to T. Hence $\frac{\partial}{\partial T} ~\mathcal{F}(t)=0$

$\therefore \frac{\partial}{\partial T} \int_t^T \tilde{F}(s) ~ds=\frac{\partial}{\partial T} ~\mathcal{F}(T)$

But the derivative of $\mathcal{F}$ is $\tilde{F}$. We can then say that $\frac{\partial}{\partial T} \int_t^T F(t;s) ~ds=\frac{\partial}{\partial T} \int_t^T \tilde{F}(s) ~ds=\tilde{F}(T)=F(t;T)$

4. Here's the general expression:

$\frac{\partial}{\partial x}\int_{f_1(x)}^{f_2(x)}G(x,t)dt= G(x,f_2(x))\frac{d f_2}{dx}-G(x,f_1(x))\frac{d f_1}{dx}+\int_{f_1(x)}^{f_2(x)} \frac{\partial G}{\partial x}dt$

Then:

$\frac{\partial}{\partial x}\int_{4x+x^2}^{\sin(x)} (2x^2\sin(t)+t^2e^{x})dt$

is a piece of cake right?
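(Not part of the thread: a quick way to check that last example with a computer algebra system, assuming SymPy is available. It differentiates the integral directly and compares the result with the general expression from the previous post.)

```python
import sympy as sp

x, t = sp.symbols('x t')
G = 2*x**2*sp.sin(t) + t**2*sp.exp(x)
f1, f2 = 4*x + x**2, sp.sin(x)

# Differentiate the integral directly...
direct = sp.diff(sp.integrate(G, (t, f1, f2)), x)

# ...and compare with the Leibniz-rule expression above.
leibniz = (G.subs(t, f2) * sp.diff(f2, x)
           - G.subs(t, f1) * sp.diff(f1, x)
           + sp.integrate(sp.diff(G, x), (t, f1, f2)))

print(sp.simplify(direct - leibniz))   # 0
```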
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9056136608123779, "perplexity_flag": "head"}
http://en.m.wikibooks.org/wiki/Control_Systems/Feedback_Loops
# Control Systems/Feedback Loops | | | | | |----------------------------------------------------------|------------------------|--------------|-------------| | The Wikibook of: Control Systems and Control Engineering | Table of Contents | All Versions | PDF Version | | ← Block Diagrams | Signal Flow Diagrams → | Glossary | | ## Feedback A feedback loop is a common and powerful tool when designing a control system. Feedback loops take the system output into consideration, which enables the system to adjust its performance to meet a desired output response. When talking about control systems it is important to keep in mind that engineers typically are given existing systems such as actuators, sensors, motors, and other devices with set parameters, and are asked to adjust the performance of those systems. In many cases, it may not be possible to open the system (the "plant") and adjust it from the inside: modifications need to be made external to the system to force the system response to act as desired. This is performed by adding controllers, compensators, and feedback structures to the system. ↑Jump back a section ## Basic Feedback Structure Wikipedia has related information at Feedback This is a basic feedback structure. Here, we are using the output value of the system to help us prepare the next output value. In this way, we can create systems that correct errors. Here we see a feedback loop with a value of one. We call this a unity feedback. Here is a list of some relevant vocabulary, that will be used in the following sections: Plant The term "Plant" is a carry-over term from chemical engineering to refer to the main system process. The plant is the preexisting system that does not (without the aid of a controller or a compensator) meet the given specifications. Plants are usually given "as is", and are not changeable. In the picture above, the plant is denoted with a P. Controller A controller, or a "compensator" is an additional system that is added to the plant to control the operation of the plant. The system can have multiple compensators, and they can appear anywhere in the system: Before the pick-off node, after the summer, before or after the plant, and in the feedback loop. In the picture above, our compensator is denoted with a C. Some texts, or texts in other disciplines may refer to a "summer" as an adder. Summer A summer is a symbol on a system diagram, (denoted above with parenthesis) that conceptually adds two or more input signals, and produces a single sum output signal. Pick-off node A pickoff node is simply a fancy term for a split in a wire. Forward Path The forward path in the feedback loop is the path after the summer, that travels through the plant and towards the system output. Reverse Path The reverse path is the path after the pick-off node, that loops back to the beginning of the system. This is also known as the "feedback path". Unity feedback When the multiplicative value of the feedback path is 1. ↑Jump back a section ## Negative vs Positive Feedback It turns out that negative feedback is almost always the most useful type of feedback. When we subtract the value of the output from the value of the input (our desired value), we get a value called the error signal. The error signal shows us how far off our output is from our desired input. Positive feedback has the property that signals tend to reinforce themselves, and grow larger. In a positive feedback system, noise from the system is added back to the input, and that in turn produces more noise. 
As an example of a positive feedback system, consider an audio amplification system with a speaker and a microphone. Placing the microphone near the speaker creates a positive feedback loop, and the result is a sound that grows louder and louder. Because the majority of noise in an electrical system is high-frequency, the sound output of the system becomes high-pitched.

### Example: State-Space Equation

In the previous chapter, we showed you this picture:

Now, we will derive the I/O relationship and recover the state-space equations. If we examine the inner-most feedback loop, we can see that the forward path has an integrator system, $\frac{1}{s}$, and the feedback loop has the matrix value A. If we take the transfer function only of this loop, we get:

$T_{inner}(s) = \frac{\frac{1}{s}}{1 - \frac{1}{s}A} = \frac{1}{s - A}$

Pre-multiplying by the factor B, and post-multiplying by C, we get the transfer function of the entire lower half of the loop:

$T_{lower}(s) = B\left(\frac{1}{s - A}\right)C$

We can see that the upper path (D) and the lower path $T_{lower}$ are added together to produce the final result:

$T_{total}(s) = B\left(\frac{1}{s - A}\right)C + D$

Now, for an alternate method, we can assume that x' is the value of the inner feedback loop, right before the integrator. This makes sense, since the integral of x' should be x (which, from the diagram, it is). Solving for x', with an input of u, we get:

$x' = Ax + Bu$

This is because the value coming from the feedback branch is equal to the value x times the feedback loop matrix A, and the value coming from the left of the summer is the input u times the matrix B. If we keep things in terms of x and u, we can see that the system output is the sum of u times the feed-forward value D, and the value of x times the value C:

$y = Cx + Du$

These last two equations are precisely the state-space equations of our system.

## Feedback Loop Transfer Function

We can solve for the output of the system by using a series of equations:

$E(s) = X(s) - Y(s)$

$Y(s) = Gp(s)E(s)$

and when we solve for Y(s) we get:

[Feedback Transfer Function] $Y(s) = X(s) \frac{Gp(s)}{1 + Gp(s)}$

The reader is encouraged to use the above equations to derive the result by themselves. The function E(s) is known as the error signal. The error signal is the difference between the system input (X(s)) and the system output (Y(s)). Notice that the error signal is now the direct input to the system Gp(s). X(s) is now called the reference input. The purpose of the negative feedback loop is to make the system output equal to the system input, by identifying large differences between X(s) and Y(s) and correcting for them.

### Example: Elevator

Here is a simple example of reference inputs and feedback systems: There is an elevator in a certain building with 5 floors. Pressing button "1" will take you to the first floor, and pressing button "5" will take you to the fifth floor, etc. For reasons of simplicity, only one button can be pressed at a time.

Pressing a particular button is the reference input of the system. Pressing "1" gives the system a reference input of 1, pressing "2" gives the system a reference input of 2, etc. The elevator system then tries to make the output (the physical floor location of the elevator) match the reference input (the button pressed in the elevator). The error signal, e(t), represents the difference between the reference input x(t), and the physical location of the elevator at time t, y(t).
Let's say that the elevator is on the first floor, and the button "5" is pressed at time t0. The reference input then becomes a step function: $x(t) = 5u(t - t_0)$ Where we are measuring in units of "floors". At time t0, the error signal is: $e(t_0) = x(t_0) - y(t_0) = 5 - 1 = 4$ Which means that the elevator needs to travel upwards 4 more floors. At time t1, when the elevator is at the second floor, the error signal is: $e(t_1) = x(t_1) - y(t_1) = 5 - 2 = 3$ Which means the elevator has 3 more floors to go. Finally, at time t4, when the elevator reaches the top, the error signal is: $e(t_4) = x(t_4) - y(t_4) = 5 - 5 = 0$ And when the error signal is zero, the elevator stops moving. In essence, we can define three cases: • e(t) is positive: In this case, the elevator goes up one floor, and checks again. • e(t) is zero: The elevator stops. • e(t) is negative: The elevator goes down one floor, and checks again. ### State-Space Feedback Loops In the state-space representation, the plant is typically defined by the state-space equations: $x'(t) = Ax(t) + Bu(t)$ $y(t) = Cx(t) + Du(t)$ The plant is considered to be pre-existing, and the matrices A, B, C, and D are considered to be internal to the plant (and therefore unchangeable). Also, in a typical system, the state variables are either fictional (in the sense of dummy-variables), or are not measurable. For these reasons, we need to add external components, such as a gain element, or a feedback element to the plant to enhance performance. Consider the addition of a gain matrix K installed at the input of the plant, and a negative feedback element F that is multiplied by the system output y, and is added to the input signal of the plant. There are two cases: 1. The feedback element F is subtracted from the input before multiplication of the K gain matrix. 2. The feedback element F is subtracted from the input after multiplication of the K gain matrix. In case 1, the feedback element F is added to the input before the multiplicative gain is applied to the input. If v is the input to the entire system, then we can define u as: $u(t) = Fv(t) - FKy(t)$ In case 2, the feeback element F is subtracted from the input after the multiplicative gain is applied to the input. If v is the input to the entire system, then we can define u as: $u(t) = Fv(t) - Ky(t)$ ↑Jump back a section ## Open Loop vs Closed Loop Let's say that we have the generalized system shown above. The top part, Gp(s) represents all the systems and all the controllers on the forward path. The bottom part, Gb(s) represents all the feedback processing elements of the system. The letter "K" in the beginning of the system is called the Gain. We will talk about the gain more in later chapters. We can define the Closed-Loop Transfer Function as follows: [Closed-Loop Transfer Function] $H_{cl}(s) = \frac{KGp(s)}{1 + Gp(s)Gb(s)}$ If we "open" the loop, and break the feedback node, we can define the Open-Loop Transfer Function, as: [Open-Loop Transfer Function] $H_{ol}(s) = Gp(s)Gb(s)$ We can redefine the closed-loop transfer function in terms of this open-loop transfer function: $H_{cl}(s) = \frac{KGp(s)}{1 + H_{ol}(s)}$ These results are important, and they will be used without further explanation or derivation throughout the rest of the book. ↑Jump back a section ## Placement of a Controller There are a number of different places where we could place an additional controller. In front of the system, before the feedback loop. Inside the feedback loop, in the forward path, before the plant. 
In the forward path, after the plant. In the feedback loop, in the reverse path. After the feedback loop. Each location has certain benefits and problems, and hopefully we will get a chance to talk about all of them.

## Second-Order Systems

### Damping Ratio

The damping ratio is denoted by the symbol zeta (ζ). The damping ratio gives us an idea about the nature of the transient response, detailing the amount of overshoot and oscillation that the system will undergo. This is completely independent of time scaling. If zeta is:

• zero, the system is undamped;
• zeta < 1, the system is underdamped;
• zeta = 1, the system is critically damped;
• zeta > 1, the system is overdamped.

Zeta is used in conjunction with the natural frequency to determine system properties. To find the zeta value you must first find the natural response! (A short numerical illustration is sketched at the end of this chapter.)

### Natural Frequency

## System Sensitivity
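(Referring back to the Damping Ratio section above, here is a short numerical illustration. It uses the standard second-order transfer function $\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$; the particular $\omega_n$ and $\zeta$ values are arbitrary choices for the demo, not from this chapter, and SciPy is assumed to be available.)

```python
from scipy import signal

wn = 2.0   # natural frequency in rad/s (arbitrary choice for the demo)
for zeta in (0.2, 1.0, 2.0):   # underdamped, critically damped, overdamped
    # Standard second-order system: wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
    system = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])
    t, y = signal.step(system, N=500)
    overshoot = max(y.max() - 1.0, 0.0) * 100
    print(f"zeta = {zeta}: peak step response = {y.max():.3f} "
          f"(overshoot {overshoot:.1f}%)")
```

The underdamped case overshoots and oscillates, while the critically damped and overdamped cases creep up to the final value without overshooting, which is exactly the distinction the damping ratio is meant to capture.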
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8944885730743408, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/94764-using-parameters.html
Thread: 1. Using Parameters A question from my text is: The simultaneous equations $x+2y+3z=13,\;-x-3y+2z=2$ and $-x-4y+7z=17$ have infinitely many solutions. Describe these solutions through the use of a parameter. I understand how to do this. I just don't understand the purpose of using a parameter? 2. Originally Posted by Stroodle A question from my text is: The simultaneous equations $x+2y+3z=13,\;-x-3y+2z=2$ and $-x-4y+7z=17$ have infinitely many solutions. Describe these solutions through the use of a parameter. I understand how to do this. I just don't understand the purpose of using a parameter? The parameter (let's say $t$) is used to define $x$, $y$, and $z$ solutions. Since $t\in\mathbb{R}$, that implies that $t$ can take on any value (hence, the system has infinite solution). Its just a nice way of writing the solution, because you can't possibly list all the solutions (you can even call it a general solution for any $t$)! 3. Actually. I don't quite understand how to do these. Can someone show working for: Using a parameter find all the solutions for: $2x-y+z=0$ and $y+2z=2$ Thanks 4. Originally Posted by Stroodle Actually. I don't quite understand how to do these. Can someone show working for: Using a parameter find all the solutions for: $2x-y+z=0$ and $y+2z=2$ Thanks Let $x = t$ where t is any real number. Then: $-y + z = -2t$ .... (1) $y + 2z = 2$ .... (2) Now solve (1) and (2) simultaneously for y and z in terms of t. 5. Hello, Stroodle! Using a parameter find all the solutions for: . $\begin{array}{cc}2x-y+z\:=\:0 & [1] \\ y+2z\:=\:2& [2]\end{array}$ Here's one solution . . . [There are many others.] Solve one of the equations for one of its variables. . . Solve [2] for $y\!:\;\;{\color{blue}y \:=\:2-2z}$ Substitute into the other equation. . . $2x - (2 - 2z) + z \:=\:0$ And solve for the third variable. . . ${\color{blue}x \:=\:1 - \tfrac{3}{2}z}$ So we have: . $\begin{Bmatrix}x &=& 1 - \frac{3}{2}z \\ y &=& 2 - 2z \\ z &=& z \end{Bmatrix}$ On the right side, replace $z$ with a parameter $t.$ Therefore, we have: . $\boxed{\begin{array}{ccc} x &=& 1 - \frac{3}{2}t \\ y &=& 2 - 2t \\ z &=& t \end{array}}$ 6. Awesome. Thanks for your help guys!!
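(An aside, not from the thread: if you want to check this kind of parametric answer by machine, SymPy will solve for two of the variables and leave the third one free, after which you can rename the free variable to a parameter. The snippet below assumes SymPy is installed.)

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
eqs = [sp.Eq(2*x - y + z, 0), sp.Eq(y + 2*z, 2)]

# Solve for x and y, leaving z free:
sol = sp.solve(eqs, [x, y])
print(sol)                                   # {x: 1 - 3*z/2, y: 2 - 2*z}

# Rename the free variable z to the parameter t, matching the posts above:
print({x: sol[x].subs(z, t), y: sol[y].subs(z, t), z: t})
```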
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8719103336334229, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/7485/list
## Return to Answer

Let me have a punt at this. $F$ alg closed inside $F'$ iff $\overline{F}\otimes_FF'$ is a field, right? So now it's easy because $\overline{L}$ is an algebraic closure of $F$, and I don't think I even assumed $L$ was simple over $F$. Did I miss something?

Edit: my first assertion needs justification and I can't justify it so I could be mistaken. It's clear that $\overline{F}\otimes_FF'$ is a field iff $L\otimes_FF'$ is a field for all finite extensions $L$ of $F$ (union of fields is a field; integral domain finite over a field is a field). Moreover, if $F$ is not algebraically closed in $F'$ then choose some $\alpha\in F'$ algebraic over $F$ but not in $F$, and $L=F(\alpha)$ is finite but $L\otimes_FF'$ is not a field (it contains $L\otimes L$), so one way is OK. The problem is the other way. First say $K=F(\beta)$ is finite and simple over $F$. Then $F$ alg closed in $F'$ implies $K\otimes_FF'$ is a field because if the min poly of $\beta$ factors in $F'$ then the factors are algebraic over $F$, so in $F$. But as the OP quite rightly points out, that isn't enough for me. So this is not yet an answer to the question.

Second edit: I couldn't justify my first assertion because it is false. The counterexample posted looks good to me.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9789174199104309, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/gauss-sums?sort=votes&pagesize=15
# Tagged Questions For questions on Gauss sums, a particular kind of finite sum of roots of unity. 1answer 82 views ### Gauss Sum of a Field with Four Elements I need to calculate a couple of Gauss sums to solve a problem I'm working on, but I keep getting the wrong answer because the absolute value of what I calculate is impossible for such a sum. Can ... 1answer 147 views ### A Gauss sum like summation I would like to calculate the following sum. Let $\zeta$ be a primitive $n$ th root of unity for some integer $n$. Here $n$ is not necessarily prime. The sum is \sum_{j=1}^n (-1)^j ... 1answer 141 views ### Determining the Value of a Gauss Sum. Can we evaluate the exact form of $$g\left(k,n\right)=\sum_{r=0}^{n-1}\exp\left(2\pi i\frac{r^{2}k}{n}\right)$$ for general $k$ and $n$? For $k=1$, on MathWorld we have that ... 1answer 69 views ### Prime power Gauss sums are zero Fix an odd prime $p$. Then for a positive integer $a$, I can look at the quadratic Legendre symbol Gauss sum $$G_p(a) = \sum_{n \,\bmod\, p} \left( \frac{n}{p} \right) e^{2 \pi i a n / p}$$ where ... 1answer 92 views ### The Legendre symbol for an integer but not the Jacobi symbol Let $p$ be a prime number and $\big(\frac{a}{p} \big)$ be the Legendre symbol. Then we have the equality $\sum_{a=1}^{p-1} \big(\frac{a}{p} \big) \zeta^a =\sum_{t=0}^{p-1} \zeta^{t^2}$, where ... 1answer 161 views ### Gauss-type sums for cube roots of primes (Quadratic) Gauss sums express square root of any integer as a sum of roots of unity (or of cosines of rational multiples of $2\pi$, if you will) with rational coefficients. But Kronecker-Weber ... 1answer 55 views ### Number of solutions of $N(y^{2}+x^{3}=1)=p+2ReJ(\chi,\rho)$ This is similar to a question I recently asked about. It is from Ireland's Number theory book, ch.8, ex.27 b,c. I think I can do the first part of this question, but I think there might be a trick ... 1answer 79 views ### Quadratic Gauss Sums Let $p$ be an odd prime and $\zeta \not = 1$ be a $p^{th}$ root of unity. Let $R$ denote the set of all quadratic residues in $\mathbb{F}_p^*$. If $\alpha=\sum_{r\in R} \zeta^r$, prove that \alpha ... 1answer 26 views ### Multi-dimensional MLE Guassian I wonder that what is the mu and sigma formula MLE(maximum likelihood estimates) for a 3 dimension guassian ? It is the same form as 1 and 2 dimension (+ 1 mu and sigma for the new vector) ? 1answer 74 views ### Fractions in limits of a summation What if on the sum there is a fraction in the limit? $\sum_{m=k/12}^{k}$ or $\sum_{m=0}^{k/12+1}$ thank you very much! what type of sequence is used for summing this type of interval? 1answer 84 views ### A Trigonometric Sum Related to Gauss Sums This is a problem given to me by fractals on Art of Problem Solving. I couldn't solve it so I'm posting it here for some thoughts on it. Let S = \sum_{j = 0}^{\lfloor n/2 \rfloor} ... 1answer 35 views ### Definition: Gauss Sum - Where is the error? In my algebraic number theory lecture we defined Gauss sums as follows. However, I am quite unsure whether this definition is correct (our lecturer is quite absentminded at times). My intuition says ... 1answer 78 views ### Gauss Newton minimization of 2D linear function Given the input-output relation: \$ \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} =p_1 \begin{pmatrix} p_2 & p_3 \\ p_4 & p_4 ... 1answer 52 views ### Gauss elimination with partial pivoting doubts I have the following doubts about Gauss algorithm with partial pivoting: Say that I sum to the second row the first row multiplied by $k$. 
In the $L$ matrix, should I sum to the second row the first ... 0answers 72 views ### How to prove a generalized Gauss sum formula I read the wikipedia article on quadratic Gauss sum. link First let me write a definition of a generalized Gauss sum. Let $G(a, c)= \sum_{n=0}^{c-1}\exp (\frac{an^2}{c})$, where $a$ and $c$ are ... 1answer 45 views ### Is there a way to directly compute maximum of a sum of several Gaussian functions? I have a problem which goes as follows. I am trying to predict the value of a variable $x$. I also have a set of measurements (the actual context is an image) $x^i$. I know from some training ...
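(Not attached to any one question above: a quick numeric experiment with the quadratic Gauss sum $g(k,n)=\sum_{r=0}^{n-1}\exp\left(2\pi i\frac{r^{2}k}{n}\right)$ that appears in one of the excerpts. For an odd prime $p$ and $k=1$, the absolute value squared should come out as $p$.)

```python
import cmath

def gauss_sum(k, n):
    """g(k, n) = sum over r of exp(2*pi*i * k * r^2 / n)."""
    return sum(cmath.exp(2j * cmath.pi * k * r * r / n) for r in range(n))

for p in (3, 5, 7, 11, 13):
    g = gauss_sum(1, p)
    print(p, round(abs(g) ** 2, 6))   # |g|^2 should equal p for odd primes
```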
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8865876197814941, "perplexity_flag": "head"}