http://mathoverflow.net/questions/23478/examples-of-common-false-beliefs-in-mathematics/61600
# Examples of common false beliefs in mathematics The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested in the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes. Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are (i) a bounded entire function is constant; (ii) $\sin z$ is a bounded function; (iii) $\sin z$ is defined and analytic everywhere on $\mathbb{C}$; (iv) $\sin z$ is not a constant function. Obviously, it is (ii) that is false. I think probably many people visualize the extension of $\sin z$ to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense. A second example is the statement that an open dense subset $U$ of $\mathbb{R}$ must be the whole of $\mathbb{R}$. The "proof" of this statement is that every point $x$ is arbitrarily close to a point $u$ in $U$, so when you put a small neighbourhood about $u$ it must contain $x$. Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied. - I have to say this is proving to be one of the more useful CW big-list questions on the site... – Qiaochu Yuan May 6 '10 at 0:55 The answers below are truly informative. Big thanks for your question. I have always loved your posts here on MO and WordPress. – Unknown May 22 '10 at 9:04 Wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? That would make for a highly educative and entertaining read. – Suvrit Sep 20 '10 at 12:39 It's a thought -- I might consider it. – gowers Oct 4 '10 at 20:13 Meta created tea.mathoverflow.net/discussion/1165/… – user9072 Oct 8 '11 at 14:27 If $E$ is a contractible space on which the (Edit: topological) group $G$ acts freely, then $E/G$ is a classifying space for $G$. A better, but still false, version: If $E$ is a free, contractible $G$-space and the quotient map $E\to E/G$ admits local slices, then $E/G$ is a classifying space for $G$. (Here "admits local slices" means that there's a covering of $E/G$ by open sets $U_i$ such that there exist continuous sections $U_i \to E$ of the quotient map.) The simplest counterexample is: let $G^i$ denote $G$ with the indiscrete topology (Edit: and assume $G$ itself is not indiscrete). Then $G$ acts on $G^i$ by translation and $G^i$ is contractible (for the same reason: any map into an indiscrete space is continuous). Since $G^i/G$ is a point, there's a (global) section, but it cannot be a classifying space for $G$ (unless $G=\{1\}$). The way to correct things is to require that the translation map $E\times_{E/G} E \to G$, sending a pair $(e_1, e_2)$ to the unique $g\in G$ satisfying $ge_1 = e_2$, is actually continuous.
Of course the heart of the matter here is the corresponding false belief(s) regarding when the quotient map by a group action is a principal bundle. - I'm a little confused. How does requiring that $(e_1, e_2) \mapsto g$ be continuous fix things? In the indiscrete case, this map is continuous (since every map to the group is). And why isn't $G^i \to G^i/G$ a principal $G^i$-bundle? – Richard Kent Mar 6 '11 at 17:52 The group in this example starts out with some topology. (I guess I didn't specify that I was thinking of a topological group.) If G started with the indiscrete topology, then your comment makes sense, and we would have a principal bundle for this indiscrete group. But if G is not indiscrete, then the map $(e_1, e_2) \mapsto g$ is not continuous as a map into the topological group G. The proof that continuity of the translation map forces this to be a principal bundle can be found in Husemoller's book on fiber bundles (it's not hard). Let me know if this didn't answer your questions. – Dan Ramras Mar 6 '11 at 19:57 Oh! You're saying that a point is not a classifying space for G with some other topology. I thought you were saying that $G^i/G$ wasn't $BG^i$. Thanks for the clarification! – Richard Kent Mar 6 '11 at 20:01 Yes, precisely. It's an odd little example, but helpful when people forget to include the proper conditions... – Dan Ramras Mar 6 '11 at 21:06 Maybe an even more amazing wrong belief in this field: $\dim(E/G)\le\dim E$ (there are counterexamples by A.N. Kolmogorov) – mikhail skopenkov Jun 9 '11 at 14:52 I don't know how common this is, but I noticed it half an hour ago in some notes I had written: If $J$ is a finitely generated right ideal of a not necessarily commutative ring $R$, and $n$ is natural, then $J^n$ is finitely generated, isn't it? No, it isn't. For an example, try $R=\mathbb Z\left\langle X_1,X_2,X_3,...\right\rangle$ (ring of noncommutative polynomials) and $J=X_1R$. - Omg, I will have to be careful about that. Thanks Darij ;). – Martin Brandenburg Apr 12 '11 at 8:45 A degree $k$ map $S^n\to S^n$ induces multiplication by $k$ on all the homotopy groups $\pi_m(S^n)$. (Not sure if this is a common error, but I believed it implicitly for a while and it confused me about some things. If you unravel what degree $k$ means and what multiplication by $k$ in $\pi_m$ means, there's no reason at all to expect this to be true, and indeed it is false in general. It is true in the stable range: since $S^n$ looks like $\Omega S^{n+1}$ in the stable range, "degree $k$" can be defined in terms of the H-space structure on $\Omega S^{n+1}$, and an Eckmann-Hilton argument applies.) - If $n$ is even and $x \in \pi_{2n-1}(S^n)$ and $f$ a degree $k$ map and $H$ the Hopf invariant, then $H(f_* (x)) = k^2 H(x)$. A related misbelief: if $M$ is a framed manifold and $N \to M$ a finite cover of degree $d$, then the framed bordism classes satisfy $[N]=d [M]$. Completely wrong. – Johannes Ebert Apr 14 '11 at 9:04 A random $k$-coloring of the vertices of a graph $G$ is more likely to be proper than a random $(k-1)$-coloring of the same graph. (A vertex coloring is proper if no two adjacent vertices are colored identically. In this case, random means uniform among all colorings, or equivalently, that each vertex is i.i.d. colored uniformly from the space of colors.) - ...wait, what's the truth then? – Harry Altman May 10 '11 at 0:06 It sounds plausible. – Michael Hardy May 10 '11 at 0:34 For some graphs $G$ and integers $k$, the opposite.
The easiest example is the complete bipartite graph $K_{n,n}$ with $k=3$. The probability a $2$-coloring is proper is about $(1/4)^n$ while the same for a $3$-coloring is about $(2/9)^n$, where I've ignored minor terms like constants. The actual probabilities cross at $n=10$, so as an explicit example, a random $2$-coloring of $K_{10,10}$ is more likely to be proper than a random $3$-coloring. – aorq May 10 '11 at 0:37 This seems like a good example of a counterintuitive statement, but to call it a common false belief would mean that there are lots of people who think it's true. The question would probably never have occurred to me if I hadn't seen it here. The false belief that Euclid's proof of the infinitude of primes is by contradiction, on the other hand, actually gets asserted in print by mathematicians---in some cases good ones. – Michael Hardy May 10 '11 at 15:36 False statement: If $A$ and $B$ are subsets of $\mathbb{R}^d$, then their Hausdorff dimension $\dim_H$ satisfies $$\dim_H(A \times B) = \dim_H(A) + \dim_H(B).$$ EDIT: To answer Benoit's question, I do not know about a simple counterexample for $d = 1$, but here is the usual one (taken from Falconer's "The Geometry of Fractal Sets"): Let $(m_i)$ be a sequence of rapidly increasing integers (say $m_{i+1} > m_i^i$). Let $A \subset [0,1]$ denote the numbers with a zero in the $r^{th}$ decimal place if $m_j + 1 \leq r \leq m_{j+1}$ and $j$ is odd. Let $B \subset [0,1]$ denote the numbers with a zero in the $r^{th}$ decimal place if $m_{j} + 1 \leq r \leq m_{j+1}$ and $j$ is even. Then $\dim_H(A) = \dim_H(B) = 0$. To see this, you can cover $A$, for example, by $10^k$ intervals of length $10^{- m_{2j}}$, where $k = (m_1 - m_0) + (m_3 - m_2) + \dots + (m_{2j - 1} - m_{2j - 2})$. Furthermore, if $\mathcal{H}^1$ denotes the Hausdorff $1$-dimensional (metric) outer measure, then the result follows by showing $\mathcal{H}^1(A \times B) > 0$. This is accomplished by considering $u \in [0,1]$ and writing $u = x + y$, where $x \in A$ and $y \in B$. Let $proj$ denote orthogonal projection from the plane to $L$, the line $y = x$. Then $proj(x,y)$ is the point of $L$ with distance $2^{-1/2}(x+y)$ from the origin. Thus, $proj( A \times B)$ is a subinterval of $L$ of length $2^{-1/2}$. Finally, it follows: $$\mathcal{H}^1(A \times B) \geq \mathcal{H}^1(proj(A \times B)) = 2^{-1/2} > 0.$$ - Well, it's disappointing that this fails, although it hadn't occurred to me to conjecture it. – Toby Bartels Apr 4 '11 at 9:53 Actually, the situation is worse than I say: there exist sets $A, B \subset \mathbb{R}$ with $\dim_H(A \times B) = 1$, and yet $\dim_H(A) = \dim_H(B) = 0$. – JavaMan Apr 5 '11 at 6:22 By the way, is there a simple counter-example with $A=B$? – Benoît Kloeckner May 9 '11 at 7:51 Nice, I did not know that, though Hausdorff dimension is part of my mathematical life! But the sets I study (Julia sets in complex dimension one) usually are uniform enough that this does not occur, I guess. Here's what happens, morally, in the example given here: the scales epsilon at which you have good covers of A and the scales at which you have good covers of B are disjoint. The products of these good covers are extremely distorted: they are thin rectangles, instead of squares. – Arnaud Chéritat Oct 18 '15 at 13:25 A possible false belief is that "a maximal Abelian subgroup of a compact connected Lie group is a maximal torus". Think of the $\mathbf Z_2\times\mathbf Z_2$-subgroup of $SO(3)$ given by diagonal matrices with $\pm1$ entries. - Fu...
I just "proved" that again as an exercise a few days ago. – Johannes Hahn Mar 6 '13 at 0:02 A common trap which sometimes I see people fall is that a Hermitian matrix $M$ is negative definite if and only if its leading principal minors are negative. What is true is the Sylvester's criterion, which says that $M$ is positive definite if and only if its principal minors are positive. Thus, the true statement is that $M$ is negative definite if and only if the principal minors of $-M$ are positive. - That Darboux functions are continuous is certainly a widely held belief among students, at least in France where it is induced by the way continuity is taught in high school. I remember having gone through all the five "stages of grief" when shaken from this false belief with the $sin(1/x)$ example : denial, anger ( "then the definition of continuity must be wrong ! Let's change it !), bargaining ("Ok, but a Darboux function must surely be continuous except at exceptional points. Let's prove that..."), depression (when shown a nowhere continuous Darboux function), acceptance ("Hey guys, you really think the intermediate value theoem has a converse ? C'mon, you're smarter than that...") - Let $(X,\tau)$ be a topological space. The false belief is: "Every sequence $(x_n)$ in $X$ with an accumulation point $a\in X$ has a subsequence that converges to $a$". I subscribed to this intuitively until I stumbled over a counterexample, see http://dominiczypen.wordpress.com/2014/10/13/accumulation-without-converging-subsequence/ - Some things from pseudo-Riemannian geometry are a bit hard to swallow for students who have had previous exposure to Riemannian geometry. Aside from the usual ones arising from sign issues (like, in a two dimensional Lorentzian manifold with positive scalar curvature, time-like geodesics will not have conjugate points), an example is that in Riemannian manifolds, connectedness + geodesic completeness implies geodesic connectedness (every two points is connected by a geodesic). This is not true for Lorentzian manifolds, and the usual example is the pseudo-sphere. - I just realized yesterday that, given $A \to C, B \to C$ in an abelian category, the kernel of $A \oplus B \to C$ is not the direct sum of the kernels of $A \to C, B \to C$. - A common false belief is that all Gödel sentences are true because they say of themselves they are unprovable. See Peter Milne's "On Goedel Sentences and What They Say", Philosophia Mathematica (III) 15 (2007), 193–226. doi:10.1093/philmat/nkm015 - Two very common errors I see in (bad) statistics textbooks are (i) zero 3rd moment implies symmetry (though generally stated in terms of "skewness", where skewness has just been defined as a scaled third moment) (ii) the median lies between the mean and the mode (I have seen a bunch of related errors as well.) Another one I often see is some form of claim that the t-statistic goes to the t-distribution (with the usual degrees of freedom) in large samples from non-normal distributions. Even if we take as given that the samples are drawn under conditions where the central limit theorem holds, this is not the case. I have even seen (flawed) informal arguments given for it. 
What does happen (given that some form of the CLT applies) is that Slutsky's theorem implies the t-statistic goes to a standard normal as the sample size goes to infinity, and of course the t-distribution also goes to the same thing in the limit - but so, for example, would a t-distribution with only half the degrees of freedom - and countless other things would as well. The first two errors are readily demonstrated to be false by simple counterexample, and to talk people out of the third usually only requires pointing out that the numerator and denominator of the t-statistic won't be independent if the distribution is non-normal, or any of several other issues, and they usually realize quite quickly that you can't just hand-wave this folk-theorem into existence. - In the statistics text at the college where I teach, (ii) is universal among the examples given, so I formulated the conjecture; but when I tried to prove it and thought about what the mode really is, I realised how badly behaved that can be and found immediate counterexamples. (Then this gets me wondering why anybody would bother using the mode as a statistic for anything, since it's pretty much meaningless, but never mind.) – Toby Bartels Apr 4 '11 at 9:24 Toby: sure, you use the mode for cases when the domain of the measurement is not an ordered set but just a set without structure and so the median wouldn't make sense. – Zsbán Ambrus Apr 7 '11 at 12:01 Here's one I was reminded of recently during lunch in the common room. A maximal abelian subalgebra of a semisimple Lie algebra is a Cartan subalgebra. This is true for compact real forms of semisimple Lie algebras, but fails in general. The missing condition is that the subalgebra should equal its normaliser. - For example, the usual proof of Schur's theorem on commutative algebras of matrices produces abelian Lie subalgebras of $\mathfrak{sl}_n$ much larger than the rank of $\mathfrak{sl}_n$. – Mariano Suárez-Alvarez Aug 4 '10 at 23:41 Yes, my favourite example (being a physicist) is the stabiliser in $\mathfrak{so}(1,n)$ of a nonzero zero-norm vector in $\mathbb{R}^{1,n}$ for $n>4$. The stabiliser contains an abelian ideal (infinitesimal null rotations) of dimension $n-1$. – José Figueroa-O'Farrill Aug 5 '10 at 0:02 I meant to write $n>3$ above. – José Figueroa-O'Farrill Aug 5 '10 at 0:03 "The missing condition is that the subalgebra should equal its normaliser", or that the subalgebra consists of semisimple elements, no? (That provides another perspective on why it's true for compact real forms.) – L Spice Dec 12 '13 at 23:26 Consider the following well-known result: Let $(E,\leq)$ be an ordered set. Then the following are equivalent: (i) Every nonempty subset of $E$ has a maximal element. (ii) Every increasing sequence in $E$ is stationary. It is immediate that (i) implies (ii). To prove the converse, one assumes that (i) is false and then "constructs step by step" a strictly increasing sequence. The common mistake (which I have seen in textbooks) is to describe the latter construction as a proof by induction. In fact, the construction uses the axiom of choice (or at least the dependent choice axiom). (As a special case, I don't think ZF can prove that every PID is a UFD.) - It's not exactly wrong to call it a proof by induction. In ZFC, the proof of dependent choice — or of just about any instance of it, e.g. the one here — works by combining induction and choice.
So I'd agree it's wrong to sweep the choice under the carpet; but if you're not explicitly invoking DC, then you will be using induction as well. – Peter LeFanu Lumsdaine Dec 1 '10 at 15:34 Peter, let's state DC as follows: "If $(p_n:X_{n+1}\to X_n)$ is an $\mathbb{N}$-projective system of nonempty sets with all $p_n$ surjective, then projlim($X_n$) is nonempty." Proof from AC: put $X:=\coprod_{n\geq0}X_n$ and $X^+=\coprod_{n>0}X_n$ with obvious map $p:X^+\to X$. Then $p$ is onto, so has a section $s$ (family of sections of all $p_n$'s). Given $x_0\in X_0$, the sequence $(s^n(x_0))$ is an element of projlim($X_n$). I agree that we do need induction to define $s^n$. But iteration of a map is such a basic tool that I don't agree to call any proof using it a "proof by induction". – Laurent Moret-Bailly Dec 7 '10 at 11:49 Draw the graph of a continuous function $f$ (from $\mathbb{R}$ to $\mathbb{R}$). Now draw two dashed curves: one which is everywhere a distance $\epsilon$ above the graph of $f$ and one which is everywhere a distance $\epsilon$ below the graph of $f$. Then the open $\epsilon$-ball around $f$ (with respect to the uniform norm) is all functions which fit strictly between the two dashed curves. - Surely this is true if you are talking about the closed ball, and only just barely false for the open ball (and if we were talking about functions from $[a,b]$ to $\mathbb{R}$ it would be true)? Or else I am one of those with the false belief... – Nate Eldredge Oct 10 '10 at 18:26 You are right, I should have specified open ball, thanks. I think it is just barely false for the open ball. Honestly, I held this false belief until a couple of days ago, and I haven't thought much about correcting my belief. Probably the real open epsilon ball is the union of all functions that fit between dashed curves a distance strictly less than epsilon away from $f$? At any rate, I think the above picture is the right way to think about it most of the time. But it gives results such as $\tan^{-1}$ being in the open ball of radius $\pi/2$ centered at $0$ if you interpret it literally. – user4977 Oct 10 '10 at 19:24 Hmm, very nice (once clarified to the open ball)! Easily dispelled as soon as you question it, but I could easily imagine using it without thinking and missing the alternation of quantifiers that's going on under the surface. – Peter LeFanu Lumsdaine Dec 1 '10 at 15:30 "If a field $K$ has characteristic 0 and $G$ is a group, then all $KG$-modules are completely reducible." True for finite groups but very false in general. - 1- A very common mistake of first-year students (but not of a single mathematician) is to believe that "a transitive and symmetric relation on a set is reflexive". But the empty set is a transitive and symmetric relation that is not reflexive on any non-empty set. Of course there are lots of non-trivial examples as well. 2- Another common mistake is believing that the statement "a countable union of countable sets is again countable" does not depend on the axiom of choice (AC). Many people prove this statement without mentioning the axiom of choice. Indeed, in his holy book Algebra, Lang proves this statement just by taking an ordering of each countable set and continues without mentioning AC. - For big-list questions, it's usually best to post independent answers as separate answers. – Nate Eldredge Dec 2 '10 at 15:13 +1 for #2. Baby Rudin is another offender.
And many authors use so-called "diagonalization tricks" for proving compactness theorems like Arzela-Ascoli and Prohorov, which typically reduce to the compactness of $[0,1]^\mathbb{N}$. – Nate Eldredge Dec 2 '10 at 15:21 Isn't the more correct statement that a transitive and irreflexive relation is asymmetric? – Zsbán Ambrus Dec 4 '10 at 22:46 Duality reverses inclusions of vector spaces. - That's funny, because I don't imagine this kind of idea would occur to someone who has just learned the definition of a dual space. That would be a strangely sophisticated mistake to make. – Thierry Zell Apr 7 '11 at 0:21 And, once you learn that this is wrong, you can make the opposite mistake. See my comments here sbseminar.wordpress.com/2011/02/22/sobolev-spaces-on-manifolds on how surprised I was that duality DOES reverse the inclusions between Sobolev spaces. – David Speyer Apr 11 '11 at 12:07 The mistake is somehow in the wording. The dual of the inclusion morphism is reversed, it's just not an inclusion anymore. – Turion Sep 4 '15 at 15:42 I have heard the following a few times: "If $f$ is holomorphic on a region $\Omega$ and not one-to-one, then $f'$ must vanish somewhere in $\Omega$." $f(z)=e^z$ of course is a counterexample. - Is it true though if the image is simply-connected? – KotelKanim Apr 17 '11 at 11:33 Thanks, that's great. I never thought about it before, but it just sounded right... – KotelKanim Apr 19 '11 at 7:46 Not true. Take $f(z)=z^3-3z$ and restrict it to the complement of $\lbrace 1,-1\rbrace$ so that $f'(z)$ is never $0$. It maps this domain onto $\mathbb C$. – Tom Goodwillie May 4 '11 at 0:16 @TomGoodwillie: what if both domain and image are simply-connected? – Michael Dec 3 '13 at 0:45 This is more of a false philosophy than a clear mistake, but nevertheless it is very common: A compact topological space must be "small" in some sense: it should be second countable or separable or have cardinality $\le 2^{\aleph_0}$, etc. This is all true for compact metric spaces, but in the general case, Tychonoff's theorem gives plenty of examples of compact spaces which are "huge" in the above sense. - More or less along the same lines, one is usually led to think that "every topological space is Hausdorff". – Marco Golla May 4 '11 at 14:43 This is a common error made by mature mathematicians in many books and papers in analysis, especially in differential equations: If $X$ is a closed subspace of a Banach space $Y$, then $Y^*$ (the dual of $Y$) is isomorphic to a subspace of $X^*$ (the dual of $X$). It is false (of course) since the Euclidean space $\mathbb R$ is a subspace of $\mathbb R^2$, yet the dual of $\mathbb R^2=\mathbb R^2$ is not isomorphic to a subspace of the dual of $\mathbb R=\mathbb R$. I guess, sometimes they really, really want it to be true. Cheers Boris - I'll take your word for it, but since that statement is false even without introducing norms and topology, it staggers me that people could even believe that. They might say it without thinking, I guess – Yemon Choi Oct 20 '10 at 2:58 I would also be shocked if this really gets believed often! It seems to be a sort of "mis-dualisation": they dualise "$X$ is a subobject of $Y$" to "$Y^*$ is a subobject of $X^*$", where the correct dual is "$X^*$ is a quotient of $Y^*$".
– Peter LeFanu Lumsdaine Dec 1 '10 at 15:19 Teaching introduction to analysis, I had students using the "fact" that if $f: [a,b] \rightarrow \mathbb{R}$ is continuous, then $[a,b]$ can be divided into subintervals $[a,c_1],[c_1,c_2],...,[c_n,b]$ such that $f$ is monotone on every subinterval. For instance you can use this "fact" to "prove" the (true) fact that $f$ must be bounded on $[a,b]$. Also, some students used the same "fact", but with countably many subintervals. I found this mistake hard to explain to students, because constructing a counterexample (such as the Weierstrass function) is impossible at the knowledge level of an introductory course. - Why not $x \sin(1/x)$ as an example? – user9072 Jan 2 '14 at 17:33 It is in the case of finitely many subintervals, but not in the case of countably many subintervals. – Izhar Oppenheim Jan 2 '14 at 19:17 You can surely discuss fractal shapes without needing to go into the details of a technical counterexample. The point seems to be that it is hard to imagine that "increasing at a point" and "increasing in a neighborhood of a point" are not the same for continuous functions. You can give easy examples showing that indeed they disagree, locally, and fractals suggest that you can make the disagreement happen everywhere. You can revisit this later, once more technology has been set in place. – Andrés E. Caicedo Jan 2 '14 at 23:44 While technically it is true that one can do it with countably many for the function I gave (if one includes degenerate intervals), I would be surprised if not at least some (or rather most) of the confusion of the students could be addressed by the example (possibly continuing with discussion along the lines suggested by @AndresCaicedo). – user9072 Jan 5 '14 at 16:50 Many students believe that every abelian subgroup is a normal subgroup. - "A real symmetric matrix is positive-definite iff all the leading principal minors are positive, and positive-semidefinite iff all the leading principal minors are nonnegative." This paper collects some evidence that this belief is "common", and presents a counterexample (of size $3\times 3$; exercise: find an example of size $2\times 2$). (Related to, but not the same as this answer.) - $$\pmatrix{0&0\cr0&-1\cr}$$ – Gerry Myerson Jul 29 '15 at 3:22 Yet another one: Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be differentiable. If $f'(x_0) > 0$, then there exists an interval $I$ containing $x_0$ such that $f$ is increasing in $I$. - I sort of find it hard to believe that amongst the nearly 200 answers on this thread (and just over 20 deleted ones), no one has posted this. – Asaf Karagila Aug 10 '15 at 6:07 A counterexample necessarily has $f'$ discontinuous at $x_0$, right? For example $f(x)=x^2 \sin(1/x)+x/2$ and $x_0 = 0$. – Sebastien Palcoux Aug 10 '15 at 8:05 @SébastienPalcoux Yes, I think if $f'$ is continuous at $x_0$ then the statement is true. – Shamisen Aug 10 '15 at 15:16 Common false belief: a space that is locally homeomorphic to $\mathbb{R}^n$ must be Hausdorff. More generally, many people forget that the usual definition of a manifold contains the Hausdorff and paracompact conditions. There are of course examples that show that forgetting this assumption leads to unexpected results, and they are in fact much wilder than I knew a few weeks ago. Notably, among examples of (Hausdorff) non-paracompact "manifolds" are the well-known long line and the Prüfer manifold, constructed from a closed half-plane by attaching a half-plane at each boundary point.
Added: Let me give a particular case of this false belief to illustrate what kind of weird things can happen that most people would not realize when they are sloppy with the paracompact hypothesis: there exists a path-connected, locally contractible, simply-connected space that admits non-trivial locally trivial bundles with fiber $[0,1]$. Indeed, the first octant in the product of two long lines is not homeomorphic to the product of a long ray with an interval, but has a natural bundle structure over a long ray. - This one has bitten me and some very good mathematicians I know. Let $X,Y$ be Banach spaces, and let $E \subset X$ be a dense subspace. Suppose $T : E \to Y$ is a bounded linear operator. Then $T$ has a unique bounded extension $\tilde{T} : X \to Y$. (True, this is the well-known and elementary "BLT theorem".) If $T$ is injective then so is $\tilde{T}$. (False! See this answer for a counterexample.) - Here's a mistake I've seen from students taking a first course in linear analysis. For a vector $g$ in a Hilbert space $H$, it is true that $\langle f,g\rangle=0$ for every $f\in H$ implies $g=0$. This leads to the mistaken claim: "Let $(g_n)$ be a sequence in $H$. If, for every $f\in H$, $\langle f,g_n\rangle\to0$, then $g_n\to 0$." - You wrote: "Here's a mistake I've seen from students taking a first course in linear analysis." Then you wrote: "For a vector $g$ in a Hilbert space, $\langle f,g\rangle$ for every $f \in H$ implies $g = 0$." At this point the reader could be wondering why that is a mistake. – Michael Hardy Dec 1 '10 at 22:35 ....sorry; I meant "$\langle f,g \rangle = 0$ for every[....]" – Michael Hardy Dec 1 '10 at 22:36 @Michael: all answers are CW; so if we think some wording needs clarifying, we can do it ourselves! – Peter LeFanu Lumsdaine Dec 2 '10 at 0:43 Thanks for the edits - that's a lot clearer. – Ollie Margetts Dec 2 '10 at 13:49 I saw many students using the "fact" that for a subset $S$ of a group one has $SS^{-1}=\{e\}$ - This is an interesting example, because it addresses the mistakes that come from the all-too-frequent confusion with notations. But we need our shortcuts, our $f^{-1}(x)$ versus $x^{-1}$, etc. Obtaining concise notations while avoiding confusion: a tricky proposition! – Thierry Zell Apr 14 '11 at 15:50
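A quick worked check of why that last "fact" fails for any subset with more than one element: for $S \subseteq G$, the set $SS^{-1}=\{st^{-1} : s,t \in S\}$ always contains $e$, but taking $S=\{e,g\}$ with $g^2\neq e$ gives $$SS^{-1}=\{ee^{-1},\,eg^{-1},\,ge^{-1},\,gg^{-1}\}=\{e,\,g^{-1},\,g\}\neq\{e\},$$ and in general $SS^{-1}=\{e\}$ holds exactly when $S$ is a singleton (distinct $s,t\in S$ give $st^{-1}\neq e$).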
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.891316831111908, "perplexity": 388.80486049193246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00083-ip-10-164-35-72.ec2.internal.warc.gz"}
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=46B99
AMS eContent Search Results

Matches for: msc=(46B99) AND publication=(all). Sort order: Date. Results: 1 to 30 of 72 found.

[1] Cyril Tintarev. Four proofs of cocompactness for Sobolev embeddings. Contemporary Mathematics 693 (2017) 321-329.
[2] W. H. Schikhof and E. Olivos. A note on Banach spaces over a rank 1 discretely valued field. Contemporary Mathematics 665 (2016) 279-288.
[3] Itaï Ben Yaacov. The linear isometry group of the Gurarij space is universal. Proc. Amer. Math. Soc. 142 (2014) 2459-2467.
[4] Don Hadwin, Zhe Liu and Eric Nordgren. Closed densely defined operators commuting with multiplications in a multiplier pair. Proc. Amer. Math. Soc. 141 (2013) 3093-3105.
[5] Costas Poulios. Regular methods of summability on tree-sequences in Banach spaces. Proc. Amer. Math. Soc. 139 (2011) 259-271. MR 2729088.
[6] L. Nguyen Van Thé. Structural Ramsey theory of metric spaces and topological dynamics of isometry groups. Memoirs of the AMS 206 (2010). MR 2667917.
[7] Timothy Ferguson. Continuity of extremal elements in uniformly convex spaces. Proc. Amer. Math. Soc. 137 (2009) 2645-2653. MR 2497477.
[8] Sergey Antonyan. Corrigendum to "West's problem on equivariant hyperspaces and Banach-Mazur compacta". Trans. Amer. Math. Soc. 358 (2006) 5631-5633. MR 2238929.
[9] Tepper L. Gill, Sudeshna Basu, Woodford W. Zachary and V. Steadman. Adjoint for operators in Banach spaces. Proc. Amer. Math. Soc. 132 (2004) 1429-1434. MR 2053349.
[10] Sergey Antonyan. West's problem on equivariant hyperspaces and Banach-Mazur compacta. Trans. Amer. Math. Soc. 355 (2003) 3379-3404. MR 1974693.
[11] Daniel Azagra and Juan Ferrera. Every closed convex set is the set of minimizers of some $C^{\infty}$-smooth convex function. Proc. Amer. Math. Soc. 130 (2002) 3687-3692. MR 1920049.
[12] Yudi Soeharyadi. On the comparison of the spaces $L^1BV(\mathbb{R}^n)$ and $BV(\mathbb{R}^n)$. Proc. Amer. Math. Soc. 130 (2002) 405-412. MR 1862119.
[13] P. G. Casazza, C. L. García and W. B. Johnson. An example of an asymptotically Hilbertian space which fails the approximation property. Proc. Amer. Math. Soc. 129 (2001) 3017-3023. MR 1840107.
[14] P. Kiriakouli. A classification of Baire-1 functions. Trans. Amer. Math. Soc. 351 (1999) 4599-4609. MR 1407705.
[15] Daniel Carando and Ignacio Zalduendo. A Hahn-Banach theorem for integral polynomials. Proc. Amer. Math. Soc. 127 (1999) 241-250. MR 1458865.
[16] Cecília S. Fernandez. A counterexample to the Bartle-Graves selection theorem for multilinear maps. Proc. Amer. Math. Soc. 126 (1998) 2687-2690. MR 1485473.
[17] Fernando Galaz-Fontes. Note on compact sets of compact operators on a reflexive and separable Banach space. Proc. Amer. Math. Soc. 126 (1998) 587-588. MR 1443386.
[18] M. Meyer, G. Mokobodzki and M. Rogalski. Convex bodies and concave functions. Proc. Amer. Math. Soc. 123 (1995) 477-484. MR 1254848.
[19] Gun-Marie Lövblom. Uniform homeomorphisms between the unit balls in $L_p$ and $l_p$. Proc. Amer. Math. Soc. 123 (1995) 405-409. MR 1227523.
[20] W. T. Gowers. A Banach space not containing $c_0$, $l_1$ or a reflexive subspace. Trans. Amer. Math. Soc. 344 (1994) 407-420. MR 1250820.
[21] Haskell Rosenthal. A characterization of Banach spaces containing $c_0$. J. Amer. Math. Soc. 7 (1994) 707-748. MR 1242455.
[22] Robert Cauty and Tadeusz Dobrowolski. Applying coordinate products to the topological identification of normed spaces. Trans. Amer. Math. Soc. 337 (1993) 625-649. MR 1210952.
[23] J. A. Jaramillo and A. Prieto. Weak-polynomial convergence on a Banach space. Proc. Amer. Math. Soc. 118 (1993) 463-468. MR 1126196.
[24] Janusz Matkowski. Subadditive functions and a relaxation of the homogeneity condition of seminorms. Proc. Amer. Math. Soc. 117 (1993) 991-1001. MR 1113646.
[25] Paweł Domański and Augustyn Ortyński. Complemented subspaces of products of Banach spaces. Trans. Amer. Math. Soc. 316 (1989) 215-231. MR 937243.
[26] Saharon Shelah and Juris Steprāns. A Banach space on which there are few operators. Proc. Amer. Math. Soc. 104 (1988) 101-105. MR 958051.
[27] Gerald Beer. On the Young-Fenchel transform for convex functions. Proc. Amer. Math. Soc. 104 (1988) 1115-1123. MR 937844.
[28] Charles Stegall. Generalizations of a theorem of Namioka. Proc. Amer. Math. Soc. 102 (1988) 559-564. MR 928980.
[29] A. Ülger. Weakly compact bilinear forms and Arens regularity. Proc. Amer. Math. Soc. 101 (1987) 697-704. MR 911036.
[30] Jan van Mill. Domain invariance in infinite-dimensional linear spaces. Proc. Amer. Math. Soc. 101 (1987) 173-180. MR 897091.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9482223391532898, "perplexity": 1539.840726862948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812855.47/warc/CC-MAIN-20180219231024-20180220011024-00418.warc.gz"}
https://absoblogginlutely.net/pimp-your-powershell-prompt/
# Pimp your Powershell Prompt

I use PowerShell a lot at work – I'm not a guru by any means and I often find it hard to remember the commands I have run in a session, either for future use or for documenting in my time sheet (which also acts as a point of reference for future helpdesk tickets). When I started going through the PowerShell in a Month of Lunches book (which I highly recommend, as is the PowerShell v3 book) I decided to use the Start-Transcript cmdlet to record all my PowerShell activities. This worked very well until I would scroll through several screens' worth and then forget what file I had saved my transcript to. There was also the possibility of forgetting to start the transcript at all. By using the PowerShell profile file I was able to enter the commands to automatically name the transcript after the current date. I was then able to modify the title of the PowerShell window to display the filename, so I could always see where the file was saved – with the added bonus that the variable is available if I ever need to open the transcript.

My next step was to include the time in the PowerShell prompt – this enables me to go back through the transcript and see how long it took to run the commands for my timesheet entries. Thinking back to the good old DOS days, I remembered the prompt command. After a quick bit of experimenting with the Get-Date command, I had the current time displayed at the beginning of the PowerShell prompt. Note this is displayed after the previous command is run, so technically it's not the exact current time, but the time that the prompt was displayed on the screen.

The final profile script, which can be copied and pasted into Notepad, is as follows:

```cd \andy\powershellinamonthoflunches
# Build a transcript filename from the username and the current date/time
# (note the $env:username drive syntax, not $env.username)
$log = "c:\temp\powershelllogs-" + $env:username + (get-date -uformat "%y%m%d-%H%M") + ".txt"
start-transcript $log
# Show the transcript filename in the window title
$host.ui.rawui.WindowTitle = $log
# Prefix each prompt with the current time
function prompt {
    write-host ((get-date -uformat %T) + "PS " + $(get-location) + ">") -nonewline
    return " "
}```

This ends up with a PowerShell prompt that looks like the following. Hope this brief posting inspires you to change your PowerShell prompt to be even more useful for you.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9058356285095215, "perplexity": 1416.691759846359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145648.56/warc/CC-MAIN-20200222023815-20200222053815-00206.warc.gz"}
https://www.physicsforums.com/threads/need-help-with-trigonometric-equation.300651/
# Need help with trigonometric equation

1. Mar 18, 2009

### andrelutz001

Hi All, I'm currently attempting to work ahead on my precalculus and I'm looking at trigonometric equations. I seem to have a bit of a problem with this example (I haven't had too many issues with the rest of the exercises): 9cos(2x)+sin(x)=9 (solve for x in the interval 0 <= x <= 2pi). I'm thinking that I could move 9 to the other side and hence the equation will equal 1: cos(2x)+sin(x)=9/9, cos(2x)+sin(x)=1. I know that cos(2x)=cos^2(x)-sin^2(x) and then the equation should look like this: cos^2(x)-sin^2(x)+sin(x)=1. Am I on the right track? What is the next step from here? Andrei

2. Mar 18, 2009

### danago

Whoa, be careful what you do there; if you are going to divide by 9, make sure you divide EVERYTHING by 9, so it becomes: cos(2x) + (1/9)sin(x) = 1. Anyway, other than that, I would probably proceed in the same way as you did. With these types of questions, it is often easiest to reduce the equation to a form such that only one type of trig function is present. Can you see how a common identity can be used to further reduce the equation down to one with only sines? What common type of equation does this then resemble?

3. Mar 18, 2009

### andrelutz001

Thank you for replying, danago. That definitely helps. So after dividing everything by 9 I'm getting: cos(2x)+1/9sin(x)=1. I can use the double angle formula to reduce the equation to one type of trig function, hence: 1-2sin^2(x)+1/9sin(x)=1, so -2sin^2(x)+1/9sin(x)=0. And I now have a quadratic equation of the form 2u^2-(1/9)u=0, where u = sin(x). I can use the quadratic formula and I'm nearly done. Many thanks.

4. Mar 18, 2009

### Staff: Mentor

When you write cos(2x)+1/9sin(x)=1, some people might (incorrectly) take the sine term to be 1/(9sin(x)). You can write this more clearly as 1/9 * sin(x). You can use the quadratic formula to solve -2sin^2(x)+1/9sin(x)=0, but it's quicker and simpler just to factor sin(x) from each term to get sin(x)(-2sin(x) + 1/9) = 0, and then set each factor to 0 to solve for sin(x) and then x.
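Carrying that factoring through to explicit solutions (a worked completion of the thread's last step, on the stated interval $0 \le x \le 2\pi$):

$$\sin(x)\left(-2\sin(x)+\tfrac{1}{9}\right)=0 \;\Longrightarrow\; \sin(x)=0 \ \text{ or } \ \sin(x)=\tfrac{1}{18},$$

$$x \in \{0,\ \pi,\ 2\pi\} \cup \left\{\arcsin\tfrac{1}{18},\ \pi-\arcsin\tfrac{1}{18}\right\} \approx \{0,\ 0.0556,\ 3.0860,\ \pi,\ 2\pi\}.$$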
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.855642557144165, "perplexity": 610.147881749281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948594665.87/warc/CC-MAIN-20171217074303-20171217100303-00217.warc.gz"}
http://mathhelpforum.com/calculus/122649-problem-periodic-function.html
Math Help - Problem with periodic function

1. Problem with periodic function

My problem is as follows: Let $f(x)=xh(x)$, where $h(x)$ is a periodic function with period 1. Prove or give a counterexample to the following statement: The function $f$ is increasing if and only if $h$ is constant (and the constant is positive). We assume that $h$ is defined on the whole of $\mathbb{R}$. Thank you for help.

2. The "if" part is trivial. For the "only if" part: since $h$ is a periodic function with period 1, we only need to focus on the closed interval $[0,1]$. To prove $h$ is a positive constant, it suffices to prove that $h$ is nondecreasing on $[0,1]$, which (together with $h(0)=h(1)$) implies $h$ is constant on $[0,1]$. This can be proved by contradiction! Suppose there are two points $x_1 < x_2$ in $[0,1]$ such that $h(x_1)>h(x_2)$. Then $k+x_2>k+x_1$, where $k$ is an arbitrary positive integer. Since $f$ is increasing, we have $f(k+x_2)-f(k+x_1)=k(h(x_2)-h(x_1))+x_2h(x_2)-x_1h(x_1)\geq 0$ (*) for every positive integer $k$. Since $h(x_1)>h(x_2)$, letting $k$ approach infinity, $k(h(x_2)-h(x_1))+x_2h(x_2)-x_1h(x_1)$ tends to negative infinity, which contradicts (*). Thus $h$ is nondecreasing on $[0,1]$. Combined with $h(0)=h(1)$, this gives that $h$ is constant on $[0,1]$, and therefore constant on the whole real line. Since $f$ is increasing, the constant is definitely positive, of course.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995042085647583, "perplexity": 332.61213536523036}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010779425/warc/CC-MAIN-20140305091259-00004-ip-10-183-142-35.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/206585/group-with-finite-outer-automorphism-group-and-large-center
# Group with finite outer automorphism group and large center

Does there exist a finitely generated group $G$ with outer automorphism group $\mathrm{Out}(G)$ finite, whose center contains infinitely many elements of order $p$ for some prime $p$? A motivation is that the existence of such a group would answer the question in this 2011 MO post. Indeed such a group would be an example of a finitely generated group with finite Out but with finite index in a group with infinite Out (namely $G\times\mathbf{Z}/p\mathbf{Z}$). Edit: here is an example when we drop the finite generation assumption: the universal central extension $G$ of $\mathrm{SL}_n(\mathbf{Q})$ when $n\ge 5$. Indeed, the central kernel is isomorphic (by standard stability results for $K_2$ of fields) to $K_2(\mathbf{Q})$, which is isomorphic to $C_2\oplus\bigoplus_{p>2}C_{p-1}$, where $p$ ranges over odd primes (see Milnor's K-theory book, Chapter 11). So $K_2(\mathbf{Q})$ contains infinitely many elements of order 2 (or even of any other prime order, by Dirichlet's theorem on primes in arithmetic progressions). On the other hand, by Schreier-Van der Waerden, $\mathrm{Aut}(\mathrm{(P)SL}_n(K))$, for any field $K$, is generated by inner automorphisms, the inverse-by-transposition involution, and automorphisms induced by $\mathrm{Aut}_{\mathrm{field}}(K)$ acting entry-wise. Since here the field has a trivial field automorphism group, we obtain that $\mathrm{Out}(\mathrm{(P)SL}_n(\mathbf{Q}))$ has two elements only. It is immediate that the canonical map from $\mathrm{Aut}(G)$ to $\mathrm{Aut}(G/Z(G))$ is injective, and it follows that $\mathrm{Out}(G)$ is finite. Actually I expect that central extensions of $\mathrm{SL}_n$ of some well-chosen finitely generated commutative ring should be a source of finitely generated examples, but it sounds harder. • Yves, "transposition" should read contragradient ($X\mapsto (X^t)^{-1}$). Also, $\mbox{Aut}(Z(G))$ should read $\mbox{Aut}(G/Z(G))$. – Anton Klyachko Jul 2 '15 at 20:45 • @AntonKlyachko: Oddly, it's contragredient and not contragradient. (I'm guilty of misspelling this in the past.) I don't know where this term came from; in particular I don't think that gredient has any meaning on its own. – Dan Ramras Feb 17 '18 at 20:55 • @DanRamras here international-dictionnaire.com/definitions/… they suggest it comes from "ingredient" (but do not provide a source) – YCor Feb 17 '18 at 21:08 Theorem B of this paper implies that we can take any two nontrivial involution-free groups $A$ and $B$ and construct a complete simple group $D$ with a (diagrammatically) aspherical presentation $D=A*B/\langle\!\langle w_1, w_2,\dots\rangle\!\rangle$ (though such use of this theorem is killing a mosquito with a cannon). Here, complete means naturally isomorphic to the automorphism group (i.e. centreless and without outer automorphisms). Now, suppose that $A$ and $B$ coincide with their commutator subgroups. Asphericity implies that the centre of the free central extension $\widetilde D=A*B/[\langle\!\langle w_1, w_2,\dots\rangle\!\rangle,A*B]$ of $D$ is the free abelian group with the basis $\widetilde{w_1},\widetilde{w_2},\dots$ (see, e.g., Olshanskii's book, Section 34.4). Now, we can do whatever we want. For instance, we can take the quotient $G=\widetilde D/\langle\widetilde {w_1}^p,\widetilde{w_2}^p,\dots\rangle$ and obtain a desired group $G$.
The natural map $\mbox{Aut}\,\widetilde D\to\mbox{Aut}\,G$ exists because the subgroup $\langle\widetilde {w_1}^p,\widetilde{w_2}^p,\dots\rangle$ is characteristic in $\widetilde D$ (as this subgroup consists of the $p$th powers of all central elements), and this map is injective because $\widetilde D$ coincides with its commutator subgroup. So, there are no outer automorphisms of $G$.

Jul 6, 2015. More details added, as suggested by Mamuka.

(1) $G$ (as well as $D$ and $\widetilde D$) is finitely generated if $A$ and $B$ are finitely generated.

(2) Suppose that a group is a quotient of another group: $L=M/N$. Then

• there is a natural map $f\colon \mbox{Aut}\,M\to \mbox{Aut}\,L$ if $N$ is characteristic;
• $f(\mbox{Inn}\,M)=\mbox{Inn}\,L$ (i.e. $f$ sends inner automorphisms to inner automorphisms, and each inner automorphism of $L$ has at least one inner preimage);
• $f$ is injective if $N$ is central and $M$ coincides with its commutator subgroup.

Now, take $M=G$ and $L=D$ (and $N=\langle\widetilde {w_1},\widetilde{w_2},\dots\rangle$). Then $\mbox{Aut}\,L=\mbox{Inn}\,L$ and, hence, $\mbox{Aut}\,M=\mbox{Inn}\,M$ by (2), i.e. all automorphisms of $M=G$ are inner.

• Sorry, I cannot figure out how you deduce the last sentence, and why one can arrange for a finitely generated $G$. Could you please elaborate a little bit? – მამუკა ჯიბლაძე Jul 6 '15 at 6:08
• I added some details. – Anton Klyachko Jul 6 '15 at 10:21
• Thank you for expanding it. And sorry for asking about (1) - for some reason I thought $w_i$ were some "new" generators... – მამუკა ჯიბლაძე Jul 9 '15 at 4:10
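To spell out the third bullet of (2) (a standard short argument, sketched here): suppose $\varphi\in\mbox{Aut}\,M$ lies in the kernel of $f$, i.e. $\varphi$ induces the identity on $L=M/N$, where $N$ is central and $M=[M,M]$. Setting $n(x):=\varphi(x)x^{-1}\in N$, centrality of $N$ gives

$$n(xy)=\varphi(x)\varphi(y)y^{-1}x^{-1}=\varphi(x)\,n(y)\,x^{-1}=\varphi(x)x^{-1}\,n(y)=n(x)\,n(y),$$

so $n\colon M\to N$ is a homomorphism into an abelian group. It therefore vanishes on $[M,M]=M$, so $\varphi=\mathrm{id}$ and $f$ is injective.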
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9242187142372131, "perplexity": 302.9816492524981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400217623.41/warc/CC-MAIN-20200924100829-20200924130829-00586.warc.gz"}
https://kr.mathworks.com/help/econ/ssm.simulate.html
# simulate

Class: ssm

Monte Carlo simulation of state-space models

## Syntax

`[Y,X] = simulate(Mdl,numObs)`
`[Y,X] = simulate(Mdl,numObs,Name,Value)`
`[Y,X,U,E] = simulate(___)`

## Description

`[Y,X] = simulate(Mdl,numObs)` simulates one sample path of observations (`Y`) and states (`X`) from a fully specified state-space model (`Mdl`). The software simulates `numObs` observations and states per sample path.

`[Y,X] = simulate(Mdl,numObs,Name,Value)` returns simulated responses and states with additional options specified by one or more `Name,Value` pair arguments. For example, specify the number of paths or model parameter values.

`[Y,X,U,E] = simulate(___)` additionally simulates state disturbances (`U`) and observation innovations (`E`), using any of the input arguments in the previous syntaxes.

## Input Arguments

Standard state-space model, specified as an `ssm` model object returned by `ssm` or `estimate`. A standard state-space model has finite initial state covariance matrix elements. That is, `Mdl` cannot be a `dssm` model object. If `Mdl` is not fully specified (that is, `Mdl` contains unknown parameters), then specify values for the unknown parameters using the `'Params'` `Name,Value` pair argument. Otherwise, the software throws an error.

Number of periods per path to generate variates, specified as a positive integer. If `Mdl` is a time-varying model, then the length of the cell vector corresponding to the coefficient matrices must be at least `numObs`. If `numObs` is fewer than the number of periods that `Mdl` can support, then the software only uses the matrices in the first `numObs` cells of the cell vectors corresponding to the coefficient matrices.

Data Types: `double`

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.

Number of sample paths to generate variates, specified as the comma-separated pair consisting of `'NumPaths'` and a positive integer.

Example: `'NumPaths',1000`

Data Types: `double`

Values for unknown parameters in the state-space model, specified as the comma-separated pair consisting of `'Params'` and a numeric vector. The elements of `Params` correspond to the unknown parameters in the state-space model matrices `A`, `B`, `C`, and `D`, and, optionally, the initial state mean `Mean0` and covariance matrix `Cov0`.

• If you created `Mdl` explicitly (that is, by specifying the matrices without a parameter-to-matrix mapping function), then the software maps the elements of `Params` to `NaN`s in the state-space model matrices and initial state values. The software searches for `NaN`s column-wise following the order `A`, `B`, `C`, `D`, `Mean0`, and `Cov0`.
• If you created `Mdl` implicitly (that is, by specifying the matrices with a parameter-to-matrix mapping function), then you must set initial parameter values for the state-space model matrices, initial state values, and state types within the parameter-to-matrix mapping function.

If `Mdl` contains unknown parameters, then you must specify their values. Otherwise, the software ignores the value of `Params`.
## Output Arguments

Simulated observations, returned as a matrix or cell matrix of numeric vectors.

If `Mdl` is a time-invariant model with respect to the observations, then `Y` is a `numObs`-by-n-by-`numPaths` array. That is, each row corresponds to a period, each column corresponds to an observation in the model, and each page corresponds to a sample path. The last row corresponds to the latest simulated observations.

If `Mdl` is a time-varying model with respect to the observations, then `Y` is a `numObs`-by-`numPaths` cell matrix of vectors. `Y{t,j}` contains a vector of length $n_t$ of simulated observations for period t of sample path j. The last row of `Y` contains the latest set of simulated observations.

Data Types: `cell` | `double`

Simulated states, returned as a numeric matrix or cell matrix of vectors.

If `Mdl` is a time-invariant model with respect to the states, then `X` is a `numObs`-by-m-by-`numPaths` array. That is, each row corresponds to a period, each column corresponds to a state in the model, and each page corresponds to a sample path. The last row corresponds to the latest simulated states.

If `Mdl` is a time-varying model with respect to the states, then `X` is a `numObs`-by-`numPaths` cell matrix of vectors. `X{t,j}` contains a vector of length $m_t$ of simulated states for period t of sample path j. The last row of `X` contains the latest set of simulated states.

Data Types: `cell` | `double`

Simulated state disturbances, returned as a matrix or cell matrix of vectors.

If `Mdl` is a time-invariant model with respect to the state disturbances, then `U` is a `numObs`-by-h-by-`numPaths` array. That is, each row corresponds to a period, each column corresponds to a state disturbance in the model, and each page corresponds to a sample path. The last row corresponds to the latest simulated state disturbances.

If `Mdl` is a time-varying model with respect to the state disturbances, then `U` is a `numObs`-by-`numPaths` cell matrix of vectors. `U{t,j}` contains a vector of length $h_t$ of simulated state disturbances for period t of sample path j. The last row of `U` contains the latest set of simulated state disturbances.

Data Types: `cell` | `double`

Simulated observation innovations, returned as a matrix or cell matrix of numeric vectors.

If `Mdl` is a time-invariant model with respect to the observation innovations, then `E` is a `numObs`-by-h-by-`numPaths` array. That is, each row corresponds to a period, each column corresponds to an observation innovation in the model, and each page corresponds to a sample path. The last row corresponds to the latest simulated observation innovations.

If `Mdl` is a time-varying model with respect to the observation innovations, then `E` is a `numObs`-by-`numPaths` cell matrix of vectors. `E{t,j}` contains a vector of length $h_t$ of simulated observation innovations for period t of sample path j. The last row of `E` contains the latest set of simulated observation innovations.

Data Types: `cell` | `double`

## Examples

Suppose that a latent process is an AR(1) model. The state equation is

$$x_t = 0.5\,x_{t-1} + u_t,$$

where $u_t$ is Gaussian with mean 0 and standard deviation 1.

Generate a random series of 100 observations from $x_t$, assuming that the series starts at 1.5.

```
T = 100;
ARMdl = arima('AR',0.5,'Constant',0,'Variance',1);
x0 = 1.5;
rng(1); % For reproducibility
x = simulate(ARMdl,T,'Y0',x0);
```

Suppose further that the latent process is subject to additive measurement error.
The observation equation is

$$y_t = x_t + \varepsilon_t,$$

where $\varepsilon_t$ is Gaussian with mean 0 and standard deviation 0.75. Together, the latent process and observation equations compose a state-space model.

Use the random latent state process (`x`) and the observation equation to generate observations.

```
y = x + 0.75*randn(T,1);
```

Specify the four coefficient matrices.

```
A = 0.5;
B = 1;
C = 1;
D = 0.75;
```

Specify the state-space model using the coefficient matrices.

```
Mdl = ssm(A,B,C,D)
```

```
Mdl =
State-space model type: ssm

State vector length: 1
Observation vector length: 1
State disturbance vector length: 1
Observation innovation vector length: 1
Sample size supported by model: Unlimited

State variables: x1, x2,...
State disturbances: u1, u2,...
Observation series: y1, y2,...
Observation innovations: e1, e2,...

State equation:
x1(t) = (0.50)x1(t-1) + u1(t)

Observation equation:
y1(t) = x1(t) + (0.75)e1(t)

Initial state distribution:

Initial state means
 x1
  0

Initial state covariance matrix
     x1
 x1  1.33

State types
     x1
 Stationary
```

`Mdl` is an `ssm` model. Verify that the model is correctly specified using the display in the Command Window. The software infers that the state process is stationary. Subsequently, the software sets the initial state mean and covariance to the mean and variance of the stationary distribution of an AR(1) model.

Simulate one path each of states and observations. Specify that the paths span 100 periods.

```
[simY,simX] = simulate(Mdl,100);
```

`simY` is a 100-by-1 vector of simulated responses. `simX` is a 100-by-1 vector of simulated states.

Plot the true state values with the simulated states. Also, plot the observed responses with the simulated responses.

```
figure
subplot(2,1,1)
plot(1:T,x,'-k',1:T,simX,':r','LineWidth',2)
title({'True State Values and Simulated States'})
xlabel('Period')
ylabel('State')
legend({'True state values','Simulated state values'})
subplot(2,1,2)
plot(1:T,y,'-k',1:T,simY,':r','LineWidth',2)
title({'Observed Responses and Simulated Responses'})
xlabel('Period')
ylabel('Response')
legend({'Observed responses','Simulated responses'})
```

By default, `simulate` simulates one path for each state and observation in the state-space model. To conduct a Monte Carlo study, specify to simulate a large number of paths.

To generate variates from a state-space model, specify values for all unknown parameters.

Explicitly create this state-space model:

$$x_t = \varphi\, x_{t-1} + \sigma_1 u_t$$
$$y_t = x_t + \sigma_2 \varepsilon_t$$

where $u_t$ and $\varepsilon_t$ are independent Gaussian random variables with mean 0 and variance 1. Suppose that the initial state mean and variance are 1, and that the state is a stationary process.

```
A = NaN;
B = NaN;
C = 1;
D = NaN;
mean0 = 1;
cov0 = 1;
stateType = 0;
Mdl = ssm(A,B,C,D,'Mean0',mean0,'Cov0',cov0,'StateType',stateType);
```

Simulate 100 responses from `Mdl`. Specify that the autoregressive coefficient is 0.75, the state disturbance standard deviation is 0.5, and the observation innovation standard deviation is 0.25.

```
params = [0.75 0.5 0.25];
y = simulate(Mdl,100,'Params',params);

figure;
plot(y);
title 'Simulated Responses';
xlabel 'Period';
```

The software searches for `NaN` values column-wise, following the order A, B, C, D, Mean0, and Cov0. The order of the elements in `params` should correspond to this search.
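Continuing the preceding example, here is a minimal sketch of a small Monte Carlo study using the `'NumPaths'` option; the path count is chosen arbitrarily:

```
% Simulate 1000 sample paths of 100 periods each from the same model.
numPaths = 1000;
[Y,X] = simulate(Mdl,100,'NumPaths',numPaths,'Params',params);

% Y is 100-by-1-by-1000. Average across paths (the 3rd dimension) to
% estimate the mean response path and a 95% Monte Carlo interval.
meanY = mean(Y,3);
ciY = quantile(Y,[0.025 0.975],3);
```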
Suppose that the relationship between the change in the unemployment rate ($x_{1,t}$) and the nominal gross national product (nGNP) growth rate ($x_{3,t}$) can be expressed in the following state-space form:

$$\begin{bmatrix} x_{1,t}\\ x_{2,t}\\ x_{3,t}\\ x_{4,t} \end{bmatrix} = \begin{bmatrix} \varphi_1 & \theta_1 & \gamma_1 & 0\\ 0 & 0 & 0 & 0\\ \gamma_2 & 0 & \varphi_2 & \theta_2\\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_{1,t-1}\\ x_{2,t-1}\\ x_{3,t-1}\\ x_{4,t-1} \end{bmatrix} + \begin{bmatrix} 1 & 0\\ 1 & 0\\ 0 & 1\\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_{1,t}\\ u_{2,t} \end{bmatrix}$$

$$\begin{bmatrix} y_{1,t}\\ y_{2,t} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_{1,t}\\ x_{2,t}\\ x_{3,t}\\ x_{4,t} \end{bmatrix} + \begin{bmatrix} \sigma_1 & 0\\ 0 & \sigma_2 \end{bmatrix} \begin{bmatrix} \epsilon_{1,t}\\ \epsilon_{2,t} \end{bmatrix},$$

where:

• $x_{1,t}$ is the change in the unemployment rate at time t.
• $x_{2,t}$ is a dummy state for the MA(1) effect on $x_{1,t}$.
• $x_{3,t}$ is the nGNP growth rate at time t.
• $x_{4,t}$ is a dummy state for the MA(1) effect on $x_{3,t}$.
• $y_{1,t}$ is the observed change in the unemployment rate.
• $y_{2,t}$ is the observed nGNP growth rate.
• $u_{1,t}$ and $u_{2,t}$ are Gaussian series of state disturbances having mean 0 and standard deviation 1.
• $\epsilon_{1,t}$ is the Gaussian series of observation innovations having mean 0 and standard deviation $\sigma_1$.
• $\epsilon_{2,t}$ is the Gaussian series of observation innovations having mean 0 and standard deviation $\sigma_2$.

Load the Nelson-Plosser data set, which contains the unemployment rate and nGNP series, among other things.

```
load Data_NelsonPlosser
```

Preprocess the data by taking the natural logarithm of the nGNP series and the first difference of each series. Also, remove the starting `NaN` values from each series.

```
isNaN = any(ismissing(DataTable),2); % Flag periods containing NaNs
gnpn = DataTable.GNPN(~isNaN);
u = DataTable.UR(~isNaN);
T = size(gnpn,1);                    % Sample size
y = zeros(T-1,2);                    % Preallocate
y(:,1) = diff(u);
y(:,2) = diff(log(gnpn));
```

This example proceeds using series without `NaN` values. However, using the Kalman filter framework, the software can accommodate series containing missing values.

To determine how well the model forecasts observations, remove the last 10 observations for comparison.

```
numPeriods = 10;                     % Forecast horizon
isY = y(1:end-numPeriods,:);         % In-sample observations
oosY = y(end-numPeriods+1:end,:);    % Out-of-sample observations
```

Specify the coefficient matrices.

```
A = [NaN NaN NaN 0; 0 0 0 0; NaN 0 NaN NaN; 0 0 0 0];
B = [1 0; 1 0; 0 1; 0 1];
C = [1 0 0 0; 0 0 1 0];
D = [NaN 0; 0 NaN];
```

Specify the state-space model using `ssm`. Verify that the model specification is consistent with the state-space model.

```
Mdl = ssm(A,B,C,D)
```

```
Mdl =
State-space model type: ssm

State vector length: 4
Observation vector length: 2
State disturbance vector length: 2
Observation innovation vector length: 2
Sample size supported by model: Unlimited
Unknown parameters for estimation: 8

State variables: x1, x2,...
State disturbances: u1, u2,...
Observation series: y1, y2,...
Observation innovations: e1, e2,...
Unknown parameters: c1, c2,...
State equations:
x1(t) = (c1)x1(t-1) + (c3)x2(t-1) + (c4)x3(t-1) + u1(t)
x2(t) = u1(t)
x3(t) = (c2)x1(t-1) + (c5)x3(t-1) + (c6)x4(t-1) + u2(t)
x4(t) = u2(t)

Observation equations:
y1(t) = x1(t) + (c7)e1(t)
y2(t) = x3(t) + (c8)e2(t)

Initial state distribution:

Initial state means are not specified.
Initial state covariance matrix is not specified.
State types are not specified.
```

Estimate the model parameters, and use a random set of initial parameter values for optimization. Restrict the estimates of $\sigma_1$ and $\sigma_2$ to positive real numbers using the `'lb'` name-value pair argument. For numerical stability, specify the Hessian when the software computes the parameter covariance matrix, using the `'CovMethod'` name-value pair argument.

```
rng(1);
params0 = rand(8,1);
[EstMdl,estParams] = estimate(Mdl,isY,params0,...
    'lb',[-Inf -Inf -Inf -Inf -Inf -Inf 0 0],'CovMethod','hessian');
```

```
Method: Maximum likelihood (fmincon)
Sample size: 51
Logarithmic likelihood: -170.92
Akaike info criterion: 357.84
Bayesian info criterion: 373.295

      | Coeff     Std Err   t Stat     Prob
----------------------------------------------------
 c(1) |  0.06750  0.16548    0.40791   0.68334
 c(2) | -0.01372  0.05887   -0.23302   0.81575
 c(3) |  2.71201  0.27039   10.03006   0
 c(4) |  0.83815  2.84585    0.29452   0.76837
 c(5) |  0.06274  2.83469    0.02213   0.98234
 c(6) |  0.05196  2.56872    0.02023   0.98386
 c(7) |  0.00272  2.40772    0.00113   0.99910
 c(8) |  0.00016  0.13942    0.00113   0.99910
      |
      | Final State  Std Dev    t Stat     Prob
 x(1) | -0.00000     0.00272    -0.00033   0.99973
 x(2) |  0.12237     0.92954     0.13164   0.89527
 x(3) |  0.04049     0.00016   256.68501   0
 x(4) |  0.01183     0.00016    72.49641   0
```

`EstMdl` is an `ssm` model, and you can access its properties using dot notation.

Filter the estimated state-space model, and extract the filtered states and their variances from the final period.

```
[~,~,Output] = filter(EstMdl,isY);
```

Modify the estimated state-space model so that the initial state means and covariances are the filtered states and their covariances of the final period. This sets up simulation over the forecast horizon.

```
EstMdl1 = EstMdl;
EstMdl1.Mean0 = Output(end).FilteredStates;
EstMdl1.Cov0 = Output(end).FilteredStatesCov;
```

Simulate `5e5` paths of observations over the forecast horizon from the modified model `EstMdl1`.

```
numPaths = 5e5;
SimY = simulate(EstMdl1,10,'NumPaths',numPaths);
```

`SimY` is a `10`-by-`2`-by-`numPaths` array containing the simulated observations. The rows of `SimY` correspond to periods, the columns correspond to an observation in the model, and the pages correspond to paths.

Estimate the forecasted observations and their 95% confidence intervals over the forecast horizon.

```
MCFY = mean(SimY,3);
CIFY = quantile(SimY,[0.025 0.975],3);
```

Estimate the theoretical forecast bands.

```
[Y,YMSE] = forecast(EstMdl,10,isY);
Lb = Y - sqrt(YMSE)*1.96;
Ub = Y + sqrt(YMSE)*1.96;
```

Plot the forecasted observations with their true values and the forecast intervals.

```
figure
h = plot(dates(end-numPeriods-9:end),[isY(end-9:end,1);oosY(:,1)],'-k',...
    dates(end-numPeriods+1:end),MCFY(end-numPeriods+1:end,1),'.-r',...
    dates(end-numPeriods+1:end),CIFY(end-numPeriods+1:end,1,1),'-b',...
    dates(end-numPeriods+1:end),CIFY(end-numPeriods+1:end,1,2),'-b',...
    dates(end-numPeriods+1:end),Y(:,1),':c',...
    dates(end-numPeriods+1:end),Lb(:,1),':m',...
    dates(end-numPeriods+1:end),Ub(:,1),':m',...
    'LineWidth',3);
xlabel('Period')
ylabel('Change in the unemployment rate')
legend(h([1,2,4:6]),{'Observations','MC forecasts',...
    '95% MC intervals','Theoretical forecasts',...
    '95% theoretical intervals'},'Location','Best')
title('Observed and Forecasted Changes in the Unemployment Rate')
```

```
figure
h = plot(dates(end-numPeriods-9:end),[isY(end-9:end,2);oosY(:,2)],'-k',...
    dates(end-numPeriods+1:end),MCFY(end-numPeriods+1:end,2),'.-r',...
    dates(end-numPeriods+1:end),CIFY(end-numPeriods+1:end,2,1),'-b',...
    dates(end-numPeriods+1:end),CIFY(end-numPeriods+1:end,2,2),'-b',...
    dates(end-numPeriods+1:end),Y(:,2),':c',...
    dates(end-numPeriods+1:end),Lb(:,2),':m',...
    dates(end-numPeriods+1:end),Ub(:,2),':m',...
    'LineWidth',3);
xlabel('Period')
ylabel('nGNP growth rate')
legend(h([1,2,4:6]),{'Observations','MC forecasts',...
    '95% MC intervals','Theoretical forecasts','95% theoretical intervals'},...
    'Location','Best')
title('Observed and Forecasted nGNP Growth Rates')
```

## Tips

To simulate states from their joint conditional posterior distribution given the responses, use `simsmooth`.

## References

[1] Durbin, J., and S. J. Koopman. Time Series Analysis by State Space Methods. 2nd ed. Oxford: Oxford University Press, 2012.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 28, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9801633358001709, "perplexity": 1140.776907201246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00663.warc.gz"}
http://math.stackexchange.com/questions/154467/cauchy-theorem-for-integrals-on-convex-sets
# Cauchy theorem for integrals on convex sets

I have a few questions about the proof of the following theorem (the questions are outlined in red).

## Lemma (Cauchy-Goursat)

Let $\triangle=\triangle(a,b,c)$ be a triangle in an open set $\Omega \subseteq \mathbb{C}$, let $p\in \Omega$, and let $f:\Omega\rightarrow \mathbb{C}$ be continuous and analytic on $\Omega \setminus \{p\}$. Then
$$\int\limits_{\partial\triangle}f(z)\,dz=0,$$
where $\partial\triangle$ denotes the boundary of the triangle $\triangle$ (the triangle together with its interior is assumed to lie in $\Omega$).

## Cauchy theorem for convex sets

Let $\Omega\subseteq \mathbb{C}$ be open and convex, $p\in\Omega$, and $f:\Omega\rightarrow \mathbb{C}$ continuous with $f\in H(\Omega\setminus \{p\})$ (i.e., $f$ analytic on $\Omega \setminus \{p\}$). Then there is an $F\in H(\Omega)$ ($F$ analytic on $\Omega$) with $F'(z)=f(z)$ for all $z\in\Omega$. It also follows that $\displaystyle \int\limits_{\gamma}f(z)\,\mathrm{d}z=0$ for every piecewise continuously differentiable closed path $\gamma$ in $\Omega$. (e)

## Proof

Fix $a\in\Omega$. Because of the convexity of $\Omega$, we may define
$$F(z):=\int\limits_{[a,z]}f(\xi)\,\mathrm{d}\xi \quad (z\in\Omega). \quad (a)$$

$\color{red}{\text{(1) Why does (a) require }\Omega \text{ to be convex?}}$

Also, because $\Omega$ is convex, every triangle $\triangle(a,\ z_{0},\ z)$ lies in $\Omega$ $(z,\ z_{0}\in\Omega)$. (b)

$\color{red}{\text{(2) Why is (b) true?}}$

From the Cauchy-Goursat lemma presented above, we have
$$F(z)-F(z_{0})=\int\limits_{[a,z]}f(\xi)\,\mathrm{d}\xi+\int_{[z_{0},a]}f(\xi)\,\mathrm{d}\xi=\int_{[z_{0},z]}f(\xi)\,\mathrm{d}\xi. \quad (c)$$

$\color{red}{\text{(3) How does (c) follow from the Cauchy-Goursat lemma?}}$

For a fixed $z_{0}\in\Omega$ it follows for $z\in\Omega\setminus \{z_{0}\}$ that
$$\frac{F(z)-F(z_{0})}{z-z_{0}}-f(z_{0})=\frac{1}{z-z_{0}}\int\limits_{[z_{0},z]}(f(\xi)-f(z_{0}))\,\mathrm{d}\xi. \quad (d)$$

$\color{red}{\text{(4) Why is it true in (d) that }\displaystyle f(z_0)=\frac{1}{z-z_{0}}\int\limits_{[z_{0},z]}f(z_{0})\,\mathrm{d}\xi\ ?}$

Let $\varepsilon>0$. Because $f$ is continuous at $z_{0}$, there is a $\delta>0$ with $|f(\xi)-f(z_{0})|<\varepsilon$ whenever $|\xi-z_{0}|<\delta$. For $0<|z-z_{0}|<\delta$ we have
$$\left|\frac{F(z)-F(z_{0})}{z-z_{0}}-f(z_{0})\right|<\frac{1}{|z-z_{0}|}L([z_{0},z])\,\varepsilon=\varepsilon, \quad (d')$$
where $L([z_0,z])$ is the length of the segment from $z_0$ to $z$.

$\color{red}{\text{(5) Is (d') the ML-inequality? Where does the }\frac{1}{|z-z_{0}|}\text{ come from?}}$

So $F'(z_{0})=f(z_{0})$. Because $z_{0}\in\Omega$ was chosen arbitrarily, $F$ is analytic in $\Omega$.

$\color{red}{\text{(6) But we still have not shown }\displaystyle \int\limits_{\gamma}f(z)\,\mathrm{d}z=0,\text{ which appears in the statement of the theorem (see (e)). How does this follow?}}$

As you can see, I am utterly lost here.

-

I shall address $(4)$, as it is the only one where the question itself is in red. In all the other ones I can't be sure whether you're asking about something below or above the red question... Anyway:
$$\int_{[z_0,z]}f(z_0)\,d\xi=f(z_0)\int_{[z_0,z]}d\xi=f(z_0)\,(z-z_0),$$
so dividing by $z-z_0$ gives back $f(z_0)$.
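For the record, here is one standard way to fill in the two remaining steps, sketched in the notation above. For (5): equation (d) carries the prefactor $\frac{1}{z-z_0}$; taking absolute values and applying the ML-inequality to the integral gives
$$\left|\frac{F(z)-F(z_{0})}{z-z_{0}}-f(z_{0})\right| \le \frac{1}{|z-z_{0}|}\,L([z_{0},z])\sup_{\xi\in[z_0,z]}|f(\xi)-f(z_{0})| < \frac{1}{|z-z_{0}|}\,|z-z_{0}|\,\varepsilon=\varepsilon,$$
since $L([z_0,z])=|z-z_0|$. For (6): once $F'=f$ on $\Omega$, for any piecewise $C^1$ closed path $\gamma:[0,1]\rightarrow\Omega$ with $\gamma(0)=\gamma(1)$,
$$\int_{\gamma}f(z)\,\mathrm{d}z=\int_0^1 f(\gamma(t))\,\gamma'(t)\,\mathrm{d}t=\int_0^1 \frac{\mathrm{d}}{\mathrm{d}t}F(\gamma(t))\,\mathrm{d}t=F(\gamma(1))-F(\gamma(0))=0.$$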
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956700205802917, "perplexity": 152.27384247606187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775392.34/warc/CC-MAIN-20141217075255-00098-ip-10-231-17-201.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/188867/quotient-ring-of-a-polynomial-ideal-with-two-variables
Quotient ring of a polynomial ideal with two variables

Given the ideal $I = \langle x-y,\ y^3+y+1 \rangle \subset \mathbb{C}[x,y]$ (this is a Gröbner basis w.r.t. the degree-lexicographic order), I want to find a $\mathbb{C}$-basis of $\mathbb{C}[x,y]/I$ and determine $\operatorname{dim}_{\mathbb{C}}\mathbb{C}[x,y]/I$.

I know what a quotient ring is and how it is defined, but I have no intuition for what $\mathbb{C}[x,y]/I$ looks like. Any hints?

- Hint: $x-y\in I$ means that in $\mathbb{C}[x,y]/I$, $x$ and $y$ are equal. – Thomas Andrews Aug 30 '12 at 16:34

Well, $\mathbb{C}[x,y]/I$ is still spanned as a complex vector space by the monomials $x^i y^j$. However, that spanning set is not linearly independent: any linear combination of monomials that adds up to an element of $I$ is equal to zero!

Thinking of the Gröbner basis as a rewrite scheme is useful too: the form of your basis says:

• Whenever I see an $x$, replace it with $y$.
• Whenever I see a $y^3$, replace it with $-y-1$.

which gives you an algorithm to convert any polynomial to a unique normal form... and makes it easy to see which polynomials can be normal forms. (To be clear, whenever you see a $y^4$, that also means you see a $y^3$, because $y^4 = y^3 \cdot y$.)

- Thanks for the great answer! If I understand you correctly, I assume all the equivalence classes are of the form $[a \cdot 1], [a \cdot y], [a \cdot y^2]$ with $a \in \mathbb{C}$. Thus, $\operatorname{dim}_{\mathbb{C}}\mathbb{C}[x,y]/I = 3$? – joachim Aug 30 '12 at 18:11

- That is indeed a basis for $\mathbb{C}[x,y]/I$. (Don't forget the quotient ring also contains linear combinations of them!) – Hurkyl Aug 30 '12 at 18:47
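To make the rewrite scheme concrete, here is a worked reduction on an arbitrarily chosen polynomial (my example, not from the thread):
$$x^2y^2 + y^5 \;\longmapsto\; y^4 + y^5 \;\longmapsto\; y(-y-1) + y^2(-y-1) = -y^3 - 2y^2 - y \;\longmapsto\; -(-y-1) - 2y^2 - y = 1 - 2y^2.$$
Every class therefore has a unique representative of the form $a + by + cy^2$, which is another way to see that the dimension is 3.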
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9304711818695068, "perplexity": 294.75229870059525}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645335509.77/warc/CC-MAIN-20150827031535-00249-ip-10-171-96-226.ec2.internal.warc.gz"}
https://byjus.com/question-answer/a-vertical-pole-of-a-length-6-m-casts-a-shadow-4m-long-on-1/
# Question 15

A vertical pole of length 6 m casts a shadow 4 m long on the ground, and at the same time a tower casts a shadow 28 m long. Find the height of the tower.

Solution

Length of the vertical pole = 6 m (given)
Length of the shadow of the pole = 4 m (given)
Let the height of the tower = h m.
Length of the shadow of the tower = 28 m (given)

In ΔABC and ΔDEF:
∠C = ∠E (the angle of elevation of the sun is the same at the same time)
∠B = ∠F = 90°
∴ ΔABC ∼ ΔDFE (by the AAA similarity criterion)
∴ AB/DF = BC/EF (if two triangles are similar, then their corresponding sides are proportional)
∴ 6/h = 4/28
⇒ h = (6 × 28)/4 = 6 × 7 = 42

Hence, the height of the tower is 42 m.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888880848884583, "perplexity": 2059.1502833138106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499888.62/warc/CC-MAIN-20230131154832-20230131184832-00370.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-ph/0007323/
Enhanced J/ψ Production in Deconfined Quark Matter

Robert L. Thews, Martin Schroedter, and Johann Rafelski

Department of Physics, University of Arizona, Tucson, AZ 85721, USA

Abstract

In high energy heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at CERN, each central event will contain multiple pairs of heavy quarks. If a region of deconfined quarks and gluons is formed, a new mechanism for the formation of heavy quarkonium bound states will be activated. This is a result of the mobility of heavy quarks in the deconfined region, such that bound states can be formed from a quark and an antiquark which were originally produced in separate incoherent interactions. Model estimates of this effect for J/ψ production at RHIC indicate that significant enhancements are to be expected. Experimental observation of such enhanced production would provide evidence for deconfinement unlikely to be compatible with competing scenarios.

PACS number(s): 12.38.Mh, 25.75.-q, 14.40Gx.

Ultrarelativistic heavy ion collisions at the RHIC and LHC colliders are expected to provide initial energy density sufficient to initiate a phase transition from normal hadronic matter to deconfined quarks and gluons [1]. A decrease in the number of observed heavy quarkonium states was proposed many years ago [2] as a signature of the deconfined phase. One invokes the argument that in a plasma of free quarks and gluons the color forces will experience a Debye-type screening. Thus the quark and antiquark in a quarkonium bound state will no longer be subject to a confining force and diffuse away from each other during the lifetime of the quark-gluon plasma. As the system cools and the deconfined phase disappears, these heavy quarks will most likely form a final hadronic state with one of the much more numerous light quarks. The result will be a decreased population of heavy quarkonium relative to those formed initially in the heavy ion collision.

There is now extensive data on charmonium production using nuclear targets and beams. The results for J/ψ production in p-A collisions and also from Oxygen and Sulfur beams on Uranium show a systematic nuclear dependence of the cross section [3], which points toward an interpretation in terms of interactions of an initial quarkonium state with nucleons [4]. Recent results for a Lead beam and target reveal an additional suppression of about 25%, prompting claims that this effect could be the expected signature of deconfinement [5]. The increase of this anomalous suppression with the centrality of the collision, as measured by the energy directed transverse to the beam, shows signs of structure which have been interpreted as threshold behavior due to dissociation of charmonium states in a plasma [6]. However, several alternate scenarios have been proposed which do not involve deconfinement effects [7]. These models are difficult to rule out at present, since there is significant uncertainty in many of the parameters. It appears that a precision systematic study of suppression patterns of many states in the quarkonium systems will be necessary for a definitive interpretation.

In all of the above, a tacit assumption has been made: the heavy quarkonium is formed only during the initial nucleon-nucleon collisions. Once formed, subsequent interactions with nucleons or final state interactions in a quark-gluon plasma or with other produced hadrons can only reduce the probability that the quarkonium will survive and be observed.
Here we explore a scenario which will be realized at RHIC and LHC energies, where the average number of heavy quark pairs produced in the initial (independent and incoherent) nucleon-nucleon collisions will be substantially above unity for a typical central heavy ion interaction. Then if and only if a space-time region of deconfined quarks and gluons is present, quarkonium states will be formed from combinations of heavy quarks and antiquarks which were initially produced in different nucleon-nucleon collisions. This new mechanism of heavy quarkonium production has the potential to be the dominant factor in determining the heavy quarkonium population observed after hadronization.

To be specific, let $N_0$ be the number of heavy quark pairs initially produced in a central heavy ion collision, and let $N_1$ be the number of those pairs which form bound states in the normal confining vacuum potential. The final number of bound states surviving at hadronization will be some fraction $\epsilon$ of the initial number $N_1$, plus the number formed by this new mechanism from the remaining heavy quark pairs. (We include in the new mechanism both formation and dissociation in the deconfined region.) The instantaneous formation rate of bound states will be proportional to the square of the number of unbound quark pairs, which we approximate by its initial value. This is valid as long as the number of bound states remains small compared with $N_0$, which we demonstrate is the case in our model calculations. We also show in our model calculations that the time scales for the formation and dissociation processes are typically somewhat larger than the expected lifetime of the deconfined state. Thus there will be insufficient time for the relative populations of bound and unbound heavy quarks to reach an equilibrium value, and we anticipate that the number of bound states existing at the end of the deconfinement lifetime will remain proportional to the square of the initial unbound charm population. We introduce a proportionality parameter $\beta$ to express the final population as

$$N_B = \epsilon N_1 + \beta\,(N_0 - N_1)^2. \quad (1)$$

We then average over the distributions of $N_0$ and $N_1$, introducing the probability $x$ that a given heavy quark pair was in a bound state before the deconfined phase was formed. The bound state "suppression" factor is just the ratio of this average population to the average initially-produced bound state population per collision, $x\bar{N}_0$ (one way to carry out this averaging is sketched after this passage):

$$S_B = \epsilon + \beta(1-x) + \frac{\beta(1-x)^2}{x}\,(\bar{N}_0 + 1). \quad (2)$$

Without the new production mechanism, $\beta = 0$ and the suppression factor reduces to $\epsilon$, which cannot exceed unity. But for sufficiently large values of $\bar{N}_0$ this factor could actually exceed unity, i.e. one would predict an enhancement in the heavy quarkonium production rates to be the signature of deconfinement! We thus proceed to estimate expected $S_B$-values for J/ψ production at RHIC.

Let us emphasize at the outset that we are not attempting a detailed phenomenology of J/ψ production at RHIC. The goal is merely to estimate if this new formation mechanism could have a significant impact on the results. We consider the dynamical evolution of the charm quark pairs which have been produced in a central Au-Au collision at $\sqrt{s}$ = 200A GeV. This is adapted from our previous calculation of the formation of $B_c$ mesons [8]. For simplicity, we assume the deconfined phase is an ideal gas of free gluons and light quarks. To describe the "standard model" scenario for suppression of J/ψ in the deconfined region, we utilize collisional dissociation via interactions with free thermal gluons. (This is the dynamic counterpart of the static plasma screening scenario [9].)
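As a sketch of the averaging step behind Eq. (2), assuming, for illustration, that $N_1$ is binomial with success probability $x$ given $N_0$, and that $N_0$ is Poisson with mean $\bar{N}_0$ (the exact distributional assumptions are not spelled out in this passage):

$$\langle (N_0-N_1)^2 \rangle = (1-x)^2\langle N_0^2\rangle + x(1-x)\langle N_0\rangle = (1-x)^2(\bar{N}_0^2+\bar{N}_0) + x(1-x)\bar{N}_0,$$

so that dividing $\langle N_B\rangle$ from Eq. (1) by $\langle N_1\rangle = x\bar{N}_0$ reproduces

$$S_B = \epsilon + \beta(1-x) + \frac{\beta(1-x)^2}{x}(\bar{N}_0+1).$$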
Our new formation mechanism is just the inverse of this dissociation reaction, in which a free charm quark and antiquark are captured into the bound state, emitting a color octet gluon. Thus it is an unavoidable consequence of this model of quarkonium suppression that a corresponding mechanism for quarkonium production must be present. The competition between the rates of these reactions, integrated over the lifetime of the QGP, then determines the final J/ψ population. Our estimates result from numerical solutions of the kinetic rate equation

$$\frac{dN_{J/\psi}}{d\tau} = \lambda_F\, N_c\, \rho_{\bar{c}} - \lambda_D\, N_{J/\psi}\, \rho_g, \quad (3)$$

where $\tau$ is the proper time, $\rho$ denotes a number density, and the reactivity $\lambda$ is the reaction rate averaged over the momentum distributions of the initial participants, i.e. $c$ and $\bar{c}$ for $\lambda_F$, and J/ψ and $g$ for $\lambda_D$. Formation of other states containing charm quarks is expected to occur predominantly at hadronization, since their lower binding energies prevent them from existing in a hot QGP, or equivalently they are ionized on very short time scales. The gluon density $\rho_g$ is determined by the equilibrium value in the QGP at each temperature. Initial charm quark numbers are given by $N_c = N_{\bar{c}} = N_0$, and exact charm conservation is enforced throughout the calculation. The initial volume at $\tau_0$ is allowed to undergo longitudinal expansion, $V(\tau) = V_0\,\tau/\tau_0$. The expansion is taken to be isentropic, $VT^3$ = constant, which then provides a generic temperature-time profile. We use parameter values for the thermalization time $\tau_0$ = 0.5 fm, initial volume $V_0 = \pi R^2 \tau_0$ with R = 6 fm, and a range of initial temperature 300 MeV ≤ $T_0$ ≤ 500 MeV, which are all compatible with expectations for a central collision at RHIC. For simplicity, we assume the transverse spatial distributions are uniform, and use a thermal momentum distribution for gluons. Sensitivity of the results to these parameter values and assumptions will be presented later.

The formation rate for our new mechanism has significant sensitivity to the charm quark momentum distribution, and we thus consider a wide variation for this quantity. At one extreme, we use the initial charm quark rapidity interval and transverse momentum spectrum unchanged from the perturbative QCD production processes. We then allow for energy loss processes in the plasma by reducing the width of the rapidity distribution, terminating with the opposite extreme when the formation results are almost identical to those which would result if the charm quarks were in full thermal equilibrium with the plasma. This range approximately corresponds to changing the rapidity interval Δy between one and four units.

We utilize a cross section for the dissociation of J/ψ due to collisions with gluons which is based on the operator product expansion [10]:

$$\sigma_D(k) = \frac{2\pi}{3}\left(\frac{32}{3}\right)^{2}\left(\frac{2\mu}{\epsilon_0}\right)^{1/2}\frac{1}{4\mu^{2}}\,\frac{(k/\epsilon_0 - 1)^{3/2}}{(k/\epsilon_0)^{5}}, \quad (4)$$

where $k$ is the gluon momentum, $\epsilon_0$ the binding energy, and $\mu$ the reduced mass of the quarkonium system. This form assumes the quarkonium system has a spatial size small compared with the inverse of $\Lambda_{QCD}$, and that its bound state spectrum is close to that in a nonrelativistic Coulomb potential. This same cross section is utilized with detailed balance factors to calculate the primary formation rate for the capture of a charm and anticharm quark into the J/ψ.

We have also considered a scenario in which a static screening of the color force replaces the gluon dissociation process and dominates the suppression of initially-produced J/ψ. Equivalently, the binding decreases from its vacuum value at low temperature and vanishes at high temperature.
As a simple approximation to this behavior, we multiply the vacuum value by a step function at some screening temperature $T_s$, such that total screening is active at high temperature and the formation mechanism is active at low temperatures. The numerical results for these two scenarios are identical for a screening temperature $T_s$ = 280 MeV. The screening scenario predictions fall somewhat below the gluon dissociation results for lower $T_s$. They differ by a maximum factor of two when $T_s$ decreases to 180 MeV (we have used a deconfinement temperature of 150 MeV).

We show in Fig. 1 sample calculated values of $N_{J/\psi}$ per central event as a function of the initial number of unbound charm quark pairs. Quadratic fits of Eq. (1) are superimposed. This is a direct verification of our expectations that the final population in fact retains the quadratic dependence of the initial formation rate. This also verifies that the decrease in initial unbound charm is a small effect. (These fits also contain a small linear term for the cases in which the initial bound-state population is nonzero, which accounts for the increase of the unbound charm population when dissociation occurs.)

We then extract the fitted parameters over our assumed range of initial temperature and charm quark rapidity width. The fitted $\epsilon$ values decrease quite rapidly with increasing $T_0$, as expected, and are entirely insensitive to Δy. The corresponding $\beta$ values have a significant dependence on Δy. They are less sensitive to $T_0$, but exhibit an expected decrease at large $T_0$ due to large gluon dissociation rates at initial times (the counterpart of color screening).

These fitted parameters must be supplemented by values of $\bar{N}_0$ and $x$ to determine the "suppression" factor from Eq. (2) for the new mechanism. We use the nuclear overlap function $T_{AA}$(b=0) = 29.3/mb for Au, and a pQCD estimate of the charm production cross section in p-p collisions at RHIC energy, $\sigma_{c\bar{c}}$ = 350 μb [11], to estimate $\bar{N}_0$ = 10 for central collisions. The parameter $x$ contains the fraction of initial charm pairs which formed J/ψ states before the onset of deconfinement. Fitted values from a color evaporation model [12] are consistent with $x \approx 0.01$, which we adopt as an order of magnitude estimate. This must be reduced by the suppression due to interactions with target and beam nucleons. For central collisions we use 0.6 for this factor, which results from the extrapolation of the observed nuclear effects for p-A and smaller A-B central interactions.

With these parameters fixed, we predict from Eq. (2) an enhancement, rather than a suppression, of J/ψ production, and this conclusion holds over the full range of initial parameters, i.e. initial temperatures between 300 and 500 MeV and charm quark rapidity ranges between 1 and 4. This is to be compared with predictions of models which extrapolate existing suppression mechanisms to RHIC conditions, resulting in typical suppression factors of 0.05 for central collisions [13]. Note that in addition to the qualitative change between suppression and enhancement, the actual numerical difference should be easily detectable by the RHIC experiments.

One can also predict how this new effect will vary with the centrality of the collision, which has been a key feature of deconfinement signatures analyzed at CERN SPS energies [5]. To estimate the centrality dependence, we repeat the calculation of the $\epsilon$ and $\beta$ parameters using an appropriate variation of the initial conditions with impact parameter b.
From nuclear geometry and the total non-diffractive nucleon-nucleon cross section at RHIC energies, one can estimate the total number of participant nucleons and the corresponding density per unit transverse area [14]. The former quantity has been shown to be directly proportional to the total transverse energy produced in a heavy ion collision [15]. The latter quantity is used, along with the Bjorken-model estimate of initial energy density [16], to provide an estimate of how the initial temperature of the deconfined region varies with impact parameter. We also use the ratio of these quantities to define an initial transverse area within which deconfinement is possible, thus completing the initial conditions needed to calculate the J/ψ production and suppression. The average initial charm number varies with impact parameter in proportion to the nuclear overlap integral $T_{AA}$(b). The impact-parameter dependence of the nuclear suppression factor is determined by the average path length encountered by initial J/ψ as they pass through the remaining nucleons [4]. All of these b-dependent effects are normalized to the previous values used for calculations at b = 0.

It is revealing to express these results in terms of the ratio of final J/ψ to initially-produced charm pairs, both of which will be measurable at RHIC. (This normalization automatically eliminates the trivial effects of increased collision energy and phase space.) In Fig. 2, the solid symbols are the full results predicted with the inclusion of our new production mechanism. We include the full variation of these results with initial temperature (squares, circles, and diamonds are $T_0$ = 300, 400, and 500 MeV, respectively), charm quark distribution (full lines are thermal; combinations of dashed and dotted lines use Δy ranging from one through four), and also the variation with screening temperature for the alternate scenario (triangles with $T_s$ = 200, 240, and 280 MeV). The centrality dependence is represented by the total participant number $N_{part}$(b). The effect is somewhat obscured by the log scale, but the ratio predictions typically increase about 50% between peripheral and central events. Note that this increase is in addition to the expected dependence of total charm production on centrality, so that the quadratic nature of our new production mechanism is evident. We also show for contrast the results without the new mechanism, when only dissociation by gluons is included (formation turned off for the curves with open symbols). These results have the opposite centrality dependence, and the absolute magnitudes are very much smaller. It is evident that the new mechanism dominates J/ψ production in a deconfined medium at all but the largest impact parameters, and that this situation survives the uncertainties associated with variation in the model parameters.

For completeness, we list a few effects of variations in our other parameters and assumptions which have relatively minor impact on the results.

1. The initial charm production at RHIC could be decreased due to nuclear shadowing of the gluon structure functions. Model estimates [17] indicate this effect could result in up to a 20% reduction.

2. The validity of the cross section used assumes strictly nonrelativistic bound states. Several alternative models for this cross section result in substantially higher values. When we arbitrarily increase the cross section by a factor of two, or alternatively set the cross section to its maximum value (1.5 mb) at all energies, we find an increase in the final population of about 15%.
This occurs because the kinetics always favors formation over dissociation, and a larger cross section just allows the reactions to approach completion more easily within the lifetime of the QGP.

3. A nonzero transverse expansion will be expected at some level, which will reduce the lifetime of the QGP and reduce the efficiency of the new formation mechanism. We have calculated results for central collisions with variable transverse expansion, and find a decrease in the $\beta$ parameter of about 15% for each increase of 0.2 in the transverse velocity.

4. Model calculations of the approach to chemical equilibrium for light quarks and gluons indicate that the initial density of gluons in a QGP falls substantially below that for full phase space occupancy. We have checked our model predictions in this scenario, using a factor of two decrease in the gluon density at $\tau_0$. This decreases the effectiveness of the dissociation process, such that the final J/ψ production is increased by about 35%. We also justify neglecting dissociation via collisions with light quarks in this scenario, since the population ratio of quarks to gluons is expected to be a small fraction. This is potentially important, since the inverse process is inhibited by the required three-body initial state.

5. The effect of a finite J/ψ formation time may also be considered. The total effect is a competition between delayed dissociation (a J/ψ cannot be dissociated before it is formed) and a possible loss of states whose formation started just before the hadronization point. A conservative upper limit would bound any decrease by the ratio of formation to QGP lifetime, certainly in the 15% range.

6. Although the new formation mechanism is large compared with dissociation, it is small on an absolute basis, with J/ψ yields only a few percent of total charm. These small values can be traced in part to the magnitude of the spatial charm density, which enters in the calculation of the time-integrated flux of charm quark pairs. Our assumption of constant spatial density certainly underestimates the charm density, since it is likely somewhat peaked toward the center of the nuclear overlap region in each collision. A correspondingly smaller deconfined region is also to be expected, but it will still contain virtually all initial charm and have a similar time and longitudinal expansion profile. Thus a more realistic spatial model should increase the formation yield beyond our simple estimates.

Overall, we predict that at high energies the J/ψ production rate will provide an even better signal for deconfinement than originally proposed. Consideration of multiple heavy quark production made possible by higher collision energy effectively adds another dimension to the parameter space within which one searches for patterns of quarkonium behavior in a deconfined medium.

The recent initial operation of RHIC at $\sqrt{s}$ = 56 and 130A GeV provides an opportunity to test the predicted energy dependence of this new mechanism. We show in Fig. 3 the expected energy variation of the total J/ψ yield per central collision at RHIC. The individual lines include the full variation over the initial temperature and charm quark momentum distributions. The strong increase with energy comes from the quadratic dependence on initial charm production, coupled with the increase of the charm production cross section with energy as calculated in pQCD [11]. For comparison, we show the energy dependence which results from just initial production, followed by dissociation alone.
If such a strong increase is observed at RHIC, it would signal the existence of a production mechanism nonlinear in initial charm. Taken together, the enhanced magnitude and centrality and energy dependence predict signals which will be difficult to imitate with conventional hadronic processes. The extension of this scenario to LHC energies will involve hundreds of initially-produced charm quark pairs, and we expect the effects of this new production mechanism to be striking. This work was supported by a grant from the U.S. Department of Energy, DE-FG03-95ER40937. References • [1] For a review, see J. Harris and B. Müller, Ann. Rev. Nucl. Part. Sci. 46, 71 (1996). • [2] T. Matsui and H. Satz, Phys. Lett. B178, 416 (1986). • [3] M. J. Leitch et al., Phys. Rev. Lett. 84, 3256 (2000); D. M. Alde et al., Phys. Rev. Lett. 66, 133 (1991); M.C. Abreu et al., Phys. Lett. B444, 516 (1998). • [4] C. Gerschel and J. Hüfner, Z. Phys. C47, 171 (1992). • [5] M.C. Abreu et al., Phys. Lett. B477, 28 (2000). • [6] M. Nardi and H. Satz, Phys. Lett. B442, 14 (1998). • [7] A. Capella, E.G. Ferreiro, and A.B. Kaidalov, hep-ph/0002300; Y. He, J. Hüfner, and B. Kopeliovich, Phys. Lett. B477, 93 (2000); P. Hoyer, and S. Peigné, Phys. Rev. D59, 034011 (1999). • [8] M. Schroedter, R.L. Thews, and J. Rafelski, Phys. Rev. C62, 024905 (2000). • [9] D. Kharzeev and H. Satz, Phys. Lett. B334, 155 (1994). • [10] M. E. Peskin, Nucl. Phys. B156, 365 (1979); G. Bhanot and M. E. Peskin, Nucl. Phys. B156, 391 (1979). • [11] P. L. McGaughey, E. Quack, P. V. Ruuskanen, R. Vogt, and X.-N. Wang, Int. J. Mod. Phys. A10, 2999 (1995). • [12] R. Gavai, D. Kharzeev, H. Satz, G. Schuler, K. Sridhar, and R. Vogt, Int. J. Mod. Phys. A10, 3043 (1995). • [13] R. Vogt, Nucl. Phys. A661, 250c (1999). • [14] A. Białas, M. Bleszyński, and W. Czyz, Nucl. Phys. B111, 461 (1976). • [15] S. Margetis et al., Nucl Phys. A590, 355c (1995). • [16] J. D. Bjorken, Phys. Rev. D27, 140 (1983). • [17] K.J. Eskola, V.J. Kolhinen and C.A. Salgado, Eur. Phys. J. C9, 61 (1999).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9638528823852539, "perplexity": 1017.6094639429061}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375439.77/warc/CC-MAIN-20210308112849-20210308142849-00488.warc.gz"}
http://users.ox.ac.uk/~sedm4978/BG_2017.html
# A novel approach for solving convex problems with cardinality constraints G. Banjac and P. Goulart in IFAC World Congress, Toulouse, France, July 2017. BibTeX  URL @inproceedings{BG:2017, author = {G. Banjac and P. Goulart}, title = {A novel approach for solving convex problems with cardinality constraints}, booktitle = {IFAC World Congress}, year = {2017}, url = {https://doi.org/10.1016/j.ifacol.2017.08.2174}, doi = {10.1016/j.ifacol.2017.08.2174} } In this paper we consider the problem of minimizing a convex differentiable function subject to sparsity constraints. Such constraints are non-convex and the resulting optimization problem is known to be hard to solve. We propose a novel generalization of this problem and demonstrate that it is equivalent to the original sparsity-constrained problem if a certain weighting term is sufficiently large. We use the proximal gradient method to solve our generalized problem, and show that under certain regularity assumptions on the objective function the algorithm converges to a local minimum. We further propose an updating heuristic for the weighting parameter, ensuring that the solution produced is locally optimal for the original sparsity constrained problem. Numerical results show that our algorithm outperforms other algorithms proposed in the literature.
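The abstract does not spell out the algorithm, so the following MATLAB sketch shows only a generic proximal-gradient iteration for a cardinality constraint (projection by keeping the s largest-magnitude entries), not the paper's generalized formulation or its weighting-parameter heuristic; the objective, step size, and sparsity level are arbitrary stand-ins:

```
% Minimize f(x) = 0.5*||A*x - b||^2 subject to nnz(x) <= s,
% via proximal gradient: the prox of the cardinality constraint
% keeps the s entries of largest magnitude (hard thresholding).
rng(0);
A = randn(60,100); b = randn(60,1);
s = 5;                        % Sparsity level (assumed)
L = norm(A)^2;                % Lipschitz constant of the gradient
x = zeros(100,1);
for k = 1:500
    g = A'*(A*x - b);         % Gradient of the smooth part
    z = x - g/L;              % Gradient step
    [~,idx] = maxk(abs(z),s); % Indices of the s largest entries
    x = zeros(100,1);
    x(idx) = z(idx);          % Projection onto the sparsity constraint
end
```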
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9117537140846252, "perplexity": 406.6438569818502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00582.warc.gz"}
https://jmlr.org/beta/papers/v21/19-496.html
Estimation of a Low-rank Topic-Based Model for Information Cascades Ming Yu, Varun Gupta, Mladen Kolar. Year: 2020, Volume: 21, Issue: 71, Pages: 1−47 Abstract We consider the problem of estimating the latent structure of a social network based on the observed information diffusion events, or cascades, where the observations for a given cascade consist of only the timestamps of infection for infected nodes but not the source of the infection. Most of the existing work on this problem has focused on estimating a diffusion matrix without any structural assumptions on it. In this paper, we propose a novel model based on the intuition that an information is more likely to propagate among two nodes if they are interested in similar topics which are also prominent in the information content. In particular, our model endows each node with an influence vector (which measures how authoritative the node is on each topic) and a receptivity vector (which measures how susceptible the node is for each topic). We show how this node-topic structure can be estimated from the observed cascades, and prove the consistency of the estimator. Experiments on synthetic and real data demonstrate the improved performance and better interpretability of our model compared to existing state-of-the-art methods.
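To make the modeling intuition concrete, here is one plausible reading of the node-topic structure described above (the notation is mine, not the authors'): if node $i$ has influence vector $a_i \in \mathbb{R}^K$, node $j$ has receptivity vector $r_j \in \mathbb{R}^K$, and a cascade has topic weights $w$ over the $K$ topics, then the propensity for the information to pass from $i$ to $j$ could be scored as
$$\alpha_{ij}(w) = \sum_{k=1}^{K} w_k\, a_{ik}\, r_{jk} = a_i^\top \operatorname{diag}(w)\, r_j,$$
which is large exactly when $i$ is authoritative and $j$ is susceptible on the topics prominent in the content.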
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8617413640022278, "perplexity": 496.95978961711535}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368608.66/warc/CC-MAIN-20210304051942-20210304081942-00490.warc.gz"}
http://www.njohnston.ca/publications/the-multiplicative-domain-in-quantum-error-correction/
## The Multiplicative Domain in Quantum Error Correction

Abstract: We show that the multiplicative domain of a completely positive map yields a new class of quantum error correcting codes. In the case of a unital quantum channel, these are precisely the codes that do not require a measurement as part of the recovery process, the so-called unitarily correctable codes. In the arbitrary, not necessarily unital case, they form a proper subset of unitarily correctable codes that can be computed from properties of the channel. As part of the analysis we derive a representation-theoretic characterization of subsystem codes. We also present a number of illustrative examples.

Cite as: Journal of Physics A: Mathematical and Theoretical 42, 245303 (2009)

Presentation Dates and Locations:

• Quantum Information & Geometric Statistics Seminar (QuIGS) – University of Guelph. July 2008
• Canadian Mathematical Society Winter 2008 Meeting – Ottawa, Ontario. December 2008
• Canadian Quantum Information Student Conference – Toronto, Ontario. August 2009
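For readers who have not met the term: for a unital completely positive map $\Phi$, the multiplicative domain is the standard object (this is general background, not a statement specific to this paper)
$$MD(\Phi) = \{\, a : \Phi(a)\Phi(b) = \Phi(ab) \ \text{and}\ \Phi(b)\Phi(a) = \Phi(ba) \ \text{for all } b \,\},$$
i.e., the largest subalgebra on which $\Phi$ restricts to a $*$-homomorphism.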
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8250589370727539, "perplexity": 1026.5384978131249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103983398.56/warc/CC-MAIN-20220702010252-20220702040252-00308.warc.gz"}
http://proceedings.mlr.press/v89/ali19a.html
# A Continuous-Time View of Early Stopping for Least Squares Regression Alnur Ali, J. Zico Kolter, Ryan J. Tibshirani ; Proceedings of Machine Learning Research, PMLR 89:1370-1378, 2019. #### Abstract We study the statistical properties of the iterates generated by gradient descent, applied to the fundamental problem of least squares regression. We take a continuous-time view, i.e., consider infinitesimal step sizes in gradient descent, in which case the iterates form a trajectory called gradient flow. Our primary focus is to compare the risk of gradient flow to that of ridge regression. Under the calibration $t=1/\lambda$—where $t$ is the time parameter in gradient flow, and $\lambda$ the tuning parameter in ridge regression—we prove that the risk of gradient flow is no less than 1.69 times that of ridge, along the entire path (for all $t \geq 0$). This holds in finite samples with very weak assumptions on the data model (in particular, with no assumptions on the features $X$). We prove that the same relative risk bound holds for prediction risk, in an average sense over the underlying signal $\beta_0$. Finally, we examine limiting risk expressions (under standard Marchenko-Pastur asymptotics), and give supporting numerical experiments.
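For concreteness, under one common scaling of the least-squares problem (an assumption here; the paper's exact normalization may differ), gradient flow on $f(\beta)=\frac{1}{2n}\|y-X\beta\|_2^2$ started from $\beta(0)=0$ admits the closed form
$$\hat{\beta}^{\mathrm{gf}}(t) = (X^\top X)^{+}\left(I - e^{-tX^\top X/n}\right)X^\top y, \qquad \hat{\beta}^{\mathrm{ridge}}_\lambda = (X^\top X + n\lambda I)^{-1}X^\top y,$$
so on each eigenvalue $s_i$ of $X^\top X$ the two estimators apply the shrinkage factors $1-e^{-t s_i/n}$ and $s_i/(s_i+n\lambda)$, which agree to leading order in $s_i$ under the calibration $t=1/\lambda$.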
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655169248580933, "perplexity": 538.6177293728027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578655155.88/warc/CC-MAIN-20190424174425-20190424200425-00367.warc.gz"}
https://www.physicsforums.com/threads/can-anyone-integrate-this.277487/
# Homework Help: Can anyone integrate this?

1. Dec 6, 2008

### Hamid1

Hi all. Can anyone integrate x dx/(x^2 + 4x + 5), and this one: x dx/sqrt(x^2 + 4x + 13)? Thank you, and excuse my English. :)

2. Dec 6, 2008

### virus

Are you interested in the answer or the method? If you need just the answer then you can use "www.integrals.wolfram.com".

3. Dec 6, 2008

### Hamid1

Yes, I want the method.

4. Dec 6, 2008

### virus

Rewrite the numerator as (2x + 4 - 4)/2. Try splitting the integral into two functions now.

5. Dec 6, 2008

### Hootenanny (Staff Emeritus)

That's not the way it works here. We will help you with your homework, but we will not do it for you. You have to put some effort in. What have you tried thus far?

6. Dec 6, 2008

### Hamid1

Can you explain more? I have solved about 100 integrals but I can't do these two. I don't know the method. Thank you.

7. Dec 6, 2008

### virus

Rewrite as the integral of (1/2) * (2x + 4)/(x^2 + 4x + 5) minus 2 times the integral of 1/(x^2 + 4x + 5). Now try to solve both of these integrals separately.

8. Dec 6, 2008

### Hamid1

Thank you. The first part is easy to solve, but how can I solve the second part?

9. Dec 6, 2008

### Hootenanny (Staff Emeritus)

Use partial fractions.

10. Dec 6, 2008

### HallsofIvy

Since you have "x dx" in the numerator, which should remind you of the derivative of x^2, you should immediately think about getting the denominator in the form "u^2 + a" so you can substitute. In other words, start by completing the square in the denominator.

11. Dec 6, 2008

### Hamid1

Can you tell me how? Because I don't know the English ("partial fractions") very well.

12. Dec 6, 2008

### Hootenanny (Staff Emeritus)

Can you factorise the denominator and then split the fraction into two different fractions?

13. Dec 6, 2008

### HallsofIvy

No, the whole point of this problem is that you cannot factor the denominator. Complete the square instead.

14. Dec 6, 2008

### Hootenanny (Staff Emeritus)

Whoops! I thought the denominator was (x^2 - 4x - 5).
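The method suggested in the thread (split the numerator, then complete the square) can be checked symbolically. A minimal SymPy sketch, not part of the original thread, that verifies both antiderivatives:

```python
import sympy as sp

x = sp.symbols('x')

# First integral: rewrite x as (2x + 4)/2 - 2, then integrate each piece.
# Expected (up to a constant): log(x**2 + 4*x + 5)/2 - 2*atan(x + 2)
I1 = sp.integrate(x / (x**2 + 4*x + 5), x)
print(sp.simplify(I1))

# Second integral: complete the square, x**2 + 4*x + 13 = (x + 2)**2 + 9.
# Expected (up to a constant): sqrt(x**2 + 4*x + 13) - 2*asinh((x + 2)/3)
I2 = sp.integrate(x / sp.sqrt(x**2 + 4*x + 13), x)
print(sp.simplify(I2))

# Differentiating confirms both results (SymPy may print equivalent forms).
print(sp.simplify(sp.diff(I1, x) - x / (x**2 + 4*x + 5)))          # 0
print(sp.simplify(sp.diff(I2, x) - x / sp.sqrt(x**2 + 4*x + 13)))  # 0
```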
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984656035900116, "perplexity": 4004.774571145674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215284.54/warc/CC-MAIN-20180819184710-20180819204710-00000.warc.gz"}
http://hitchhikersgui.de/Hurwitz_zeta_function
# Hurwitz zeta function

In mathematics, the Hurwitz zeta function, named after Adolf Hurwitz, is one of the many zeta functions. It is formally defined for complex arguments s with Re(s) > 1 and q with Re(q) > 0 by

$$\zeta(s,q) = \sum_{n=0}^{\infty} \frac{1}{(q+n)^s}.$$

This series is absolutely convergent for the given values of s and q and can be extended to a meromorphic function defined for all s ≠ 1. The Riemann zeta function is ζ(s, 1).

[Figure: Hurwitz zeta function corresponding to q = 1/3, generated as a Matplotlib plot using a version of the domain coloring method.[1]]

## Analytic continuation

[Figure: Hurwitz zeta function corresponding to q = 24/25.]

If $\Re(s) \leq 1$, the Hurwitz zeta function can be defined by the equation

$$\zeta(s,q) = \Gamma(1-s) \frac{1}{2\pi i} \int_C \frac{z^{s-1} e^{qz}}{1 - e^z}\, dz,$$

where the contour $C$ is a loop around the negative real axis. This provides an analytic continuation of $\zeta(s,q)$. The Hurwitz zeta function can be extended by analytic continuation to a meromorphic function defined for all complex numbers $s$ with $s \neq 1$. At $s = 1$ it has a simple pole with residue $1$. The constant term is given by

$$\lim_{s \to 1} \left[ \zeta(s,q) - \frac{1}{s-1} \right] = \frac{-\Gamma'(q)}{\Gamma(q)} = -\psi(q),$$

where $\Gamma$ is the Gamma function and $\psi$ is the digamma function.

## Series representation

[Figure: Hurwitz zeta function as a function of q with s = 3 + 4i.]

A convergent Newton series representation defined for (real) q > 0 and any complex s ≠ 1 was given by Helmut Hasse in 1930:[2]

$$\zeta(s,q) = \frac{1}{s-1} \sum_{n=0}^{\infty} \frac{1}{n+1} \sum_{k=0}^{n} (-1)^k \binom{n}{k} (q+k)^{1-s}.$$

This series converges uniformly on compact subsets of the s-plane to an entire function. The inner sum may be understood to be the nth forward difference of $q^{1-s}$; that is,

$$\Delta^n q^{1-s} = \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} (q+k)^{1-s},$$

where Δ is the forward difference operator.
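Hasse's series can be truncated and checked numerically. A minimal sketch using mpmath (the truncation level N and the sample point are arbitrary choices) compares it against mpmath's built-in Hurwitz zeta; the working precision is set well above the target because the alternating inner sum cancels heavily:

```python
from mpmath import mp, binomial, zeta

mp.dps = 50  # extra precision absorbs cancellation in the inner sum

def hasse_hurwitz(s, q, N=60):
    # Truncation of Hasse's globally convergent series:
    # zeta(s,q) = 1/(s-1) * sum_n 1/(n+1) * sum_k (-1)^k C(n,k) (q+k)^(1-s)
    total = mp.mpf(0)
    for n in range(N):
        inner = mp.fsum((-1)**k * binomial(n, k) * (q + k)**(1 - s)
                        for k in range(n + 1))
        total += inner / (n + 1)
    return total / (s - 1)

s, q = mp.mpc(0.5, 3), mp.mpf(0.25)
print(hasse_hurwitz(s, q))
print(zeta(s, q))  # built-in Hurwitz zeta; should agree to several digits
```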
In terms of the forward difference operator, one may thus write

$$\zeta(s,q) = \frac{1}{s-1} \sum_{n=0}^{\infty} \frac{(-1)^n}{n+1} \Delta^n q^{1-s} = \frac{1}{s-1} \frac{\log(1+\Delta)}{\Delta}\, q^{1-s}.$$

Other series converging globally include these examples:

$$\zeta(s,v-1) = \frac{1}{s-1} \sum_{n=0}^{\infty} H_{n+1} \sum_{k=0}^{n} (-1)^k \binom{n}{k} (k+v)^{1-s}$$

$$\zeta(s,v) = \frac{k!}{(s-k)_k} \sum_{n=0}^{\infty} \frac{1}{(n+k)!} \left[{n+k \atop n}\right] \sum_{l=0}^{n+k-1} (-1)^l \binom{n+k-1}{l} (l+v)^{k-s}, \quad k = 1, 2, 3, \ldots$$

$$\zeta(s,v) = \frac{v^{1-s}}{s-1} + \sum_{n=0}^{\infty} |G_{n+1}| \sum_{k=0}^{n} (-1)^k \binom{n}{k} (k+v)^{-s}$$

$$\zeta(s,v) = \frac{(v-1)^{1-s}}{s-1} - \sum_{n=0}^{\infty} C_{n+1} \sum_{k=0}^{n} (-1)^k \binom{n}{k} (k+v)^{-s}$$

$$\zeta(s,v)\big(v - \tfrac{1}{2}\big) = \frac{s-2}{s-1} \zeta(s-1,v) + \sum_{n=0}^{\infty} (-1)^n G_{n+2} \sum_{k=0}^{n} (-1)^k \binom{n}{k} (k+v)^{-s}$$

$$\zeta(s,v) = -\sum_{l=1}^{k-1} \frac{(k-l+1)_l}{(s-l)_l} \zeta(s-l,v) + \sum_{l=1}^{k} \frac{(k-l+1)_l}{(s-l)_l} v^{l-s} + k \sum_{n=0}^{\infty} (-1)^n G_{n+1}^{(k)} \sum_{k=0}^{n} (-1)^k \binom{n}{k} (k+v)^{-s}$$

where $H_n$ are the harmonic numbers, $\left[{\cdot \atop \cdot}\right]$ are the Stirling numbers of the first kind, $(\ldots)_{\ldots}$ is the Pochhammer symbol, $G_n$ are the Gregory coefficients, $G_n^{(k)}$ are the Gregory coefficients of higher order, and $C_n$ are the Cauchy numbers of the second kind ($C_1 = 1/2$, $C_2 = 5/12$, $C_3 = 3/8$, ...); see Blagouchine's paper.[3]

## Integral representation

The function has an integral representation in terms of the Mellin transform as

$$\zeta(s,q) = \frac{1}{\Gamma(s)} \int_0^{\infty} \frac{t^{s-1} e^{-qt}}{1 - e^{-t}}\, dt$$

for $\Re s > 1$ and $\Re q > 0$.

## Hurwitz's formula

Hurwitz's formula is the theorem that

$$\zeta(1-s,x) = \frac{1}{2s} \left[ e^{-i\pi s/2} \beta(x;s) + e^{i\pi s/2} \beta(1-x;s) \right]$$

where

$$\beta(x;s) = 2\Gamma(s+1) \sum_{n=1}^{\infty} \frac{\exp(2\pi i n x)}{(2\pi n)^s} = \frac{2\Gamma(s+1)}{(2\pi)^s} \operatorname{Li}_s(e^{2\pi i x})$$

is a representation of the zeta that is valid for $0 \leq x \leq 1$ and $s > 1$. Here, $\operatorname{Li}_s(z)$ is the polylogarithm.

## Functional equation

The functional equation relates values of the zeta on the left- and right-hand sides of the complex plane. For integers $1 \leq m \leq n$,

$$\zeta\left(1-s, \frac{m}{n}\right) = \frac{2\Gamma(s)}{(2\pi n)^s} \sum_{k=1}^{n} \left[ \cos\left( \frac{\pi s}{2} - \frac{2\pi k m}{n} \right) \zeta\left(s, \frac{k}{n}\right) \right]$$

holds for all values of $s$.
## Taylor series

The derivative of the zeta in the second argument is a shift:

$$\frac{\partial}{\partial q} \zeta(s,q) = -s \zeta(s+1,q).$$

Thus, the Taylor series has the distinctly umbral form:

$$\zeta(s,x+y) = \sum_{k=0}^{\infty} \frac{y^k}{k!} \frac{\partial^k}{\partial x^k} \zeta(s,x) = \sum_{k=0}^{\infty} \binom{s+k-1}{s-1} (-y)^k \zeta(s+k,x).$$

Alternatively,

$$\zeta(s,q) = \frac{1}{q^s} + \sum_{n=0}^{\infty} (-q)^n \binom{s+n-1}{n} \zeta(s+n),$$

with $|q| < 1$.[4]

Closely related is the Stark–Keiper formula:

$$\zeta(s,N) = \sum_{k=0}^{\infty} \left[ N + \frac{s-1}{k+1} \right] \binom{s+k-1}{s-1} (-1)^k \zeta(s+k,N),$$

which holds for integer $N$ and arbitrary $s$. See also Faulhaber's formula for a similar relation on finite sums of powers of integers.

## Laurent series

The Laurent series expansion can be used to define Stieltjes constants that occur in the series

$$\zeta(s,q) = \frac{1}{s-1} + \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \gamma_n(q)\, (s-1)^n.$$

Specifically $\gamma_0(q) = -\psi(q)$ and $\gamma_0(1) = -\psi(1) = \gamma_0 = \gamma$.

## Fourier transform

The discrete Fourier transform of the Hurwitz zeta function with respect to the order $s$ is the Legendre chi function.

## Relation to Bernoulli polynomials

The function $\beta$ defined above generalizes the Bernoulli polynomials:

$$B_n(x) = -\Re\left[ (-i)^n \beta(x;n) \right],$$

where $\Re z$ denotes the real part of $z$. Alternately,

$$\zeta(-n,x) = -\frac{B_{n+1}(x)}{n+1}.$$

In particular, the relation holds for $n = 0$ and one has

$$\zeta(0,x) = \frac{1}{2} - x.$$

## Relation to Jacobi theta function

If $\vartheta(z,\tau)$ is the Jacobi theta function, then

$$\int_0^{\infty} \left[ \vartheta(z,it) - 1 \right] t^{s/2} \frac{dt}{t} = \pi^{-(1-s)/2}\, \Gamma\left( \frac{1-s}{2} \right) \left[ \zeta(1-s,z) + \zeta(1-s,1-z) \right]$$

holds for $\Re s > 0$ and $z$ complex, but not an integer. For $z = n$ an integer, this simplifies to

$$\int_0^{\infty} \left[ \vartheta(n,it) - 1 \right] t^{s/2} \frac{dt}{t} = 2\, \pi^{-(1-s)/2}\, \Gamma\left( \frac{1-s}{2} \right) \zeta(1-s) = 2\, \pi^{-s/2}\, \Gamma\left( \frac{s}{2} \right) \zeta(s),$$

where ζ here is the Riemann zeta function. Note that this latter form is the functional equation for the Riemann zeta function, as originally given by Riemann. The distinction based on $z$ being an integer or not accounts for the fact that the Jacobi theta function converges to the Dirac delta function in $z$ as $t \to 0$.

## Relation to Dirichlet L-functions

At rational arguments the Hurwitz zeta function may be expressed as a linear combination of Dirichlet L-functions and vice versa: the Hurwitz zeta function coincides with Riemann's zeta function ζ(s) when q = 1; when q = 1/2 it is equal to $(2^s - 1)\zeta(s)$;[5] and if q = n/k with k > 2, (n,k) > 1 and 0 < n < k, then[6]

$$\zeta(s, n/k) = \frac{k^s}{\varphi(k)} \sum_{\chi} \overline{\chi}(n)\, L(s,\chi),$$

the sum running over all Dirichlet characters mod $k$.
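Two of the special values above, $\zeta(s, 1/2) = (2^s - 1)\zeta(s)$ and $\zeta(0, x) = 1/2 - x$, are easy to spot-check numerically. A short mpmath sketch (sample points chosen arbitrarily):

```python
from mpmath import mp, zeta

mp.dps = 25

# zeta(s, 1/2) = (2^s - 1) * zeta(s)
s = mp.mpc(3, 2)
print(zeta(s, mp.mpf(1) / 2))
print((mp.power(2, s) - 1) * zeta(s))

# zeta(0, x) = 1/2 - x, via the Bernoulli-polynomial relation
x = mp.mpf(0.3)
print(zeta(0, x), mp.mpf(1) / 2 - x)
```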
In the opposite direction we have the linear combination[5]

$$L(s,\chi) = \frac{1}{k^s} \sum_{n=1}^{k} \chi(n)\, \zeta\left(s, \frac{n}{k}\right).$$

There is also the multiplication theorem

$$k^s \zeta(s) = \sum_{n=1}^{k} \zeta\left(s, \frac{n}{k}\right),$$

of which a useful generalization is the distribution relation[7]

$$\sum_{p=0}^{q-1} \zeta(s, a + p/q) = q^s\, \zeta(s, qa).$$

(This last form is valid whenever $q$ is a natural number and $1 - qa$ is not.)

## Zeros

If q = 1 the Hurwitz zeta function reduces to the Riemann zeta function itself; if q = 1/2 it reduces to the Riemann zeta function multiplied by a simple function of the complex argument s (vide supra), leading in each case to the difficult study of the zeros of Riemann's zeta function. In particular, there will be no zeros with real part greater than or equal to 1. However, if 0 < q < 1 and q ≠ 1/2, then there are zeros of Hurwitz's zeta function in the strip 1 < Re(s) < 1 + ε for any positive real number ε. This was proved by Davenport and Heilbronn for rational or transcendental irrational q,[8] and by Cassels for algebraic irrational q.[5][9]

## Rational values

The Hurwitz zeta function occurs in a number of striking identities at rational values.[10] In particular, values in terms of the Euler polynomials $E_n(x)$:

$$E_{2n-1}\left( \frac{p}{q} \right) = (-1)^n \frac{4(2n-1)!}{(2\pi q)^{2n}} \sum_{k=1}^{q} \zeta\left( 2n, \frac{2k-1}{2q} \right) \cos\frac{(2k-1)\pi p}{q}$$

and

$$E_{2n}\left( \frac{p}{q} \right) = (-1)^n \frac{4(2n)!}{(2\pi q)^{2n+1}} \sum_{k=1}^{q} \zeta\left( 2n+1, \frac{2k-1}{2q} \right) \sin\frac{(2k-1)\pi p}{q}.$$

One also has

$$\zeta\left( s, \frac{2p-1}{2q} \right) = 2(2q)^{s-1} \sum_{k=1}^{q} \left[ C_s\left( \frac{k}{q} \right) \cos\left( \frac{(2p-1)\pi k}{q} \right) + S_s\left( \frac{k}{q} \right) \sin\left( \frac{(2p-1)\pi k}{q} \right) \right],$$

which holds for $1 \leq p \leq q$. Here, the $C_\nu(x)$ and $S_\nu(x)$ are defined by means of the Legendre chi function $\chi_\nu$ as

$$C_\nu(x) = \operatorname{Re} \chi_\nu(e^{ix}) \quad \text{and} \quad S_\nu(x) = \operatorname{Im} \chi_\nu(e^{ix}).$$

For integer values of ν, these may be expressed in terms of the Euler polynomials. These relations may be derived by employing the functional equation together with Hurwitz's formula, given above.

## Applications

Hurwitz's zeta function occurs in a variety of disciplines. Most commonly, it occurs in number theory, where its theory is the deepest and most developed. However, it also occurs in the study of fractals and dynamical systems. In applied statistics, it occurs in Zipf's law and the Zipf–Mandelbrot law. In particle physics, it occurs in a formula by Julian Schwinger,[11] giving an exact result for the pair production rate of a Dirac electron in a uniform electric field.

## Special cases and generalizations

The Hurwitz zeta function with a positive integer m is related to the polygamma function:

$$\psi^{(m)}(z) = (-1)^{m+1} m!\, \zeta(m+1, z).$$

For negative integer −n the values are related to the Bernoulli polynomials:[12]

$$\zeta(-n, x) = -\frac{B_{n+1}(x)}{n+1}.$$

The Barnes zeta function generalizes the Hurwitz zeta function.
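Both the multiplication theorem and the polygamma relation from this section can be verified numerically. A short mpmath sketch (sample points chosen arbitrarily):

```python
from mpmath import mp, zeta, psi, factorial, fsum

mp.dps = 25

# Multiplication theorem: k^s * zeta(s) = sum_{n=1}^{k} zeta(s, n/k)
s, k = mp.mpc(2, 1), 5
print(mp.power(k, s) * zeta(s))
print(fsum(zeta(s, mp.mpf(n) / k) for n in range(1, k + 1)))

# Polygamma relation: psi^(m)(z) = (-1)^(m+1) * m! * zeta(m+1, z)
m, z = 3, mp.mpf(1.7)
print(psi(m, z))
print((-1) ** (m + 1) * factorial(m) * zeta(m + 1, z))
```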
The Lerch transcendent generalizes the Hurwitz zeta:

$$\Phi(z, s, q) = \sum_{k=0}^{\infty} \frac{z^k}{(k+q)^s},$$

and thus

$$\zeta(s, q) = \Phi(1, s, q).$$

The Hurwitz zeta can be expressed via the hypergeometric function:

$$\zeta(s, a) = a^{-s} \cdot {}_{s+1}F_s(1, a_1, a_2, \ldots, a_s;\, a_1 + 1, a_2 + 1, \ldots, a_s + 1;\, 1)$$

where $a_1 = a_2 = \ldots = a_s = a$, $a \notin \mathbb{N}$, and $s \in \mathbb{N}^+$;

and via the Meijer G-function:

$$\zeta(s, a) = G^{\,1,\,s+1}_{\,s+1,\,s+1} \left( -1 \; \left| \; \begin{matrix} 0, 1-a, \ldots, 1-a \\ 0, -a, \ldots, -a \end{matrix} \right. \right), \qquad s \in \mathbb{N}^+.$$

## Notes

1. ^ http://nbviewer.ipython.org/github/empet/Math/blob/master/DomainColoring.ipynb
2. ^ Hasse, Helmut (1930), "Ein Summierungsverfahren für die Riemannsche ζ-Reihe", Mathematische Zeitschrift, 32 (1): 458–464, doi:10.1007/BF01194645.
3. ^ Blagouchine, Iaroslav V. (2018), "Three Notes on Ser's and Hasse's Representations for the Zeta-functions", Integers (Electronic Journal of Combinatorial Number Theory), 18A: 1–45.
4. ^ Vepstas, Linas (2007), "An efficient algorithm for accelerating the convergence of oscillatory series, useful for computing the polylogarithm and Hurwitz zeta functions", Numerical Algorithms, 47: 211–252, doi:10.1007/s11075-007-9153-8.
5. ^ a b c Davenport (1967), p. 73.
6. ^ Lowry, David, "Hurwitz Zeta is a sum of Dirichlet L functions, and vice-versa", mixedmath. Retrieved 8 February 2013.
7. ^ Kubert, Daniel S.; Lang, Serge (1981), Modular Units, Grundlehren der Mathematischen Wissenschaften 244, Springer-Verlag, p. 13, ISBN 0-387-90517-0.
8. ^ Davenport, H.; Heilbronn, H. (1936), "On the zeros of certain Dirichlet series", Journal of the London Mathematical Society, 11 (3): 181–185, doi:10.1112/jlms/s1-11.3.181.
9. ^ Cassels, J. W. S. (1961), "Footnote to a note of Davenport and Heilbronn", Journal of the London Mathematical Society, 36 (1): 177–184, doi:10.1112/jlms/s1-36.1.177.
10. ^ Cvijović, Djurdje; Klinowski, Jacek (1999), "Values of the Legendre chi and Hurwitz zeta functions at rational arguments", Mathematics of Computation, 68 (228): 1623–1630, doi:10.1090/S0025-5718-99-01091-1.
11. ^ Schwinger, J. (1951), "On gauge invariance and vacuum polarization", Physical Review, 82 (5): 664–679, doi:10.1103/PhysRev.82.664.
12. ^ Apostol (1976), p. 264.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 73, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851648211479187, "perplexity": 682.9425665997342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865411.56/warc/CC-MAIN-20180523024534-20180523044534-00101.warc.gz"}
https://www.littlehouseinthevalley.com/9jbuy7l/n6fkq.php?ad4082=gaussian-process-regression-machine-learning
where $$\sigma_f, \ell > 0$$ are hyperparameters. Connection to … The model is then trained with the RSS training samples. Gaussian process regression (GPR). The CV can be used for feature selection and hyperparameter tuning. Accumulated errors could be introduced into the localization process when the robot moves around. ISBN 0-262-18253-X 1. The model-based positioning system involves offline and online phases. $$f_*|X, y, X_* \sim N(\bar{f}_*, \text{cov}(f_*))$$ Wireless indoor positioning is attracting considerable critical attention due to the increasing demands on indoor location-based services. (b) Learning rate. Overall, XGBoost still has the best performance among RF and GPR models. Recall that a gaussian process is completely specified by its mean function and covariance (we usually take the mean equal to zero, although it is not necessary). The hyperparameter $$\sigma_f$$ encodes the amplitude of the fit. (a) Impact of the number of RSS samples. 2020, Article ID 4696198, 10 pages, 2020. https://doi.org/10.1155/2020/4696198, 1School of Petroleum Engineering, Changzhou University, Changzhou 213100, China, 2School of Information Science and Engineering, Changzhou University, Changzhou 213100, China, 3Electronics and Computer Science, University of Southampton, University Road, Southampton SO17 1BJ, UK. No guidelines of the size of training samples and the number of AP are provided to train the models. The training procedure is repeated five times to calculate the average accuracy of the model with the specific parameter. The implementation is based on Algorithm 2.1 of Gaussian Processes for Machine Learning (GPML) by Rasmussen and Williams. The task is then to learn a regression model that can predict the price index or range. This trend indicates that only three APs are required to determine the indoor position. I… Trained with a few samples, it can obtain the prediction results of the whole region and the variance information of the prediction that is used to measure confidence. Moreover, there is no state-of-the-art work that evaluates the model performance of different algorithms. We propose a new robust GP regression algorithm that iteratively trims a portion of the data points with the largest deviation from the predicted mean. There are many kernel functions implemented in Scikit-Learn. Gaussian processes—Data processing. Acknowledgments: Thank you to Fox Weng for pointing out a typo in one of the formulas presented in a previous version of the post. every finite linear combination of them is normally distributed. Hyperparameter tuning for Random Forest model. Their results show that the SVR models have better positioning performance compared with NN models. Thus, ensemble methods are proposed to construct a set of tree-based classifiers and combine these classifiers’ decision with different weighting algorithms [18]. We demonstrate … However, using one single tree to classify or predict data might cause high variance. Series.
In GPR, covariance functions are also essential for the performance of GPR models. (a) Number of estimators. A relatively rare technique for regression is called Gaussian Process Model. Results show that GP with a rational quadratic kernel and eXtreme gradient tree boosting model has the best positioning accuracy compared to other models. The RSS data of seven APs are taken as seven features. The training process of supervised learning is to minimize the difference between predicted value and the actual value with a loss function . The prediction results are evaluated with different sizes of training samples and numbers of AP. However, in some cases, the distribution of data is nonlinear. Drucker et al. The output is the coordinates of the location on the two-dimensional floor. Next, we generate some training sample observations: We now consider test data points on which we want to generate predictions. Brunato evaluated the k-nearest-neighbor approach for indoor positioning with wireless signals from several access points [8], which has an average uncertainty of two meters. With the increase of the training size, GPR gets the better performance, while its performance is still slightly weaker compared with the XGBoost model. Thus, given the training data points with label , the estimated of target can be calculated by maximizing the joint likelihood in equation (7). As is shown in Section 2, the machine learning models require hyperparameter tuning to get the best model that fits the data. The RSS readings from different AP are collected during the offline phase with the machine learning approach, which captures the indoor environment’s complex radiofrequency profile [7]. The method is tested using typical option schemes with … The advantages of Gaussian processes are: The prediction interpolates the observations (at least for regular kernels). This paper mainly evaluates three covariance functions, namely, Radial Basis Function (RBF) kernel, Matérn kernel, and Rational Quadratic kernel. data points, that is, we are interested in computing $$f_*|X, y, X_*$$. The marginal likelihood is the integral of the likelihood times the prior. However, based on our proposed XGBoost model with RSS signals, the robot can predict the exact position without the accumulated error. Hyperparameter tuning for SVR with linear and RBF kernel. $$\text{cov}(f(x_p), f(x_q)) = k_{\sigma_f, \ell}(x_p, x_q) = \sigma_f \exp\left(-\frac{1}{2\ell^2} ||x_p - x_q||^2\right)$$ We now compute the matrix $$C$$. Figure 3 shows the tuning process that determines the optimum value for the penalty parameter and kernel coefficient parameter for the SVR with RBF and linear kernels. Tables 1 and 2 show the distance error of different machine learning models. how far the points interact.
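The squared-exponential covariance above is straightforward to implement directly. The following is a minimal sketch (not from any of the sources interleaved on this page, with illustrative parameter values) that builds the kernel matrix and draws sample paths from the zero-mean GP prior:

```python
import numpy as np

def rbf_kernel(xp, xq, sigma_f=1.0, ell=1.0):
    # k(x_p, x_q) = sigma_f * exp(-||x_p - x_q||^2 / (2 ell^2)), as above
    d2 = (xp[:, None] - xq[None, :]) ** 2
    return sigma_f * np.exp(-0.5 * d2 / ell**2)

x = np.linspace(-5, 5, 200)
K = rbf_kernel(x, x)

# Draw three sample paths from the zero-mean GP prior; the small jitter
# keeps the covariance matrix numerically positive definite.
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(np.zeros(len(x)),
                                  K + 1e-10 * np.eye(len(x)), size=3)
print(samples.shape)  # (3, 200): three functions evaluated at 200 points
```

Larger values of `ell` make the sampled functions vary more slowly, which is the "locality" role of the length-scale hyperparameter described here.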
proposed a support vector regression (SVR) algorithm that applies a soft margin of tolerance in SVM to approximate and predict values [15]. As SVR has the best prediction performance in the current work, we select SVR as a baseline model to evaluate the performance of the other three machine learning approaches and the GPR approach with different kernels. $$K(X_*, X) \in M_{n_* \times n}(\mathbb{R})$$, Sampling from a Multivariate Normal Distribution, Regularized Bayesian Regression as a Gaussian Process, Gaussian Processes for Machine Learning, Ch 2, Gaussian Processes for Timeseries Modeling, Gaussian Processes for Machine Learning, Ch 2.2, Gaussian Processes for Machine Learning, Appendinx A.2, Gaussian Processes for Machine Learning, Ch 2 Algorithm 2.1, Gaussian Processes for Machine Learning, Ch 5, Gaussian Processes for Machine Learning, Ch 4, Gaussian Processes for Machine Learning, Ch 4.2.4, Gaussian Processes for Machine Learning, Ch 3. Let’s assume a linear function: y=wx+ϵ. —(Adaptive computation and machine learning) Includes bibliographical references and indexes. In recent years, there has been a greater focus placed upon eXtreme Gradient Tree Boosting (XGBoost) models [21]. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. Using the results of Gaussian Processes for Machine Learning, Appendinx A.2, one can show that, $A GP is usually parameterized by a mean function and a covariance function , formalized in equations (3) and (4). The RBF and Matérn kernel have the 4.4 m and 8.74 m confidence interval with 95% accuracy while the Rational Quadratic kernel has the 0.72 m confidence interval with 95% accuracy. This paper is organized as follows. compared different kernel functions of the support vector regression to estimate locations with GSM signals [6]. Figure 7(a) shows the impact of the training sample size on different machine learning models.$, $In the building, we place 7 APs represented as red pentagram on the floor with an area of 21.6 M 15.6 m. The RSS measurements are taken at each point in a grid of 0.6 m spacing between each other. During the online phase, the client’s position is determined by the signal strength and the trained model. There are many questions which are still open: I hope to keep exploring these and more questions in future posts. Results show that the XGBoost model outperforms all the other models and related work in positioning accuracy. At last, the weak models are combined to generate the strong model . Gaussian Processes in Reinforcement Learning Carl Edward Rasmussen and Malte Kuss Max Planck Institute for Biological Cybernetics Spemannstraße 38, 72076 Tubingen,¨ Germany carl,malte.kuss @tuebingen.mpg.de Abstract We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and dis-crete time. Hyperparameter tuning for XGBoost model. On the machine learning side, Gonzalez´ et al. In the training process, we use the RSS collected from different APs as features to train the model. When the validation score decreases, the model is overfitting. 
This course covers the fundamental mathematical concepts needed by the modern data scientist to … However, the global positioning system (GPS) has been used for outdoor positioning in the last few decades, while its positioning accuracy is limited in the indoor environment. In contrast, the eXtreme gradient tree boosting model could achieve higher positioning accuracy with smaller training size and fewer access points. During the training process, the number of trees and the trees’ parameter are required to be determined to get the best parameter set for the RF model. In all stages, XGBoost has the lowest distance error compared with all the other models. The hyperparameter tuning technique is used to select the optimum parameter set for each model. built Gaussian process models with the Matérn kernel function to solve the localization problem in cellular networks [5]. Wu et al. Unlike many popular supervised machine learning algorithms that learn exact values for every parameter in a function, the Bayesian approach infers a probability distribution over all possible values. Results also reveal that 3 APs are enough for indoor positioning as the distance error does not decrease with more APs. Let us denote by $$K(X, X) \in M_{n}(\mathbb{R})$$, $$K(X_*, X) \in M_{n_* \times n}(\mathbb{R})$$ and $$K(X_*, X_*) \in M_{n_*}(\mathbb{R})$$ the covariance matrices applies to $$x$$ and $$x_*$$.$, $Abstract We give a basic introduction to Gaussian Process regression models. Battiti et al. Thus, kernel functions map the nonlinear separable feature space to linear separable feature space with kernel functions [16]. In the past decade, machine learning played a fundamental role in artificial intelligence areas such as lithology classification, signal processing, and medical image analysis [11–13]. More APs are not helpful as the indoor positioning accuracy is not improving with more APs. However, the confidence interval has a huge difference between the three kernels. Besides machine learning approaches, Gaussian process regression has also been applied to improve the indoor positioning accuracy. K(X, X) + \sigma^2_n I & K(X, X_*) \\ Let us now sample from the posterior distribution: We now study the effect of the hyperparameters $$\sigma_f$$ and $$\ell$$ of the kernel function defined above. Moreover, the traditional geometric approach that deduces the location based on the angle and distance estimates from different signal transmitters is problematic as the transmitted signal might be distorted due to reflections and refraction and the indoor environment [5]. The validation curve shows that the maximum depth of the tree might affect the performance of the RF model. Thus, validation curves can be used to select the best parameter of a model from a range of values. In machine learning they are mainly used for modelling expensive functions. Overall, the GPR with Rational Quadratic kernel has the lowest distance error among all the GP models, and XGBoost has the lowest distance error compared with other machine learning models. We write Android applications to collect RSS data at reference points within the test area marked by the seven APs, whereas the RSS comes from the Nighthawk R7000P commercial router. Results show that the NN model performs better than the k-nearest-neighbor model and can achieve a standard average of 1.8 meters. Can we combine kernels to get new ones? The validation curve shows that when is 0.01, the SVR has the best performance in predicting the position. 
Thus, we use machine learning approaches to construct an empirical model that models the distribution of Received Signal Strength (RSS) in an indoor environment. Machine learning—Mathematical models. As a concrete example, let us consider (1-dim problem). Lin, “Training and testing low-degree polynomial data mappings via linear svm,”, T. G. Dietterich, “Ensemble methods in machine learning,” in, R. E. Schapire, “The boosting approach to machine learning: an overview,” in, T. Chen and C. Guestrin, “Xgboost: a scalable tree boosting system,” in, J. H. Friedman, “Stochastic gradient boosting,”. Yunxin Xie, Chenyang Zhu, Wei Jiang, Jia Bi, Zhengwei Zhu, "Analyzing Machine Learning Models with Gaussian Process for the Indoor Positioning System", Mathematical Problems in Engineering, vol. The hyperparameter $$\ell$$ is a locality parameter, i.e. Their approach reaches the mean error of 1.6 meters. Thus, these parameters are tuned with cross-validation to get the best XGBoost model. Gaussian Processes (GP) are a generic supervised learning method designed to solve regression and probabilistic classification problems. A model is built with supervised learning for the given input and the predicted value is . We are committed to sharing findings related to COVID-19 as quickly as possible. Besides, the GPR is trained with … The gaussian process fit automatically selects the best hyperparameters which maximize the log-marginal likelihood. basis functions number of basis function.” (Gaussian Processes for Machine Learning, Ch 2.2). prior distribution to contain only those functions which agree with the observed The radiofrequency-based system utilizes signal strength information at multiple base stations to provide user location services [2].
Machine Learning Srihari Topics in Gaussian Processes 1. Results show that XGBoost has the best performance compared with all the other machine learning models. \], $Maximum likelihood estimation (MLE) has been used in statistical models, given the prior knowledge of the data distribution [25]. Analyzing Machine Learning Models with Gaussian Process for the Indoor Positioning System, School of Petroleum Engineering, Changzhou University, Changzhou 213100, China, School of Information Science and Engineering, Changzhou University, Changzhou 213100, China, Electronics and Computer Science, University of Southampton, University Road, Southampton SO17 1BJ, UK, Determine the leaf weight for the learnt structure with, A. Serra, D. Carboni, and V. Marotto, “Indoor pedestrian navigation system using a modern smartphone,” in, P. Bahl, V. N. Padmanabhan, V. Bahl, and V. Padmanabhan, “Radar: an in-building rf-based user location and tracking system,” in, A. Harter and A. Hopper, “A distributed location system for the active office,”, H. Hashemi, “The indoor radio propagation channel,”, A. Schwaighofer, M. Grigoras, V. Tresp, and C. Hoffmann, “Gpps: a Gaussian process positioning system for cellular networks,”, Z. L. Wu, C. H. Li, J. K. Y. Ng, and K. R. Leung, “Location estimation via support vector regression,”, A. Bekkali, T. Masuo, T. Tominaga, N. Nakamoto, and H. Ban, “Gaussian processes for learning-based indoor localization,” in, M. Brunato and C. Kiss Kallo, “Transparent location fingerprinting for wireless services,”, R. Battiti, A. Villani, and T. Le Nhat, “Neural network models for intelligent networks: deriving the location from signal patterns,” in, M. Alfakih, M. Keche, and H. Benoudnine, “Gaussian mixture modeling for indoor positioning wifi systems,” in, Y. Xie, C. Zhu, W. Zhou, Z. Li, X. Liu, and M. Tu, “Evaluation of machine learning methods for formation lithology identification: a comparison of tuning processes and model performances,”, Y. Ups Tracking Uk, Vizio Tv App, Ncert Mcq Class 8 History Chapter 1, Church Meadow, Sproughton, Spectrum Sight Words Grade 2key Concepts In International Relations Upsc, Grade 3 Module In English Pdf, Water Heater Pilot Light Won't Light, Carbon Trust Energy Efficiency Guide, " /> Yunxin Xie, Chenyang Zhu, Wei Jiang, Jia Bi, Zhengwei Zhu, "Analyzing Machine Learning Models with Gaussian Process for the Indoor Positioning System", Mathematical Problems in Engineering, vol. The hyperparameter $$\ell$$ is a locality parameter, i.e. Their approach reaches the mean error of 1.6 meters. Thus, these parameters are tuned to with cross-validation to get the best XGBoost model. Gaussian Processes (GP) are a generic supervised learning method designed to solve regression and probabilistic classification problems. A model is built with supervised learning for the given input and the predicted value is . We are committed to sharing findings related to COVID-19 as quickly as possible. Besides, the GPR is trained with … The gaussian process fit automatically selects the best hyperparameters which maximize the log-marginal likelihood. basis functions number of basis function.” (Gaussian Processes for Machine Learning, Ch 2.2). prior distribution to contain only those functions which agree with the observed The radiofrequency-based system utilizes signal strength information at multiple base stations to provide user location services [2]. 
\begin{array}{cc} The number of boosting iterations and other parameters concerning the tree structure do not affect the prediction accuracy a lot. Thus, we select this as the kernel of the GPR model to compare with other machine learning models. Machine learning approaches can avoid the complexity of determining an appropriate propagation model with traditional geometric approaches and adapt well to local variations of indoor environment [6]. p. cm. Equation (10) shows the Rational Quadratic kernel, which can be seen as a mixture of RBF kernels with different length scales. When the maximum depth of the individual tree reaches 10, the model comes to the best performance. Indoor position estimation is usually challenging for robots with only built-in sensors. [1989] Indoor positioning modeling procedure with offline phase and online phase. Gaussian Process Regression Gaussian Processes: Definition A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution. Recently, there has been growing interest in improving the efficiency and accuracy of the Indoor Positioning System (IPS). During the training process, the model is trained with the four folds of data and test with the left fold of data. Updated Version: 2019/09/21 (Extension + Minor Corrections). Distance error with confidence interval for different Gaussian progress regression kernels. Gaussian process regression (GPR) models are nonparametric kernel-based probabilistic models. Then the current model is updated with the previous model with the shrunk base model . The model can determine the indoor position based on the RSS information in that position. Learning the hyperparameters Automatic Relevance Determination 7. From the consistency requirement of gaussian processes we know that the prior distribution for $$f_*$$ is $$N(0, K(X_*, X_*))$$. Let us plot the resulting fit: In contrast, we see that for these set of hyper parameters the higher values of the posterior covariance matrix are concentrated along the diagonal. function corresponds to a Bayesian linear regression model with an infinite Then, we got the final model that maps the RSS to its corresponding position in the building. The training set’s size could be adjusted accordingly based on the model performance, which would be discussed in the following section. Besides, the GPR is trained with three kernels, namely, Radial-Basis Function (RBF) kernel, Matérn kernel, and Rational Quadratic (RQ) kernel, and evaluated with the average error and standard deviation. A machine-learning algorithm that involves a Gaussian pro In this paper, we use the validation curve with 5-fold cross-validation to show the balanced trade-off between the bias and variance of the model. We will be providing unlimited waivers of publication charges for accepted research articles as well as case reports and case series related to COVID-19. Updated Version: 2019/09/21 (Extension + Minor Corrections). The Gaussian process model is mainly divided into Gaussian process classification and Gaussian process regression (GPR), … Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. Results reveal that there has been a gradual decrease in distance error with the increasing of the training size for all machine learning models. In the validation curve, the training score is higher than the validation score as the model will be a better fit to the training data than test data. 
When I was reading the textbook and watching tutorial videos online, I can follow the majority without too many difficulties. Given a set of data points associated with set of labels , supervised learning could build a regressor or classifier to predict or classify the unseen from . More recently, there has been extensive research on supervised learning to predict or classify some unseen outcomes from some existing patterns. Hyperparameter tuning is used to select the optimum parameter set for each model. The graph also shows that there has been a sharp drop in the distance error in the first three APs for XGBoost, RF, and GPR models. Later in the online phase, we can use the generated model for indoor positioning. We consider de model $$y = f(x) + \varepsilon$$, where $$\varepsilon \sim N(0, \sigma_n)$$. GP Definition and Intuition 4. Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. This is actually the implementation used by Scikit-Learn. In the first step, cross-validation (CV) is used to test whether the model is suitable for the given machine learning model. (d) Learning rate. Results show that the distance error decreases gradually for the SVR model. (d) Min samples leaf. How to apply these techniques to classification problems. Thus, linear models cannot describe the model correctly. The data are available from the corresponding author upon request. During the procedure, trees are built to generate the forest. Here, is the covariance matrix based on training data points , is the covariance matrix between the test data points and training points, and is the covariance matrix between test points. The infrared-based system uses sensor networks to collect infrared signals and deduce the infrared client’s location by checking the location information of different sensors [3]. We reshape the variables into matrix form. The size of the APs determines the size of the features. Here, defines the stochastic map for each data point and its label and defines the measurement noise assumed to satisfy the Gaussian noise with standard deviation: Given the training data with its corresponding labels as well as the test data with its corresponding labels with the same distribution, then equation (6) is satisfied. Please refer to the docomentation example to get more detailed information. Hyperparameter tuning for different machine learning models. Section 2 summarizes the related work that constructs models for indoor positioning. First, they areextremely common when modeling “noise” in statistical algorithms. Machine Learning Summer School 2012: Gaussian Processes for Machine Learning (Part 1) - John Cunningham (University of Cambridge) http://mlss2012.tsc.uc3m.es/ XGBoost also outperforms the SVR with RBF kernel. In this paper, we evaluate different machine learning approaches for indoor positioning with RSS data. Gaussian processes for machine learning / Carl Edward Rasmussen, Christopher K. I. Williams. This means that we expect points far away to have no effect on each other, i.e. During the field test, we collect 799 RSS data as the training set. Random Forest (RF) algorithm is one of the ensemble methods that build several regression trees and average the result of the final prediction of each regression tree [19]. Figure 4 shows the tuning process that calculates the optimum value for the number of trees in the random forest as well as the tree structure of the individual tree in the forest. 
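Hyperparameter tuning with a validation curve and 5-fold cross-validation, as described here, can be sketched with scikit-learn. The data below are a hypothetical stand-in for the RSS fingerprints (seven access points as features, one position coordinate as target); the real data are not reproduced on this page:

```python
import numpy as np
from sklearn.model_selection import validation_curve
from sklearn.svm import SVR

# Hypothetical stand-in: 200 RSS readings (dBm) from 7 APs, one coordinate.
rng = np.random.default_rng(0)
X = rng.uniform(-90, -30, size=(200, 7))
y = 0.1 * X.sum(axis=1) + rng.normal(size=200)

# Score the SVR across a range of penalty parameters C with 5-fold CV.
param_range = np.logspace(-3, 2, 6)
train_scores, val_scores = validation_curve(
    SVR(kernel="rbf"), X, y,
    param_name="C", param_range=param_range, cv=5,
)

# A widening gap between training and validation score signals overfitting.
print(train_scores.mean(axis=1))
print(val_scores.mean(axis=1))
```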
To overcome these challenges, Yoshihiro Tawada and Toru Sugimura propose a new method to obtain a hedge strategy for options by applying Gaussian process regression to the policy function in reinforcement learning. [39] proposed methods for preference-based Bayesian optimization and GP regression, re-spectively, but they were not active. \text{cov}(f_*) = K(X_*, X_*) - K(X_*, X)(K(X, X) + \sigma^2_n I)^{-1} K(X, X_*) \in M_{n_*}(\mathbb{R}) Gaussian process regression offers a more flexible alternative to typical parametric regression approaches. where $$\sigma_f , \ell >0$$ are hyperparameters. Connection to … The model is then trained with the RSS training samples. Gaussian process regression (GPR). The CV can be used for feature selection and hyperparameter tuning. Accumulated errors could be introduced into the localization process when the robot moves around. ISBN 0-262-18253-X 1. The model-based positioning system involves offline and online phases. N(\bar{f}_*, \text{cov}(f_*)) Wireless indoor positioning is attracting considerable critical attention due to the increasing demands on indoor location-based services. (b) Learning rate. Overall, XGBoost still has the best performance among RF and GPR models. Recall that a gaussian process is completely specified by its mean function and covariance (we usually take the mean equal to zero, although it is not necessary). The hyperparameter $$\sigma_f$$ enoces the amplitude of the fit. (a) Impact of the number of RSS samples. 2020, Article ID 4696198, 10 pages, 2020. https://doi.org/10.1155/2020/4696198, 1School of Petroleum Engineering, Changzhou University, Changzhou 213100, China, 2School of Information Science and Engineering, Changzhou University, Changzhou 213100, China, 3Electronics and Computer Science, University of Southampton, University Road, Southampton SO17 1BJ, UK. No guidelines of the size of training samples and the number of AP are provided to train the models. The training procedure is repeated five times to calculate the average accuracy of the model with the specific parameter. The implementation is based on Algorithm 2.1 of Gaussian Processes for Machine Learning (GPML) by Rasmussen and Williams. The task is then to learn a regression model that can predict the price index or range. This trend indicates that only three APs are required to determine the indoor position. I… Trained with a few samples, it can obtain the prediction results of the whole region and the variance information of the prediction that is used to measure confidence. Moreover, there is no state-of-the-art work that evaluates the model performance of different algorithms. We propose a new robust GP regression algorithm that iteratively trims a portion of the data points with the largest deviation from the predicted mean. There are my kernel functions implemented in Scikit-Learn. f_*|X, y, X_* Gaussian processes—Data processing. f_* Acknowledgments: Thank you to Fox Weng for pointing out a typo in one of the formulas presented in a previous version of the post. every finite linear combination of them is normally distributed. Hyperparameter tuning for Random Forest model. Their results show that the SVR models have better positioning performance compared with NN models. Thus, ensemble methods are proposed to construct a set of tree-based classifiers and combine these classifiers’ decision with different weighting algorithms [18]. We demonstrate … However, using one single tree to classify or predict data might cause high variance. Series. 
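The posterior mean and covariance formulas quoted above can be computed stably via a Cholesky factorization, in the spirit of Algorithm 2.1 of Rasmussen and Williams. The following sketch is an assumption-laden illustration (toy data, arbitrary noise level), not this page's original code:

```python
import numpy as np

def gp_predict(X, y, X_star, kernel, sigma_n=0.1):
    # Cholesky-based GP prediction:
    #   mean = K(X_*, X) [K(X, X) + sigma_n^2 I]^{-1} y
    #   cov  = K(X_*, X_*) - K(X_*, X) [K(X, X) + sigma_n^2 I]^{-1} K(X, X_*)
    K = kernel(X, X) + sigma_n**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    K_star = kernel(X_star, X)
    mean = K_star @ alpha
    v = np.linalg.solve(L, K_star.T)
    cov = kernel(X_star, X_star) - v.T @ v
    return mean, cov

# Toy usage with an RBF kernel over 1-d inputs (illustrative only).
def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

X = np.linspace(-3, 3, 20)
y = np.sin(X)
X_star = np.linspace(-3, 3, 5)
mean, cov = gp_predict(X, y, X_star, rbf)
print(mean)  # posterior mean at the test points
```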
This means that we expect points far away can still have some interaction, i.e. Sign up here as a reviewer to help fast-track new submissions. Figure 5 shows the tuning process that calculates the optimum value for the number of boosting iterations and the learning rate for the AdaBoost model. Hsieh, K.-W. Chang, M. Ringgaard, and C.-J. Bekkali et al. Gaussian processes for classification Laplace approximation 8. But they are also used in a large variety of applications … Section 6 concludes the paper and outlines some future work. III. time or space. Examples of use of GP 2. In GPR, covariance functions are also essential for the performance of GPR models. (a) Number of estimators. A relatively rare technique for regression is called Gaussian Process Model. Results show that GP with a rational quadratic kernel and eXtreme gradient tree boosting model has the best positioning accuracy compared to other models. The RSS data of seven APs are taken as seven features. The training process of supervised learning is to minimize the difference between predicted value and the actual value with a loss function . The prediction results are evaluated with different sizes of training samples and numbers of AP. However, in some cases, the distribution of data is nonlinear. \right) Drucker et al. The output is the coordinates of the location on the two-dimensional floor. Next, we generate some training sample observations: We now consider test data points on which we want to generate predictions. Brunato evaluated the k-nearest-neighbor approach for indoor positioning with wireless signals from several access points [8], which has an average uncertainty of two meters. With the increase of the training size, GPR gets the better performance, while its performance is still slightly weaker compared with the XGBoost model.$. Thus, given the training data points with label , the estimated of target can be calculated by maximizing the joint likelihood in equation (7). As is shown in Section 2, the machine learning models require hyperparameter tuning to get the best model that fits the data. The RSS readings from different AP are collected during the offline phase with the machine learning approach, which captures the indoor environment’s complex radiofrequency profile [7]. The method is tested using typical option schemes with … The advantages of Gaussian processes are: The prediction interpolates the observations (at least for regular kernels). This paper mainly evaluates three covariance functions, namely, Radial Basis Function (RBF) kernel, Matérn kernel, and Rational Quadratic kernel. data points, that is, we are interested in computing $$f_*|X, y, X_*$$. The marginal likelihood is the integral of the likelihood times the prior. However, based on our proposed XGBoost model with RSS signals, the robot can predict the exact position without the accumulated error. Hyperparameter tuning for SVR with linear and RBF kernel. \text{cov}(f(x_p), f(x_q)) = k_{\sigma_f, \ell}(x_p, x_q) = \sigma_f \exp\left(-\frac{1}{2\ell^2} ||x_p - x_q||^2\right) We now compute the matrix $$C$$. Figure 3 shows the tuning process that determines the optimum value for the penalty parameter and kernel coefficient parameter for the SVR with RBF and linear kernels. Tables 1 and 2 show the distance error of different machine learning models. how far the points interact. 
In probability theory and statistics, a Gaussian process is a stochastic process, such that every finite collection of those random variables has a multivariate normal distribution, i.e. As the coverage range of infrared-based clients is up to 10 meters while the coverage range of radiofrequency-based clients is up to 50 meters, radiofrequency has become the most commonly used technique for indoor positioning. A better approach is to use the Cholesky decomposition of $$K(X,X) + \sigma_n^2 I$$ as described in Gaussian Processes for Machine Learning, Ch 2 Algorithm 2.1. Table 2 shows the distance error with a confidence interval for different kernels with length scale bounds. proposed a support vector regression (SVR) algorithm that applies a soft margin of tolerance in SVM to approximate and predict values [15]. As SVR has the best prediction performance in the current work, we select SVR as a baseline model to evaluate the performance of the other three machine learning approaches and the GPR approach with different kernels. $$K(X_*, X) \in M_{n_* \times n}(\mathbb{R})$$, Sampling from a Multivariate Normal Distribution, Regularized Bayesian Regression as a Gaussian Process, Gaussian Processes for Machine Learning, Ch 2, Gaussian Processes for Timeseries Modeling, Gaussian Processes for Machine Learning, Ch 2.2, Gaussian Processes for Machine Learning, Appendinx A.2, Gaussian Processes for Machine Learning, Ch 2 Algorithm 2.1, Gaussian Processes for Machine Learning, Ch 5, Gaussian Processes for Machine Learning, Ch 4, Gaussian Processes for Machine Learning, Ch 4.2.4, Gaussian Processes for Machine Learning, Ch 3. Let’s assume a linear function: y=wx+ϵ. —(Adaptive computation and machine learning) Includes bibliographical references and indexes. In recent years, there has been a greater focus placed upon eXtreme Gradient Tree Boosting (XGBoost) models [21]. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. Using the results of Gaussian Processes for Machine Learning, Appendinx A.2, one can show that, $A GP is usually parameterized by a mean function and a covariance function , formalized in equations (3) and (4). The RBF and Matérn kernel have the 4.4 m and 8.74 m confidence interval with 95% accuracy while the Rational Quadratic kernel has the 0.72 m confidence interval with 95% accuracy. This paper is organized as follows. compared different kernel functions of the support vector regression to estimate locations with GSM signals [6]. Figure 7(a) shows the impact of the training sample size on different machine learning models.$, $In the building, we place 7 APs represented as red pentagram on the floor with an area of 21.6 M 15.6 m. The RSS measurements are taken at each point in a grid of 0.6 m spacing between each other. During the online phase, the client’s position is determined by the signal strength and the trained model. There are many questions which are still open: I hope to keep exploring these and more questions in future posts. Results show that the XGBoost model outperforms all the other models and related work in positioning accuracy. At last, the weak models are combined to generate the strong model . 
Gaussian Processes in Reinforcement Learning. Carl Edward Rasmussen and Malte Kuss, Max Planck Institute for Biological Cybernetics, Spemannstraße 38, 72076 Tübingen, Germany ({carl, malte.kuss}@tuebingen.mpg.de). Abstract: We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time.

Abstract: We give a basic introduction to Gaussian process regression models. Unlike many popular supervised machine learning algorithms that learn exact values for every parameter in a function, the Bayesian approach infers a probability distribution over all possible values. In machine learning, GPs are mainly used for modelling expensive functions, but they are also used in a large variety of applications. Let us denote by $K(X, X) \in M_{n}(\mathbb{R})$, $K(X_*, X) \in M_{n_* \times n}(\mathbb{R})$ and $K(X_*, X_*) \in M_{n_*}(\mathbb{R})$ the covariance matrices obtained by evaluating the kernel at the training inputs $x$ and the test inputs $x_*$. The joint distribution of the observed targets and the function values at the test locations is then

$$\begin{pmatrix} y \\ f_* \end{pmatrix} \sim N\left(0,\; \begin{pmatrix} K(X, X) + \sigma^2_n I & K(X, X_*) \\ K(X_*, X) & K(X_*, X_*) \end{pmatrix}\right).$$

Let us now sample from the posterior distribution. We now study the effect of the hyperparameters $\sigma_f$ and $\ell$ of the kernel function defined above.

On the positioning side: the global positioning system (GPS) has been used for outdoor positioning in the last few decades, while its positioning accuracy is limited in the indoor environment. Moreover, the traditional geometric approach, which deduces the location from angle and distance estimates to different signal transmitters, is problematic, as the transmitted signal might be distorted by reflections and refraction in the indoor environment [5]. Besides machine learning approaches, Gaussian process regression has also been applied to improve indoor positioning accuracy: Schwaighofer et al. built Gaussian process models with the Matérn kernel function to solve the localization problem in cellular networks [5]. In the past decade, machine learning has played a fundamental role in artificial intelligence areas such as lithology classification, signal processing, and medical image analysis [11–13]. In all stages, XGBoost has the lowest distance error compared with all the other models; in contrast, the eXtreme gradient tree boosting model could achieve higher positioning accuracy with a smaller training size and fewer access points. Results also reveal that 3 APs are enough for indoor positioning, as the distance error does not decrease with more APs; more APs are not helpful since the indoor positioning accuracy does not improve further. However, the confidence interval has a huge difference between the three kernels.

In the training process, we use the RSS collected from different APs as features to train the model. During the training process, the number of trees and the trees' parameters are required to be determined to get the best parameter set for the RF model; the hyperparameter tuning technique is used to select the optimum parameter set for each model. The validation curve shows that the maximum depth of the tree might affect the performance of the RF model; when the validation score decreases, the model is overfitting. Thus, validation curves can be used to select the best parameter of a model from a range of values, as in the sketch below.
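A scikit-learn sketch of that validation-curve procedure; the synthetic `X` and `y` below are stand-ins for the paper's RSS features and one floor coordinate (the actual data pipeline is not shown here), and the depth range is an illustrative assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import validation_curve

rng = np.random.default_rng(0)
X = rng.normal(-60, 10, size=(200, 7))   # stand-in for 200 RSS samples from 7 APs
y = rng.uniform(0, 21.6, size=200)       # stand-in for one coordinate on the floor

depths = np.arange(2, 16)
train_scores, val_scores = validation_curve(
    RandomForestRegressor(n_estimators=100, random_state=0),
    X, y, param_name="max_depth", param_range=depths, cv=5)

# Overfitting shows up where the mean validation score starts to drop
# while the training score keeps improving.
best_depth = depths[val_scores.mean(axis=1).argmax()]
```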
After a sequence of preliminary posts (Sampling from a Multivariate Normal Distribution and Regularized Bayesian Regression as a Gaussian Process), I want to explore a concrete example of a Gaussian process regression. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. As a concrete example, let us consider a one-dimensional problem. Observe that we need to add the term $\sigma^2_n I$ to the upper-left component of the joint covariance matrix above to account for noise (assuming additive independent identically distributed Gaussian noise). Can we combine kernels to get new ones? The Matérn kernel adds a parameter that controls the resulting function's smoothness, given in equation (9). Probabilistic modelling, which falls under the Bayesian paradigm, is gaining popularity world-wide. The Housing data set is a popular regression benchmarking data set hosted on the UCI Machine Learning Repository.

Y.-W. Chang, C.-J. Hsieh, K.-W. Chang, M. Ringgaard, and C.-J. Lin, “Training and testing low-degree polynomial data mappings via linear SVM”; T. G. Dietterich, “Ensemble methods in machine learning”; R. E. Schapire, “The boosting approach to machine learning: an overview”; T. Chen and C. Guestrin, “XGBoost: a scalable tree boosting system”; J. H. Friedman, “Stochastic gradient boosting.”

Battiti et al. compared the neural network- (NN-) based model and the k-nearest-neighbor model to determine the position of a mobile terminal in a wireless LAN environment [9]; results show that the NN model performs better than the k-nearest-neighbor model and can achieve a standard average of 1.8 meters. Thus, we use machine learning approaches to construct an empirical model that models the distribution of Received Signal Strength (RSS) in an indoor environment. Our work assesses the positioning performance of different models and experiments with the size of the training samples and the number of APs for the optimum model. We write Android applications to collect RSS data at reference points within the test area marked by the seven APs, whereas the RSS comes from a Nighthawk R7000P commercial router. The 200 RSS data are collected during the day, with people moving and the environment changing, and are used to evaluate the model performance; in this section, we evaluate the performance of the models with these 200 collected RSS samples and their location coordinates. Each model is trained with the optimum parameter set obtained from the hyperparameter tuning procedure. Given the feature space and its corresponding labels, the RF algorithm takes a random sample from the features and constructs a CART tree with randomly selected features. Here, $C$ is the penalty parameter of the error term: SVR uses a linear hyperplane to separate the data and predict the values. The validation curve shows the parameter value (0.01) at which the SVR has the best performance in predicting the position. Overall, the GPR with the Rational Quadratic kernel has the lowest distance error among all the GP models, and XGBoost has the lowest distance error compared with the other machine learning models. We calculate the confidence interval by multiplying the standard deviation by 1.96.
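A sketch of how the three kernels above might be compared in scikit-learn, with the 1.96-sigma confidence band computed as in the text. The kernel settings, `alpha`, and the synthetic stand-in data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

rng = np.random.default_rng(0)
X_train = rng.normal(-60, 10, size=(150, 7))   # stand-in RSS features
y_train = rng.uniform(0, 15.6, size=150)       # stand-in coordinate
X_test = rng.normal(-60, 10, size=(50, 7))

kernels = {
    "RBF": RBF(length_scale=10.0),
    "Matern": Matern(length_scale=10.0, nu=1.5),
    "RationalQuadratic": RationalQuadratic(length_scale=10.0, alpha=1.0),
    # Answering the question above: sums (and products) of kernels are again kernels.
    "RBF + Matern": RBF(10.0) + Matern(10.0, nu=1.5),
}
for name, k in kernels.items():
    gpr = GaussianProcessRegressor(kernel=k, alpha=1e-2).fit(X_train, y_train)
    mean, std = gpr.predict(X_test, return_std=True)
    ci95 = 1.96 * std   # 95% confidence band, computed as in the text
```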
Consistency: if the GP specifies $y^{(1)}, y^{(2)} \sim N(\mu, \Sigma)$, then it must also specify $y^{(1)} \sim N(\mu_1, \Sigma_{11})$. A GP is completely specified by a mean function and a covariance function. Here $f$ does not need to be a linear function of $x$.

The weights of the SVR model are calculated subject to the constraint that the model function deviates from each target by at most $\varepsilon$; formally, $|y_i - f(x_i)| \leq \varepsilon$. Maximum likelihood estimation (MLE) has been used in statistical models, given prior knowledge of the data distribution [25].

Analyzing Machine Learning Models with Gaussian Process for the Indoor Positioning System. School of Petroleum Engineering, Changzhou University, Changzhou 213100, China; School of Information Science and Engineering, Changzhou University, Changzhou 213100, China; Electronics and Computer Science, University of Southampton, University Road, Southampton SO17 1BJ, UK.

A. Serra, D. Carboni, and V. Marotto, “Indoor pedestrian navigation system using a modern smartphone”; P. Bahl, V. N. Padmanabhan, V. Bahl, and V. Padmanabhan, “RADAR: an in-building RF-based user location and tracking system”; A. Harter and A. Hopper, “A distributed location system for the active office”; H. Hashemi, “The indoor radio propagation channel”; A. Schwaighofer, M. Grigoras, V. Tresp, and C. Hoffmann, “GPPS: a Gaussian process positioning system for cellular networks”; Z. L. Wu, C. H. Li, J. K. Y. Ng, and K. R. Leung, “Location estimation via support vector regression”; A. Bekkali, T. Masuo, T. Tominaga, N. Nakamoto, and H. Ban, “Gaussian processes for learning-based indoor localization”; M. Brunato and C. Kiss Kallo, “Transparent location fingerprinting for wireless services”; R. Battiti, A. Villani, and T. Le Nhat, “Neural network models for intelligent networks: deriving the location from signal patterns”; M. Alfakih, M. Keche, and H. Benoudnine, “Gaussian mixture modeling for indoor positioning wifi systems”; Y. Xie, C. Zhu, W. Zhou, Z. Li, X. Liu, and M. Tu, “Evaluation of machine learning methods for formation lithology identification: a comparison of tuning processes and model performances.”

Tuning is a process that uses a performance metric to rank the regressors with different parameters in order to find the optimum parameter set for each specific model [11]. To avoid overfitting, we also tune the subsample parameter, which controls the ratio of the training data sampled before growing trees; the leaf weights are then determined for the learnt tree structure. Results show that XGBoost has the best performance compared with all the other machine learning models.
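A rough sketch of that tuning loop with xgboost and scikit-learn; the grids, scoring choice, and synthetic stand-in data are assumptions for illustration, not the paper's settings:

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X_train = rng.normal(-60, 10, size=(200, 7))   # stand-in RSS features
y_train = rng.uniform(0, 21.6, size=200)       # stand-in coordinate

param_grid = {
    "n_estimators": [100, 300, 500],   # number of boosting iterations
    "learning_rate": [0.05, 0.1, 0.3],
    "subsample": [0.6, 0.8, 1.0],      # ratio of training rows sampled per tree
}
search = GridSearchCV(XGBRegressor(objective="reg:squarederror"),
                      param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X_train, y_train)
best_model = search.best_estimator_
```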
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.813769519329071, "perplexity": 1034.6327582612066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538082.57/warc/CC-MAIN-20210123125715-20210123155715-00570.warc.gz"}
http://latex.wikia.com/wiki/Sum-class_symbol
# Sum-class symbol

Sum-class symbols, or accumulation symbols, are symbols whose sub- and superscripts appear directly below and above the symbol rather than beside it. For example, \sum is one of these elite symbols whereas \Sigma is not. The terminology comes from the AMS-LaTeX documentation.

### Table of sum-class symbols

$\int$ \int
$\oint$ \oint
$\bigcap$ \bigcap
$\bigcup$ \bigcup
$\bigodot$ \bigodot
$\bigoplus$ \bigoplus
$\bigotimes$ \bigotimes
$\bigsqcup$ \bigsqcup
$\biguplus$ \biguplus
$\bigvee$ \bigvee
$\bigwedge$ \bigwedge
$\coprod$ \coprod
$\prod$ \prod
$\sum$ \sum

### Using sum

\sum\limits_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6} renders as $\sum\limits_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6}$: the \limits tag puts the limits below and above the sigma symbol. It is typically used in displayed equations.

\sum\nolimits_{P_i \in Paths(I)} Probes(P_{i}) renders as $\sum\nolimits_{P_i \in Paths(I)} Probes(P_{i})$: the \nolimits tag puts the limits to the right of the sigma symbol. It is typically used in math set inline in the text.

TeX is smart enough to only show \sum in its expanded form in the displaymath environment. In the regular math environment, \sum does the right thing and reverts to non-sum-class behavior, thus conserving vertical space.

### Using prod

Another common sum-class symbol is \prod. As with \sum, we can use the directive \limits or \nolimits to place the limits above and below, or to the right.

\prod\limits_{i=1}^n x = x^n renders as $\prod\limits_{i=1}^n x = x^n$: the product of a sequence of factors.
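The inline/display contrast described above in one small compilable fragment (plain LaTeX, standard math mode only):

```latex
% Inline math: the limits sit beside the symbol, conserving vertical space.
In text, $\sum_{i=1}^{n} i^2$ stays compact.
% Display math: the same markup expands, with limits above and below.
\[ \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \]
% \limits forces the expanded placement even inline:
Compare $\sum\limits_{i=1}^{n} i^2$ in running text.
```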
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955256581306458, "perplexity": 3824.1195506673944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189377.63/warc/CC-MAIN-20170322212949-00108-ip-10-233-31-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/36650/yoni-rozenshein?tab=activity
# Yoni Rozenshein

reputation 620 · member for 1 year, 7 months · seen 16 hours ago · profile views 300

# 414 Actions

22h accepted Generalization of the Riemann integral to functions on the sphere
Mar 4 asked Generalization of the Riemann integral to functions on the sphere
Mar 2 reviewed Approve suggested edit on Solving simultaneous equations in terms of variables
Feb 26 reviewed Approve suggested edit on What's the limit of $\tan x$ when $x$ approaches $\pi /2$ by the left?
Feb 15 comment Why is $\mu(E)=0$? From $\mu(E) = 1$ it follows that $\int_E f\,d\mu = \int_{\Omega} f\,d\mu$, because $\Omega\setminus E$ is a null set. Why that is zero - as @Did wrote, this should be assumed at the beginning, without loss of generality.
Feb 15 answered What is a motivation for this theorem and what is an example this theorem is applied?
Feb 12 accepted If a Laplacian eigenfunction is zero in an open set, is it identically zero?
Feb 12 comment If a Laplacian eigenfunction is zero in an open set, is it identically zero? Thanks so much for the very detailed answer and especially all the references :)
Feb 12 comment If a Laplacian eigenfunction is zero in an open set, is it identically zero? Thank you for your answer! Is it a standard / easy result that eigenfunctions are real-analytic? The books I've been reading only say they're $C^\infty$.
Feb 11 answered Nodes of eigenfunctions and Courant's nodal domain theorem
Feb 11 asked If a Laplacian eigenfunction is zero in an open set, is it identically zero?
Jan 19 reviewed Approve suggested edit on How to convert expression to its NOR form
Jan 19 reviewed Approve suggested edit on limit problem - can't get rid of $0$.
Jan 13 answered On a constant defined by Ramanujan.
Jan 12 accepted If $|\nabla F| > 1$ and $|F| \le 1$, is there a zero nearby?
Jan 11 comment If $|\nabla F| > 1$ and $|F| \le 1$, is there a zero nearby? Hi, thanks for the response! A follow-up question: The $F$ I'm really interested in is actually $C^\infty$ so I am not worried in assuming a locally Lipschitz gradient. But I am wondering, does the existence of $\gamma$ have a name? Is it a Picard theorem for example?
Jan 11 asked If $|\nabla F| > 1$ and $|F| \le 1$, is there a zero nearby?
Dec 23 accepted Nodes of eigenfunctions and Courant's nodal domain theorem
Dec 23 comment Nodes of eigenfunctions and Courant's nodal domain theorem This is an amazing survey. I am working in my master's thesis on nodal sets of eigenfunctions and this is going to help a lot! Thanks!
Dec 22 asked Nodes of eigenfunctions and Courant's nodal domain theorem
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477291703224182, "perplexity": 748.3802644493248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999645422/warc/CC-MAIN-20140305060725-00021-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.preprints.org/manuscript/202008.0244/v1
Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

# Multi-Rate Real-Time Simulation Method Based on the Norton Equivalent

Version 1 : Received: 9 August 2020 / Approved: 10 August 2020 / Online: 10 August 2020 (08:24:53 CEST)

A peer-reviewed article of this Preprint also exists: Zhu, J.; Zhang, B. Multi-Rate Real-Time Simulation Method Based on the Norton Equivalent. Energies 2020, 13, 4562. Journal reference: Energies 2020, 13, 4562. DOI: 10.3390/en13174562

## Abstract

To address the poor accuracy of existing multi-rate simulation methods, this paper proposes a multi-rate real-time simulation method based on the Norton equivalent and compares it with the multi-rate simulation method based on the ideal source equivalent. After Norton equivalence of the fast subsystem and the slow subsystem, the two subsystems are solved simultaneously at the junction nodes. To reduce the amount of simulation computation, the Norton equivalent circuit is obtained by incremental calculation. The data interface between the fast subsystem and the slow subsystem is realized by an extrapolation method. To ensure the real-time performance of the simulation, a scheme in which the slow subsystem calculates ahead of the fast subsystem is given for slow subsystems with a large amount of computation. Finally, an AC/DC hybrid power system was simulated on the real-time simulation platform (FRTDS), and the simulation results were compared with single-rate simulation, which verified the correctness and accuracy of the method.

## Keywords

multi-rate real-time simulation; the ideal source equivalent; the Norton equivalent; increment; extrapolation method

## Subject

ENGINEERING, Electrical & Electronic Engineering
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080196976661682, "perplexity": 2798.567590720315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00368.warc.gz"}
https://hal-centralesupelec.archives-ouvertes.fr/hal-01330299
# A new optimization method for solving electromagnetic inverse scattering problems

Abstract : We propose a new optimization scheme for solving electromagnetic inverse scattering problems. As is known, such problems aim to retrieve the physical properties of hidden targets, i.e., the permittivity and/or permeability, from the measured scattering data excited by several incidences. By considering only the physical properties of the targets as unknowns, one usually resorts to the more traditional optimization searching scheme to find the solution, by which one needs to solve the corresponding forward problems for all the incidences at each iteration of the optimization, such as the well-known distorted Born iterative method (DBIM). This is often time-consuming even given fast forward solvers. Later, a different optimization scheme, the modified gradient method, was proposed, where not only the physical properties of the targets are considered as unknowns but also the electric fields, which are simultaneously updated at each iteration of the optimization. From such a pioneering work, the same authors proposed the well-known contrast source inversion (CSI), considering the unknowns to be the induced contrast sources instead of the fields, in order to form a new type of formulation. Compared to the original modified gradient method, the CSI method uses an alternative optimization scheme so as to reduce the complexity of the nonlinear calculations. Considering the electric fields or the induced currents as unknowns together with the permittivity/permeability, the inversion solver does not need to repetitively solve the forward problems as in the traditional inversion solvers mentioned above. In this talk, we further stretch the idea in the CSI to consider only the contrast sources as the unknowns by introducing a new type of formulation. While also avoiding solving the forward problems at each iteration, the merit of doing so is to obtain different optimization paths in every inversion, each of which is different from the one given by the CSI method. Having such a new optimization scheme, one is able to justify the obtained reconstructed results by comparing them to each other and to the one obtained by the original CSI.

Document type : Conference papers

Contributor : Dominique Lesselier. Submitted on : Friday, June 10, 2016 - 1:02:49 PM. Last modification on : Monday, January 13, 2020 - 3:12:12 PM

### Citation

Yu Zhong, Marc Lambert, Dominique Lesselier. A new optimization method for solving electromagnetic inverse scattering problems. 2016 Progress in Electromagnetic Research Symposium (PIERS), Aug 2016, Shanghai, China. pp.930-930, ⟨10.1109/PIERS.2016.7734526⟩. ⟨hal-01330299⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8333039283752441, "perplexity": 972.5196370761422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599789.45/warc/CC-MAIN-20200120195035-20200120224035-00318.warc.gz"}
http://mathhelpforum.com/pre-calculus/203537-quick-functions-question.html
1. Quick functions question...

Find the range of values of x for which 10 - f(x) is positive. Could somebody tell me how to do this? I've already done the hard bit... Also, f(x) = (x-2)^2 + 1

2. Re: Quick functions question...

"Done the hard bit". Exactly which part is the hard bit?

3. Re: Quick functions question...

Just plug f(x) into 10 - f(x) and solve for x:
10 - ((x-2)^2 + 1) > 0
(x-2)^2 < 9
|x - 2| < 3
-1 < x < 5
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8142108917236328, "perplexity": 3159.9135832679844}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189090.69/warc/CC-MAIN-20170322212949-00375-ip-10-233-31-227.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/33522/flatness-and-local-freeness
Flatness and local freeness

The following statement is well-known: $A$ a commutative Noetherian ring, $M$ a finitely generated $A$-module. Then $M$ is flat if and only if $M_{\mathfrak{p}}$ is free for all $\mathfrak{p}$. My question is: do we need the assumption that $A$ is Noetherian? I have a proof (from Matsumura) which doesn't require that assumption, but the fact that other references (e.g. Atiyah, Wikipedia) are including this assumption makes me rather uneasy.

- By request, my earlier comments are being upgraded to an answer, as follows. For finitely generated modules over any local ring $A$, flat implies free (i.e., Theorem 7.10 of Matsumura's CRT book is correct: that's what proofs are for). So the answer to the question asked is "no". The CRT book uses the "equational criterion for flatness", which isn't in Atiyah-MacDonald (and so is why the noetherian hypothesis was imposed there). This criterion is in the Wikipedia entry for "flat module", but Wikipedia has many entries on flatness so it's not a surprise that this criterion under "flat module" would not be appropriately invoked in whatever Wikipedia entry was seen by the OP. An awe-inspiring globalization by Raynaud-Gruson (in their overall awesome paper, really with authors in that order) is given without noetherian hypotheses: if $A$ has finitely many associated primes (e.g., any noetherian ring, or any domain whatsoever) and if $M$ is a finitely generated flat $A$-module then it's finitely presented (so Zariski-locally free!). See 3.4.6 (part I) of Raynaud-Gruson (set $X=S$ there). By 3.4.7(iii) of R-G, the finiteness condition on the set of associated primes cannot be removed, as any absolutely flat ring that isn't a finite product of fields provides a counterexample. (An explicit counterexample is provided by the link at the end of Daniel Litt's answer, namely a finitely generated flat module that is not finitely presented, over everyone's favorite crazy ring $\prod_{n=0}^{\infty} \mathbf{F}_2$.)

- I believe the non-Noetherian statement is that "flat and finitely presented" implies locally free (i.e., projective). A proof of this can be found, for instance, in Weibel's An Introduction to Homological Algebra. - This is correct, but here locally free means "Zariski locally free", which (at least a priori) is stronger than just the stalks being free, which is what Kwan's question is about. –  Emerton Jul 27 '10 at 15:08 @Akhil: The point being that finitely generated and Noetherian implies finitely presented. –  Daniel Litt Jul 27 '10 at 15:10 Also, in the Noetherian setting, stalkwise free is equivalent to Zariski locally free. –  Emerton Jul 27 '10 at 15:14 I agree with your statement, but the question is whether we can replace "finitely presented" with "finitely generated". –  ashpool Jul 27 '10 at 15:22

- This is to expand on Akhil's answer. Locally free implies flat easily, so let's look at the other direction. It suffices to assume $A$ is local with maximal ideal $\mathfrak{m}$. Pick a basis of $M/\mathfrak{m}M$; by Nakayama, this lifts to a surjective map $A^n\to M$. We want to show this map is injective. If $M$ is finitely presented (or if $A$ is Noetherian) then the kernel is finitely generated. But tensoring with $A/\mathfrak{m}A$ kills the kernel, so by Nakayama again the map is injective. The finite generation of the kernel is the key point. UPDATE: Exercise 6, part 3, in this pdf gives a finitely generated, not finitely presented module which is flat but not projective.
By BCnrd's comment on Akhil's answer it is, however, stalk-wise free. - I agree with Akhil's answer wholeheartedly. But the question is whether we can relax the condition of finite presentation to finite generation. –  ashpool Jul 27 '10 at 15:26 Aren't you saying that the sequence $0\rightarrow K\rightarrow A^n\rightarrow M\rightarrow 0$ obtained by lifting a basis for $M/\mathfrak{m}M$ remains exact upon tensoring with the residue field? I don't understand why this is the case, maybe 'cause I don't see where you're using the flatness of $M$. Is it true that $M$ is necessarily projective, i.e., does finite flat over a Noetherian ring imply projective? –  Keenan Kidwell Jul 27 '10 at 15:32 That should be "Noetherian local ring." –  Keenan Kidwell Jul 27 '10 at 15:33 @Keenan: The exactness of that sequence is exactly where I'm using the flatness of $M$. Consider the LES of Tor. And finite flat over a Noetherian local ring implies free, so obviously projective. –  Daniel Litt Jul 27 '10 at 15:37 Also, that example in your UPDATE is great. Does anyone know a published reference with an argument? (I now mention it as an aside in Exercise 25.4.E in the July ~21 notes at math.stanford.edu/~vakil/216blog/.) –  Ravi Vakil Jul 21 '11 at 17:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8867961764335632, "perplexity": 501.62091519608987}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
http://yozh.org/2011/08/29/mmm048/
# MMM XLVIII

## Image Details

The above is a detail from a sixth-degree Julia set centered near $$-0.496+0.560i$$. The original image weighs in at about 11.5 MB and can be found by clicking the detail above. This Julia set is centered near an area of the degree-six multibrot set which is roughly equivalent to seahorse valley. It isn't really deep enough into the valley to see clearly defined seahorses, and the degree of the fractal further obfuscates the familiar shapes, but a careful eye can distinguish the general form.
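For readers who want to generate something in this family, a minimal Python sketch of the degree-6 Julia iteration $z \mapsto z^6 + c$. Note that the post only gives the image's center, not the Julia parameter, so the value of `c` below is purely an assumption for illustration:

```python
import numpy as np

c = -0.496 + 0.560j                     # assumed parameter, not the post's actual one
x = np.linspace(-1.5, 1.5, 800)
z = x[None, :] + 1j * x[:, None]        # grid of starting points in the complex plane
escape = np.zeros(z.shape, dtype=int)

for n in range(100):
    mask = np.abs(z) <= 2.0             # points that have not escaped yet
    z[mask] = z[mask]**6 + c            # degree-6 Julia iteration
    escape[mask] = n                    # last iteration at which each point survived

# 'escape' can be rendered with matplotlib's imshow to visualize the set.
```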
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9403208494186401, "perplexity": 1395.3302939670746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258620.81/warc/CC-MAIN-20190526004917-20190526030917-00458.warc.gz"}
https://web2.0calc.com/questions/what-is-the-median-of-the-distinct-positive-values
# What is the median of the distinct positive values...

What is the median of the distinct positive values of all of the fractions less than or equal to 1 with positive integer denominators less than or equal to 5? Express your answer as a common fraction. (Jul 4, 2019)

#2 (Jul 5, 2019): We're talking about the set of numbers
$$\frac{1}{5},\ \frac{1}{4},\ \frac{1}{3},\ \frac{2}{5},\ \frac{1}{2},\ \frac{3}{5},\ \frac{2}{3},\ \frac{3}{4},\ \frac{4}{5},\ 1.$$
There are ten of them, so we have to take the average of the middle two: the average of $\frac{1}{2}$ and $\frac{3}{5}$, which is $\frac{11}{20}$.
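A quick way to check both the enumeration and the median, for instance in Python:

```python
from fractions import Fraction
from statistics import median

# Distinct fractions p/q with 1 <= p <= q <= 5 (positive and at most 1).
vals = sorted({Fraction(p, q) for q in range(1, 6) for p in range(1, q + 1)})
print(vals)           # the ten distinct values listed above
print(median(vals))   # 11/20, the average of the two middle values
```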
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523121118545532, "perplexity": 251.674630000167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482954.0/warc/CC-MAIN-20191206000309-20191206024309-00259.warc.gz"}
https://www.physicsforums.com/threads/find-all-equivalence-classes.14099/
# Find all equivalence classes.

1. Feb 8, 2004

### Caldus

Let A be a set and let f: A -> A be a function. For x, y belonging to A, define x ~ y if f(x) = f(y):

a. Prove that ~ is an equivalence relation on A.

This is my guess, but I am not sure whether I'm right:

Proving reflexiveness: If (x,y) belong to A, then f(x) = f(x), therefore, (x,y) ~ (x,y).

Proving symmetry: If (x,y) belong to A, then f(x) = f(y), therefore if (y,x) belong to A, then f(y) = f(x), so (x,y) ~ (y,x).

Proving transitivity: If (x,y) and (y,z) belong to A, then if f(x) = f(y) and f(y) = f(z), then f(x) = f(z). Therefore, (x,y) ~ (x,z).

Is this right?

b. Suppose A = {1, 2, 3, 4, 5, 6} and f = {(1,2), (2,1), (3,1), (4,5), (5,6), (6,1)}. Find all equivalence classes.

I have no idea where to start with this one. Could someone start this one out? I would really appreciate it.

2. Feb 8, 2004

### master_coda

Reflexiveness is just proving that x ~ x, for any x in A. There shouldn't be any "y" term in your proof. The way to prove symmetry is to assume x ~ y and show that this implies y ~ x (for any x,y in A). Similarly, for transitivity you must show that for any x,y,z in A if x ~ y and y ~ z then x ~ z. In all three of your proofs you seem to be using "if (x,y) belong to A" as a synonym for "if x,y in A and x ~ y". This is incorrect.

The easiest way to solve the second question is just by brute force. Group the elements of {1,2,3,4,5,6} by what the result of f(x) is. In other words, split the numbers up into a set where f(x)=1, a set where f(x)=2, and so on. These are your equivalence classes.

3. Feb 8, 2004

### Caldus

Wouldn't x,y be used in each one? Since a function is a cartesian product or whatever.

4. Feb 8, 2004

### master_coda

Yes, elements of a function can be considered as ordered pairs. However the equivalence relation is on the set A, which is not made up of ordered pairs. So you have to examine elements of A, not elements of the function f.

5. Feb 8, 2004

### HallsofIvy

Staff Emeritus

You start every one with "If (x,y) belongs to A", which is incorrect. A is the set containing x or y. It does not contain (x,y)!

6. Feb 8, 2004

### Caldus

Reflexive: If x belongs to A, then f(x) = f(x). Therefore, x ~ x.

Symmetry: If x ~ y, then f(x) = f(y). So f(y) = f(x) and y ~ x.

Transitive: If x ~ y and y ~ z, then f(x) = f(y) and f(y) = f(z). Then f(x) = f(z). Therefore, x ~ z.

And I still don't know what to do for the second part.

7. Feb 8, 2004

### Caldus

For the second part, could this be a possibility?:

Equivalence class of (1,2): {(x,y) | (x,y) belongs to (1,2)}

Equivalence class of (2,1): {(x,y) | (x,y) belongs to (2,1)}

(And so on...?)

8. Feb 8, 2004

### master_coda

Given a set and an equivalence relation, in this case A and ~, you can partition A into sets called equivalence classes. These equivalence classes have the special property that x ~ y if and only if x and y are in the same equivalence class. In this case, two elements are equivalent if f(x) = f(y). Thus all the elements with f(x) = 1 are in the same equivalence class, and all the elements with f(x) ≠ 1 are in different equivalence classes. Similarly, all the elements with f(x) = 2 are in the same equivalence class, and all the elements with f(x) = 3 are in the same equivalence class, and so on. Note that in this case, we are still working with elements of A, not elements of f. Thus the elements of the equivalence classes will be numbers, not ordered pairs.

9. Feb 8, 2004

### Caldus

Thank you. That helped a lot.
My quotient set ends up being: {{1}, {2,3,6}, {4}, {5}}. Is this right?

10. Feb 8, 2004

### master_coda

Yes.
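master_coda's "group by f(x)" recipe is easy to mechanize; for example, a small Python check (my own illustration, not from the thread):

```python
from collections import defaultdict

f = {1: 2, 2: 1, 3: 1, 4: 5, 5: 6, 6: 1}

classes = defaultdict(set)
for x, fx in f.items():
    classes[fx].add(x)          # x ~ y iff f(x) == f(y)

print(list(classes.values()))   # [{1}, {2, 3, 6}, {4}, {5}]
```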
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905026912689209, "perplexity": 685.3727950026399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948616132.89/warc/CC-MAIN-20171218122309-20171218144309-00475.warc.gz"}
https://export.arxiv.org/abs/2004.00043
astro-ph.HE

# Title: Neutrinos and gravitational waves from magnetized neutrino-dominated accretion discs with magnetic coupling

Abstract: Gamma-ray bursts (GRBs) might be powered by black hole (BH) hyperaccretion systems via the Blandford-Znajek (BZ) mechanism or neutrino annihilation from neutrino-dominated accretion flows (NDAFs). Magnetic coupling (MC) between the inner disc and the BH can transfer angular momentum and energy from the fast-rotating BH to the disc. The neutrino luminosity and neutrino annihilation luminosity are both efficiently enhanced by the MC process. In this paper, we study the structure, luminosity, MeV neutrinos, and gravitational waves (GWs) of magnetized NDAFs (MNDAFs) under the assumption that both the BZ and MC mechanisms are present. The results indicate that the BZ mechanism will compete with the neutrino annihilation luminosity to trigger jets under different partitions of the two magnetic mechanisms. The typical neutrino luminosity and annihilation luminosity of MNDAFs are definitely higher than those of NDAFs. The typical peak energy of the neutrino spectra of MNDAFs is higher than that of NDAFs, but similar to those of core-collapse supernovae. Moreover, if the MC process is dominant, then the GWs originating from the anisotropic neutrino emission will be stronger, particularly for discs with high accretion rates.

Comments: 10 pages, 7 figures, accepted for publication in MNRAS
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
DOI: 10.1093/mnras/staa932
Cite as: arXiv:2004.00043 [astro-ph.HE] (or arXiv:2004.00043v1 [astro-ph.HE] for this version)

## Submission history

From: Tong Liu
[v1] Tue, 31 Mar 2020 18:14:25 GMT (1530kb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8925318121910095, "perplexity": 4650.307905691796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621627.41/warc/CC-MAIN-20210615211046-20210616001046-00207.warc.gz"}
https://www.physicsforums.com/threads/the-domain-of-the-fourier-transform.989578/
# The domain of the Fourier transform

• I
• Thread starter: redtree

## Summary: How does the Fourier transform handle functions of complex variables when only integrating over the reals?

Given that the domain of the integral for the Fourier transform is the real numbers, how does the Fourier transform transform functions whose independent variable is complex? For example, given
\begin{split} \hat{f}(k_{\mathbb{C}}) &= \int_{\mathbb{R}} f(z_{\mathbb{C}}) e^{2 \pi i k_{\mathbb{C}} z_{\mathbb{C}}} d z_{\mathbb{C}} \end{split}
where ##z_{\mathbb{C}} = z_1 + z_2 i## and ##k_{\mathbb{C}} = k_1 + k_2 i##. It seems that evaluating ##f(z_{\mathbb{C}}) e^{2 \pi i k_{\mathbb{C}} z_{\mathbb{C}}}## over the reals includes only ##z_1## and ignores ##z_2 i## in ##f(z_{\mathbb{C}})##.

## Answers and Replies

• #2 Delta2 (Homework Helper, Gold Member): The way I've been taught the Fourier transform is that it applies to complex-valued functions of a real variable and it gives as result also a complex-valued function of a real variable. That is, it is ##f:\mathbb{R}\to\mathbb{C}## and its Fourier transform ##\hat f:\mathbb{R}\to\mathbb{C}##, so the integrals of the Fourier and the inverse Fourier transform are inevitably over the real line. The integral you are proposing here seems to be an interesting generalization of the Fourier transform so that it can include functions ##f:\mathbb{C}\to\mathbb{C}##, that is, complex-valued functions of a complex variable. The only thing is that you have to make it a contour integral in order to make it integrate over the complex plane, or simply a double integral over the real line, like for example
$$\int_\mathbb{R}\int_\mathbb{R}f(z_1+z_2i)e^{2\pi i k_{\mathbb{C}}(z_1+z_2i)}dz_1dz_2$$
I can't tell if such an integral has interesting theoretical or practical applications.

• #3 RPinPA (Homework Helper): There is such a thing as a two-dimensional Fourier transform, for instance the spatial Fourier transform of an image. It consists of Fourier transforms applied independently in the x and y directions. Perhaps that's what you need to do here: transform Re(z) and Im(z) to Re(k) and Im(k).

• #4 Svein: ... and remember that $i=e^{\frac{i\pi}{2}}$, which means that the transformation of a purely imaginary variable is the same as the transformation of the real version of the variable rotated π/2. This leads to the Laplace transformation.
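Delta2's double integral is easy to try numerically. A rough sketch follows; the truncation box, test function, and value of k are arbitrary choices, and convergence depends on f decaying fast enough to beat the growth of the complex exponential:

```python
import numpy as np

def ft2(f, k, L=5.0, n=400):
    """Riemann-sum version of the double integral above, truncated to the
    square [-L, L]^2 (an assumption; the original domain is all of R^2)."""
    t = np.linspace(-L, L, n)
    dz = t[1] - t[0]
    Z = t[None, :] + 1j * t[:, None]            # z = z1 + i z2 over the grid
    return np.sum(f(Z) * np.exp(2j * np.pi * k * Z)) * dz**2

# Example with a rapidly decaying test function and a complex "frequency" k.
fhat = ft2(lambda z: np.exp(-np.abs(z)**2), k=0.5 + 0.2j)
```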
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9868468046188354, "perplexity": 1091.2885331939187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400203096.42/warc/CC-MAIN-20200922031902-20200922061902-00543.warc.gz"}
https://www.gfdl.noaa.gov/blog_held/7-why-focus-so-much-on-global-mean-temperature/
7. Why focus so much on global mean temperature?

Posted on April 5th, 2011 in Isaac Held's Blog

Upper panel: Interdecadal component of annual mean temperature changes relative to 1890–1909. Lower panel: Area-mean (22.5°S to 67.5°N) temperature change (black) and its interdecadal component (red). Based on the methodology in Schneider and Held, 2001 and HadCRUT3v temperatures.

Perhaps the first thing one notices when exposed to discussions of climate change is how much emphasis is placed on a single time series, the globally averaged surface temperature. This is more the case in popular and semi-popular discussions than in the scientific literature itself, but even in the latter it still plays a significant role. Why such an emphasis on the global mean?

Two of the most common explanations involve 1) the connection between the global mean surface temperature and the energy balance of the Earth, and 2) the reduction in noise that results from global averaging. I'll consider each of these rationales in turn.

The energy balance of any sub-portion of the atmosphere-ocean system is complicated by the need to consider energy fluxes between this selected portion and the rest of the system. It is only for the global mean that the balance simplifies to one involving only radiative fluxes escaping to space, providing a basic starting point for a lot of considerations. But is there a tight relationship between the global mean surface temperature and the global mean energy budget?

I have already indicated in a previous post (#5) that this coupling is not very tight in many climate models. In these models, the pattern of temperature change in response to an increase in CO2 evolves in time, becoming more polar amplified as equilibrium is approached. And, as a consequence of these changes in spatial pattern, the relationship between the global mean temperature and global mean top-of-atmosphere (TOA) flux changes as well. Among other things, the dynamics governing the vertical structure of the atmosphere is very different in low and high latitudes, and one needs to know how the vertical structure responds to estimate how radiative fluxes respond. There are also plenty of reasons why cloud feedbacks might have a different flavor in high and low latitudes, and might be controlled more by changes in temperature gradients than in local temperature. The potential for some decoupling of global mean surface temperature and global mean TOA flux clearly seems to be there. (One sometimes sees the claim that it is the 4th power in the Stefan-Boltzmann law that primarily decouples the global mean temperature from the global mean TOA flux. But this is a very weak effect, at most 5% according to my estimate; the effects of differing vertical atmospheric structures in high and low latitudes, and the effects of clouds, are potentially much larger.)

There is a tendency, especially when discussing "observational constraints" on climate sensitivity, to ignore this issue — assuming, say, that interannual variability is characterized by the same proportionality between global mean temperature and TOA fluxes as is the trend forced by the well-mixed greenhouse gases. This is not to say that the interannual constant of proportionality is irrelevant to constraining climate sensitivity.
One can imagine, if interannual variability is characterized by one spatial pattern, and the response to CO2 by another pattern, that one might be able to compensate for this difference in pattern when trying to use this information to constrain the magnitude of the response to CO2.

Let's turn now to the noise reduction rationale. There is plenty of variability in the climate system due to underlying chaotic dynamics, in the absence of changing external forcing agents. To the extent that a substantial part of this internal variability is on smaller scales than the forced signal, spatial averaging will reduce the noise as compared to the signal. But is global averaging the optimal way to reduce noise?

Suppose one has a time series at each point on the Earth's surface. There are a lot of different linear combinations of these individual time series that one could conceivably construct; the global mean is just one possibility. Some of these linear combinations will have the property of reducing the noise more than others. One can turn this around and ask which linear combination reduces the noise most effectively.

Tapio Schneider and I examined this question in a paper in 2001. One has to first define what one means by "minimizing noise". In our case, we define a "signal" by time-filtering the local temperature data to retain variations on time scales of 10 or 15 years and longer and then define the "noise" to be what is left over. We are not saying that this signal is forced by external agents; it is presumably some combination of forced responses and free low-frequency variations. But the forced response due to slowly varying external agents is presumably captured within this signal. We then maximize the ratio of the variance in the "signal" to the variance of the "noise". This is an example of discriminant analysis, in which you group the data and look for those patterns that best discriminate between the data in different groups. (Roughly speaking, the different decades are different groups for our analysis, although we do not actually use non-overlapping decadal groups.)

The result is a ranked set of patterns and a time series associated with each pattern. The most dominant pattern, the one that reduces the noise most effectively, turns out to be quite different from uniform spatial weighting. The animation at the top of the blog shows the evolution of annual mean temperatures filtered to retain the 4 most discriminating patterns (this is the number of patterns with a ratio of signal to noise greater than one.) Tapio has comparable animations for the individual seasonal means.

A more popular approach to multivariate analysis of the surface temperature record, complementary to discriminant analysis, is "fingerprinting". Here models provide one or more patterns (starting with the pattern forced by the well-mixed greenhouse gases) and, using multiple regression, we test the hypothesis that these patterns are discernible in the observed record. These approaches are complementary because discriminant analysis does not start with a given pattern and test the hypothesis that it is present in the data; it is just a way of describing the data. A purely descriptive analysis can only take you so far, but for some purposes it is advantageous to let the data tell you what the dominant patterns are, rather than having models suggest how to project out interesting degrees of freedom.
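For concreteness, a minimal sketch of the variance-ratio idea described above (my own illustration, not the Schneider-Held code): low-pass each grid point's series to define the "signal", call the residual "noise", and take the leading generalized eigenvector of the two covariance matrices. In practice one would first project onto a few leading EOFs so the noise covariance is well conditioned; the small ridge below stands in for that step.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.ndimage import uniform_filter1d

def leading_discriminant(T, window=10):
    """T: (n_years, n_gridpoints) anomaly array (hypothetical input).
    Returns the largest signal/noise variance ratio and the spatial weight
    vector whose time series best separates slow variations from the rest."""
    signal = uniform_filter1d(T, size=window, axis=0)   # crude decadal low-pass
    noise = T - signal
    S = np.cov(signal, rowvar=False)
    N = np.cov(noise, rowvar=False)
    N += 1e-6 * np.trace(N) / N.shape[0] * np.eye(N.shape[0])  # ridge, see above
    vals, vecs = eigh(S, N)          # generalized eigenproblem S w = lambda N w
    return vals[-1], vecs[:, -1]     # largest ratio and its weight pattern
```

The uniform global mean corresponds to one particular fixed weight vector; the point of the analysis is that the eigenvector returned here generally beats it.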
In any case, you can do better than take a global mean if you want to reduce the noise level in the data. The information content in the global mean depends on how many distinct patterns are present. Let's assume that one has already isolated from the full time series what one might call "climate change", either through a discriminant analysis or some other algorithm. If the evolution of the signal is dominated by one perturbation pattern $T(x,t) \approx A(t)B(x)$, and if we normalize the pattern $B$ so that it has an integral over the sphere of one, we can just think of the perturbation to the global mean as equal to $A$, the amplitude of the pattern. If 2 (or more) things are going on that contribute to observed climate changes, you are obviously going to need 2 (or more) pieces of data to describe the observations, and the value of the global mean is more limited.

If the response to CO2, or the sum of the well-mixed greenhouse gases, is linear, the spatial response of surface temperature could still be a function of the frequency of the forcing changes. If one assumes in addition that this frequency dependence is weak, as in the "intermediate" regime discussed in earlier posts, then one can expect evolution of the forced response that is approximately self-similar, with a fixed spatial structure, in which case the global mean is a perfectly fine measure of the amplitude of the forced response.

It is easy to come up with examples of how an exclusive emphasis on global mean temperature can be confusing. Suppose two different treatments of data-sparse regions such as the Arctic or the Southern Oceans yield different estimates of the global mean evolution but give the same results over data-rich regions. And suppose, for the sake of this simple example only, that the actual climate change is self-similar, $T(x,t) \approx A(t)B(x)$, and is, in fact, entirely the response to increasing well-mixed greenhouse gases. One is tempted to conclude that the method that gives the larger global mean warming suggests a larger "climate sensitivity". But both would be providing the same estimate of the response to greenhouse gases in data-rich regions.

There are other interesting model-independent multivariate approaches to describing the instrumental temperature record besides the discriminant analysis referred to above. Typically one needs to choose something to maximize or minimize. For example, in this paper, Tim Del Sole maximizes the integral time scale, $\int \rho(\tau) d\tau$, where $\rho$ is the autocorrelation function of the time series associated with a particular pattern. I encourage readers to think about other alternatives.

In response to a question below, here (thanks to Tapio) is the spatial weight vector that is multiplied by the data to generate the canonical variate time series for the first (the most dominant) term in the discriminant decomposition of the annual mean data set shown at the top of the post. The lower panel shows the filtered and unfiltered canonical variate time series. The shading is the band within which 90% of the values should lie in the absence of interdecadal variability, as estimated by bootstrapping. The low weights over land are interesting. Although this is only the first term in the expansion, it suggests that climate variations over land can be estimated by using a discriminant analysis of ocean data alone and then regressing the resulting canonical variates with the land data.
[The views expressed on this blog are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.]

24 thoughts on “7. Why focus so much on global mean temperature?”

1. Part of the utility in talking about a “global temperature anomaly”, at least in popular discussions, is that it is a relatively uniform field (within a factor of 2-3 or so [maybe a bit more near the poles]); what’s more, anomalies are often well-correlated even over synoptic-scale distances. It therefore provides some intuition as to what is happening. While a “global temperature increase of 3 C” may not mean much to a farmer in Wisconsin, if he applies that number to the temperature change averaged over Wisconsin, he is probably going to be more in the ballpark than if he applied that sort of reasoning to the precipitation anomaly field, whose “globally averaged value” is not often discussed in any useful setting. In contrast, the paleo-climate community generally does not talk in such terms when discussing events such as the abrupt climate changes involved with see-saw-like oscillations during the last glaciation, with anomalies anti-correlated between hemispheres. So the utility of a ‘global temperature anomaly’ as a variable depends on the nature of the forcing, but it’s reasonable (to first order) to think in such terms for increases in CO2. This utility probably breaks down as you continue to make your spatial scale smaller, but the science is, for better or worse, currently at a stage where the regional effects are details that branch off of the “big picture.”

2. HR says: Thanks for the post. I’ve got side-tracked by one question that your post raised, which I hope isn’t too much off topic. One idea I have come across revolves around the importance of the homogeneous, truly global nature of the warming trend in recent decades. For example, if one plays with the NASA-GISS mapping tool (http://data.giss.nasa.gov/gistemp/maps/), the mid-20th century warming has regions of warming and cooling while the most recent warming period is almost globally warmer. It’s suggested this is important supporting evidence for attribution of the warming. At the height of the mid-20th century warming period your animation seems to have warming as a fairly complete global phenomenon, which seems quite different to NASA-GISS. Is my observation about your work accurate? Is the pattern of warming using your methodology very different to others? Is it possible to think of the homogeneous/heterogeneous nature of the warming signal as being a methodological issue/artifact?

1. Isaac Held says: The discriminant analysis is a filter. This filter will make things more spatially homogeneous if and only if it is the more homogeneous components of variability that most clearly distinguish the inter-decadal from higher frequency variations. Is this process homogenizing the mid-century warming more than the late century warming? I guess I would need to quantify that before commenting further. It’s an interesting point.

3. NiV says: It would be interesting to see the spatial distribution of the noise amplitude. Does it occur uniformly, or at temperate latitudes, or something else? In discriminating between signal and noise, you have to have statistical models for each.
Supposing we keep things linear for the moment, we can have a subspace over which the signal varies, and another subspace over which the noise varies, and observations over the sum of the two. (In the above example, the subspaces appear to be frequency bands above and below 1/10yrs.) The subspaces are probably not orthogonal in general (although your frequency bands are), so complete separation is unlikely. You would then get four components: a part attributable to the signal, a part attributable to the noise, a part attributable to both (so you couldn’t tell which was causing it) and a part attributable to neither (an indication of whether your statistical models are effective). The subspace intersection and null space intersection give the latter two.

A linear sum (signal+noise) of non-linear models should also be feasible, although there could be multiple solutions. The translates of each model manifold would act like a set of curvilinear coordinates. The coordinates of the observed history would give the signal, noise, and indeterminate components. (The indeterminates would be along curves of intersection of signal and noise manifolds.) So you could, for example, take an ARMA(2,1) noise model, a ‘low frequency band’ signal, and determine how much was attributable to each, how much couldn’t be distinguished, and where (spatially) each component was strongest. By considering a range of different noise models and signal models, you could perhaps determine where to concentrate future measurements to be able to best distinguish between them or eliminate them, or how sensitive a result was to outlying observations in a small geographic area.

1. Isaac Held says: Perhaps the terminology “signal” and “noise” is a source of confusion (or perhaps I am confused). If you look at the temperature data from a random year, can you tell which decade it belongs to? Which aspects of the temperature field would you look at to optimize your chance of making the right assignment? Is looking at the global mean the surest way? These are the kinds of questions that the discriminant analysis tries to answer. An explicit noise model doesn’t seem necessary in this context. Asking different questions will motivate different ways of looking at the data, of course.

4. NiV says: Now I’m confused too. If you have the data for all the years as your training set, then it is trivial to construct a discriminator that identifies any given pattern. Consider each year’s data as an n-dimensional point, consider the decade as a function sampled on this set of points, and interpolate. The question only becomes interesting when you restrict your interpolation function to some smaller subspace – like linear discriminants – so you have to find a best fit to the sample using functions of a given form. This restriction reflects your assumptions about the distribution of the data. For example, if you assume each class has Gaussian distributions with equal covariances but differing means, the linear discriminant is optimal. Without assuming equal covariances, a quadratic discriminant is optimal. And so on. The space of functions you use for ‘curve-fitting’ boundaries between classes depends on your implicit assumptions about the space of distributions that you use to model the data. By choosing to minimize variance, for example, you implicitly optimize for a Gaussian distribution. That’s not unreasonable, of course – they’re very common. But I think it does count as a sort of noise model.
I’m no expert, though, so I may have gotten confused over what you’re trying to do. As you said, you have to define what you mean by “noise” and what you mean by “minimizing” it. (Minimum noise having maximum probability under some distribution.) Your way of dividing it up is very reasonable. But I thought that it could be generalized to other, perhaps more realistic, noise models, and assumed this sort of generalization was what you were asking for. No problem, though – it was just a thought.

1. Isaac Held says: I was confused, and I misunderstood your comment. As you say, the whole point is to discriminate with a few degrees of freedom only. And to determine that one’s algorithm is optimal requires that one thinks of the observations as part of an ensemble. So in this sense there is an implicit (Gaussian stationary) noise model underlying our analysis. Thanks for the clarification. Needless to say, statistics is not one of my strengths.

5. Tim DelSole says: Isaac: I enjoyed your post. There is another very different way to use discriminant analysis to describe the temperature record, and it has a connection to fingerprinting that might help clarify things. Suppose you have two model runs, one forced and one unforced. If the forced response varies in time and can be idealized as additive and independent of unforced variability, then any component in the forced run should have larger variance than in the unforced run. In fact, the larger the ratio of the two variances, the larger the forced response. This reasoning suggests that the component with the largest possible ratio must be associated with the strongest response. Discriminant analysis can be used to find the weights that maximize the ratio of the forced to unforced variance. The resulting discriminant function must then be the thing to look at to best discriminate between forced and free fluctuations.

An interesting by-product of the forced-to-unforced discriminant pattern is that it optimizes detectability in the climate model. What I mean by this is that if you were to use this discriminant pattern as your forced pattern, and apply fingerprinting to the output of the forced run, then this pattern maximizes the statistic for testing climate detection.

When I used this approach to define the forced response pattern in a recent paper, I was surprised to find that discriminant analysis detects only one forced pattern, despite the fact that the forced runs are subject to multiple climate forcings. The time series of this pattern clearly shows that it captures both the warming trend and volcanic signals. I was not aware of your self-similarity argument as a possible explanation and would like to learn more about it. At first I was confused about how it is possible to attribute changes to specific forcings given that discriminant analysis identifies only one forced component. The answer is that all attribution studies that distinguish different forcings use more than surface spatial structure. Specifically, they use temporal information, vertical structure, or seasonality in the response pattern. It is easy to see how additional information, like temporal evolution, can enhance discrimination: for instance, the response to anthropogenic forcing is primarily a trend whereas the response to volcanic aerosols occurs at very specific and isolated points in time, so even if the response patterns are similar the time evolutions are very different.
The fact that discriminant analysis of annual mean surface fields identifies only one significant component implies that it is absolutely impossible to distinguish different forcings based on surface spatial information alone. I wonder if your self-similarity argument explains this empirical result.

6. Isaac: Thanks for the clear and insightful post. I have one comment regarding “optimal fingerprinting.” It is, as you write, complementary to space-time filtering of observations, which focuses on slow climate variations. But I do not think it is the optimal complement, for several reasons. Optimal fingerprinting relies on a linear regression of observations onto simulated climate change signals. The simulated climate change signals typically are linear trends (e.g., of surface temperature). To obtain a well-posed regression problem, it is furthermore necessary to reduce the effective number of degrees of freedom in the climate change signal, for example, by severe spatial smoothing. Climate change is then said to be detected if the generalized linear regression of observations onto the climate change signal is statistically significant, relative to a noise background of natural variability that is likewise estimated from a simulation. So at least three assumptions are typically made: (1) climate change “signals” evolve approximately linearly in time; (2) models simulate natural variability adequately; (3) signals of interest have very large spatial scales (typically thousands of kilometers). Assumption (1) seems unnecessary to me, (2) is best not made at the outset, and (3) may not always be adequate, for example, for some forms of aerosol forcing, which can give spatially localized responses.

I think it would be better to separate the analyses of simulations and observations to a greater degree. One could compute “slow manifolds” of simulations and observations separately (e.g., by a discriminant filter, or Tim DelSole’s optimal persistence patterns), without introducing model adequacy assumptions into the analysis of the observations. These independently obtained slow manifolds of observations and simulations could then be compared statistically. Statistical inference gets a bit more complicated (because the slow manifolds generally are nonlinear functions of the data), but this is not a fundamental problem because resampling methods can be used in the statistical comparison of observations and simulations. This would have the benefit of clearly showing model inadequacies. It can be done with small ensembles of simulations (or just one simulation) because ensembles are not necessary to estimate slow manifolds reliably (as the animation above shows for observations).

Tim: The discriminant analysis you are describing seems to be exactly the predictable component analysis for predictability studies of the second kind (the relation to optimal fingerprinting is briefly discussed here). But I am confused about your statement that it is surprising that the forced-to-unforced discriminant analysis finds only one forced pattern, if you compare a forced and an unforced simulation. In two-group LDA, when the group means have to be estimated, there is only one degree of freedom for discrimination (Fisher’s linear discriminant: the inverse of the within-group covariance matrix times the difference between the group means). So why would you expect to find more?
It seems to me that to distinguish more “climate signals” (different signatures of different forcings), you would need to compare more forced simulations, each incorporating a different forcing, with an unforced simulation.

1. Tim DelSole says: Tapio: The discriminant analysis I was describing was *not* LDA, which tests for a difference in means and has only one degree of freedom, but rather the version that tests for a difference in variance and generates as many discriminants as the vector dimension (perhaps I am using the wrong terminology?). The version I was describing is more analogous to your discriminant analysis paper, in which the two data sets are low-pass and high-pass time series, except that in my case the two data sets are the twentieth century run and the pre-industrial control run.

In regards to whether fingerprinting is optimal, I agree that studies that use trend maps or spatial filters have the limitations you point out, but other studies analyze the data in a different way that avoids these limitations. For instance, one can define the forced response by a sequence of 10-year means. In this case, no linear trend assumption is made; rather, the evolution of the response is defined by the model. In addition, this evolution can be expressed by the leading extended EOFs (based on 10-year means), in which case no explicit spatial filtering is applied. Finally, Allen and Tett (1999, Climate Dynamics) proposed the “residual consistency test,” which tests whether the residuals about the forced response have the same variance as predicted by the control runs. In this way, the assumption that the models adequately simulate natural variability can be checked (to some extent).

I think your idea of using slow manifolds to identify model inadequacies is good. I tried to do this to some extent in the recent paper I mentioned: I identified a slow manifold from the control runs (which turned out to be highly correlated with the AMO), and then compared the autocorrelation time scale of this component in observations and twentieth century runs. I found that the observed value was “in the middle” of the values produced by the models.

1. Isaac Held says: To help orient some readers: There are things one can do with the observations alone and others that one can do with observations + simulations. My post was focused on “exploratory” analyses of the instrumental temperature data with no reference to GCMs. Tim’s comments refer to the problem of how best to use simulations and observations together to separate forced from internal variations, a fundamental problem about which quite a bit has been written lately and to which I want to return (if only to force me to read some of these papers more carefully). I would second Tapio’s point that there is value in developing new exploratory analyses, keeping these separated more cleanly from simulations.

1. Tim DelSole says: Isaac is right. I apologize for not sticking within the boundaries of your problem, Isaac! I should have explained why I didn’t stick with observations and avoid models completely. There are two reasons. First, when we say we are interested in “low-frequency” variations, we are implicitly saying we are interested in variations that have a large amount of low-frequency variance *relative* to the high-frequency variance.
The key word here is “relative.” A simple counterexample is white noise: white noise can have tremendous power at low frequencies, but we would not ordinarily characterize it as “low-frequency,” even if the absolute amount of power at low frequencies was larger than for any other variable. This logic implies that a measure of low-frequency variability must be some ratio of low-pass to high-pass variance. After that, the details of the low- and high-pass filtering do not seem to be important. For instance, the patterns produced by optimal persistence analysis (as described in my paper that Isaac mentioned) are nearly the same as those produced by Tapio’s method, provided they are applied to the same data (I verified this some time ago when Tapio kindly provided me his imputed data set). The reason they are nearly the same is that both methods effectively maximize the ratio of low-pass to high-pass variance (the fact that optimal persistence analysis does this is shown here). If the problem is to seek alternative methods for finding low-frequency variability, and the approach is to maximize some ratio of low-pass to high-pass variance, I question whether it is productive to consider different definitions of “low-pass” and “high-pass,” because (1) different measures give similar results and (2) there is little a priori basis for deciding between different measures of “low-pass.”

Second, as Isaac mentioned, since observation-based discriminants capture a combination of the forced response and free low-frequency variations, you end up with a mixture of free and forced variations. There really is no way to avoid this contamination when deriving weights using only observational data. To avoid this contamination, one must go outside the data and define properties of the forced response that can be used to discriminate it from unforced fluctuations. This is possible with models.

2. Isaac Held says: Tim, how about doing the discriminant analysis with the spatial gradient of temperature, rather than the temperature itself (motivated by the idea that spatial gradients are more relevant for the circulation), or analyzing the surface temperature and pressure simultaneously, or working with the difference between cold half-year and warm half-year temperatures, rather than their sum, etc. I just think there are a variety of exploratory analyses that have the potential to provide new perspectives on the surface instrumental record.

1. Tim DelSole says: I am very interested in your thoughts in this area. One point to recognize is that discriminant analysis has the remarkable property that the maximized ratios are invariant to linear, non-singular transformations. This means that taking spatial gradients and then performing discriminant analysis should not lead to larger ratios than performing discriminant analysis on the original data, since spatial gradients are merely a linear transformation of the data. (There are a couple of technicalities I am ignoring here; e.g., we usually maximize in the space spanned by the leading principal components, and gradients are not invertible.) The problem with adding variables is that it increases overfitting problems, which means you have to compensate for the additional variables by using fewer principal components. But, with long model runs, overfitting can be overcome. The question I would like to hear more about is which combination of variables would you say ought to provide the best discrimination of low-frequency or forced variability?
(Was this your original question?) The few cases I’ve explored did not improve discrimination power. Working with cold half-year and warm half-year data would be equivalent to including seasonal information, which actually works well. I did this in the paper you refer to in your original post.

7. Alexander Harvey says: A very small point, rather OT, and a pet peeve: the not uncommon extension of the Stefan-Boltzmann law to grey bodies, where it does not hold. An example is the construction of an effective emissivity, and then carrying on as if the fourth-power law still held, blind to the effective emissivity being itself a function of temperature. From what I can remember, the positioning of the atmospheric window would (if nothing else changed) give emission rising faster than the fourth power due to the motion of the spectral peak towards the window. OTOH increasing water vapour changes the shape of the spectrum in a way that tends to counteract this, producing something more akin to a linear response. Whatever the precise real-world effect, the assumption of an effective fourth-power law is not sound. Sorry, I am not saying you do it. It just irks me.

1. Isaac Held says: My impression is that the consequences of non-grey radiation when the tropospheric temperature change is assumed to be uniform in the vertical are not as important as differences in the vertical structure of temperature change in high and low latitudes, and the differences in cloud feedbacks.

1. Alexander Harvey says: Isaac, sorry, I was making a much narrower point. That being: even if everything else remained the same except for a uniform rise in temperature, there is no reason to assume a fourth-power law. I was also ambiguous; I meant that the grey body approximation doesn’t hold in the real atmosphere. Because the SB law is the result of integration of the Planck function over all frequencies for unit emissivity, the correct equivalent is to integrate over the product of the Planck function and the emissivity function. As the latter is also a function of frequency and temperature, one is unlikely to end up with a neat function such as a fourth-power law. I believe that any non-isothermal atmosphere containing GHGs is more or less guaranteed not to follow a fourth-power law. Well, that is what I think. I was not disagreeing with other effects being more significant, simply querying whether assuming a fourth-power law tends to an overstatement of that particular effect.

8. Alexander Harvey says: I am a bit puzzled as to how this works. Dealing with the data for just one calendar month: as I see it, there is a data matrix X and some weighting vector u whose product is the canonical variate c. The discriminating pattern v is obtained by regressing X on c. Correct? The maximisation principle consists in essence in splitting c into high-pass and low-pass components, say h(c) and l(c), and varying u until the ratio R = V[l(c)]/V[h(c)] is maximal, where V[·] is the variance operator. Correct? So I must imagine that u adds weight to those grid points which as a set are highly correlated at low frequencies and poorly correlated at high frequencies, and also have significant low-frequency variance and not too much high-frequency variance. Now I don’t think u is actually displayed in the paper, and it strikes me that it may have some peculiar properties. The case reminds me of a story line about a town being the perfect bellwether for political polling purposes, so nowhere else needed polling.
If a super R-maximising grid square existed, u could have only one non-zero element, however unlikely such an occurrence might be. I am sure I will have more questions, but for now I am interested to know if u is sparse or, more broadly, whether just a few grid cells contribute most of its variance, and if it can and does take on negative values. Alex

1. Alex: Temperature variations generally have long-range spatial autocorrelations, in particular at low frequencies. So while it would be handy if a “super R-maximising grid square” existed, this is very unlikely. The weight vectors u indeed have large-scale structure. For example, for the first canonical variate that maximizes R for the data in the animation at the top of the page, the weight vector u generally gives greatest weight to Northern Hemisphere ocean grid points, except in the North Atlantic (which exhibits substantial decadal variability). Land surfaces are weighted down (many land areas have weights close to zero, others have negative weights). There are also large ocean regions in the Southern Hemisphere with negative weights (e.g., in the South Pacific). Mathematically, you can see that the scale of variations in u is large from the relation v ~ S u, where S (up to a scaling matrix) is the covariance matrix of the data. The large-scale structure in u gives rise to the large-scale structure in the spatial patterns of temperature changes v (the long-range correlations represented by S lead to additional ‘smoothing’ of the structures in v).

2. Isaac Held says:

3. Alexander Harvey says: Tapio and Isaac, thanks for the clarification. I am grateful for your giving us the additional graphic; that was generous. I can now see that the weighting and discriminating patterns are quite distinct, so the method has found something to chew on in the data. I am innately suspicious of algorithms that minimize or maximize if they have free rein to mine the data. On reflection I can see that various steps, particularly the use of PCs (giving you orthogonality), should help to discourage them from latching on to some “thin” vector by a process of subtraction of similar vectors. The weighting vector goes some way to answering the “how was it achieved” question, and the answer looks intriguing. The use of many techniques (regularization, PC truncation, smoothing, discriminant analysis) does not make for an easy read, and I am not sure how much more detail I could easily extract or wish to. So I shall try to comprehend it at some sort of conceptual level. As I understand it, what you have achieved is a separation of the signals into high- and low-frequency components, of which you keep the low-frequency part, by selection of optimal measurement weights (that is the obvious bit). What is not so obvious is why this is better than filtering. The clever bit seems to be that a process of cancelling out unwanted noise by taking advantage of the spatial element avoids the temporal dispersion inherent in filtering, i.e. it maintains zero phase shift in the frequency domain. Thereby you may have achieved quite a sharp or high-order low-pass separation without producing nasty high-Q resonances with their associated temporal distortions. If that is right, then it might be important to know that the weighting vectors will maintain their noise-“cancelling” properties, i.e. that they are inherent properties of the system (of a geographic origin), not opportunistic vectors.
It might be good to know if the GCMs for which some have logged many centuries of data do exhibit weighting vectors that are stable from century to century and run to run. Many thanks, Alex

1. Tim DelSole says: Alex: I liked your concise summary of discriminant analysis. You’ve hit on a problem with discriminant analysis that worries me a lot, namely, the fact that the weight vector is sensitive to the number of PCs (or, more generally, sensitive to the degree of regularization). For “short” climate time series, the weight vector typically develops more short-scale variability as the number of PCs increases. Given this sensitivity, it is dangerous to interpret the weight vector literally, because it can change dramatically when the number of PCs changes. Despite this sensitivity in the weighting pattern, the discriminant pattern itself is nearly independent of the number of PCs beyond a certain threshold, so it is clear it is extracting a robust pattern. What happens is that as more PCs are included, discriminant analysis typically assigns order-one weight to the extra PCs, but then these PCs get damped strongly when the weight vector is multiplied by the covariance matrix to obtain the discriminant pattern (as described in Tapio’s May 7 post). It is surprising to me that there are few studies of this sensitivity and of the reliability of methods for selecting the number of PCs (at least as far as I am aware).

1. Alexander Harvey says: Tim, thanks. My intuitive insight, such as it is, harks back to when I knew something of acoustics and how to exploit the spatial and hence temporal variations of the acoustic field by the positioning of microphones, which are in turn commonly discriminating by design. I shied away from Tapio’s description of the relationship between the spatial vectors through the application of the covariance matrix, but I think I am getting it. I am impressed that a discriminant approach works, yet I do worry that I am ignorant as to what has been achieved. Visually it looks like a slow component (~60yr) has been discriminated against, or at least not enhanced along with the underlying trend. Were that to be true, then it must surely share much of its spatial pattern with discriminated-against fast components such as ENSO. But no matter, for I am getting a bit beyond my reach. As (or if) we progress with mitigation, an ability to detect the effect of mitigation, by lifting it out of the noise with techniques more sophisticated and timely than simply waiting for the medium-term trend to emerge, would be hugely beneficial. Alex

9. Alex: To add to Tim’s comment, the results of the analysis here are not very sensitive to the kind of regularization employed. Of course, as in any rank-deficient or ill-posed problem, one has to be judicious in the choice of regularization approach and parameters. We used truncated principal component analysis with determination of the truncation parameter by generalized cross-validation. But using, for example, reasonable fixed values of the truncation parameter gives similar results. As Tim mentioned, higher-order principal components give rise to smaller-scale variations in weight vectors (because of the relation between principal component analysis, or singular value decomposition, and Fourier analysis of convolution operators). But they generally are not associated with coherent slow variability. Your broader question is about how to assess the significance of any slow component of climate variation identified with this approach.
Since the analysis does not make explicit use of the time ordering of the data, one can assess the significance of any slow component by bootstrapping (repeating the analyses on resampled versions of the dataset, in which time is scrambled). This gives the confidence bands in the plot above. Any time variations of the canonical variate that are coherent and outside these confidence bands are indicative of true slow variations (rather than overfitting). The advantage of the discriminant approach over simple point-by-point time filtering of the data is, as you note, that it uses the spatial covariability of the data and thus creates a more efficient filter. Testing it with long control simulations from climate models (without anthropogenic forcing) gives the expected null result of smaller ratios of slow (interdecadal) to fast (intradecadal) variance. Such results could be used as the basis for climate change detection.
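A minimal sketch of this time-scrambling bootstrap, using a made-up univariate series and an arbitrary moving-average filter in place of the actual multivariate analysis:

```python
# Sketch of the time-scrambling bootstrap: permute the time ordering, redo
# the slow/fast variance-ratio calculation, and treat the resampled ratios
# as a null distribution. Series and filter width are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)

def slow_fast_ratio(series, width=11):
    kernel = np.ones(width) / width
    slow = np.convolve(series, kernel, mode="same")
    return slow.var() / (series - slow).var()

x = 0.1 * np.cumsum(rng.standard_normal(150))   # made-up "canonical variate"

observed = slow_fast_ratio(x)
null = [slow_fast_ratio(rng.permutation(x)) for _ in range(2000)]
print("observed ratio:", round(observed, 2))
print("null 95th percentile:", round(float(np.quantile(null, 0.95)), 2))
```

An observed ratio that stays within the range of the permuted-data ratios is consistent with no coherent slow variability; values well outside that range indicate true slow variations rather than overfitting.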
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8510539531707764, "perplexity": 724.3101176893847}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323680.18/warc/CC-MAIN-20170628120308-20170628140308-00605.warc.gz"}
http://math.tntech.edu/e-stat/page32.html
e-Statistics

## t-Distribution

The t distribution is symmetric but comparatively flatter (see the solid line in the graph below) than the standard normal distribution (the dashed line below). The shape of a particular t-distribution is determined by the degrees of freedom (df) $= n - 1$. When the sample mean $\bar{X}$ and the sample standard deviation $S$ are obtained from the data of $n$ observations, it is often assumed that the test statistic

$$T = \frac{\bar{X} - \mu}{S/\sqrt{n}}$$

has the t-distribution with $n - 1$ degrees of freedom, where $\mu$ is the true population mean. If the true standard deviation $\sigma$ is known (and used in place of $S$), use df = +Inf (the infinity $\infty$).

We can calculate the critical region corresponding to the level $\alpha$:

Level (p-value) | Critical region
--- | ---
Right-tailed | $T \ge t_{\alpha,\,df}$
Two-sided | $\lvert T \rvert \ge t_{\alpha/2,\,df}$
Left-tailed | $T \le -t_{\alpha,\,df}$

The appropriateness of this calculation can be ensured if (a) the sample distribution is approximately normal (the use of a QQ plot is recommended), or (b) the sample size is adequately large (as a rule of thumb it is desirable to have $n \ge 30$). Conversely, when the statistic is given, we can find the corresponding $\alpha$ so that the value falls on the boundary of the critical region, and call it the p-value.
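The same quantities can be computed numerically; here is a short sketch using scipy.stats.t, where the sample numbers, the hypothesized mean mu0, and alpha are arbitrary examples.

```python
# Critical values and p-values for the t statistic described above, using
# scipy.stats.t. The sample numbers, mu0, and alpha are arbitrary examples.
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3])     # made-up sample
n = len(x)
mu0 = 5.0                                        # hypothesized population mean
T = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))

alpha, df = 0.05, n - 1
print("right-tailed critical value:", stats.t.ppf(1 - alpha, df))
print("two-sided critical value:  ", stats.t.ppf(1 - alpha / 2, df))
print("two-sided p-value:         ", 2 * stats.t.sf(abs(T), df))
```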
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661611318588257, "perplexity": 492.567463908472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811830.17/warc/CC-MAIN-20180218100444-20180218120444-00730.warc.gz"}
https://www.physicsforums.com/threads/noob-needs-help.55748/
# Noob needs help

1. Dec 7, 2004

### zemoth

I'm an Aussie in yr 10 pathway 1 physics, so I'm no Newton. However, does Newton's 1st law apply to a satellite and, if so, how? (You don't have to answer the how.) Remember, please keep it fairly simple. :yuck:

Last edited: Dec 7, 2004

2. Dec 7, 2004

### dextercioby

Newton's first law applies only to bodies which are either isolated, meaning the interaction with other bodies is absent or may be considered negligible, or in interaction with other bodies such that the vector sum of all forces applied on the body by external bodies is nil. It's the case for the satellite, where there are only 2 forces acting on the satellite: the centrifugal force and the (earth's attraction) gravitational force. Since the satellite goes round on a stable orbit, you might say that it is in equilibrium and so the first principle would apply. So the satellite would move around earth at a stable velocity. Those satellites are called "geostationary", since their angular velocity is the same as earth's.

Daniel.

3. Dec 7, 2004

### Evil_Kyo

No, Newton's first law doesn't apply to satellites. The first law states that if the force applied over a body is null (or the sum of all forces applied is null), that body remains at rest or moves in a straight line with constant speed. In this case our satellite is orbiting, so there's a net force that keeps it moving that way. That's gravity. It's a common misconception to think of centrifugal force as a real force acting over the body. If it were a centrifugal force that canceled the effect of gravity, what would prevent the satellite from moving off in a straight line? I hope my explanation is clear to you.

4. Dec 7, 2004

### dextercioby

My mistake, sorry. The centrifugal force is an inertial force (hence the name) and it appears only in the earth's frame of reference. In the satellite's (which provides an inertial frame of reference) there are no inertial forces. The only one that acts is gravity. Silly me... :yuck: I'm ashamed of myself...

5. Dec 8, 2004

### zemoth

Gee Wizz Thnx. Hey, thanks! That just saved me around 4 nights of constant migraines and sleep deprivation.
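A quick numeric version of Evil_Kyo's point may help here: for a circular orbit the net force is not zero; gravity alone supplies exactly the required centripetal force. This sketch uses a geostationary orbit as the example; the satellite mass is an arbitrary choice.

```python
# Numeric version of the point above: for a circular orbit the net force is
# not zero; gravity alone supplies exactly the centripetal force m*v^2/r.
# Geostationary orbit is used as the example; the mass is arbitrary.
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.0              # sidereal day, s
m = 1000.0               # satellite mass, kg

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)    # orbit radius (Kepler)
v = 2 * math.pi * r / T                          # orbital speed

print(f"radius  = {r / 1e3:.0f} km")             # about 42164 km
print(f"gravity = {GM * m / r**2:.2f} N")
print(f"needed centripetal force = {m * v**2 / r:.2f} N")  # the same number
```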
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8144115805625916, "perplexity": 1763.790338595635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512332.36/warc/CC-MAIN-20181019062113-20181019083613-00040.warc.gz"}
http://fun-phys.blogspot.kr/2016/05/principle-of-superposition-simulation.html
# Principle of Superposition Simulation

#### Quiz

When two waves overlap, the displacement of the medium is the sum of the displacements of the two individual waves. This is the principle of __________.

A. constructive interference
B. destructive interference
C. standing waves
D. superposition

The displacement due to two waves that pass through the same point in space is the algebraic sum of displacements of the two waves.

#### Principle of Superposition Simulation

When the next simulation is not visible, please refer to the following link.

Interference of Waves

On the surface of a lake on a windy day, you will see many complicated wave motions (Figure 1). You will not see a simple wave moving in a particular direction. The water surface appears this way because of the action of many thousands of waves from various directions and with various amplitudes and wavelengths. When waves meet, a new wave is generated in a process called interference. In this section, you will learn what happens when two waves meet and interfere with each other.

Figure 1 The surface waves on this lake are the result of the interference of thousands of waves of different wavelengths and amplitudes. Most of these waves are caused by the wind, but they are also caused by passing boats and ships.

Wave Interference at the Particle Level

Waves are the result of particle vibrations, and the particles in a medium are connected by forces that behave like small springs. Wave interference is influenced by the behaviour of the particles. Wave motion is efficient: in most media, little energy is lost as waves move. When waves come together, this efficiency continues. When one wave passes in the vicinity of a particle, the particle moves up and down in an oval path, which allows the wave to move in a specific direction, as shown in Figure 2(a). When a second wave is also present, the vibration of the particle is modified. The oval motion of the particles stimulates the next particle in the direction of the wave’s motion to begin vibrating. When two (or more) waves come together, as shown in Figure 2(b), the particle moves up and down rather than in an oval path because the speeds of the combined waves cancel each other out. The motion of the particle allows the waves to pass through each other. The waves are not modified, so the amount of energy stays the same. Thus, when two or more waves interact, the particle vibration is such that the direction and energy of each wave are preserved. After the waves have passed through each other, none of their characteristics—wavelength, frequency, and amplitude—change.

Figure 2 (a) The basic motion of a vibrating particle in a travelling wave. (b) When two waves meet, the particle motion is more up and down. The wave characteristics are unchanged after the waves pass through each other.

Constructive and Destructive Interference

When two waves meet, the forces on their particles are added together. If the two waves are in phase (the phase shift between them is zero), then the resulting amplitude is the sum of the two original amplitudes. This is called the principle of superposition: the resulting amplitude of two interfering waves is the sum of the individual amplitudes. Constructive interference occurs when two or more waves combine to form a wave with an amplitude greater than the amplitudes of the individual waves (Figure 3).
Destructive interference occurs when two or more waves that are out of phase combine to form a wave with an amplitude less than at least one of the initial waves (Figure 4). Figure 3 Constructive interference. Two wave pulses approach each other on a rope. Notice how the amplitudes of the two waves add together. Notice, also, how the waves are unchanged after they pass through each other. The amplitude during interference in (c) is the sum of the amplitudes of the two waves. Figure 4 Destructive interference. When two wave pulses that are out of phase come together, the resulting amplitude is reduced. Technology Using Interference of Waves Noise-cancelling headphones, shown in Figure 5, use the concept of destructive interference. The electronics inside the headphones generate a wave that is out of phase with sound waves in the exterior environment. This out-of-phase wave is played inside the headset. Using destructive interference, the outside noise is cancelled. Such devices allow users to listen to music at lower volume levels, reducing potential damage to their hearing. Figure 5 A detector inside the headset determines what noise there is, and a speaker in the headphone emits the out-of-phase wave.
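A tiny numerical illustration of the superposition principle described above: two Gaussian pulses on a string, added point by point. The shapes and positions are made up for this sketch.

```python
# Tiny numerical illustration of superposition: two Gaussian pulses on a
# string, added point by point. Shapes and positions are made up.
import numpy as np

x = np.linspace(0, 10, 501)
pulse = lambda x0, a: a * np.exp(-((x - x0) ** 2) / 0.5)

y1 = pulse(4.8, 1.0)      # one pulse (snapshot as the pulses overlap)
y2 = pulse(5.2, 1.0)      # second pulse, in phase    -> constructive
y3 = pulse(5.2, -1.0)     # second pulse, inverted    -> destructive

print("y1 alone, max:", y1.max())                   # 1.0
print("in phase, max:", (y1 + y2).max())            # ~1.85: larger than either
print("out of phase, max:", np.abs(y1 + y3).max())  # ~0.46: smaller than either
```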
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9083734154701233, "perplexity": 422.9789131730453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607648.39/warc/CC-MAIN-20170523163634-20170523183634-00169.warc.gz"}
http://www.physicsforums.com/showthread.php?p=4158662
# Understanding precipitation reactions

by ASidd

Tags: precipitate

P: 73 "When the precipitating agent is added to the solution it causes it to become supersaturated. This starts the process of nucleation, where ions and molecules clump together to form small particles. A small amount of nucleation is necessary to start precipitation. However, as the reaction progresses we need particle growth to occur rather than nucleation. Particle growth results in the formation of large 3-dimensional crystals. It is an opposite process to nucleation, which favors supersaturated conditions. Thus to increase particle growth we must decrease supersaturation."

The above paragraph is what I wrote for an assignment. Can somebody guide me as to whether it is correct or incorrect?

P: 255 What exactly is your assignment? I honestly didn't understand why particle growth and nucleation are opposite processes and why it is that nucleation favors supersaturated conditions.

P: 73
Quote by Amok: What exactly is your assignment? I honestly didn't understand why particle growth and nucleation are opposite processes and why it is that nucleation favors supersaturated conditions.

Nucleation is when initial atoms and ions join together to make small molecules. Then further atoms and molecules attach to these, and then even more, so the particle size grows larger. If there are many nucleation sites, then the dissolved material is spread across too many particles, so the particles aren't as large.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8880539536476135, "perplexity": 1608.845647004141}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657127285.44/warc/CC-MAIN-20140914011207-00146-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://www.physicsforums.com/threads/is-this-the-hup-or-diffraction.671593/
# Is this the HUP or diffraction?

1. Feb 13, 2013

### mrspeedybob

Consider the following video... Am I really seeing a demonstration of the HUP or am I just seeing diffraction? In this instance, is it just different models with different terminology describing the same thing? If so, what scenario would require the HUP? What can you not model with wave equations that can be modeled with the HUP?

2. Feb 13, 2013

### ZapperZ

Staff Emeritus

The diffraction that you see is a clear example of the HUP at work! See this: https://www.physicsforums.com/blog.php?b=4364 [Broken]

Zz.

Last edited by a moderator: May 6, 2017
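One way to see why the two descriptions agree, at least at the order-of-magnitude level, is to compare the single-slit angular spread from wave optics with an uncertainty-principle estimate. In this sketch the two come out identical by construction; the wavelength and slit width are arbitrary illustrative numbers.

```python
# Order-of-magnitude comparison behind the reply above: the single-slit
# angular spread from wave optics and from an uncertainty-principle estimate
# agree (here, by construction, exactly). Numbers are arbitrary choices.
import math

h = 6.62607015e-34     # Planck constant, J*s
lam = 650e-9           # wavelength of a red laser pointer, m
a = 50e-6              # slit width, m

theta_wave = lam / a          # first diffraction minimum: sin(theta) = lambda/a
p = h / lam                   # photon momentum (de Broglie relation)
dp = h / a                    # confinement to dy ~ a  =>  dp ~ h/a
theta_hup = dp / p            # angular spread from the HUP-style estimate

print(theta_wave, theta_hup)  # both equal lambda/a: same physics, two languages
```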
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8252115845680237, "perplexity": 2505.9099128238063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805417.47/warc/CC-MAIN-20171119061756-20171119081756-00445.warc.gz"}
https://infoscience.epfl.ch/record/134484
Infoscience

Journal article

# Distributed Sampling of Signals Linked by Sparse Filtering: Theory and Applications

We study the distributed sampling and centralized reconstruction of two correlated signals, modeled as the input and output of an unknown sparse filtering operation. This is akin to a Slepian-Wolf setup, but in the sampling rather than the lossless compression case. Two different scenarios are considered: in the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in what we call almost sure reconstruction, we allow a small set (of measure zero) of unrecoverable signals. We derive achievability bounds on the number of samples needed for both scenarios. Our results show that only in the almost sure setup can we effectively exploit the signal correlations to achieve gains in sampling efficiency. In addition to the above theoretical analysis, we propose an efficient and robust distributed sampling and reconstruction algorithm based on annihilating filters. Finally, we evaluate the performance of our method in one synthetic scenario and two practical applications: distributed audio sampling in binaural hearing aids and efficient estimation of room impulse responses. The numerical results confirm the effectiveness and robustness of the proposed algorithm in both synthetic and practical setups.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8964334726333618, "perplexity": 521.4290339869752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122726.55/warc/CC-MAIN-20170423031202-00449-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/are-mathematical-manipulations-admisible-if-integral-are-divergents.426626/
# Are mathematical manipulations admissible if integrals are divergent?

1. Sep 5, 2010

### zetafunction

Are formal manipulations of divergent integrals admissible whenever the integrals are divergent ($$\infty$$)?

I mean, if I have a 4-dimensional integral

$$\int_{R^{4}}dxdydzdt F(x,y,z,t)$$

why can we make a change of variable to polar coordinates? Or, for example, if we have an UV divergent integral

$$\int_{0}^{\infty}dxx^{4}$$

then by means of a change of variable $$x= 1/y$$ this integral is IR divergent:

$$\int_{0}^{\infty}dyy^{-6}$$

Or if I have the divergent integral

$$\int_{0}^{\infty}\int_{0}^{\infty}dxdy \frac{(xy)^{2}}{x^{2}+y^{2}+1}$$

this is an overlapping divergence, but if I change to polar coordinates then I should only care about

$$\int_{0}^{\infty}dr \frac{r^{3}}{r^{2}+1}$$

which is just a one-dimensional integral.

2. Sep 5, 2010

### tom.stoer

Re: Are mathematical manipulations admissible if integrals are divergent?

I think that all manipulations have to be backed up by some cutoff procedure. Manipulating a divergent integral as it stands is just mathematical nonsense. Introducing a cutoff, doing some manipulations, and concluding that the two well-defined integrals are identical is fine. In this sense it may be reasonable to conclude that two divergent integrals are "identical", namely because their finite counterparts are related.
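The cutoff point can be made concrete with a small symbolic check: once explicit cutoffs are introduced, the two integrals related by x = 1/y are literally equal, so the change of variables is a statement about the regularized versions. The cutoff symbols here are arbitrary names.

```python
# Sympy check of the cutoff point made above: with explicit cutoffs the two
# integrals related by x = 1/y are literally equal, so the manipulation is
# meaningful for the regularized versions. Cutoff symbols are arbitrary names.
import sympy as sp

eps, Lam = sp.symbols("epsilon Lambda", positive=True)
x, y = sp.symbols("x y", positive=True)

I1 = sp.integrate(x**4, (x, eps, Lam))          # (Lambda**5 - epsilon**5)/5
I2 = sp.integrate(y**(-6), (y, 1/Lam, 1/eps))   # same thing after x = 1/y
print(sp.simplify(I1 - I2))                     # 0
```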
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9914739727973938, "perplexity": 759.0069004031488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823348.23/warc/CC-MAIN-20181210144632-20181210170132-00436.warc.gz"}
https://www.physicsforums.com/threads/time-ordering-in-qft.729088/
# Time ordering in QFT

1. Dec 18, 2013

### aaaa202

I have asked this question once, but no one seemed to notice it, so I'll try again. In my book the time-ordering operator is used to rewrite an operator product:

U(β,τ)A(τ)U(τ,τ')B(τ') U(τ',0) = T_τ(U(β,0)A(τ)B(τ'))

To refresh your memory, the time-ordering operator T_τ orders operators according to time such that:

T_τ(A(τ)B(τ')) = A(τ)B(τ') for τ>τ' and B(τ')A(τ) for τ'>τ

And the operator U(t,t') is a unitary operator that propagates a state from t' to t and has the property that:

U(t,t')=U(t,t'')U(t'',t')

I am still unsure how the rewriting is done though. One key ingredient is to use the property of the unitary above to write:

U(0,β)=U(0,τ')U(τ',τ)U(τ,β)

And I think the idea is then to insert this into the expression and use time ordering, but I am not sure how.

2. Dec 18, 2013

### Avodyne

There is an implicit assumption here that $\beta>\tau>\tau'>0$. Start with
$$T[U(\beta,0)A(\tau)B(\tau')]$$
Then substitute in
$$U(\beta,0)=U(\beta,\tau)U(\tau,\tau')U(\tau',0)$$
to get
$$T[U(\beta,\tau)U(\tau,\tau')U(\tau',0)A(\tau)B(\tau')]$$
Now rearrange the operators so that time labels decrease as you go left to right:
$$T[U(\beta,\tau)A(\tau)U(\tau,\tau')B(\tau')U(\tau',0)]$$
The labels are now in time-order, so the time-ordering symbol can be dropped:
$$U(\beta,\tau)A(\tau)U(\tau,\tau')B(\tau')U(\tau',0)$$
QED.

3. Dec 19, 2013

### aaaa202

Hmm okay, I thought it was something like that, but I am still unsure though. Which time do you assign to the operator U(t1,t2)? It propagates a state from t1 to t2, so it is not really a function of one time - or am I missing something?

4. Dec 19, 2013

### Qubix

Last edited by a moderator: May 6, 2017

5. Dec 19, 2013

### Avodyne

You can think of it as a product of many operators at closely spaced times, and break it up as needed; this is what I did above.

6. Dec 19, 2013

### aaaa202

So I should basically assign to U(t1,t2) a value of time between t1 and t2?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8724839091300964, "perplexity": 1532.3492957983688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110485.9/warc/CC-MAIN-20170822065702-20170822085702-00175.warc.gz"}
http://www.ipam.ucla.edu/abstract/?tid=6891&pcode=RSWS2
## Random Averaging

#### Eli Ben-Naim, Los Alamos National Laboratory

Averaging processes underlie collisions in granular media as well as interactions in social and opinion dynamics. In the averaging process there are infinitely many interacting "particles", each characterized by a single variable: two particles are chosen at random and both are set to their average. We study averaging processes using kinetic theory and find a number of interesting phenomena. 1) Multiscaling: the moments of the distribution exhibit multiscaling, so that knowledge of the average behavior is not sufficient to characterize the probability distribution function. 2) Extremal selection: the characteristic scale may obey an extremal selection principle, as in nonlinear traveling waves. 3) Patterns and bifurcations: when the averaging process excludes particles that exceed a threshold, the system organizes into clusters. These clusters are patterned, and the number of clusters undergoes a series of bifurcations as a function of the initial conditions. 4) Synchronization: when the averaged quantity represents a phase and noise is present, there is a phase transition from an ordered state, where the particles are aligned, to a disordered state, where the particles are uncorrelated. The behavior is nicely related to a special partition of the integers.
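The two-particle averaging rule is easy to simulate directly. Below is a minimal Monte Carlo sketch (my own illustration with a finite population, not the kinetic-theory analysis of the abstract): the mean is conserved up to rounding, while different moments decay at different rates, hinting at the multiscaling in point 1.

```python
import random

def averaging_process(values, steps, seed=0):
    """Repeatedly pick two distinct particles and set both to their average."""
    rng = random.Random(seed)
    values = list(values)
    n = len(values)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        avg = 0.5 * (values[i] + values[j])
        values[i] = values[j] = avg
    return values

def moment(values, k):
    """k-th moment about the mean."""
    m = sum(values) / len(values)
    return sum((v - m) ** k for v in values) / len(values)

rng = random.Random(1)
v0 = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
vT = averaging_process(v0, steps=40_000)

print(sum(v0) / len(v0), "->", sum(vT) / len(vT))   # mean is conserved
print(moment(v0, 2), "->", moment(vT, 2))           # variance decays...
print(moment(v0, 4), "->", moment(vT, 4))           # ...4th moment decays faster
```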
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8708380460739136, "perplexity": 899.8915055782833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826642.70/warc/CC-MAIN-20171023202120-20171023222120-00766.warc.gz"}
http://tex.stackexchange.com/questions/19421/how-can-i-use-scrpage2-with-scrartcl
# How can I use scrpage2 with scrartcl?

For displaying the current section's title in the page header, I usually use the scrpage2 package with scrbook. However, now I would like to write a report with scrartcl, and my settings are not supported anymore: the section is not displayed. Here is the scrpage2 code that works with scrbook:

    \usepackage[%
      % footsepline,               %% Separation line above the footer
      markuppercase
    ]{scrpage2}
    \lefoot{}                      %% Bottom left on even pages
    \lofoot{}                      %% Bottom left on odd pages
    \refoot{}                      %% Bottom right on even pages
    \rofoot{}                      %% Bottom right on odd pages
    \cfoot{}                       %% Bottom center
    \lehead{\bfseries\pagemark}    %% Top left on even pages
    \rohead{\bfseries\pagemark}    %% Top right on odd pages

I found this resource (in German), but I do not understand the solution from that. How can I make the packages work with each other?

- The FAQ entry you mention points to section 4.1.2 of the KOMA-Script documentation (English); more information can be found there. – diabonas May 30 '11 at 10:23

Some more useful information would have been nice. For the present you can test this:

    \documentclass[english,twoside]{scrartcl}
    \usepackage[T1]{fontenc}
    \usepackage[utf8]{inputenc}
    \usepackage[%
      automark,
      % footsepline,               %% Separation line above the footer
      markuppercase
    ]{scrpage2}
    \usepackage{babel}
    \usepackage{blindtext}
    \lehead{\bfseries\pagemark}    %% Top left on even pages
    \rohead{\bfseries\pagemark}    %% Top right on odd pages
    \automark[subsection]{section}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595006704330444, "perplexity": 4595.894085482513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00138-ip-10-164-35-72.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/33787/how-do-i-create-separate-columns-in-latex-without-text-flow
# How do I create separate columns in LaTeX without text flow?

I want to make a document with two columns, with each column separate. In other words, I have two separate columns in which I can type, without text flowing from one column to another. My desired use for this is to create 'Questions' in the left column and 'Answers' in the right column. If there were also a way to align the questions with the corresponding answers, that would be great.

- Welcome to TeX.sx! Your question was migrated here from Stack Overflow. Please register on this site, too, and make sure that both accounts are associated with each other; otherwise you won't be able to comment on or accept answers or edit your question. – Werner Nov 6 '11 at 0:54
- Did you look at Independent left and right columns? – Ignasi Nov 8 '11 at 9:47

## migrated from stackoverflow.com Nov 5 '11 at 23:39

Parcolumns should work, but you need to wrap each question and answer in an environment to get good spacing:

    \documentclass{article}
    \usepackage{parcolumns}
    \newcommand{\question}[1]{\colchunk{\begin{description}\item[Q:]{#1}%
      \end{description}}}
    \newcommand{\answer}[1]{\colchunk{\begin{description}\item[A:]{#1}%
      \end{description}}\colplacechunks}
    \begin{document}
    \begin{parcolumns}[colwidths={1=2in},nofirstindent]{2}
    %
    \question{Lorem ipsum dolor?}
    \answer{Nulla facilisi. Proin tortor turpis, sodales quis porttitor in.}
    %
    \question{Aliquam condimentum lectus et mauris elementum vitae commodo
      libero aliquet. Integer quis lectus nec nunc viverra tempus?}
    \answer{Lorem ipsum dolor sit amet.}
    %
    \question{Donec vitae nisi eu massa?}
    \answer{Lorem ipsum dolor sit amet.}
    %
    \end{parcolumns}
    \end{document}

I've recently run across the paracol package on CTAN. It has some very nice features for synchronizing items in the columns. Here's a working example:

    \documentclass[11pt]{article}
    \usepackage[margin=1in]{geometry}
    \usepackage{paracol}
    \title{Q \& A example}
    \author{John Q. Public}
    \begin{document}
    \maketitle
    \begin{paracol}{2}[\section{Part I}]
    \subsection*{Question: Age of earth}
    What is the estimated age of the earth? Discuss geophysical or
    astronomical evidence that supports this estimate.
    \switchcolumn
    \subsection*{Answer: Age of earth}
    Current estimates suggest the age of the earth is approx 4.5 BY. The
    key evidence for this estimate includes...
    \switchcolumn
    \subsection*{Question: Life on earth}
    What is the current minimum estimate for how long life has existed on
    earth? Discuss the paleontological evidence in support of this
    estimate...
    \switchcolumn
    \subsection*{Answer: Life on earth}
    Current estimates place the earliest signs of life in rocks of approx
    3.8 BYA...
    \end{paracol}
    \end{document}

An ugly solution could also be the following:

    \documentclass{report}
    \begin{document}
    \begin{tabular}{p{0.5\textwidth}p{0.5\textwidth}}
    Question 1 & Question 2 \\
    \end{tabular}
    \end{document}

It does the wrapping because I set the max width of each column, and the alignment because, well, it's a table.

Not sure if I have completely grasped exactly what you need to do... If you do not need a continuous flow of related text over page boundaries, and are using an editor like TeXworks (TeX User Group TeXworks editor, development version) or similar that lets you work with the pdf previewer (Ctrl-click in the editor text or pdf), I've used minipage environments for things like that in the past. It helps also by keeping the distinct text segments positionally related to each other, and the areas can even be positioned, and of different widths. There are also minipage options available.

    \noindent\begin{minipage}{12cm}
    blah blah
    \end{minipage}
    \hspace*{2mm}\begin{minipage}{5.1cm}
    Blah blah
    \end{minipage}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8055579662322998, "perplexity": 3985.305859904187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703317384/warc/CC-MAIN-20130516112157-00064-ip-10-60-113-184.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Binary_tetrahedral_group
# Binary tetrahedral group

[Figure: the regular complex polytope 3{6}2 represents the Cayley diagram for the binary tetrahedral group, with each red and blue triangle a directed subgraph.[1]]

In mathematics, the binary tetrahedral group, denoted 2T or ⟨2,3,3⟩, is a certain nonabelian group of order 24. It is an extension of the tetrahedral group T or (2,3,3) of order 12 by a cyclic group of order 2, and is the preimage of the tetrahedral group under the 2:1 covering homomorphism Spin(3) → SO(3) of the special orthogonal group by the spin group. It follows that the binary tetrahedral group is a discrete subgroup of Spin(3) of order 24. The complex reflection group named 3(24)3 by G. C. Shephard, or 3[3]3 by Coxeter, is isomorphic to the binary tetrahedral group.

The binary tetrahedral group is most easily described concretely as a discrete subgroup of the unit quaternions, under the isomorphism $\operatorname{Spin}(3)\cong \operatorname{Sp}(1)$, where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.)

## Elements

[Figure: Cayley graph of SL(2,3)]

Explicitly, the binary tetrahedral group is given as the group of units in the ring of Hurwitz integers. There are 24 such units, given by

$\{\pm 1,\pm i,\pm j,\pm k,\tfrac{1}{2}(\pm 1\pm i\pm j\pm k)\}$

with all possible sign combinations. All 24 units have absolute value 1 and therefore lie in the unit quaternion group Sp(1). The convex hull of these 24 elements in 4-dimensional space forms a convex regular 4-polytope called the 24-cell.

## Properties

The binary tetrahedral group, denoted by 2T, fits into the short exact sequence

$1\to \{\pm 1\}\to 2T\to T\to 1.$

This sequence does not split, meaning that 2T is not a semidirect product of {±1} by T. In fact, there is no subgroup of 2T isomorphic to T.

The binary tetrahedral group is the covering group of the tetrahedral group. Thinking of the tetrahedral group as the alternating group on four letters, $T\cong A_4$, we thus have the binary tetrahedral group as the covering group $2T\cong \widehat{A_4}$.

The center of 2T is the subgroup {±1}. The inner automorphism group is isomorphic to $A_4$, and the full automorphism group is isomorphic to $S_4$.[2]

[Figure: left multiplication by −ω, an order-6 element; the gray, blue, purple, and orange balls and arrows constitute 4 orbits (two arrows are not depicted); ω itself is the bottommost ball: ω = (−ω)(−1) = (−ω)⁴]

The binary tetrahedral group can be written as a semidirect product

$2T=Q\rtimes \mathbb{Z}_3$

where Q is the quaternion group consisting of the 8 Lipschitz units and $\mathbb{Z}_3$ is the cyclic group of order 3 generated by ω = −1/2(1 + i + j + k). The group $\mathbb{Z}_3$ acts on the normal subgroup Q by conjugation. Conjugation by ω is the automorphism of Q that cyclically rotates i, j, and k.

One can show that the binary tetrahedral group is isomorphic to the special linear group SL(2,3), the group of all 2×2 matrices over the finite field F3 with unit determinant, with this isomorphism covering the isomorphism of the projective special linear group PSL(2,3) with the alternating group A4.
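As a concrete check on the "Elements" section, here is a short self-contained Python sketch (illustrative code of mine, not from any group-theory library) verifying that the 24 Hurwitz units are closed under quaternion multiplication, and that ω = −(1+i+j+k)/2 has order 3 while −ω has order 6, consistent with the figure caption above.

```python
from itertools import product

# Quaternions as (a, b, c, d) tuples standing for a + b*i + c*j + d*k.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

ONE = (1.0, 0.0, 0.0, 0.0)

# The 24 Hurwitz units: the 8 Lipschitz units +-1, +-i, +-j, +-k,
# plus the 16 elements (+-1 +- i +- j +- k)/2.
units = {tuple(s if n == m else 0.0 for m in range(4))
         for n in range(4) for s in (1.0, -1.0)}
units |= {tuple(s / 2 for s in signs)
          for signs in product((1.0, -1.0), repeat=4)}
assert len(units) == 24

# Closure under multiplication, so they form a subgroup of Sp(1).
assert all(qmul(p, q) in units for p in units for q in units)

def order(q):
    x, n = q, 1
    while x != ONE:
        x, n = qmul(x, q), n + 1
    return n

omega = (-0.5, -0.5, -0.5, -0.5)          # omega = -(1+i+j+k)/2
print(order(omega))                        # 3: omega generates the Z_3 factor
print(order(tuple(-c for c in omega)))     # 6: -omega is the order-6 element
```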
### Presentation

The group 2T has a presentation given by

$\langle r,s,t \mid r^2=s^3=t^3=rst\rangle$

or equivalently,

$\langle s,t \mid (st)^2=s^3=t^3\rangle.$

Generators with these relations are given by

$s=\tfrac{1}{2}(1+i+j+k), \qquad t=\tfrac{1}{2}(1+i+j-k).$

### Subgroups

The quaternion group consisting of the 8 Lipschitz units forms a normal subgroup of 2T of index 3. This group and the center {±1} are the only nontrivial normal subgroups. All other subgroups of 2T are cyclic groups generated by the various elements, with orders 3, 4, and 6.[3]

## Higher dimensions

Just as the tetrahedral group generalizes to the rotational symmetry group of the n-simplex (as a subgroup of SO(n)), there is a corresponding higher binary group which is a 2-fold cover, coming from the cover Spin(n) → SO(n).

The rotational symmetry group of the n-simplex can be considered as the alternating group on n + 1 points, A_{n+1}, and the corresponding binary group is a 2-fold covering group. For all higher dimensions except A_6 and A_7 (corresponding to the 5-dimensional and 6-dimensional simplices), this binary group is the covering group (maximal cover) and is superperfect, but for dimensions 5 and 6 there is an additional exceptional 3-fold cover, and the binary groups are not superperfect.

## Usage in theoretical physics

The binary tetrahedral group was used in the context of Yang–Mills theory in 1956 by Chen Ning Yang and others.[4] It was first used in flavor physics model building by Paul Frampton and Thomas Kephart in 1994.[5] In 2012 it was shown[6] that a relation between two neutrino mixing angles, derived[7] by using this binary tetrahedral flavor symmetry, agrees with experiment.

## Notes

1. ^ Coxeter, Complex Regular Polytopes, p. 109, Fig. 11.5E
2. ^ "Special linear group: SL(2,3)". groupprops.
3. ^ $SL_2(\mathbb{F}_3)$ on GroupNames
4. ^ Case, E. M.; Robert Karplus; C. N. Yang (1956). "Strange Particles and the Conservation of Isotopic Spin". Physical Review. 101: 874–876. Bibcode:1956PhRv..101..874C. doi:10.1103/PhysRev.101.874.
5. ^ Frampton, Paul H.; Thomas W. Kephart (1995). "Simple Nonabelian Finite Flavor Groups and Fermion Masses". International Journal of Modern Physics. A10: 4689–4704. Bibcode:1995IJMPA..10.4689F. doi:10.1142/s0217751x95002187.
6. ^ Eby, David A.; Paul H. Frampton (2012). "Nonzero theta(13) signals nonmaximal atmospheric neutrino mixing". Physical Review. D86: 117304. Bibcode:2012PhRvD..86k7304E. doi:10.1103/physrevd.86.117304.
7. ^ Eby, David A.; Paul H. Frampton; Shinya Matsuzaki (2009). "Predictions for neutrino mixing angles in a T′ Model". Physics Letters. B671: 386–390. Bibcode:2009PhLB..671..386E. doi:10.1016/j.physletb.2008.11.074.

## References

• Conway, John H.; Smith, Derek A. (2003). On Quaternions and Octonions. Natick, Massachusetts: AK Peters, Ltd. ISBN 1-56881-134-9.
• Coxeter, H. S. M. & Moser, W. O. J. (1980). Generators and Relations for Discrete Groups, 4th edition. New York: Springer-Verlag. ISBN 0-387-09212-9. (Section 6.5, The binary polyhedral groups, p. 68.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.910432755947113, "perplexity": 1257.180672038416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156460.64/warc/CC-MAIN-20180920101233-20180920121633-00178.warc.gz"}
https://www.omnicalculator.com/physics/kva
# kVA Calculator

By Kenneth Alambra. Last updated: Jul 13, 2021

This kVA calculator will help you determine the apparent power at a particular operating voltage and current. In the same manner, it also works as a volts-and-amps-to-kVA calculator. As an added feature, this tool can also function as a kVA-to-kW calculator to help you easily convert kVA to kW or kVA to watts. In this tool, you will learn what kVA means, the difference between kVA and kW, and how to calculate kVA using the apparent power formulas. Keep on reading to start learning.

## What does kVA mean?

Kilovolt-amps, abbreviated as kVA, is the typical unit of measure for what is called apparent power. Apparent power is the amount of electrical power produced by an electrical system at a particular applied voltage and current: multiplying the voltage across an electrical system by the current flowing through it gives its power. Since we measure voltage in volts and current in amperes, we can put these together to express power in units of volt-amperes. Although we already have the watt as a unit of power, we still use volt-amperes, for a reason that we'll discuss next.

## What is the difference between kVA and kW?

The main difference between kilovolt-amperes and kilowatts, or volt-amperes and watts, is the presence of a value called the power factor. The power factor is the ratio between the real power (measured in watts) and the apparent power. In other words, the power factor determines the amount of apparent power converted to real power. We can express this relationship in equation form, as shown below:

`power factor = real power / apparent power`

The value of the power factor (abbreviated to PF) depends on what kind of load is drawing power from the electrical system. Electrical systems like transformers, generators, pumps, and motors supply a wide range of loads, and we often do not know the power factor in advance, so we cannot rate these systems in watts. That is the reason why we use volt-amperes or kilovolt-amperes. In an ideal system, the power factor is `1` (or `100%`); in that case, the real power equals the apparent power. You can learn more about real power in our watts to amps calculator.

## How do I calculate kVA?

Now that we understand the significance of using kVA, let us learn how to calculate the kVA apparent power given an applied voltage and current. Estimating the apparent power is quite simple; however, the formula depends on the kind of power system. Below are the three apparent power formulas, one for each case:

• Single-phase power system: `S = I * V / 1000`
• 3-phase power system with line-to-line voltage: `S = √3 * I * VL-L / 1000`
• 3-phase power system with line-to-neutral voltage: `S = 3 * I * VL-N / 1000`

where:

• `S` - Apparent power in kVA;
• `I` - Current in amperes;
• `V` - Voltage in volts;
• `VL-L` - Line-to-line voltage in volts; and
• `VL-N` - Line-to-neutral voltage in volts.

## Sample volts and amps to kVA calculation

For our first example, let us consider a transformer drawing power from a single-phase 240-volt power source at a current of 10 amperes.
To determine the apparent power (`S1`) we can get from this transformer, we use the first equation in our list of apparent power formulas and substitute the given values as follows:

`S1 = I * V / 1000`
`S1 = 10 A * 240 V / 1000`
`S1 = 2400 VA / 1000 = 2.4 kVA`

We can now say that we can have at most `2.4 kVA` of apparent power from the electrical system we are considering.

If our power were delivered over a 3-phase power system with a line-to-line voltage of 240 V at a current of 10 amperes, we would use the second formula to find the new apparent power (`S2`), as shown below:

`S2 = √3 * I * VL-L / 1000`
`S2 = √3 * 10 A * 240 V / 1000`
`S2 = 4156.9219 VA / 1000 ≈ 4.157 kVA`

And that is how to calculate kVA with a given voltage and amperage for a particular system. 🙂

💡 With our kVA calculator, you can calculate multiple setups in no time. If you have the value of the power factor and want to find the electrical system's real power output, activate the `advanced mode` of our kVA calculator to display the kVA to kW calculator. Once you're there, simply input the needed values to convert kVA to kW or kVA to watts, whichever unit you want.

If you are wondering how much power you consume over any time duration, you can find out using our electricity cost calculator. Check it out to see how much appliances like your air conditioner and fan contribute to your electric bill.

## FAQ

### How do I convert kVA to amps?

1. Find the voltage of the system.
2. Then, multiply the apparent power in kVA by 1,000 to obtain a value in VA (volt-amps).
3. Finally, divide the VA value by the system's voltage in volts.

By doing the above steps, you will easily find the system's current in amps.

### What is the difference between kVA and kW?

The main difference between kVA and kW is the presence of a value called the power factor. Once the power factor is known, we can express the system's power output in kW. Without the power factor, it is only safe to rate an electrical system like a generator or a transformer in kVA. Using kVA indicates we are still talking about the system's potential or apparent power.

### How do I convert kVA to kW?

You can convert kVA to kW by multiplying your known kVA value by your electrical system's power factor. It is also good to remember that the converted kW will never be greater than the apparent power in kVA, since the power factor's value only ranges from 0 to 1.

### What does 500 kVA mean on my 500 kVA generator?

It means that your generator can supply up to 500 kVA of apparent power. The real power in kW that you can draw is at most 500 kW, and in practice less, depending on the power factor of your electrical load.

### What can I do with a 500 kVA generator?

You can do a lot with a 500 kVA generator; it can power far more than a single house with the typical household appliances, including a refrigerator, some water heaters, and even an air conditioner, all running simultaneously. If you only need to run the essential appliances at home intermittently, a 10 kVA generator is already sufficient.
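The three formulas and the kVA-to-kW conversion are straightforward to script. Here is a small Python sketch (function and variable names are my own, not from the calculator itself) reproducing the worked examples above:

```python
import math

def kva_single_phase(current_a, voltage_v):
    # S = I * V / 1000
    return current_a * voltage_v / 1000

def kva_three_phase_ll(current_a, voltage_ll_v):
    # S = sqrt(3) * I * V_LL / 1000
    return math.sqrt(3) * current_a * voltage_ll_v / 1000

def kva_three_phase_ln(current_a, voltage_ln_v):
    # S = 3 * I * V_LN / 1000
    return 3 * current_a * voltage_ln_v / 1000

def kva_to_kw(apparent_kva, power_factor):
    # Real power is the apparent power scaled by the power factor (0..1).
    return apparent_kva * power_factor

print(kva_single_phase(10, 240))    # 2.4 kVA, the first worked example
print(kva_three_phase_ll(10, 240))  # ~4.157 kVA, the second worked example
print(kva_to_kw(2.4, 0.8))          # 1.92 kW at an assumed power factor of 0.8
```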
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245887756347656, "perplexity": 1215.3466551100591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00548.warc.gz"}
http://www.ecnmag.com/news/2009/11/new-results-t2k?qt-video_of_the_day=0
News

# New Results From T2K

Tue, 11/24/2009 - 8:53am, Brookhaven National Laboratory

Contacts: Karen McNulty Walsh, (631) 344-8350, or Mona S. Rowe, (631) 344-5056

The following news release on a multinational long-baseline neutrino oscillation experiment located in Japan was issued by KEK, the High Energy Accelerator Research Organization in Tsukuba, Japan. The experiment uses superconducting corrector magnets made by physicists and engineers in the Magnet Division at the U.S. Department of Energy's Brookhaven National Laboratory with funding from the DOE Office of Science. For more information about Brookhaven's role in the research, contact Karen McNulty Walsh, [email protected], (631) 344-8350.

## New Results From T2K

### Tokai to Kamioka Long Baseline Neutrino Oscillation Experiment

November 24, 2009

Physicists from the Japanese-led multinational T2K neutrino collaboration announced today that over the weekend they detected the first neutrino events generated by their newly built neutrino beam at the J-PARC accelerator laboratory in Tokai, Japan. Protons from the 30-GeV Main Ring synchrotron were directed onto a carbon target, where their collisions produced charged particles called pions. These pions travelled through a helium-filled volume where they decayed to produce a beam of the elusive particles called neutrinos. These neutrinos then flew 200 metres through the earth to a sophisticated detector system capable of making detailed measurements of their energy, direction, and type.

The data from the complex detector system are still being analysed, but the physicists have seen at least 3 neutrino events, in line with the expectation based on the current beam and detector performance. This detection therefore marks the beginning of the operational phase of the T2K experiment, a 474-physicist, 12-nation collaboration to measure new properties of the ghostly neutrino.

Neutrinos interact only weakly with matter, and thus pass effortlessly through the earth (and mostly through the detectors!). Neutrinos exist in three types, called electron, muon, and tau, linked by particle interactions to their more familiar charged cousins like the electron. Measurements over the last few decades, notably by the Super-Kamiokande and KamLAND neutrino experiments in western Japan, have shown that neutrinos possess the strange property of neutrino oscillations, whereby one type of neutrino will turn into another as it propagates through space. Neutrino oscillations, which require neutrinos to have mass and therefore were not allowed in our previous theoretical understanding of particle physics, probe new physical laws and are thus of great interest in the study of the fundamental constituents of matter. They may even be related to the mystery of why there is more matter than anti-matter in the universe, and thus are the focus of intense study worldwide.

Precision measurements of neutrino oscillations can be made using artificial neutrino beams, as pioneered by the K2K neutrino experiment, in which neutrinos from the KEK laboratory were detected using the vast Super-Kamiokande neutrino detector near Toyama. T2K is a more powerful and sophisticated version of the K2K experiment, with a more intense neutrino beam derived from the newly built Main Ring synchrotron at the J-PARC accelerator laboratory. The beam was built by physicists from KEK in cooperation with other Japanese institutions and with assistance from the US, Canadian, UK and French T2K institutes. Prof.
Chang Kee Jung of Stony Brook University, Stony Brook, New York, leader of the US T2K project, said: "I am somewhat stunned by this seemingly effortless achievement considering the complexity of the machinery, the operation, and the international nature of the project. This is a result of strong support from the Japanese government for basic science, which I hope will continue, and the hard work and ingenuity of all involved. I am excited about more groundbreaking findings from this experiment in the near future."

The beam is aimed once again at Super-Kamiokande, which has been upgraded for this experiment with new electronics and software. Before the neutrinos leave the J-PARC facility their properties are determined by a sophisticated "near" detector, partly based on a huge magnet donated from CERN, where it had earlier been used for neutrino experiments (and for the UA1 experiment, which won the Nobel Prize for the discovery of the W and Z bosons, which are the basis of neutrino interactions), and it is this detector which caught the first events. The first neutrino events were detected in a specialized detector, called INGRID, whose purpose is to determine the neutrino beam's direction and profile.

Further tests of the T2K neutrino beam are scheduled for December, and the experiment plans to begin production running in mid-January. Another major milestone should be observed soon after: the first observation of a neutrino event from the T2K beam in the Super-Kamiokande detector. Running will continue until the summer, by which time the experiment hopes to have made the most sensitive search yet achieved for a so-far unobserved critical neutrino oscillation mode dominated by oscillations between all three types of neutrinos. In the coming years this search will be improved even further, with the hope that the 3-mode oscillation will be observed, allowing measurements to begin comparing the oscillations of neutrinos and anti-neutrinos, probing the physics of matter/anti-matter asymmetry in the neutrino sector.

The T2K collaboration consists of 474 physicists from 67 institutes in 12 countries (Japan, South Korea, Canada, the United States, the United Kingdom, France, Spain, Italy, Switzerland, Germany, Poland, and Russia). The experiment consists of a new neutrino beam using the recently constructed 30 GeV synchrotron at the J-PARC laboratory in Tokai, Japan, a set of near detectors constructed 280 m from the neutrino production target, and the Super-Kamiokande detector in western Japan. The U.S. participation in the T2K experiment is supported by the U.S. Department of Energy. The U.S. collaboration consists of 60 physicists from 8 institutions (Brookhaven National Laboratory, University of Colorado, Boulder, Colorado State University, Louisiana State University, University of Pittsburgh, University of Rochester, Stony Brook University, and University of Washington).

Japan: Dr. Takashi Kobayashi, KEK, [email protected], phone: +81-29-864-5414.
USA: Prof. Chang Kee Jung, Stony Brook University, [email protected], phone: +1 (631) 632-8108, (631) 474-4563 (h), (631) 707-2018 (c).

Number: 09-1035 | BNL Media & Communications Office
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8608487844467163, "perplexity": 2301.6595459428704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900024.23/warc/CC-MAIN-20141030025820-00211-ip-10-16-133-185.ec2.internal.warc.gz"}
http://www.answers.com/topic/gram
# What is a gram?

The gram is a metric unit of mass, equal to one thousandth of a kilogram (the SI base unit of mass). It was historically defined as the mass of 1 cm³ of pure water at the temperature of melting ice; equivalently, one gram is one-twelfth of the mass of one mole of carbon-12 atoms, since a mole of carbon-12 has a mass of 12 grams.

# What are grams?

Grams, like kilograms, are units for measuring the mass of an object.

# How many grams make a gram?

One. The prefix "kilo" (symbol k) in the International System of Units (SI) and other systems denotes 10^3, i.e. one thousand; it is a kilogram that contains 1,000 grams.

# What is larger, a gram or a kilogram?

A kilogram. Since "kilo" denotes 10^3, there are 1,000 grams in a kilogram.

# 3.3 grams to kilograms?

0.0033 kilograms (divide by 1,000).

# What is 400 grams in kilograms?

0.4 kilograms (divide by 1,000).

# How many grams in 1.75 grams?

There are 1.75 grams in 1.75 grams.

# How many picograms in a gram?

1 gram = 1,000,000,000,000 picograms (10^12, since "pico" denotes 10^-12).

# What is the unit for grams divided by grams?

When dividing a measurement in one unit by a measurement in the same unit, the answer is dimensionless (it has no unit). Example: 6 grams divided by 3 grams is 2.
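For readers who want to script these conversions, a tiny Python illustration (my own snippet) of the prefix arithmetic above:

```python
# Metric prefixes are powers of ten, so the conversions are just
# multiplications: kilo = 10**3, pico = 10**-12.
print(3.3 / 10**3)   # 3.3 g in kg -> 0.0033
print(400 / 10**3)   # 400 g in kg -> 0.4
print(1 * 10**12)    # 1 g in pg   -> 1000000000000
```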
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9558209776878357, "perplexity": 2262.7340477438456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813691.14/warc/CC-MAIN-20180221163021-20180221183021-00038.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcdss.2016.9.157
# American Institute of Mathematical Sciences

February 2016, 9(1): 157-172. doi: 10.3934/dcdss.2016.9.157

## Regularity criteria for weak solutions of the Navier-Stokes system in general unbounded domains

1 Department of Mathematics and Center of Smart Interfaces (CSI), Technische Universität Darmstadt, 64289 Darmstadt
2 Fachbereich Mathematik, Technische Universität Darmstadt, 64289 Darmstadt, Germany

Received September 2014; Revised February 2015; Published December 2015

We consider weak solutions of the instationary Navier-Stokes system in general unbounded smooth domains $\Omega\subset \mathbb{R}^3$ and discuss several criteria to prove that the weak solution is locally or globally in time a strong solution in the sense of Serrin. Since the usual Stokes operator cannot be defined on all types of unbounded domains, we have to replace the space $L^q(\Omega)$, $q>2$, by $\tilde L^q(\Omega) = L^q(\Omega) \cap L^2(\Omega)$ and Serrin's class $L^r(0,T;L^q(\Omega))$ by $L^r(0,T;\tilde L^q(\Omega))$, where $2< r <\infty$, $3< q <\infty$ and $\frac{2}{r} + \frac{3}{q} =1$.

Citation: Reinhard Farwig, Paul Felix Riechwald. Regularity criteria for weak solutions of the Navier-Stokes system in general unbounded domains. Discrete and Continuous Dynamical Systems - S, 2016, 9 (1): 157-172. doi: 10.3934/dcdss.2016.9.157
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8446287512779236, "perplexity": 2316.1462974643023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662578939.73/warc/CC-MAIN-20220525023952-20220525053952-00256.warc.gz"}
https://brilliant.org/problems/convex-mirror/
# Convex mirror

When an object with a height of $3\text{ cm}$ is placed on the optical axis of a convex mirror at a distance of $12\text{ cm}$ from the mirror, the height of its image is $1\text{ cm}.$ What is the focal length of the convex mirror?
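One way to work this out numerically, assuming the Gaussian mirror equation 1/u + 1/v = 1/f with real object distances positive, virtual image distances negative, and magnification m = −v/u (a sketch under one standard sign convention, not an official solution):

```python
from fractions import Fraction as F

u = F(12)     # object distance in cm, in front of the mirror
m = F(1, 3)   # magnification from the heights: 1 cm / 3 cm

# A convex mirror forms an upright virtual image, so m = -v/u gives a
# negative image distance (behind the mirror).
v = -m * u    # -4 cm

f = 1 / (1 / u + 1 / v)   # mirror equation: 1/f = 1/u + 1/v
print(f)  # -6: a focal length of magnitude 6 cm (negative sign => convex)
```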
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9261531233787537, "perplexity": 131.01164152126313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374217.78/warc/CC-MAIN-20210306004859-20210306034859-00371.warc.gz"}
http://mathoverflow.net/questions/2826/equivalent-forms-of-the-grand-riemann-hypothesis/4037
# Equivalent forms of the Grand Riemann Hypothesis

I have long been curious about equivalent forms of the Riemann hypothesis for automorphic L-functions.

In the case of the ordinary Riemann hypothesis, one gets a very good error term for the prime number theorem, one has the formulation involving the Mobius mu function, which says that the parity of the number of prime factors of a squarefree number has a distribution related to that of flips of an unbiased coin, and one also has the reformulation in terms of Farey fractions.

I know that for L-functions attached to Dirichlet characters, one gets a very good error term for the prime number theorem for primes in arithmetic progressions. Presumably if one focuses on Dedekind zeta functions and Hecke L-series one gets a very strong effective Chebotarev density theorem or something like that.

But for L-functions attached to Hecke eigenforms for GL(2), or more abstract things like symmetric n-th power L-functions attached to automorphic forms or automorphic representations, it seems quite unclear to me what the significance of the Riemann hypothesis for these L-functions is. I think that I remember something about a zero-free region to the left of the boundary of the critical strip being related to the Sato-Tate conjecture, so I have a vague impression that one might be able to get a good bound on the speed of convergence to the Sato-Tate distribution as an equivalent to the Riemann hypothesis for some of these L-functions.

What are some interesting equivalents to the Riemann hypothesis for automorphic L-functions that you know? I'm particularly interested in statements that have qualitative interpretations.

P.S. I've blurred the distinction between an equivalent of the Riemann hypothesis for a single L-function and equivalents to the Riemann hypothesis for a specified family of L-functions. I am interested in both things.

P.P.S. I am more interested in equivalents than in consequences of the Riemann hypotheses for these L-functions, in so far as equivalents "capture the essence" of the statement in question to a greater extent than consequences do. Still, I would welcome references to interesting consequences of the Riemann hypothesis for automorphic L-functions, again especially those with qualitative interpretations.

- Would GRH be a Generalized Riemann Hypothesis? You might want to make your question easier to read by expanding that in the title. – Ilya Nikokoshev Oct 27 '09 at 20:36

## 1 Answer

Well, suppose pi is a cuspidal automorphic representation of GL(n)/Q. This has the structure of a tensor product, indexed by primes p, of representations pi_p of the groups GL_n(Q_p). The Satake isomorphism tells us that at almost all primes, each pi_p is determined by a conjugacy class A(p) in GL_n(C). In this language, the Riemann hypothesis for the L-function associated to pi says that the partial sums of tr(A(p)) over p < X show "as much cancellation as possible," and are of size sqrt(X). But if n > 1, we are dealing with very complicated objects, and the local components of these automorphic representations vary in some incomprehensible way...

You are right, there are certainly special cases. If we knew GRH for L-functions associated to Artin representations then the Chebotarev density theorem would follow with an optimal error term. Likewise, GRH for all the symmetric powers of a fixed elliptic curve E implies (and is in fact equivalent to; see Mazur's BAMS article for a reference) the Sato-Tate conjecture for E with an optimal error term.
But in general, reformulations like this simply don't exist. There are many interesting consequences of GRH for various families of automorphic L-functions. I recommend Iwaniec and Kowalski's book (Chapter 5), the paper "Low-lying zeros of families of L-functions" by Iwaniec-Luo-Sarnak, and Sarnak's article at http://www.claymath.org/millennium/Riemann_Hypothesis/Sarnak_RH.pdf

- The book by Iwaniec and Kowalski: books.google.com/… . As mentioned by David, in section 5.7 they state some equivalent facts for L-functions. Perhaps also useful: aimath.org/WWN/rh/rh.pdf . Here you can find many equivalent statements of RH (but I think not of GRH). – Spinorbundle Nov 4 '09 at 9:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9395765662193298, "perplexity": 406.14555827425335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657588.53/warc/CC-MAIN-20150417045737-00100-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=inttrans
Overview of the inttrans Package

Calling Sequence

inttrans[command](arguments)

command(arguments)

Description

• The inttrans package is a collection of commands designed to compute integral transforms. These transforms are used in many branches of mathematics, and are particularly useful when solving differential equations.

• Each command in the inttrans package can be accessed by using either the long form or the short form of the command name in the command calling sequence.

List of inttrans Package Commands

• The following is a list of available commands. To display the help page for a particular inttrans command, see Getting Help with a Command in a Package.

Examples

> with(inttrans):

> laplace(sin(2*t + 3), t, s);

    (sin(3) s + 2 cos(3)) / (s^2 + 4)    (1)

> invlaplace(%, s, t);

    sin(2 t) cos(3) + cos(2 t) sin(3)    (2)

> combine(%, trig);

    sin(2 t + 3)    (3)

> fourier(t*exp(-3*t)*Heaviside(t), t, w);

    1 / (3 + I w)^2    (4)
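A small supplementary example (not part of the original help page): the laplace command also applies the derivative rule, which is what makes the package useful for differential equations. The output shown in the comment is indicative; the exact ordering of terms may vary between Maple versions.

> with(inttrans):

> laplace(diff(y(t), t, t) + y(t), t, s);

    # expected: (s^2 + 1) laplace(y(t), t, s) - s y(0) - D(y)(0)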
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9953573346138, "perplexity": 1923.7631543695068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108264.79/warc/CC-MAIN-20170821095257-20170821115257-00322.warc.gz"}
http://math.stackexchange.com/questions/174065/linearly-ordered-sets-somewhat-similar-to-mathbbq
# Linearly ordered sets “somewhat similar” to $\mathbb{Q}$

Let's say that a linear order type is q-like if every proper initial segment of an instance of this type is order-isomorphic to $\pmb{\eta}$ or $\pmb{\eta}+\pmb{1}$. For example, $\pmb{\eta}$, $\pmb{\eta}+\pmb{1}$, $\pmb{\eta}\cdot\pmb{\omega_1}$ and $\pmb{\eta}+(\pmb{1}+\pmb{\eta})\cdot\pmb{\omega_1}$ are q-like.

Question: How many distinct q-like order types exist?

- A related question: math.stackexchange.com/questions/174043/… – Vladimir Reshetnikov Jul 23 '12 at 3:39

This question would be well placed at mathoverflow. – JDH Jul 23 '12 at 13:58

Two things we know: a) $\pmb{\eta}$ and $\pmb{\eta}+\pmb{1}$ are the only countable q-like order types (since the order type of a countably infinite densely ordered set depends only on the (non-)existence of a minimum or a maximum, see Wikipedia); and b) an uncountable q-like order type has no maximum, since otherwise the initial segment it determines would be uncountable. – joriki Jul 23 '12 at 16:14

This is a great question! There are a large uncountable number of these orders. Consider as a background order the long rational line, the order $\mathcal{Q}=([0,1)\cap\mathbb{Q})\cdot\omega_1$. For each $A\subset\omega_1$, let $\mathcal{Q}_A$ be the suborder obtained by keeping the first point from the $\alpha^{\rm th}$ interval whenever $\alpha\in A$, and omitting it if $\alpha\notin A$. That is, $$\mathcal{Q}_A=((0,1)\cap\mathbb{Q})\cdot\omega_1\ \cup\ \{(0,\alpha)\mid\alpha\in A\}.$$ This corresponds to adding $\omega_1$ many copies of $\eta$ or $1+\eta$, according to the pattern specified by $A$ as a subset of $\omega_1$. I claim that the isomorphism types of these orders, except for the question of a least element, correspond precisely to agreement-on-a-club for $A\subset\omega_1$.

Theorem. $\mathcal{Q}_A$ is isomorphic to $\mathcal{Q}_B$ if and only if $A$ and $B$ agree on having $0$ and also agree modulo the club filter, meaning that there is a closed unbounded set $C\subset\omega_1$ such that $A\cap C=B\cap C$. In other words, this is if and only if $A$ and $B$ agree on $0$ and $A$ is equivalent to $B$ in $P(\omega_1)/\text{NS}$, as subsets modulo the nonstationary ideal.

Proof. If $A$ and $B$ agree on $0$ and agree on a club $C$, then we may build an isomorphism between $\mathcal{Q}_A$ and $\mathcal{Q}_B$ by transfinite induction. Namely, for each $\alpha\in C$, we will ensure that $f$ restricted to the cut below $(0,\alpha)$ is an isomorphism of the segment in $\mathcal{Q}_A$ to that in $\mathcal{Q}_B$. The point is that $(0,\alpha)$ is actually a point in $\mathcal{Q}_A$ if and only if it is a point in $\mathcal{Q}_B$, and so these points provide a common frame on which to carry out the transfinite recursion. If we have an isomorphism up to such a point, we can continue it to the next point, since this is just adding a copy of $\eta$ or of $1+\eta$ on top of each (the same for each), and at limits we take the union of what we have built so far, which still fulfills the property because $C$ is closed. So $\mathcal{Q}_A\cong\mathcal{Q}_B$.

Conversely, if $f:\mathcal{Q}_A\cong\mathcal{Q}_B$, then $A$ and $B$ must agree on $0$. Let $C$ be the set of closure ordinals of $f$, that is, the set of $\alpha$ such that $f$ respects the cut determined by the point $(0,\alpha)$. This set $C$ is closed and unbounded in $\omega_1$.
Furthermore, it is now easy to see that $(0,\alpha)\in \mathcal{Q}_A$ if and only if $(0,\alpha)\in \mathcal{Q}_B$ for $\alpha\in C$, since this point is the supremum of that cut, and it would have to be mapped to itself. Thus, $A\cap C=B\cap C$ and so $A$ and $B$ agree modulo the club filter. QED

Corollary. There are $2^{\aleph_1}$ many distinct q-like linear orders up to isomorphism.

Proof. The theorem shows that there are as many different q-like linear orders as there are equivalence classes of subsets of $\omega_1$ modulo the non-stationary ideal. So the number of such orders is $|P(\omega_1)/\text{NS}|$. This cardinality is $2^{\aleph_1}$ because we may split $\omega_1$ into $\omega_1$ many disjoint stationary sets, by a theorem of Solovay and Ulam, and the union of any two distinct subfamilies of these differ on a stationary set and hence do not agree on a club. So there are at least $2^{\aleph_1}$ many distinct q-like orders up to isomorphism, and there cannot be more than this, since every such order has cardinality at most $\omega_1$. QED

Finally, let me point out, as Joriki mentions in the comments, that every uncountable q-like linear order is isomorphic to $\mathcal{Q}_A$ for some $A$. If $L$ is any such order, then select an unbounded $\omega_1$-sequence in $L$, containing none of its limits, chop $L$ into the intervals determined by these elements, and define $A$ according to whether these corresponding intervals have a least element or not. Thus, we have a complete characterization of the q-like linear orders: the four countable orders, and then the orders $\mathcal{Q}_A$ with two $A$ from each class modulo the nonstationary ideal, one with $0$ and one without.

- You meant to say "There is a large number ..." in the first line, no? :-) – Asaf Karagila Jul 23 '12 at 21:16
• @Asaf: Both versions are idiomatic. – Brian M. Scott Jul 23 '12 at 21:20
• @Brian. Hmm, I see. I suppose I can live with that, but using "are a ..." seems weird to me. On the other hand, "many a ..." used to seem strange several years ago too... – Asaf Karagila Jul 23 '12 at 21:22
• I would say, "there are a large number of people in the room." But I don't think I would say, "there is a large number of people in the room." But somehow the adjective "uncountable" throws off one's grammatical sense. Perhaps it is a question for english.stackexchange.com. – JDH Jul 23 '12 at 21:30
• Could you please explain why the set of closure ordinals of $f$ is closed and unbounded? It seems this is related to the answers to this question, but I don't quite see it. – joriki Jul 23 '12 at 22:14
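As a sanity check on how the construction lines up with the examples in the question (this unwinding is mine, not part of the original answer): taking $A=\emptyset$ keeps no first points, while $A=\omega_1\setminus\{0\}$ keeps the first point of every block except the first, so

$$\mathcal{Q}_{\emptyset}\cong\pmb{\eta}\cdot\pmb{\omega_1}, \qquad \mathcal{Q}_{\omega_1\setminus\{0\}}\cong\pmb{\eta}+(\pmb{1}+\pmb{\eta})\cdot\pmb{\omega_1},$$

which are exactly the two uncountable q-like examples listed in the question.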
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9115341305732727, "perplexity": 188.05984142895898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/isi-2015-forum/
# ISI 2015 - Forum

Today (10 May 2015, 10:30 am IST) I gave the renowned ISI (Indian Statistical Institute) Entrance test and found it extremely good. I am inviting my fellow members on Brilliant who gave this exam today to share their valuable experience with us... 2 years, 2 months ago

Hi. I also gave the test today and I think it was a bit easier than the previous years' papers (mainly UGA). Btw I did 25 questions in UGA and 6 in UGB. · 2 years, 2 months ago

Btw which two questions didn't you do in UGB? Like I couldn't do the reciprocal AP and the sum of 2015. · 2 years, 2 months ago

Same pinch :) · 2 years, 2 months ago

The problem with easier exams is that the cutoffs become extremely high... Let's hope for the best. · 2 years, 2 months ago

Hope, yes, that much we can do. · 2 years, 2 months ago

Guys, how do you prepare for the UGB section? Please suggest. I just have a month to prepare. The ISI website has only sample questions, that too without solutions. Please help! · 1 year, 5 months ago

Can you tell me the difficulty level? · 2 years, 2 months ago

Moderate - some questions could be done in one second and some took over a minute. · 2 years, 2 months ago

How was your performance? @Soutrik Bandyopadhyay · 2 years, 2 months ago

I did 26 questions in UGA (out of 30) and 6 questions in UGB (out of 8) - way above my expectation. · 2 years, 2 months ago

That's great!!! You did very well. Keep it up. Congrats. · 2 years, 2 months ago
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9572666883468628, "perplexity": 4625.265570939179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424889.43/warc/CC-MAIN-20170724162257-20170724182257-00172.warc.gz"}
https://arxiv.org/abs/1511.01496
# SDSS-IV/MaNGA: Spectrophotometric Calibration Technique

Abstract: Mapping Nearby Galaxies at Apache Point Observatory (MaNGA), one of three core programs in the Sloan Digital Sky Survey-IV (SDSS-IV), is an integral-field spectroscopic (IFS) survey of roughly 10,000 nearby galaxies. It employs dithered observations using 17 hexagonal bundles of 2 arcsec fibers to obtain resolved spectroscopy over a wide wavelength range of 3,600-10,300 Å. To map the internal variations within each galaxy, we need to perform accurate *spectral surface photometry*, which is to calibrate the specific intensity at every spatial location sampled by each individual aperture element of the integral field unit. The calibration must correct only for the flux loss due to atmospheric throughput and the instrument response, but not for losses due to the finite geometry of the fiber aperture. This requires the use of standard star measurements to strictly separate these two flux loss factors (throughput versus geometry), a difficult challenge with standard single-fiber spectroscopy techniques due to various practical limitations. Therefore, we developed a technique for spectral surface photometry using multiple small fiber-bundles targeting standard stars simultaneously with galaxy observations. We discuss the principles of our approach and how they compare to previous efforts, and we demonstrate the precision and accuracy achieved. MaNGA's relative calibration between the wavelengths of H$\alpha$ and H$\beta$ has a root-mean-square (RMS) of 1.7%, while that between [NII] $\lambda$6583 Å and [OII] $\lambda$3727 Å has an RMS of 4.7%. Using extinction-corrected star formation rates and gas-phase metallicities as an illustration, this level of precision guarantees that flux calibration errors will be sub-dominant when estimating these quantities. The absolute calibration is better than 5% for more than 89% of MaNGA's wavelength range.

Comments: 19 pages, 9 figures. Accepted for publication in AJ
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Astrophysics of Galaxies (astro-ph.GA)
DOI: 10.3847/0004-6256/151/1/8
Cite as: arXiv:1511.01496 [astro-ph.IM] (or arXiv:1511.01496v1 [astro-ph.IM] for this version)

## Submission history

From: Renbin Yan
[v1] Wed, 4 Nov 2015 21:00:04 UTC (3,778 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8262730240821838, "perplexity": 4374.617581005063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574084.88/warc/CC-MAIN-20190920221241-20190921003241-00552.warc.gz"}
https://blog.blueprintprep.com/mcat/free-mcat-practice-question-round-2-physics/
# Free MCAT Practice Question, Round 2 – Physics

May 30, 2013 · MCAT Blog, MCAT Long Form, MCAT Physics, MCAT Prep

# MCAT Physics Practice Question

A common concern students have in physics is being able to memorize all of the equations that are on the MCAT. My previous post talked about a technique for making study sheets as a way to memorize them. However, another very important MCAT skill is interpreting the equations that are on the test. And when the MCAT wants to test your ability to interpret an equation, often it will simply give the equation to you. Here's an example of how the MCAT will test this crucial skill:

Item 13

Escape velocity is the speed an object must reach to break free of a planet's gravitational field regardless of the trajectory the object is following. Escape velocity is given by the equation:

$v_e = \sqrt{\frac{2GM}{r}}$

where $v_e$ is the escape velocity, G is the gravitational constant, M is the mass of the planet, and r is the distance between the center of gravity of the planet and the object. An object's escape velocity will increase if:

A) the mass of the object increases.
B) the radius between the object and the planet increases.
C) the acceleration due to gravity decreases.
D) the radius between the object and the planet decreases.

Explanation

In addition to knowing what matters in an equation, we must also know what doesn't matter. The MCAT does this a lot with gravity. When we're solving questions that are set on the surface of the earth, the acceleration due to gravity, g, is usually just rounded off to 10 $m/s^{2}$, and this doesn't change when the mass of the object changes. In our example question, g doesn't even show up. So changes in g won't affect anything. That lets us eliminate (C). Similarly, the mass of the object isn't in the equation. Since the mass of the object isn't in the equation, changes in the object's mass won't have any effect. (A) is gone. We're now down to choices (B) and (D). Since the equation shows that there's an inverse relationship between v and r (because r is in the denominator), to increase v we must decrease r, making (D) the correct answer.

This question would be considered an easy one on the MCAT, although the real MCAT does have a fair number of straightforward questions like this. The goal on Test Day is to make sure you don't make careless mistakes, that you have a grasp on the basic concept, and that you can get it done quickly and confidently before moving on to more challenging material.
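A quick numerical sketch of the inverse-square-root dependence the explanation relies on (illustrative values of my own choosing, not part of the original post):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg (the object's mass never appears)

def escape_velocity(r):
    """v_e = sqrt(2GM/r); decreasing r increases v_e."""
    return math.sqrt(2 * G * M / r)

print(escape_velocity(6.371e6))      # at Earth's surface: ~11.2 km/s
print(escape_velocity(2 * 6.371e6))  # doubling r cuts v_e by 1/sqrt(2): ~7.9 km/s
```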
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.849292516708374, "perplexity": 476.0184585392121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400210616.36/warc/CC-MAIN-20200923081833-20200923111833-00776.warc.gz"}
https://www.physicsforums.com/threads/combinatorial-identity.391345/
# Combinatorial identity

• #1

Hi, I would like some help in proving the following identity: $$\sum_{x=0}^{n}x^3 = 6\binom{n+1}{4} + 6\binom{n+1}{3} + \binom{n+1}{2}$$ I tried doing it by induction but that did not go well (perhaps I missed something). Someone told me to use the fact that $$\binom{x}{0}, \binom{x}{1},...,\binom{x}{k}$$ span the space of polynomials of degree k or less $$\mathbb{R}_k[x]$$, but I didn't really see how to use that. Any help would be welcome, but I'd rather it would not be the whole solution but rather hints. Thanks a lot and have a good day.

## Answers and Replies

• #2

From the hint you know that you can write the polynomial x^3 as: $$x^3 = a_0\binom{x}{0} + a_1\binom{x}{1} + a_2\binom{x}{2} + a_3\binom{x}{3}$$ for constants $a_0,\ldots,a_3$. By substituting appropriate values for x you should be able to work out these constants. By plugging this expression into your summation you should be able to arrive at something you can compute. Also for the solution you need to remember the identity: $$\binom{n+1}{k+1} = \sum_{i=0}^n \binom{i}{k}$$ where n is an integer.

EDIT: By the way, induction also works fine if you express n^3 as a suitable linear combination of binomial coefficients.
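Carrying the hint through (a worked sketch of my own, not from the original thread): substituting $x = 0, 1, 2, 3$ into $x^3 = a_0\binom{x}{0} + a_1\binom{x}{1} + a_2\binom{x}{2} + a_3\binom{x}{3}$ gives $a_0 = 0$, $a_1 = 1$, $a_2 = 6$, $a_3 = 6$, and then the hockey-stick identity quoted above finishes the job:

$$\sum_{x=0}^{n} x^3 = \sum_{x=0}^{n}\left[\binom{x}{1} + 6\binom{x}{2} + 6\binom{x}{3}\right] = \binom{n+1}{2} + 6\binom{n+1}{3} + 6\binom{n+1}{4}.$$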
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8820118308067322, "perplexity": 497.2829629011705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00795.warc.gz"}
https://chemistry.stackexchange.com/questions/107927/how-to-specify-atomic-carbon-terms-in-the-coupled-and-uncoupled-representation
# How to specify atomic carbon terms in the coupled and uncoupled representation?

So, we know that the atomic carbon in the electronic configuration $$1s^22s^22p^2$$ has the following terms $${}^1S, {}^1D, {}^3P$$

My question is - how can I correctly specify these terms in terms of coupled and uncoupled representations?

### My attempt

So, in the case of terms, we're only considering the orbital angular momentum, not the spin. Because of that, we can describe the single terms in the coupled representation $$\left|L, M_L\right>$$ which correspond to linear combinations of microstates, i.e. the uncoupled representations $$\left|m_{l1}, m_{l2}\right>$$, using Clebsch-Gordan coefficients.

For the $${}^1S$$ term it's pretty easy, as $$L=0$$ and $$M_L=0$$ (as described in this answer): \begin{align}{}^1S: |L = 0, M_L = 0\rangle &= \frac{1}{\sqrt 3} |m_{l1}= 1, m_{l2} = -1\rangle + \frac{1}{\sqrt 3} |-1, 1\rangle - \frac{1}{\sqrt 3} |0, 0\rangle\\ &= \frac{1}{\sqrt{3}} \left| 8 \right> + \frac{1}{\sqrt{3}} \left| 11 \right> - \frac{1}{\sqrt{3}}\left| 14 \right>\end{align} In the last expression there are wavefunctions specified with indices from the microstate table below. But further it gets somewhat more tricky - both $${}^3P$$ and $${}^1D$$ will contain multiple states.

$$P$$ corresponds with $$L=1$$ and so $$M_L \in \left\{ -1, 0, 1 \right\}$$. I suppose that its coupled representations are $$\left| L=1, M_L=-1\right>, \left| L=1, M_L=0\right>, \left| L=1, M_L=1\right>$$. \begin{align} {}^3P: \left| L=1, M_L=-1\right> &= \frac{1}{\sqrt{2}}\left| -1, 0 \right> + \frac{1}{\sqrt{2}}\left| 0, -1 \right>\\ &= \frac{1}{\sqrt{2}}\left| 2 \right> + \frac{1}{\sqrt{2}}\left|5 \right>\\ \left| L=1, M_L=0\right> &= \frac{1}{\sqrt{2}}\left| 1, -1 \right> - \frac{1}{\sqrt{2}}\left| -1, 1 \right>\\ &= \frac{1}{\sqrt{2}}\left| 3 \right> - \frac{1}{\sqrt{2}}\left|6 \right>\\ \left| L=1, M_L=1\right> &= \frac{1}{\sqrt{2}}\left| 1, 0 \right> - \frac{1}{\sqrt{2}}\left| 0, 1 \right>\\ &= \frac{1}{\sqrt{2}}\left| 1 \right> - \frac{1}{\sqrt{2}}\left|4 \right> \end{align}

$${}^1D$$ corresponds with $$L=2$$ and $$M_L \in \left\{ -2,-1,0,1,2 \right\}$$. \begin{align} {}^1D:\left| L = 2, M_L = -2 \right> &= \left| -1, -1 \right> = \left| 15\right>\\ \left| L = 2, M_L = -1 \right> &= \frac{1}{\sqrt{2}}\left| 0, -1 \right> + \frac{1}{\sqrt{2}}\left|-1, 0 \right> \\ &= \frac{1}{\sqrt{2}}\left| 10 \right> + \frac{1}{\sqrt{2}}\left| 12 \right>\\ \left| L = 2, M_L = 0 \right> &= \frac{1}{\sqrt{6}}\left|1, -1 \right> + \sqrt{\frac{2}{3}}\left| 0, 0 \right> + \frac{1}{\sqrt{6}}\left| -1, 1 \right> \\ &= \frac{1}{\sqrt{6}}\left| 8 \right> + \sqrt{\frac{2}{3}}\left|14 \right> + \frac{1}{\sqrt{6}}\left| 11 \right> \\ \left| L = 2, M_L = 1 \right> &= \frac{1}{\sqrt{2}}\left| 1, 0 \right> + \frac{1}{\sqrt{2}}\left| 0, 1 \right>\\ &= \frac{1}{\sqrt{2}}\left|7 \right> + \frac{1}{\sqrt{2}}\left|9 \right> \\ \left| L = 2, M_L = 2 \right> &= \left| 1, 1 \right> = \left| 13 \right> \end{align}

Is this the right approach or do I understand it incorrectly?

### Table of microstates

[The original post displays here an image of the table of the 15 microstates of the $$2p^2$$ configuration, numbered $$\left|1\right>$$ through $$\left|15\right>$$; these indices are the ones used above and in the answer below.]

I'll try to illustrate the procedure for two of the nine states that form the term symbol $$^3P$$. This term symbol has $$S = 1$$ and $$L = 1$$, which makes for three possible values of $$M_S$$ and three possible values of $$M_L$$, for a total of nine states.
These states are "coupled" in both the spin and the spatial dimensions: $$|S,M_S,L,M_L\rangle = |1,+1,1,+1\rangle, |1,0,1,+1\rangle, \cdots$$ The microstates in the table are states which are "uncoupled" in both dimensions: $$|m_{s1},m_{s2},m_{l1},m_{l2}\rangle = |{+1/2}, {+1/2}, {+1}, {+1}\rangle, \cdots$$ To go from the term symbols ("doubly coupled" - don't use this terminology, I made it up) to the microstates ("doubly uncoupled"), we have to convert both spin and spatial components from coupled to uncoupled representations. Thankfully, the spin and spatial components are separable, so we can write $$|S,M_S,L,M_L\rangle = |S,M_S\rangle \cdot |L,M_L\rangle$$ Uncoupling the spin part is simple enough: \begin{align} |S=1,M_S=+1\rangle &= |\alpha(1)\alpha(2)\rangle \\ |S=1,M_S=0\rangle &= \frac{1}{\sqrt 2}\left\{|\alpha(1)\beta(2)\rangle + |\beta(1)\alpha(2)\rangle\right\} \\ |S=1,M_S=-1\rangle &= |\beta(1)\beta(2)\rangle \\ \end{align} where $$|\alpha\rangle$$ denotes $$m_s = +1/2$$ and $$|\beta\rangle$$ denotes $$m_s = -1/2$$, as usual. For the spatial parts, you will need to use the Clebsch–Gordan coefficients to expand $$|L,M_L\rangle$$ in terms of the "uncoupled representations" $$|m_{l1},m_{l2}\rangle$$. You already seem to understand this. For illustrative purposes we'll use $$|L = 1, M_L = +1\rangle = \frac{1}{\sqrt 2}(|m_{l1} = +1, m_{l2} = 0\rangle - |m_{l1} = 0, m_{l2} = +1\rangle)$$ and we'll simplify the notation by denoting $$m_l = +1, 0, -1$$ as $$p^+, p^0, p^-$$ respectively, so: $$|L = 1, M_L = +1\rangle = \frac{1}{\sqrt 2}\left\{|p^+(1)p^0(2)\rangle - |p^0(1)p^+(2)\rangle\right\}$$ Now, say we wish to find the microstate corresponding to $$|S,M_S,L,M_L\rangle = |1,+1,1,+1\rangle$$. What we need to do is to expand both spin and spatial components in terms of their respective uncoupled representations, then multiply them together again. \begin{align} &|S=1,M_S=+1\rangle \cdot |L = 1, M_L = +1\rangle \\ &\qquad = |\alpha(1)\alpha(2)\rangle \cdot \frac{1}{\sqrt 2}\left\{|p^+(1)p^0(2)\rangle - |p^0(1)p^+(2)\rangle\right\} \\ &\qquad = \frac{1}{\sqrt 2} \left\{|p^+(1)\alpha(1)p^0(2)\alpha(2)\rangle - |p^0(1)\alpha(1)\,p^+(2)\alpha(2)\rangle\right\} \end{align} Notice how this is a (suitably antisymmetrised) state where one electron is in $$p^+$$ with spin up, and the other electron is in $$p^0$$ with spin up. This corresponds exactly to microstate 1. Although the microstates are uncoupled states, it doesn't mean they aren't antisymmetrised. By virtue of quantum indistinguishability, they have to be antisymmetrised: that's why most of the microstates are linear combinations of "composite states" such as $$|p^+(1)\alpha(1)p^0(2)\alpha(2)\rangle$$ (such a state on its own is not physically permissible, whereas the microstates are physically permissible states). For an uglier but more instructive case, let's look at $$|S,M_S,L,M_L\rangle = |1,0,1,+1\rangle$$. \begin{align} &|S=1,M_S=0\rangle \cdot |L = 1, M_L = +1\rangle \\ &\qquad = \frac{1}{\sqrt 2}\left\{|\alpha(1)\beta(2)\rangle + |\beta(1)\alpha(2)\rangle\right\} \cdot \frac{1}{\sqrt 2}\left\{|p^+(1)p^0(2)\rangle - |p^0(1)p^+(2)\rangle\right\} \\ &\qquad = \frac{1}{2} \left\{ |p^+(1)\alpha(1)p^0(2)\beta(2)\rangle - |p^0(1)\beta(1)p^+(2)\alpha(2)\rangle + |p^+(1)\beta(1)p^0(2)\alpha(2)\rangle - |p^0(1)\alpha(1)p^+(2)\beta(2)\rangle \right\} \end{align} Notice the first two terms in this expansion describe an (antisymmetrised) state where one spin-up electron is in $$p^+$$ and one spin-down electron is in $$p^0$$. 
This corresponds to microstate 7. Likewise, the second two terms correspond to microstate 9. All in all, we can write: $$|S = 1, M_S = 0, L = 1, M_L = +1\rangle = \frac{1}{\sqrt 2}(|7\rangle + |9\rangle)$$ Using this procedure you can work your way through the rest of the states in $$^3P$$ (and indeed, $$^1S$$ and $$^1D$$ too). As a final example, let's look at one of the five $$^1D$$ states: \begin{align} &|S = 0, M_S = 0, L = 2, M_L = 0\rangle \\ &\qquad = |S = 0, M_S = 0\rangle \cdot |L = 2, M_L = 0\rangle \\ &\qquad = \frac{1}{\sqrt 2}\left\{|\alpha(1)\beta(2)\rangle - |\beta(1)\alpha(2)\rangle\right\} \cdot \\ &\qquad\qquad\qquad \left[\frac{1}{\sqrt 6}\left\{|p^+(1)p^-(2)\rangle + |p^-(1)p^+(2)\rangle \right\} + \sqrt{\frac{2}{3}}|p^0(1)p^0(2)\rangle \right] \\ &\qquad = \frac{1}{\sqrt{12}}\left\{ |p^+(1)\alpha(1)p^-(2)\beta(2)\rangle - |p^-(1)\beta(1)p^+(2)\alpha(2)\rangle \right\} \\ &\qquad \qquad + \frac{1}{\sqrt{12}}\left\{ |p^-(1)\alpha(1)p^+(2)\beta(2)\rangle - |p^+(1)\beta(1)p^-(2)\alpha(2)\rangle \right\} \\ &\qquad \qquad + \frac{1}{\sqrt{3}}\left\{ |p^0(1)\alpha(1)p^0(2)\beta(2)\rangle - |p^0(1)\beta(1)p^0(2)\alpha(2)\rangle \right\} \\ \end{align} The first line of this expansion corresponds to $$(\sqrt{1/6}|8\rangle)$$; the second to $$(\sqrt{1/6}|11\rangle)$$; and the third to $$\sqrt{2/3}|14\rangle$$. So, your expansion was correct; however, be careful, because the uncoupled representations of the spatial wavefunctions $$|m_{l_1} = +1, m_{l2} = -1\rangle$$ do not themselves correspond to the microstates. Instead, it is only by multiplying this with the spin wavefunction can you obtain a wavefunction that corresponds to the microstates. • Just one more question - I don't understand, why is there minus in $\left| S=0, M_S=0 \right>$ in the very last example? – Eenoku Jan 15 '19 at 0:35 • @Eenoku that's the singlet spin wavefunction. Atkins MQM 5th ed section 4.12 describes this in detail, and the bottom bit of my previous answer sort of summarises this. But really, any textbook should cover it (sometimes it appears in the context of the helium atom). – orthocresol Jan 15 '19 at 0:44 • No problem. Feel free to ask a new question if you want more detail (I think it’s too long for comments). – orthocresol Jan 15 '19 at 1:45
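As a quick cross-check of the counting underlying the microstate table (my own sketch, not part of the original answer), one can enumerate the $\binom{6}{2}=15$ Pauli-allowed microstates of the $p^2$ configuration and tally them by $(M_L, M_S)$; the tally reproduces the pattern expected from $^1D \oplus {}^3P \oplus {}^1S$ (5 + 9 + 1 states):

```python
from itertools import combinations
from collections import Counter

# the six p spin-orbitals, labelled by (m_l, m_s)
spin_orbitals = [(ml, ms) for ml in (1, 0, -1) for ms in (0.5, -0.5)]

# Pauli exclusion: the two electrons occupy distinct spin-orbitals
microstates = list(combinations(spin_orbitals, 2))
print(len(microstates))  # 15

# tally by total M_L and M_S
tally = Counter(
    (sum(ml for ml, _ in pair), sum(ms for _, ms in pair))
    for pair in microstates
)
for (ML, MS), count in sorted(tally.items(), reverse=True):
    print(f"M_L = {ML:+d}, M_S = {MS:+.1f}: {count}")
```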
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 57, "wp-katex-eq": 0, "align": 7, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999209642410278, "perplexity": 1562.7214412416724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655880665.3/warc/CC-MAIN-20200714114524-20200714144524-00385.warc.gz"}
http://mathhelpforum.com/differential-equations/213357-need-help-getting-started-print.html
# Need help getting started

• Feb 18th 2013, 02:51 PM colerelm1

Need help getting started

Can anyone please explain what steps I need to take to solve this problem? I'm having difficulties even beginning it. Attachment 27118 Thanks

• Feb 18th 2013, 02:59 PM jakncoke

Re: Need help getting started

$\frac{x\, dx}{(x^2+y^2)^{3/2}} = - \frac{y\, dy}{(x^2+y^2)^{3/2}}$

Multiply both sides by $(x^2+y^2)^{3/2}$:

$x\, dx = - y\, dy$

Integrating both sides:

$- \frac{x^2}{2} + c = \frac{y^2}{2}$
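Completing the last step (my addition, not part of the original reply): multiplying through by $2$ and absorbing the constant shows the solution curves are circles about the origin,

$x^2 + y^2 = C$

for a nonnegative constant $C$.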
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9526547193527222, "perplexity": 2028.9428139741458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189472.3/warc/CC-MAIN-20170322212949-00062-ip-10-233-31-227.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/90673/riemann-curvature-tensor-symmetries-confusion
# Riemann curvature tensor symmetries confusion In the context of spacetime, reading Schutz, I'm confused about the symmetries of the Riemann curvature tensor, which I understand are: $$R_{\alpha\beta\gamma\mu}=-R_{\beta\alpha\gamma\mu}=-R_{\alpha\beta\mu\gamma}=R_{\gamma\mu\alpha\beta}.$$ But using the metric to contract the Riemann tensor can't I also say $$R_{\gamma\mu}=g^{\alpha\beta}R_{\alpha\beta\gamma\mu}=g^{\alpha\beta}R_{\alpha\gamma\beta\mu}?$$ Which leads me to think that $$R_{\alpha\beta\gamma\mu}=R_{\alpha\gamma\beta\mu}.$$ But $R_{\alpha\gamma\beta\mu}$ isn't one of the above listed symmetries. Where am I going wrong? • Comment to the question (v2): Also note that different authors may order the four indices of the Riemann curvature tensor differently. – Qmechanic Dec 18 '13 at 19:51 $g^{\alpha\beta}$ is symmetric in $\alpha$ and $\beta$, while $R_{\alpha\beta\gamma\mu}$ is anti-symmetric in $\alpha$ and $\beta$, so the contraction $g^{\alpha\beta}R_{\alpha\beta\gamma\mu}$ is necessarily $0$, and cannot be $R_{\gamma\mu}$. Moreover, it is not correct to say, that if the contraction of $2$ tensors with another tensor (here the metric tensor) are equals, then the $2$ tensors are equal. For instance, if you take a tensor $T_1$, and a tensor $T_2$, with $T_2-T_1$ anti-symmetric in lower indices $\alpha, \beta$ and contract the tensors $T_1$ and $T_2$ with a tensor symmetric in upper indices $\alpha, \beta$, you will get the same result. • Indeed; it may be added that all the metric contractions of the Riemann tensor are either $0$ or $\pm\text{Ric}$, so even the nonzero ones are not necessarily the Ricci tensor. – Stan Liou Dec 18 '13 at 19:59 • Oh, showing my huge ignorance, I thought you could just sort of “cancel” any upper and lower index. I didn't know about symmetric/anti-symmetric. Does that mean that $g^{\alpha\beta}R_{\alpha\beta\gamma\mu}=g^{\alpha\beta}R_{\gamma\mu\alpha\beta}=0$ ? Would that also mean (swapping the metric tensor indices) that $g^{\beta\alpha}R_{\alpha\beta\gamma\mu}=g^{\beta\alpha}R_{\gamma\mu\alpha\beta}=0$ ? So, if I avoid the Riemann tensor having the same 1 and 2 or 3 and 4 indices as the metric, can I state the Ricci tensor as (for example) $R_{\mu\nu}=g^{\sigma\rho}R_{\sigma\mu\rho\nu}$ ? – Peter4075 Dec 18 '13 at 21:02 • @Peter4075: Yes, Ricci tensor is $R_{\mu\nu} = R^\rho{}_{\mu\rho\nu} = g^{\sigma\rho}R_{\sigma\mu\rho\nu}$. You can switch indices on the metric because it's symmetric and $R_{\alpha\beta\gamma\mu} = R_{\gamma\mu\alpha\beta}$ is the pair-interchange symmetry of the Riemann tensor, so your equations here are correct. – Stan Liou Dec 18 '13 at 22:59 • @StanLiou: Thanks. Final question. Can the Ricci tensor therefore be defined using other index permutations that don't involve the Riemann tensor having the same 1 and 2 or 3 and 4 indices as the metric, ie $R_{\mu\nu}=g^{\sigma\rho}R_{\sigma\mu\nu\rho}$ , $R_{\mu\nu}=g^{\sigma\rho}R_{\mu\sigma\rho\nu}$ , $R_{\mu\nu}=g^{\sigma\rho}R_{\mu\sigma\nu\rho}$ ? – Peter4075 Dec 19 '13 at 6:35 • @Peter4075 : Up to a sign, so with your $3$ examples, with signs $(-1,-1,+1)$, which is obvious from the symmetry properties of $R_{\sigma\mu\nu\rho}$ – Trimok Dec 19 '13 at 8:42
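The key fact in the accepted answer, that contracting a symmetric tensor against an antisymmetric index pair gives zero, is easy to verify numerically. Here is a small sketch (mine, not from the thread) using random tensors in four dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# symmetric "metric" g^{ab} and a tensor antisymmetric in its first two indices
g = rng.normal(size=(4, 4))
g = g + g.T
R = rng.normal(size=(4, 4, 4, 4))
R = R - R.transpose(1, 0, 2, 3)   # now R_{abcd} = -R_{bacd}

# g^{ab} R_{abcd} vanishes identically, as claimed
contraction = np.einsum('ab,abcd->cd', g, R)
print(np.allclose(contraction, 0))  # True
```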
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9795405864715576, "perplexity": 274.0493842142585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573385.29/warc/CC-MAIN-20190918234431-20190919020431-00480.warc.gz"}
https://unapologetic.wordpress.com/2007/09/10/functor-categories/
The Unapologetic Mathematician

Functor Categories

Let’s consider two categories enriched over a monoidal category $\mathcal{V}$$\mathcal{C}$ and $\mathcal{D}$ — and assume that $\mathcal{C}$ is equivalent to a small category. We’ll build a $\mathcal{V}$-category $\mathcal{D}^\mathcal{C}$ of functors between them. Of course the objects will be functors $F:\mathcal{C}\rightarrow\mathcal{D}$. Now for functors $F$ and $G$ we need a $\mathcal{V}$-object of natural transformations between them. For this, we will use an end: $\hom_{\mathcal{D}^\mathcal{C}}(F,G)=\int_{C\in\mathcal{C}}\hom_\mathcal{D}(F(C),G(C))$ This end is sure to exist because of the smallness assumption on $\mathcal{C}$. Its counit will be written $E_C=E_{C,F,G}:\hom_{\mathcal{D}^\mathcal{C}}(F,G)\rightarrow\hom_\mathcal{D}(F(C),G(C))$. An “element” of this $\mathcal{V}$-object is an arrow $\eta:\mathbf{1}\rightarrow\hom_{\mathcal{D}^\mathcal{C}}(F,G)$. Such arrows correspond uniquely to $\mathcal{V}$-natural families of arrows $\eta_C=E_C\circ\eta:\mathbf{1}\rightarrow\hom_\mathcal{D}(F(C),G(C))$, which we know is the same as a $\mathcal{V}$-natural transformation from $F$ to $G$. We also see that at this level of elements, the counit $E_C$ takes a $\mathcal{V}$-natural transformation and “evaluates” it at the object $C$.

Now we need to define composition morphisms for these hom-objects. This composition will be inherited from the target category $\mathcal{D}$. Basically, the idea is that at each object a natural transformation gives a component morphism in the target category, and we compose transformations by composing their components. Of course, we have to finesse this a bit because we don’t have sets and elements anymore. So how do we get a component morphism? We use the counit map $E_C$! We have the arrow $E_C\otimes E_C:\hom_{\mathcal{D}^\mathcal{C}}(G,H)\otimes\hom_{\mathcal{D}^\mathcal{C}}(F,G)\rightarrow\hom_\mathcal{D}(G(C),H(C))\otimes\hom_\mathcal{D}(F(C),G(C))$ which we can then hit with the composition $\circ:\hom_\mathcal{D}(G(C),H(C))\otimes\hom_\mathcal{D}(F(C),G(C))\rightarrow\hom_\mathcal{D}(F(C),H(C))$. This is a $\mathcal{V}$-natural family indexed by $C\in\mathcal{C}$, so by the universal property of the end we have a unique arrow $\circ:\hom_{\mathcal{D}^\mathcal{C}}(G,H)\otimes\hom_{\mathcal{D}^\mathcal{C}}(F,G)\rightarrow\hom_{\mathcal{D}^\mathcal{C}}(F,H)$ Similarly we get the arrow picking out the identity morphism on $T$ as the unique one satisfying $E_C\circ i_T=i_{T(C)}$

So $\mathcal{D}^\mathcal{C}$ is a $\mathcal{V}$-category whose underlying ordinary category is that of $\mathcal{V}$-functors and $\mathcal{V}$-natural transformations between the $\mathcal{V}$-categories $\mathcal{C}$ and $\mathcal{D}$. That is, it’s the hom-category $\hom_{\mathcal{V}\mathbf{-Cat}}(\mathcal{C},\mathcal{D})$ in the 2-category of $\mathcal{V}$-categories.

September 10, 2007 - Posted by | Category theory

1 Comment »

1. […] Categories as Exponentials The notation we used for the enriched category of functors between two enriched categories gives away the game a bit: this will be the exponential between the […]

Pingback by Functor Categories as Exponentials « The Unapologetic Mathematician | September 11, 2007 | Reply
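For orientation (my remark, not part of the original post): when $\mathcal{V}=\mathbf{Set}$ with its cartesian monoidal structure, this end is the usual set of natural transformations, realized as the collection of compatible families

$\hom_{\mathcal{D}^\mathcal{C}}(F,G)\cong\left\{(\eta_C)_{C}\in\prod_{C\in\mathcal{C}}\hom_\mathcal{D}(F(C),G(C))\ \middle|\ G(f)\circ\eta_C=\eta_{C'}\circ F(f)\text{ for all }f:C\rightarrow C'\right\}$

so the enriched definition specializes to the ordinary one.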
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987019896507263, "perplexity": 3501.6927651172}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661364.60/warc/CC-MAIN-20150417045741-00074-ip-10-235-10-82.ec2.internal.warc.gz"}
https://infoscience.epfl.ch/record/199505
### Abstract

The present study investigates the potential of the Largest Lyapunov Exponent (LLE) for the quantification of AF complexity as a marker of antitachycardia pacing (ATP) effectiveness in a biophysical model of the human atria. From ongoing simulated atrial fibrillation, 20 transmembrane potential maps were used as initial conditions for a rapid pacing from the septum area (at pacing cycle length as a percentage of the AF cycle length). The LLE was separately computed during AF and ATP for the transmembrane potential time series recorded at a single site in the right atrial posterior wall. The averaged results over all 20 simulations show that the LLE decreases during ATP relative to AF. These results suggest that LLE may serve as an indicator of AF complexity and also as a discriminating metric in automatic assessment of AF capture during ATP.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8696205019950867, "perplexity": 3053.6631443343827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643380.40/warc/CC-MAIN-20210619020602-20210619050602-00114.warc.gz"}
http://mathoverflow.net/questions/104080/given-a-vector-field-all-of-whose-integral-curves-are-closed-is-the-period-a-sm?sort=votes
# Given a vector field all of whose integral curves are closed, is the period a smooth function?

Disclaimer: The original question consisted of two parts. The first one has been answered negatively (see below the answers of Sam Lisi and Alejandro). The second one remains.

Background

I am reading about the energy-period relation for Hamiltonian Systems. In Weinstein's formulation (cf. Abraham, Marsden, Foundations of Mechanics 2nd Ed, page 198) this relation amounts to:

$(\ast)$ Given a Hamiltonian system $(M,\omega, X_H)$, let $\Phi$ be the flow of $X_H$ and $\text{per}:=\{(t,x)\in\mathbb R\times M\mid\Phi(t,x)=x\}.$ If $N$ is a smooth submanifold contained in $\text{per},$ then $\left.dt\wedge dH\right|_N=0,$ i.e. $t=t(H)$ on $N$ (the period depends only on the energy).

Question

In Guillemin, Sternberg, Geometric Asymptotics, between pages 170-171, I have additionally found that, when all integral curves of $X_H$ are periodic, we can take $N=\text{per}$ in $(\ast),$ which should mean that in such a case $\text{per}$ is a smooth submanifold of $\mathbb R\times M.$ In order to justify this last point I was wondering:

1. If $X$ is a non-singular vector field on $M,$ all of whose integral curves are periodic, and $\tau(p)$ denotes the period of the orbit through $p,$ is $\tau:M\to\mathbb R$ then smooth?

2. otherwise, how to prove that in such a case $\text{per}$ is a submanifold?

What I have tried about point 2

Probably I am missing something, because my guess is that if there were a principal bundle structure $(M,p,X,\mathbb S^1)$ such that the $\mathbb S^1$-orbits are the trajectories of $X$ then the period $\tau:M\to\mathbb R$ should be smooth because of the relation $\zeta=\tau X_H,$ where $\zeta$ is the infinitesimal generator of the action. But I don't know how to proceed without this additional hypothesis.

Edit1 (After Sebastian's answer about point 1): As illustration of my difficulties with point 1, I imagine that $M$ is the Moebius band $[0,1]\times\mathbb R/\sim$ and $X=\frac{\partial}{\partial x}$; then the period is $$\tau([(x,y)]_{\sim})=\begin{cases}1&\text{if }y=0\\2&\text{if }y\neq 0\end{cases}$$

- A random thought: What about a harmonic oscillator? The periods are constant non-zero, except for the equilibria. But maybe you want to avoid this. – Thomas Rot Aug 6 '12 at 8:00

@Thomas Rot thank you, so with the harmonic oscillator, we find a Hamiltonian system all of whose orbits are periodic for which $\tau$ is not smooth and $\text{per}_H$ is not a smooth manifold. But then what hypotheses are behind the statement in Guillemin and Sternberg ("when all orbits are periodic, we can take $N=\text{per}$" in Weinstein's formulation of the energy-period relation)? But probably I should restrict to periodic motions with positive period. – Giuseppe Aug 6 '12 at 8:25

## 3 Answers

Sam is completely right. In general, the period function $\tau\colon M\to\mathbb{R}$ is not even continuous. A very nice reference for a (counter-)example to Giuseppe's question is the paper A counterexample to the periodic orbit conjecture, by Dennis Sullivan. In the paper, Sullivan constructs a singularity-free flow on a compact 5-manifold such that all its orbits are periodic and the function $\tau$ is unbounded!

-
So, under what hypothesis Guillemin and Sternberg state (between pages 170-171) that $\mathrm{per}_H$ is submanifold of $\mathbb R\times M$? I have thought to the presence on $M$ of a principal $S^1$-bundle structure whose fibers are the orbits of $X.$ But it seems to strong. – Giuseppe Aug 30 '12 at 6:43 Thanks for the answer, Alejandro. – Giuseppe Aug 30 '12 at 6:44 You are confusing minimal period and period. The function $\tau(p)$ you computed on $M$ is the minimal period, which is a well-defined function, but is only lower semi-continuous. The period as discussed by Sebastian is only locally defined, and is actually multivalued if you think of it globally. This multivalued period is what per is about. In your Moebius band example, every point has period 2 (also 4, 6, 8...), though the 0-section has minimal period 1 (I denote the 0 section by $\mathbf{0}$. Then, per consists of $\bigcup_{k \ge 1} \{ 2k-1 \} \times \mathbf{0} \cup \bigcup_{k \ge 1} \{ 2k \} \times M \subset \mathbb{R} \times M$ - Let $p\in M$ be a point such that $\Phi_t(p)=p.$ Let $U\subset M$ be a small neighborhood of $p$ and $N\subset U$ be a hypersurface such that $X$ is transversal to $N.$ Let $f\colon V\subset\mathbb R^{n-1}$ be a local parametrization of $N$ with $f(0)=p.$ Then $$F\colon\mathbb (t-\epsilon,t+\epsilon)\times V\to M;F(t,x)=\Phi_t(f(x))$$ is a local diffeomorphism to an open neighborhood of $p$ in $M.$ The preimage of $N$ by $F$ is a graph of a function from $\mathbb{R}^{n-1},$ the space where x libes in, to $\mathbb R,$ the space where t lives in. This function is exactly the "time period" function you look for. - What if $X(p)=0$? – Liviu Nicolaescu Aug 6 '12 at 14:06 1) @Liviu Nicolaescu I have edited the question to point out that the stationary points aren't allowed, i.e. constant solutions aren't considered periodic 2) @Sebastian I have thought what you say, but as with Poincarè map, if the integral curve $\gamma$ starts at $q\in N,$ then its first return on $N$ could be at another point than $q.$ – Giuseppe Aug 6 '12 at 14:25 For example, if $M$ is the Moebius' band $[0,1]\times\mathbb R/\sim$ and $X=\frac{\partial}{\partial x}$ then the period is$$\tau([x,y]_\sim)=\begin {cases}1&\text{if }y=0\\2&\text{if }y\neq 0\end{cases}.$$ – Giuseppe Aug 6 '12 at 14:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9657171368598938, "perplexity": 309.9471283959673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277313.92/warc/CC-MAIN-20160524002117-00170-ip-10-185-217-139.ec2.internal.warc.gz"}
http://reciprocalsystem.org/paper/discussion-of-satz-derivation-of-plancks-constant
# Discussion on Satz's Derivation of Planck's Constant

(1) The ability to move away from established patterns of thinking and strike a new avenue of approach is a rare characteristic but most desirable for research. In his paper, “A New Derivation of Planck's Constant,” [1] Satz comes up with such a fresh approach. He suggests that “frequency in the natural sense is the number of cycles per space-time unit.” This is at variance with Larson’s view that “…the so-called ‘frequency’ is actually a speed. It can be expressed as a frequency only because the space that is involved is always a unit magnitude.” [2]

I am not in the least maintaining that concurrence with Larson’s views is the criterion of truth. But in this instance, the recognition that frequency is really speed seems nearer the truth, since in the context of the theory of the universe of motion, the criterion that decides the truth of a concept is whether it is explainable in terms of the basic component of that universe, namely, speed.

Satz supports his conclusion by the statement: “Photons of all frequencies can be observed in both sectors, and the only way that this could be possible is if the denominator of the natural definition contains both a space unit and a time unit.” In order to see the falsity of this statement it is necessary to remember that a photon has two speeds: the speed of propagation in the forward direction, and the speed of oscillation in the lateral direction. The fact that the speed of propagation is of constant magnitude and unit value (in the natural units) is what makes the photon observable in both sectors, since unit speed is the boundary between these sectors. The frequency, which is the speed in the lateral direction and which is the measure of its energy, is not relevant to its observability from both the sectors.

(2) Satz gives in space-time terms the equation

$E = h \nu$ (1)

as

$\frac{t}{s} = \left[ \frac{t^2}{\frac{t/s}{t/s}} \right] \frac{1}{s \times t}$ (2)

and in the cgs system of units

$erg = \left[ \frac{sec^2}{\left(\frac{sec/cm}{erg} \right)} \right] \frac{1}{cm \times sec}$ (3)

It is to be noted that in this, the dimensions of frequency are taken to be cycles/(cm-sec). On this basis only he continues and arrives at the value of h in Equation (5). At this juncture h has the dimensions erg-sec-cm and frequency cycles/(cm-sec). He now adopts the following procedure: he detaches the cm dimension from the frequency and attaches it to h, rendering its dimensions erg-sec. Let us call this procedure of his the “swap” for later reference. This procedure has the effect of insulating this cm term from the effects of any operations that are uniformly carried out on all the terms comprising h, because he “swaps” this cm term into h only after performing those operations on the terms comprising h.

(3) After incorporating the interregional factor into the terms contained in the square brackets of Equation (3) he arrives at Planck’s constant h as

$\frac{1}{156.4444} \times \frac{t^2_0}{\left(\frac{sec / cm}{erg} \right)}$ (4)

If one compares the terms comprising h respectively in Equation (3) and Equation (4), it becomes apparent that the author unwarrantedly introduces in Equation (4) the term $t^2_0$, replacing the term $sec^2$. I will call this procedure of his the “switch.” All the terms in Equation (3) are in the cgs system of units. The rationale for making this “switch” is not given: only the numerator term is “switched,” retaining the other terms in the cgs units.
Further, if the “swap” is not carried out, the cm term we wish to avoid finally in the frequency would find place in h right from the beginning, making it

$\frac{1}{156.4444} \times \frac{sec^2 / cm}{\left(\frac{sec / cm}{erg} \right)}$ (5)

As such, if he has reasons to “switch” the sec term in the numerator to the natural unit of time, the same reasons would compel the “switching” of the cm term also to the natural unit of space. This, of course, produces the wrong result. The “swap” serves to avoid just this.

(4) At two places Satz comments on my attempt [3] to calculate Planck’s constant. He contends that I “started by setting the dimensions of energy to be space divided by time, which is, of course, the reverse of what they are.” If my Paper is read carefully it would be found that this is not what I did. I have clearly shown in my Equation (1) the relation between energy and speed in space-time terms as

$\frac{t}{s} = \frac{t^2}{s^2} \times \frac{s}{t}$ (6)

I explained that, expressing all the quantities in the natural units, we obtain from the above: energy in natural units = $(1/1)^2$ × speed (in natural units), since the term $t^2/s^2$, the square of the natural unit of inverse speed, is unity. In other words, so far as primary units are concerned, n natural units of speed, say, are equivalent to n natural units of energy. I had even taken care to explicitly mention that the latter is a quantitative relationship. Despite this, Satz has misconstrued it as a dimensional relationship. I had, in addition, pointed out Larson’s account (see Reference 2 of my Paper) for the sake of elucidation.

(5) The other place at which Satz contends that I was guilty of a dimensional mistake is in connection with my modification concerning the effect of secondary mass. While deriving the magnitude of the natural unit of energy, I think we should distinguish between the energy equivalents of the speed of a primary motion like the (one-dimensional) photon vibration and the speed of a gravitational entity (like an atom or subatom). This would not have mattered if we could derive the magnitude of the natural unit of energy directly from experimental results. But Larson first derives the magnitude of the natural unit of mass from Avogadro’s constant. The magnitude of the natural unit of energy is derived from the natural unit of mass, thus derived. Therefore, the size of this energy unit is to be adjusted for the secondary mass effects as applicable to the one-dimensional situation.

Letting p be the primary mass and s the secondary mass, we have the ratio of the gravitational mass unit to the primary mass unit as (p + s)/p. Remembering that the dimensions of energy are t/s while those of mass are $(t/s)^3$, the ratio of the energy unit derived from the gravitational mass unit to the true one-dimensional energy unit would have to be $[(p + s)/p]^{1/3}$. Since the primary mass unit, p = 1, this factor turns out to be $(1 + s)^{1/3}$. It may be noted that this factor is non-dimensional, being a ratio, and its application (my Equation (7)) does not vitiate the dimensions of h as Satz contends.

Further, Satz remarks that, “secondary mass varies between the subatoms and the atoms and so cannot be a part of the conversion factor…” But this is not relevant to the situation, since I was concerned with the effect of the secondary mass included in the definition of the gravitational mass unit on the size of the natural unit of energy, insofar as the latter is derived from the unit of gravitational mass.
I was not speaking of the secondary mass component included in the mass composition of the material particles at all, since that has no bearing on the present issue, as Satz correctly points out. I was, however, uncertain as to which items make up the secondary mass (like the magnetic mass, the electric mass, etc.) in the situation I was discussing.

(6) And a final comment: in Satz's Equation (4)

$h = \frac{1}{156.4444} \times \frac{t^2_0}{\left(\frac{sec / cm}{erg}\right)}$ (7)

replacing all the terms with the corresponding natural units we get

$h = \frac{1}{156.4444} \times \frac{t^2_0}{\left(\frac{t_0 / s_0}{e_0}\right)} = \frac{1}{156.4444} \times \left(e_0 \times t_0 \times s_0\right)$ (8)

If we now bring in the cm term that has been staying in the denominator of the frequency term, we get

$h = \frac{\left(e_0 \times t_0 \times s_0\right)}{156.4444 \times 1\, cm}$ (9)

This is identical to my Equation (6) (seeing that I there used the suffix $n$ instead of the suffix 0, and upper case letters instead of lower case) and, therefore, there is nothing essentially new in Satz's derivation excepting the introduction of the "swap" and the "switch."

1. Satz, Ronald W., "A New Derivation of Planck's Constant," Reciprocity XVIII, № 3 (Autumn, 1989).
2. Larson, Dewey B., Nothing But Motion, North Pacific Publishers, OR, 1979, p. 152.
3. Nehru, K.V.K., "Theoretical Evaluation of Planck's Constant," Reciprocity XII, № 3 (Summer, 1983), p. 6.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528669118881226, "perplexity": 712.294979920588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668594.81/warc/CC-MAIN-20191115065903-20191115093903-00453.warc.gz"}
https://proofwiki.org/wiki/Cardinality_of_Set_of_Subsets
Cardinality of Set of Subsets

Theorem

Let $S$ be a set such that $\card S = n$. Let $m \le n$.

Then the number of subsets $T$ of $S$ such that $\card T = m$ is:

${}^m C_n = \dfrac {n!} {m! \paren {n - m}!}$

Proof 1

For each $X \subseteq \N_n$ and $Y \subseteq S$, let $\map B {X, Y}$ be the set of all bijections from $X$ onto $Y$.

Let $\Bbb S$ be the set of all subsets of $S$ with $m$ elements. By Cardinality of Power Set of Finite Set and Cardinality of Subset of Finite Set, $\Bbb S$ is finite, so let $s = \card {\Bbb S}$.

Let $\beta: \map B {\N_n, S} \to \Bbb S$ be the mapping defined as:

$\forall f \in \map B {\N_n, S}: \map \beta f = \map f {\N_m}$

For each $Y \in \Bbb S$, the mapping:

$\Phi_Y: \map {\beta^{-1} } Y \to \map B {\N_m, Y} \times \map B {\N_n - \N_m, S - Y}$

defined as:

$\map {\Phi_Y} f = \tuple {f_{\N_m}, f_{\N_n - \N_m} }$

is also (clearly) a bijection.

By Cardinality of Set of Bijections:

$\card {\map B {\N_m, Y} } = m!$

and:

$\card {\map B {\N_n - \N_m, S - Y} } = \paren {n - m}!$

Hence:

$\card {\map {\beta^{-1} } Y} = m! \paren {n - m}!$

It is clear that $\set {\map {\beta^{-1} } Y: Y \in \Bbb S}$ is a partition of $\map B {\N_n, S}$. Therefore by Number of Elements in Partition:

$\card {\map B {\N_n, S} } = m! \paren {n - m}! s$

Consequently, as $\card {\map B {\N_n, S} } = n!$ by Cardinality of Set of Bijections, it follows that:

$m! \paren {n - m}! s = n!$

and the result follows. $\blacksquare$

Proof 2

Let ${}^m C_n$ be the number of subsets of $m$ elements of $S$.

From Number of Permutations, the number of $m$-permutations of $S$ is:

${}^m P_n = \dfrac {n!} {\paren {n - m}!}$

Consider the way ${}^m P_n$ can be calculated. First one makes the selection of which $m$ elements of $S$ are to be arranged. This number is ${}^m C_n$. Then for each selection, the number of different arrangements of these is $m!$, from Number of Permutations.

So, by the Product Rule for Counting:

$m! \cdot {}^m C_n = {}^m P_n = \dfrac {n!} {\paren {n - m}!}$

from which:

${}^m C_n = \dfrac {n!} {m! \paren {n - m}!}$

$\blacksquare$

Proof 3

Let $\N_n$ denote the set $\set {1, 2, \ldots, n}$. Let $\struct {S_n, \circ}$ denote the symmetric group on $\N_n$. Let $r \in \N: 0 < r \le n$.

Let $B_r$ denote the set of all subsets of $\N_n$ of cardinality $r$:

$B_r := \set {S \subseteq \N_n: \card S = r}$

Let $*$ be the mapping $*: S_n \times B_r \to B_r$ defined as:

$\forall \pi \in S_n, \forall S \in B_r: \pi * S = \pi \sqbrk S$

where $\pi \sqbrk S$ denotes the image of $S$ under $\pi$.

From Group Action of Symmetric Group on Subset it is established that $*$ is a group action.

The stabilizer of any $U \in B_r$ is the set of permutations on $\N_n$ that fix $U$.

Let $U = \set {1, 2, \ldots, r}$. A permutation $\tuple {a_1, a_2, \ldots, a_n}$ fixes $U$ precisely when it permutes $1, 2, \ldots, r$ among themselves and $r + 1, \ldots, n$ among themselves. So:

$\tuple {a_1, a_2, \ldots, a_r}$ can be any one of the $r!$ permutations of $1, 2, \ldots, r$

$\tuple {a_{r + 1}, a_{r + 2}, \ldots, a_n}$ can be any one of the $\paren {n - r}!$ permutations of $r + 1, r + 2, \ldots, n$.

Thus:

$\order {\Stab U} = r! \paren {n - r}!$

Since the action is transitive, $B_r = \Orb U$ and so:

$\card {B_r} = \card {\Orb U}$

From the Orbit-Stabilizer Theorem:

$\card {\Orb U} = \dfrac {\order {S_n} } {\order {\Stab U} } = \dfrac {n!} {r! \paren {n - r}!}$

But $\card {B_r}$ is the number of subsets of $\N_n$ of cardinality $r$. Hence the result. $\blacksquare$

Proof 4

The proof proceeds by induction. For all $n \in \Z_{\ge 0}$, let $\map P n$ be the proposition:

${}^m C_n = \dfrac {n!} {m! \paren {n - m}!}$
Basis for the Induction

$\map P 1$ is the case:

${}^m C_1 = \dfrac {1!} {m! \paren {1 - m}!}$

We have:

${}^m C_1 = \begin {cases} 1 & : m = 0 \text { or } m = 1 \\ 0 & : \text {otherwise} \end {cases}$

and indeed:

$\dfrac {1!} {0! \paren {1 - 0}!} = 1$ for $m = 0$

$\dfrac {1!} {1! \paren {1 - 1}!} = 1$ for $m = 1$

Thus $\map P 1$ is seen to hold. This is the basis for the induction.

Induction Hypothesis

Now it needs to be shown that, if $\map P k$ is true, where $k \ge 1$, then it logically follows that $\map P {k + 1}$ is true.

So this is the induction hypothesis:

${}^m C_k = \dfrac {k!} {m! \paren {k - m}!}$

from which it is to be shown that:

${}^m C_{k + 1} = \dfrac {\paren {k + 1}!} {m! \paren {k + 1 - m}!}$

Induction Step

This is the induction step. The number of ways to choose $m$ elements from $k + 1$ elements is:

the number of ways to choose $m$ elements from $k$ elements (deciding not to select the $(k + 1)$th element)

plus:

the number of ways to choose $m - 1$ elements from $k$ elements (after having selected the $(k + 1)$th element).

So:

${}^m C_{k + 1} = {}^m C_k + {}^{m - 1} C_k = \binom k m + \binom k {m - 1}$ (by the Induction Hypothesis)

$= \binom {k + 1} m$ (by Pascal's Rule)

So $\map P k \implies \map P {k + 1}$ and the result follows by the Principle of Mathematical Induction.

Therefore:

$\forall n \in \Z_{>0}: {}^m C_n = \dfrac {n!} {m! \paren {n - m}!}$

$\blacksquare$
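The theorem also invites a quick numerical sanity check. The following sketch (not part of the ProofWiki page) compares brute-force subset enumeration with the closed form:

```python
# Brute-force check of |{T ⊆ S : |T| = m}| = n! / (m! (n-m)!)
from itertools import combinations
from math import comb, factorial

for n in range(8):
    S = range(n)
    for m in range(n + 1):
        brute = sum(1 for _ in combinations(S, m))
        closed = factorial(n) // (factorial(m) * factorial(n - m))
        assert brute == closed == comb(n, m)
print("all checks passed")
```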
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824346899986267, "perplexity": 105.60623683233615}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154032.75/warc/CC-MAIN-20210730220317-20210731010317-00115.warc.gz"}
https://en.wikipedia.org/wiki/Strong_operator_topology
# Strong operator topology

In functional analysis, a branch of mathematics, the strong operator topology, often abbreviated SOT, is the locally convex topology on the set of bounded operators on a Hilbert space H induced by the seminorms of the form ${\displaystyle T\mapsto \|Tx\|}$, as x varies in H.

Equivalently, it is the coarsest topology such that the evaluation maps ${\displaystyle T\mapsto Tx}$ (taking values in H) are continuous for each fixed x in H. The equivalence of these two definitions can be seen by observing that a subbase for both topologies is given by the sets ${\displaystyle U(T_{0},x,\epsilon )=\{T:\|Tx-T_{0}x\|<\epsilon \}}$ (where T0 is any bounded operator on H, x is any vector and ε is any positive real number).

In concrete terms, this means that ${\displaystyle T_{i}\to T}$ in the strong operator topology iff ${\displaystyle \|T_{i}(x)-T(x)\|\to 0}$ for each x in H.

The SOT is stronger than the weak operator topology and weaker than the norm topology. The SOT lacks some of the nicer properties that the weak operator topology has, but being stronger, things are sometimes easier to prove in this topology. It can also be viewed as more natural, since it is simply the topology of pointwise convergence.

The SOT provides the framework for the measurable functional calculus, just as the norm topology does for the continuous functional calculus.

The linear functionals on the set of bounded operators on a Hilbert space that are continuous in the SOT are precisely those continuous in the WOT. Because of this, the closure of a convex set of operators in the WOT is the same as the closure of that set in the SOT.

This language translates into convergence properties of Hilbert space operators. For a complex Hilbert space, it is easy to verify, via the polarization identity, that strong operator convergence implies weak operator convergence.
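A standard illustration, added here for concreteness (not part of the article): on $B(\ell^2)$, the truncation projections converge strongly but not in norm.

```latex
% P_n: truncation to the first n coordinates of \ell^2.
% P_n -> I in SOT, since \|P_n x - x\| -> 0 for every fixed x,
% but not in norm, since \|P_n - I\| = 1 for all n (test on the unit vector e_{n+1}).
\[
  P_n(x_1, x_2, x_3, \ldots) = (x_1, \ldots, x_n, 0, 0, \ldots),
  \qquad
  \|P_n x - x\|^2 = \sum_{k > n} |x_k|^2 \xrightarrow[n\to\infty]{} 0,
  \qquad
  \|P_n - I\| = 1.
\]
```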
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9817719459533691, "perplexity": 146.1517443888449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687484.46/warc/CC-MAIN-20170920213425-20170920233425-00250.warc.gz"}
http://math.stackexchange.com/questions/624233/taking-seats-on-a-plane-probability-that-the-last-two-persons-take-their-proper
# Taking seats on a plane: probability that the last two persons take their proper seats

100 men are getting on a plane (containing 100 chairs) one by one. Each one has a seat number, but the first one forgot his number, so he randomly chooses a chair and sits on it. The others do know their own numbers. Therefore if their seat is not occupied, they sit on it, and otherwise they randomly choose a chair and sit. What is the probability that the last two persons sit on their own chairs?

Edit: As @ByronSchmuland mentioned, a similar but different problem is HERE

- We've seen this one before: math.stackexchange.com/questions/5595/taking-seats-on-a-plane It's a classic! – Byron Schmuland Jan 1 at 19:36

@ByronSchmuland I am afraid not. Here I am asking for the probability of the last two correct seats, not the last correct seat. – user117432 Jan 1 at 19:39

Say there are $n$ seats, and let $P(n,k)$ be the probability that the last $k$ passengers all take their own seats. When the first passenger randomly takes his seat, there are three possible situations:

1) he takes his own seat, with probability $1/n$;

2) he takes the seat belonging to one of the last $k$ passengers, with probability $k/n$;

3) he takes the seat of the $m$th passenger ($2 \le m \le n-k$), with probability $1/n$ for each $m$.

In case 1, everyone takes their own seat naturally. In case 2, it is impossible that the last $k$ passengers still take all their own seats. In case 3, everyone before the $m$th passenger takes their own seat naturally; the $m$th passenger has to choose one seat at random from the $n-m+1$ remaining seats, and the seat assigned to the first passenger can be viewed as his newly assigned seat. So the probability that the last $k$ passengers take their own seats is $P(n-m+1,k)$. In summary

$$P(n,k)=\frac{1}{n}+\frac{1}{n}\sum_{m=2}^{n-k} P(n-m+1,k)$$

It is easily transformed to

$$Q(n,k)=\frac{1}{n}\sum_{m=2}^{n-k} Q(n-m+1,k), \quad \text{where } Q(n,k)\equiv P(n,k)-\frac{1}{k+1}$$

For each $k$, it is easy to see that $P(k+1,k)=1/(k+1)$. So $Q(n,k)=0$ for any $n > k$, and therefore

$$P(n,k)=\frac{1}{k+1} \quad \text{for } n > k$$

So in your problem, $n=100, k=2$, and the answer is $1/3$. In the similar linked problem, $n=100, k=1$, and the answer is $1/2$.

- I'm confused how you got the term for case 3, since the "rules" for seating aren't recursive; in particular, if passenger #2 finds his seat empty (passenger #1 did not take it) then #2 will always sit there, so the probability that #2 sits in the correct seat is (n - 1)/n, not 1/(n - 1)... isn't it? – Michael Edenfield Jan 2 at 1:15

@MichaelEdenfield You are right. I edited my answer accordingly. – MoonKnight Jan 2 at 2:05

Using the idea from the rephrased solution Byron linked in the comments: assume the first guy keeps getting evicted from his seat. Then there will come a time when the first guy is sent to one of 3 decisive seats: his own, or one of the seats of the last two passengers. Only the first of these leaves the last two passengers' seats available, so the probability of this happening is $\frac{1}{3}$, and if he does get the correct one then the last two passengers will naturally take their seats correctly. So the probability is $\frac{1}{3}$.

In fact you can generalize this to see that the probability that the last $n$ persons get their places is $\frac{1}{n+1}$ when $n<m$, where $m$ is the total number of seats. For $n=m$, it is trivially $\frac{1}{n}=\frac{1}{m}$.

I think you mean $\frac 1 {n+1}$. Note: all the passengers after the first get the right seats if and only if the first did, which gives the correct $\frac 1 {99+1}=\frac 1{100}$ probability that the first person got the right seat. – dfeuer Jan 1 at 20:09
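For what it's worth, the $1/3$ answer is easy to confirm by simulation; a minimal sketch (mine, not from the thread):

```python
# Monte Carlo check that P(last two passengers get their own seats) = 1/3.
import random

def last_k_seated_correctly(n=100, k=2):
    seats = [None] * n                      # seats[i] = passenger sitting in seat i
    free = set(range(n))
    first = random.choice(tuple(free))      # passenger 0 picks at random
    seats[first] = 0
    free.remove(first)
    for p in range(1, n):
        pick = p if p in free else random.choice(tuple(free))
        seats[pick] = p
        free.remove(pick)
    return all(seats[p] == p for p in range(n - k, n))

trials = 100_000
hits = sum(last_k_seated_correctly() for _ in range(trials))
print(hits / trials)   # ≈ 0.333
```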
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8684855699539185, "perplexity": 520.3832927522835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888216.78/warc/CC-MAIN-20140722025808-00080-ip-10-33-131-23.ec2.internal.warc.gz"}
https://pbelmans.ncag.info/blog/2011/08/24/how-to-use-thanks-in-memoirs-titlingpage/
I was fiddling with memoir's titlingpage and wanted the functionality of the \thanks macro. This is normally handled by \maketitle, but we're bypassing this construction. Section 4.2 of the (otherwise excellent) memoir manual doesn't state anything on how to actually implement this behaviour yourself.

After a bit of fiddling with memoir.dtx (any .dtx file is the documented source of a LaTeX package or class) I hacked together the desired result. A bit of \makeatletter magic and calling the correct macros did what I wanted. For future reference, here it is.

If you want to change the marker, start your titlingpage environment with a call to \thanksmarkseries{...}. The main part of the solution is simply to flush the stored thanks notes at the end of the titlingpage environment; see the sketch below. Note that \usethanksrule is not necessary, but it feels better to have it. There is also the undocumented \saythanks macro, but it is buggy: it always uses fnsymbol as its marker for the actual footnote. Behaviour that could be fixed, I presume.
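The post's code snippets did not survive; what follows is a plausible reconstruction, assuming the "correct macros" are memoir's \thanksmarkseries and \usethanksrule together with the LaTeX kernel's \@thanks (the macro \maketitle itself uses to flush stored \thanks footnotes). Treat the exact calls as assumptions, not the author's verbatim code:

```latex
\documentclass{memoir}
\begin{document}
\begin{titlingpage}
  \thanksmarkseries{arabic}% change the marker series (assumption)
  \begin{center}
    {\Huge A Title\thanks{Hypothetical acknowledgement.}\par}
    \vspace{2em}
    {\Large An Author}
  \end{center}
  \vfill
  \usethanksrule % optional: rule above the thanks notes
  \makeatletter
  \@thanks       % flush the stored \thanks footnotes (kernel macro)
  \makeatother
\end{titlingpage}
\end{document}
```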
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8718932271003723, "perplexity": 2502.992331293117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221208750.9/warc/CC-MAIN-20180814081835-20180814101835-00115.warc.gz"}
https://quant.stackexchange.com/questions/23291/why-do-we-usually-use-normal-distribution-and-not-laplace-distribution-to-genera
# Why do we usually use the normal distribution and not the Laplace distribution to generate a stochastic process?

When working with a stochastic process based on Brownian motion, the increments have a normal (Gaussian) distribution. However, it seems that a Laplace distribution, with density:

$$f(t) = \frac{\lambda}{2} e^{-\lambda |t|} \qquad (t \in \mathbb R)$$

would fit the returns of EUR/USD, for example, much better than a normal distribution. (In particular, it has fatter tails than the normal distribution, as required.)

Here in blue is the density of returns, based on 10 years of historical data from the 5-minute chart of EUR/USD. In green, the density of a Laplace distribution.

# Question: Are there some financial models in which the stochastic process used is:

$$d \, X_t = ... + c \, d \, W_t$$

where $d\, W_t$ has a Laplace distribution instead of a normal distribution?

• Errors in observations are usually either normal or Laplace (Source: Wilmott: FAQ in Quant Finance) – vonjd Feb 14 '16 at 19:26
• @vonjd do you have a link for this topic? – Basj Feb 14 '16 at 20:05
• wilmottwiki.com/wiki/index.php?title=Laplace_distribution – vonjd Feb 15 '16 at 8:27

It is very natural to ask why the assumption of a normal distribution is made for the stochastic process $W_t$ when other, more appropriate and valid distributions are available, especially for modelling stock prices. Before answering your question explicitly, first look at the definition of a Wiener process:

Wiener Process: A Wiener process $W_t$, relative to a family of information sets {$\mathscr{F}_t$}, is a stochastic process such that:

1. $W_t$ is a square integrable martingale with $W_0=0$ and $$\mathbb{E}\big[(W_t-W_s)^2 \big]=t-s, \quad s\leq t$$
2. The path of $W_t$ is continuous over $t$.

This definition leads to the following characteristics of the Wiener process:

• $W_t$ has independent increments, because it is a martingale.
• $W_t$ has zero mean, and the mean of every increment equals zero.
• $W_t$ has variance $t$.
• $W_t$ has continuous paths.

Note that in the above definition nothing is said about the distribution of the increments. The normal distribution follows from the assumptions stated in the definition. This is the famous Lévy theorem: if the assumptions of the Wiener process are satisfied, then Lévy's theorem proves that the Wiener increments, $W_t - W_s$, are normally distributed with mean zero and variance $|t-s|$.

In short, for a stochastic process having continuous paths and independent increments (natural properties of a stock price), normality is not an assumption but gets derived from the basic assumptions of the Wiener process. And this is the reason no other assumption about the distribution of $dW_t$ is made in the literature: normality is not an assumption at all.

• Thanks for this point of view. But two remarks: in all the documents I have found, Gaussian is an assumption (i.e. part of the definition of "Wiener process"), example: here and here. Do you have a reference showing that the "normal distribution" is not in the usual definition but follows from other assumptions? – Basj Feb 14 '16 at 19:59
• Second thing @Neeraj: the real historical data for 10 years of EUR/USD shows how close to Laplace the increments really are. Wouldn't it make sense to study a stochastic process for which $d\, W_t \sim Laplace(...)$? – Basj Feb 14 '16 at 20:02
• @Basj it is very misleading that various books and websites mention the Gaussian distribution as an assumption of the Wiener process.
But Neftci in his book "An Introduction to the Mathematics of Financial Derivatives" has explicitly mentioned this point. I did not find a pdf of the book on any site, but if you have a physical copy you may read chapter 8 of the 3rd edition. – Neeraj Feb 14 '16 at 20:34
• Further, I have already mentioned in my answer that the assumptions of the Wiener process are natural properties of a stock price, which are difficult to relax. – Neeraj Feb 14 '16 at 20:48
• @Basj Your plot of historical data assumes each day's data point is independent and identically distributed. If the sample is large, the former assumption can be relaxed, but not the latter. If each day's data point is not identically distributed then your distributional plot of historical data is meaningless. – Neeraj Feb 14 '16 at 21:25

If you're willing to drop the requirement to have continuous paths, or rather, if you're willing to relax it, it is possible to have a bigger class of stochastic processes called Lévy processes. The requirement for this to work is that the probability distribution of your variable is infinitely divisible. The easiest way to formulate this is in terms of the characteristic function $$\phi_X(u)=\mathbb{E}[\exp(iuX)]$$ If for any positive integer $n$ the characteristic function $\phi_X(u)$ is the $n$th power of a characteristic function, then we have the infinite divisibility property. For such distributions, it is possible to construct Lévy processes, i.e. a process for which $X_0=0$ and the increments $X_{s+t}-X_s$ have $\phi_X(u)^t$ as their characteristic function.

The Laplace distribution possesses the infinite divisibility property; its characteristic function is $$\phi(u)=(1+\lambda^{-2}u^2)^{-1}$$ and there are indeed random variables with powers of this as their characteristic functions. Those variables possess a variance gamma or generalized Laplace distribution, and the Laplace distribution is of course a special case of those distributions. The associated process is known as a variance gamma process, and it was introduced into financial mathematics by Madan, Seneta, Carr and Chang in the 90's.

There is also a jump-diffusion model called the Kou model which has jump sizes which are Laplace distributed.
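To make the variance-gamma remark concrete (my addition, not from the thread): a standard Laplace variable is exactly a normal variable whose variance is exponentially distributed, which is the $t=1$ marginal of a variance gamma process. A minimal sketch:

```python
# Laplace as a normal variance mixture: X = sqrt(2 V) * Z with V ~ Exp(1), Z ~ N(0,1)
# has characteristic function E[exp(-u^2 V)] = 1/(1 + u^2), i.e. the standard Laplace law.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000
V = rng.exponential(1.0, n)
Z = rng.standard_normal(n)
X = np.sqrt(2 * V) * Z

# Kolmogorov-Smirnov test against the standard Laplace distribution
print(stats.kstest(X, stats.laplace.cdf))  # p-value not small: consistent with Laplace
```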
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887870192527771, "perplexity": 522.5913946414962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571506.61/warc/CC-MAIN-20190915134729-20190915160729-00469.warc.gz"}
http://math.stackexchange.com/questions/290646/quantifiers-predicates-logical-equivalence
# Quantifiers, predicates, logical equivalence

I am asked if $(\exists x) (P(x) \rightarrow Q(x))$ and $\forall x P(x) \rightarrow \exists xQ(x)$ are logically equivalent. I don't think they are, but how will I prove it? Am I supposed to use one of the direct, contrapositive or contradiction proofs? Or give an interpretation?

-

If you can prove that $(1)$ one statement implies the other AND $(2)$ vice versa, then you prove logical equivalence. That is, we show:

$(\exists x)(P(x) \rightarrow Q(x)) \implies (\forall x P(x) \rightarrow \exists x Q(x))\tag{1}$

$\forall x P(x) \rightarrow \exists x Q(x) \implies (\exists x)(P(x) \rightarrow Q(x))\tag{2}$

$(1)\to (2):$ Suppose $(\exists x)(P(x)\to Q(x))$. Then $P(x_0)\to Q(x_0)$ for some $x_0$. Now suppose $\forall xP(x)$. If there are no $x$, then the implication (2) is true. Otherwise, there is clearly some $x_0$ such that $P(x_0)$. Thus, $Q(x_0)$ and $\exists xQ(x)$. So we have shown $\forall xP(x)\to \exists xQ(x)$.

$(2)\to (1):$ Now assume $\forall xP(x)\to \exists xQ(x)$. Either (a) $\forall xP(x)$ or (b) $\lnot \forall x P(x) \equiv \exists x\neg P(x)$. In case (a), $\exists xQ(x)$, that is, $Q(x_0)$ for some $x_0$, and so $P(x_0)\to Q(x_0)$. In case (b), $\neg P(x_1)$ for some $x_1$, so then $P(x_1)\to Q(x_1)$. Thus in either case (a) or (b), $(\exists x)(P(x)\to Q(x))$.

• That is, you have shown that $(1) \iff (2)$: either statement is true if and only if the other is true, i.e. $(1) \equiv (2)$.

To disprove a logical equivalence, it suffices to find a counterexample: find any interpretation in which one of the statements is true but the other is false. Note that $$\forall x P(x) \rightarrow \exists xQ(x) \equiv \lnot\forall x P(x) \lor \exists xQ(x)$$ is false if and only if $\forall xP(x)$ is true but $\exists x Q(x)$ is false. Put differently, the statement is true whenever $\forall xP(x)$ is false, and/or whenever $\exists x Q(x)$ is true.

-

In general, if you suspect that two statements are not equivalent, try to come up with an interpretation which makes explicit that one can be true while the other is false. If that doesn't seem to be working, then consider that the statements might indeed be equivalent. To show equivalence, see the answer above as to how to prove it. Implications can be proven directly, or indirectly. Note that to show logical equivalence, it is not enough to find an interpretation in which both are true or both are false, since a logical equivalence must hold whatever the interpretation. – amWhy Jan 30 '13 at 17:26

In "Now suppose $\forall x P(x)$. Then clearly $P(x_0)$", (2nd line in (1)-->(2)), what does the quantifier range over? What if there aren't any $x$? – alancalvitti Jan 30 '13 at 17:40

@alancalvitti Then the implication is vacuously true. – amWhy Jan 30 '13 at 17:44

Ok, thanks for the edit. Terminology/notation Q: why do you say "implication" ($\implies$)? Isn't "$\forall x P(x) \rightarrow \exists x Q(x)$" also an implication? If so, why 2 different arrow styles? – alancalvitti Jan 30 '13 at 18:47

If there are no $x$ such that $P(x)$ is true, then in "for all $x$ (if $P(x)$ then $Q(x)$)" the antecedent is false, and a false antecedent implies anything, including the statement $\exists x Q(x)$; and whenever the right-hand side of $\rightarrow$ (the consequent) is true, the entire implication is true. Which means all of $(2)$ is true, since the right-hand side of $\implies$ is true. It is referred to as being "vacuously true".
– amWhy Jan 30 '13 at 22:42

Hint: Use the fact that $P\to Q$ is logically equivalent to $\sim P\vee Q$. Moreover, we know that $\sim (\exists x,~ P(x))\equiv \forall x, \sim P(x)$ and vice versa.

- Nice hints, helpful +1 – amWhy Jan 31 '13 at 0:11

Assume $(\exists x)(P(x)\to Q(x))$. Thus $P(x_0)\to Q(x_0)$ for some $x_0$. Assume $\forall xP(x)$. Then in particular $P(x_0)$. Hence $Q(x_0)$ and $\exists xQ(x)$. We have thus shown $\forall xP(x)\to \exists xQ(x)$.

Assume $\forall xP(x)\to \exists xQ(x)$. Either $\forall xP(x)$ or $\exists x\neg P(x)$. In the first case $\exists xQ(x)$, i.e. $Q(x_0)$ for some $x_0$, and then $P(x_0)\to Q(x_0)$. In the second case $\neg P(x_1)$ for some $x_1$, and then $P(x_1)\to Q(x_1)$. Thus in both cases $(\exists x)(P(x)\to Q(x))$.

-

We can think about $P$ and $Q$ as subsets of the universe (an arbitrary universe), which we shall denote as $U$. This is somewhat of a semantic analysis of the sentences, which can be enlightening.

The first sentence says that either $P\neq U$ or $P\cap Q\neq\varnothing$. The second says that if $P=U$ then $Q\neq\varnothing$. Now we analyze whether or not these two sentences are equivalent.

If $P=U$ and $Q\neq\varnothing$ (the second sentence holds) then clearly $P\cap Q\neq\varnothing$, so the first sentence holds. On the other hand, if $P\neq U$ then the second sentence is automatically true, and we have some $x$ such that $P(x)$ is false, and so the first sentence is also true. The same arguments also tell us that if the first sentence holds then the second one holds as well, in one case vacuously and in another case less vacuously.

- Excellent explanation; exactly what I was hoping for. Thank you! – suitegamer Jan 30 '13 at 23:34
- @suitegamer: Thanks. I need the practice. My students have an exam soon, and tomorrow my office hours will be swamped by similar questions... Even more since we often ask them to give such semantical analysis and not just a formal proof. – Asaf Karagila Jan 30 '13 at 23:36
- +1 for the different, but helpful, take on the question! :-) – amWhy Jan 31 '13 at 0:29

Here is a late alternative proof (in a slightly different notation that I'm more familiar with):

\begin{align} & \langle \forall x :: P(x) \rangle \Rightarrow \langle \exists x :: Q(x) \rangle \\ \equiv & \;\;\;\;\;\text{"expand $\;\Rightarrow\;$ in the simplest way possible"} \\ & \lnot \langle \forall x :: P(x) \rangle \lor \langle \exists x :: Q(x) \rangle \\ \equiv & \;\;\;\;\;\text{"simplify: DeMorgan on left hand side"} \\ & \langle \exists x :: \lnot P(x) \rangle \lor \langle \exists x :: Q(x) \rangle \\ \equiv & \;\;\;\;\;\text{"simplify: $\;\exists\;$ distributes over $\;\lor\;$ (since both are disjunctions)"} \\ & \langle \exists x :: \lnot P(x) \lor Q(x) \rangle \\ \equiv & \;\;\;\;\;\text{"reintroduce $\;\Rightarrow\;$"} \\ & \langle \exists x :: P(x) \Rightarrow Q(x) \rangle \\ \end{align}

So these expressions are equivalent.

-

Yet another way to do this is with the method of analytic tableaux. It's a systematic search for contradictions, so we assume the negation of $$(\exists x(Px\to Qx))\leftrightarrow (\forall xPx\to\exists xQx)\tag{1}$$ and show that no matter what we do, we end up with a contradiction. (The tableau diagram is omitted here.) From left to right, the branches close on: $\neg Pa$ with $Pa$, $\neg Qa$ with $Qa$, $\neg Pb$ with $Pb$, and $\neg Qc$ with $Qc$. Hence $(1)$ is indeed an equivalence.
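As a complement to the proofs above (my addition, not from the thread): since each sentence involves a single variable, the equivalence can be machine-checked over every interpretation on small finite domains.

```python
# Brute-force check over all interpretations on small finite domains:
# (exists x)(P(x) -> Q(x))  <=>  (forall x P(x)) -> (exists x Q(x))
from itertools import product

for size in range(4):                            # domains of size 0..3
    dom = range(size)
    for P_bits, Q_bits in product(product([False, True], repeat=size), repeat=2):
        P = dict(zip(dom, P_bits))
        Q = dict(zip(dom, Q_bits))
        lhs = any((not P[x]) or Q[x] for x in dom)
        rhs = (not all(P[x] for x in dom)) or any(Q[x] for x in dom)
        assert lhs == rhs
print("equivalent on every interpretation tested")
```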
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985114336013794, "perplexity": 241.34538088062385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
https://hal-imt.archives-ouvertes.fr/hal-01144319
# Optimal Capacity Relay Node Placement in a Multi-hop Network on a Line

RMS - Réseaux, Mobilité et Services; LTCI - Laboratoire Traitement et Communication de l'Information

Abstract: We use information theoretic achievable rate formulas for the multi-relay channel to study the problem of optimal placement of relay nodes along the straight line joining a source node and a destination node. The achievable rate formulas that we utilize are for full-duplex radios at the relays and decode-and-forward relaying. For the single relay case, and individual power constraints at the source node and the relay node, we provide explicit formulas for the optimal relay location and the optimal power allocation to the source-relay channel, for the exponential and the power-law path-loss channel models. For the multiple relay case, we consider exponential path-loss and a total power constraint over the source and the relays, and derive an optimization problem, the solution of which provides the optimal relay locations. Numerical results suggest that at low attenuation the relays are mostly clustered close to the source in order to be able to cooperate among themselves, whereas at high attenuation they are uniformly placed and work as repeaters. We also prove that a constant rate independent of the attenuation in the network can be achieved by placing a large enough number of relay nodes uniformly between the source and the destination, under the exponential path-loss model with total power constraint.

Document type: Conference papers

HAL Id: hal-01144319, version 1

Citation: Arpan Chattopadhyay, Abhishek Sinha, Marceau Coupechoux, Anurag Kumar. Optimal Capacity Relay Node Placement in a Multi-hop Network on a Line. International workshop on Resource Allocation and Cooperation in Wireless Networks (RAWNET), in conjunction with WiOpt, May 2012, Paderborn, Germany. pp. 1-8. ⟨hal-01144319⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.840729296207428, "perplexity": 1446.0781868185868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400211096.40/warc/CC-MAIN-20200923144247-20200923174247-00531.warc.gz"}
http://openturns.github.io/openturns/master/user_manual/_generated/openturns.BinomialFactory.html
# BinomialFactory

class BinomialFactory(*args)

Binomial factory.

Available constructor: BinomialFactory()

Notes

The estimation is done by maximizing the likelihood of the sample. The value of $n$ is initialized from the moment estimates based on the empirical mean $\bar{x}$ of the sample and its unbiased empirical variance $\sigma^2$ (for a Binomial distribution, the moments give $p = 1 - \sigma^2/\bar{x}$ and $n = \bar{x}/p$). Then, we evaluate the likelihood of the sample with respect to the Binomial distribution parameterized with $(n, p)$. By testing successively $n+1$ and $n-1$ instead of $n$, we determine the variation of the likelihood of the sample with respect to the Binomial distributions parameterized with $n+1$ and $n-1$. We then iterate in the direction that makes the likelihood increase, until the likelihood stops increasing. The last couple $(n, p)$ is the one selected.

Attributes:

thisown : The membership flag

Methods:

build(*args) : Build the distribution.
buildEstimator(*args) : Build the distribution and the parameter distribution.
getBootstrapSize() : Accessor to the bootstrap size.
getClassName() : Accessor to the object's name.
getId() : Accessor to the object's id.
getName() : Accessor to the object's name.
getShadowedId() : Accessor to the object's shadowed id.
getVisibility() : Accessor to the object's visibility state.
hasName() : Test if the object is named.
hasVisibleName() : Test if the object has a distinguishable name.
setBootstrapSize(bootstrapSize) : Accessor to the bootstrap size.
setName(name) : Accessor to the object's name.
setShadowedId(id) : Accessor to the object's shadowed id.
setVisibility(visible) : Accessor to the object's visibility state.
buildAsBinomial

__init__(*args)

Initialize self. See help(type(self)) for accurate signature.

build(*args)

Build the distribution.

Available usages: build(sample), build(param)

Parameters:
sample : 2-d sequence of float. Sample from which the distribution parameters are estimated.
param : Collection of PointWithDescription. A vector of parameters of the distribution.

Returns:
dist : Distribution. The built distribution.

buildEstimator(*args)

Build the distribution and the parameter distribution.

Parameters:
sample : 2-d sequence of float. Sample from which the distribution parameters are estimated.
parameters : DistributionParameters. Optional, the parametrization.

Returns:
resDist : DistributionFactoryResult. The results.

Notes

According to the way the native parameters of the distribution are estimated, the parameters distribution differs:

• Moments method: the asymptotic parameters distribution is normal and estimated by Bootstrap on the initial data;
• Maximum likelihood method with a regular model: the asymptotic parameters distribution is normal and its covariance matrix is the inverse Fisher information matrix;
• Other methods: the asymptotic parameters distribution is estimated by Bootstrap on the initial data and kernel fitting (see KernelSmoothing).

If another set of parameters is specified, the native parameters distribution is first estimated and the new distribution is determined from it:

• if the native parameters distribution is normal and the transformation regular at the estimated parameters values: the asymptotic parameters distribution is normal and its covariance matrix determined from the inverse Fisher information matrix of the native parameters and the transformation;
• in the other cases, the asymptotic parameters distribution is estimated by Bootstrap on the initial data and kernel fitting.
Examples

Create a sample from a Beta distribution:

>>> import openturns as ot
>>> sample = ot.Beta().getSample(10)
>>> ot.ResourceMap.SetAsUnsignedInteger('DistributionFactory-DefaultBootstrapSize', 100)

Fit a Beta distribution in the native parameters and create a DistributionFactory:

>>> fittedRes = ot.BetaFactory().buildEstimator(sample)

Fit a Beta distribution in the alternative parametrization:

>>> fittedRes2 = ot.BetaFactory().buildEstimator(sample, ot.BetaMuSigma())

getBootstrapSize()

Accessor to the bootstrap size.

Returns:
size : integer. Size of the bootstrap.

getClassName()

Accessor to the object's name.

Returns:
class_name : str. The object class name (object.__class__.__name__).

getId()

Accessor to the object's id.

Returns:
id : int. Internal unique identifier.

getName()

Accessor to the object's name.

Returns:
name : str. The name of the object.

getShadowedId()

Accessor to the object's shadowed id.

Returns:
id : int. Internal unique identifier.

getVisibility()

Accessor to the object's visibility state.

Returns:
visible : bool. Visibility flag.

hasName()

Test if the object is named.

Returns:
hasName : bool. True if the name is not empty.

hasVisibleName()

Test if the object has a distinguishable name.

Returns:
hasVisibleName : bool. True if the name is not empty and not the default one.

setBootstrapSize(bootstrapSize)

Accessor to the bootstrap size.

Parameters:
size : integer. Size of the bootstrap.

setName(name)

Accessor to the object's name.

Parameters:
name : str. The name of the object.

setShadowedId(id)

Accessor to the object's shadowed id.

Parameters:
id : int. Internal unique identifier.

setVisibility(visible)

Accessor to the object's visibility state.

Parameters:
visible : bool. Visibility flag.

thisown

The membership flag
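The docs' example fits a Beta; by analogy, a minimal sketch for the Binomial case (mine, not from the documentation):

```python
import openturns as ot

# Draw a sample from a known Binomial(n=10, p=0.3) and refit it
sample = ot.Binomial(10, 0.3).getSample(500)
factory = ot.BinomialFactory()

fitted = factory.build(sample)           # maximum-likelihood fit
print(fitted)

result = factory.buildEstimator(sample)  # fit plus the parameter distribution
print(result.getDistribution())
```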
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8233951330184937, "perplexity": 4699.948086817047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659056.44/warc/CC-MAIN-20190117163938-20190117185938-00273.warc.gz"}
https://stats.stackexchange.com/tags/k-means/new
# Tag Info

3

Clustering is descriptive: a central point in each cluster serves as a surrogate, or approximate descriptor of, the points in the cluster. Use the coordinates of these central points for labels. As an idea for consideration--certainly not as the only or even the best approach--you could assess how far each central coordinate is from a center of all the ...

0

In a word: No. You'll need to go through the cluster by hand and try to spot patterns.

1

Scaling is done on the columns. It subtracts the mean and divides by the standard deviation, so you should get colMeans(golub) to be around zero. However, you don't need to scale; if you check the vignette: library(multtest) ?golub Gene expression data (3051 genes and 38 tumor mRNA samples) from the leukemia microarray study of Golub et al. (1999). Pre-...

0

It just means taking the mean of the data. You can do this by finding the mean of each marginal distribution and putting the marginal means into a vector that is the multivariate mean. Example: Let $X=(X_1, X_2, X_3)$ have $\bar{X}_1=3$, $\bar{X}_2=-13$, and $\bar{X}_3=-5$. Then the multivariate mean is $\bar{X} = (3, -13, -5)$. (I think this assumes using \$...

0

I would rather go for Gaussian Mixture Models; you can think of them as multiple Gaussian distributions combined in a probabilistic approach. You still need to define the K parameter, though. GMMs handle non-spherically-shaped data as well as other forms. Here is an example using scikit: https://jakevdp.github.io/PythonDataScienceHandbook/05.12-gaussian-mixtures....

1

The elbow method is a heuristic. There's no "mathematical" definition and you cannot create an algorithm for it, because the point of the method is visually finding the "breaking point" on the plot. This is a subjective criterion, and it often happens that different people end up with different conclusions given the same plots.

0

It seems like the Rayleigh quotient; you can refer to this article: The Rayleigh's principle and the minimax principle for the eigenvalues of a self-adjoint matrix.
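Following up on the Gaussian-mixture suggestion above, a minimal scikit-learn sketch (my addition; the data and parameters are made up for illustration):

```python
# Fit a 3-component Gaussian mixture on synthetic blobs and compare with k-means
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(gmm.means_)            # soft-assignment component means
print(km.cluster_centers_)   # hard-assignment centroids
```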
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8989028334617615, "perplexity": 617.6310787116424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406365.40/warc/CC-MAIN-20200529183529-20200529213529-00599.warc.gz"}
https://swmath.org/software/13053
# MUSCOD-II

MUSCOD-II: A Software Package for Numerical Solution of Optimal Control Problems involving Differential-Algebraic Equations (DAE). The optimization package MUSCOD-II is designed to efficiently and reliably solve optimal control problems for systems described by ordinary differential equations (ODE) or by differential-algebraic equations (DAE) of index one. MUSCOD-II can treat system models formulated either in the gPROMS modeling language (PSE Ltd.), in FORTRAN, or in C, and it has been widely applied to industrial problems, in particular in the field of chemical engineering. The software has the capability to solve highly nonlinear problems with complex equality or inequality constraints on states and controls, e.g., final state constraints, periodicity conditions, or path constraints. Furthermore, a unique multistage formulation allows one to optimize integrated batch processes consisting of several coupled process stages, e.g., a batch reaction step followed by a batch separation step.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8841686248779297, "perplexity": 1119.0511123716255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00513.warc.gz"}
http://www.gradesaver.com/textbooks/science/physics/conceptual-physics-12th-edition/chapter-4-plug-and-chug-page-69/41
## Conceptual Physics (12th Edition) $$0.25 \frac{m}{s^{2}}$$ Calculate the acceleration from the net force. $$a = F/m = (500 N)/(2000 kg) = 0.25 N/kg = 0.25 \frac{m}{s^{2}}$$ This is discussed on pages 63-64.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9469226598739624, "perplexity": 1358.237398059722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218199514.53/warc/CC-MAIN-20170322212959-00102-ip-10-233-31-227.ec2.internal.warc.gz"}
http://talkstats.com/search/3380715/
# Search results

1. ### Little problem that I'm struggling to solve...

The proportion of smokers in a certain population is 25%. What is the probability that the smokers' proportion in a sample of 40 people randomly selected from this population will be below (strictly less than) 20%?

Surely you would use the binomial table: N = 40, P = 0.25, X = 8 (since below 20% =...
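The snippet cuts off mid-thought, but the computation it points to is easy to finish (a sketch of mine, not part of the forum post): strictly less than 20% of 40 people means at most 7 smokers, so the answer is the Binomial(40, 0.25) CDF evaluated at 7.

```python
# P(X <= 7) for X ~ Binomial(n=40, p=0.25), i.e. sample proportion strictly below 20%
from scipy.stats import binom

print(binom.cdf(7, n=40, p=0.25))   # roughly 0.18
```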
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8629951477050781, "perplexity": 626.8776092608766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104354651.73/warc/CC-MAIN-20220704050055-20220704080055-00793.warc.gz"}
http://math.stackexchange.com/questions/750493/whats-wrong-with-my-proof
# What's wrong with my proof?

Let $f:A\to B$ be a function. Let $T_1$ and $T_2$ be subsets of $B$. Show that if $f$ is onto, then $$f^{-1}(T_1)\subset f^{-1}(T_2) \implies T_1\subset T_2$$

I proved it as follows.

Let $x\in f^{-1}(T_1)$. Then $x\in f^{-1}(T_1) \implies x\in f^{-1}(T_2)$, so $f(x)\in T_1 \implies f(x)\in T_2$.

If $f$ is not onto, there is $b$ in $T_1$ such that no element in $T_1$ hits $b$. So, not all elements in $T_1$, $T_2$ have an inverse image under $f$. But since $f$ is onto, all elements in $B$ have their inverse image under $f$. Thus, we can say that all $f(x)\in T_1$ $\implies$ all $f(x)\in T_2$. Thus, $T_1\subset T_2$.

I think this is incorrect but don't know what is wrong.

- To prove your statement, you should start with an element $x \in T_1$ and show it is in $T_2$. Your proof is sort of alright, but your use of the fact that $f$ is onto is somewhat imprecise. – Hrodelbert Apr 12 '14 at 8:14

Thank you so much. I'll try again! – luvmathematics Apr 12 '14 at 8:16

@Hrodelbert How about this? I proved it again as follows. Let $y\in T_1$. Since $f$ is onto, all $y\in T_1$ have an inverse image under $f$. Then there exists $x\in f^{-1}(T_1)$ s.t. $f(x)=y\in T_1$. $x\in f^{-1}(T_1)\subset f^{-1}(T_2)$ $\implies$ $x\in f^{-1}(T_2)$ $\implies$ $f(x)=y\in T_2$ $\implies$ $y\in T_2$. Thus, $T_1\subset T_2$. – luvmathematics Apr 12 '14 at 8:33

This is exactly right! Well done! – Hrodelbert Apr 12 '14 at 8:43

Assume $f$ is onto and that $f^{-1}(T_1)\subset f^{-1}(T_2)$. You want to deduce from these assumptions that $T_1\subset T_2$. You have then to take $b\in T_1$ and prove $b\in T_2$. Since $f$ is onto, there is $a\in A$ such that $f(a)=b$. By definition, $a\in f^{-1}(T_1)$ and so, by assumption, $a\in f^{-1}(T_2)$. By definition of $f^{-1}$ this means $f(a)\in T_2$, so $b\in T_2$.
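A finite brute-force check of the statement (my addition, not part of the thread): enumerate all onto maps between small sets and all pairs of subsets of the codomain.

```python
# Check: f onto and f^{-1}(T1) ⊆ f^{-1}(T2) implies T1 ⊆ T2, over small finite cases
from itertools import product, chain, combinations

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

A, B = range(3), range(2)
for f_vals in product(B, repeat=len(A)):          # all functions f: A -> B
    f = dict(zip(A, f_vals))
    if set(f.values()) != set(B):                 # keep only surjections
        continue
    for T1, T2 in product(subsets(B), repeat=2):
        pre1 = {a for a in A if f[a] in T1}       # f^{-1}(T1)
        pre2 = {a for a in A if f[a] in T2}       # f^{-1}(T2)
        if pre1 <= pre2:
            assert set(T1) <= set(T2)
print("verified for |A| = 3, |B| = 2")
```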
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9795187711715698, "perplexity": 238.78991369941946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093400.45/warc/CC-MAIN-20150627031813-00067-ip-10-179-60-89.ec2.internal.warc.gz"}
https://academy.edulabs.org/mod/glossary/view.php?id=677&mode=letter&hook=L&sortkey=&sortorder=
Electronics (Mike Jaroch)

L

LACING SHUTTLE: A device upon which lacing may be wound to prevent fouling the tape or cord and to aid the lacing process. (Usually made from brass, aluminum, fiber, or plastic) [4].

LAG: The amount one wave is behind another in time; expressed in electrical degrees [2].

LAMINATED CORE: A core built up from thin sheets of metal insulated from each other and used in transformers [2].

LANDS: Conductors or runs on pcbs [14].

LAP WINDING: An armature winding in which opposite ends of each coil are connected to adjoining segments of the commutator so that the windings overlap [5].

LARGE SCALE INTEGRATION (LSI): An integrated circuit containing 1,000 to 2,000 logic gates or up to 64,000 bits of memory [14].

LASER: An acronym for light amplification by stimulated emission of radiation [17].

LAW OF ABSORPTION: In Boolean algebra, the law which states that the odd term will be absorbed when a term is combined by logical multiplication with the logical sum of that term and another term, or when a term is combined by logical addition with the logical product of one term and another term (for example, A(A + B) = A + AB = A) [13].

LAW OF MAGNETISM: Like poles repel; unlike poles attract [1].

LC CAPACITOR-INPUT FILTER: This is the most common type of filter. It is used in a power supply where output current is low and load current is relatively constant [7].
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8233090043067932, "perplexity": 1626.497973340116}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710488.2/warc/CC-MAIN-20221128070816-20221128100816-00080.warc.gz"}
https://www.physicsforums.com/threads/integration-of-odd-power-of-cotangent-multiplied-by-odd-power-of-cosecant.297217/
# Integration of odd power of cotangent multiplied by odd power of cosecant

1. ### gzAbc123 6

1. The problem statement, all variables and given/known data
Describe the strategy you would use to integrate $\cot^m x \,\csc^n x\,dx$ if m and n are odd.

2. Relevant equations
I know the integral of cosecant is $\ln|\csc x - \cot x| + C$. I also know the integral of cotangent is $\ln|\sin x| + C$. But I have no clue how this would apply to odd powers and multiplying them together.

3. The attempt at a solution
I know how to multiply odd powers of sine and cosine, but for cosecant and cotangent I have no clue where to get started. The question isn't asking me to actually integrate, but just to describe how I would integrate. Does this integration parallel the corresponding rules for odd powers and multiplication of tan x and sec x? Help, please.

2. ### lubuntu 473

Doesn't your book describe how to do this? Think of an identity that relates the two trig functions. Also, you're not worried about the integral of each function individually; in this case you want to think about their derivatives so you can use substitution.

3. ### gzAbc123 6

Hey, no, my book only mentions how to solve this for two even powers, and for one odd and one even. It doesn't give any hints about how to integrate when two odd-powered cosecant and cotangent functions are being multiplied. Do you have any other suggestions?

4. ### carlodelmundo 133

(link)

5. ### gzAbc123 6

Thanks for the link :). The only problem is that the Steven and Todd rule doesn't seem to apply when cosecant AND cotangent are used in the same equation. Is it there somewhere?

6. ### carlodelmundo 133

Did you see page 343, Problem #2?

7. ### Hurkyl 16,090 Staff Emeritus

More than just parallel; armed with the power of trig identities, you can make them the same problem.

8. ### gzAbc123 6

It says pages 343-344 are not part of this book preview... what the?

9. ### carlodelmundo 133

My bad. Pp. 323, #2.

10. ### gzAbc123 6

But isn't that question for tangent and secant? Is it basically the same set of steps for cotangent and cosecant? Or would a few steps need to be added?

11. ### carlodelmundo 133

Honestly... it's the same steps. The only difference b/w cot and csc vs. tan and sec is that the derivatives/antiderivatives must take into account the negative (-).
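For reference, the standard strategy the thread is circling around can be written out in two lines (this is the usual textbook substitution, not a quote from any post): with $m$ and $n$ odd, peel off one factor of $\csc x\cot x$, convert the remaining even power of $\cot x$ via $\cot^2 x=\csc^2 x-1$, and substitute $u=\csc x$:

$$\int \cot^{m}x\,\csc^{n}x\,dx=\int\left(\csc^{2}x-1\right)^{\frac{m-1}{2}}\csc^{n-1}x\,\bigl(\csc x\cot x\,dx\bigr)=-\int\left(u^{2}-1\right)^{\frac{m-1}{2}}u^{\,n-1}\,du,$$

which is a polynomial integral in $u$. The minus sign comes from $du=-\csc x\cot x\,dx$, which is exactly the extra negative mentioned in the last post; otherwise the steps mirror the odd-power $\tan$/$\sec$ case.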
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9095565676689148, "perplexity": 1246.7754469541808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462313.6/warc/CC-MAIN-20150226074102-00312-ip-10-28-5-156.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/142093-integration-help.html
1. ## Integration Help!

We got a homework question about the take-off of a rocket, its acceleration proportional to fuel usage. We are given the acceleration as

$$\frac{d^2h}{dt^2} = \frac{kb}{m_0 - bt},$$

where $dm/dt = -b$ means the rate at which the total mass of the rocket changes with time equals the fuel consumption rate. Integrating $dm/dt = -b$ gives $m(t) = -bt + \text{constant}$, where the mass at time 0 equals the constant, $m_0$.

The initial mass of the rocket is $m_0$; the initial mass of the fuel is $Xm_0$, where $X$ is an unspecified fraction between 0 and 1. At some point in time, say at $t = t^{*}$, the fuel runs out and the mass of the rocket will be its initial mass minus the initial mass of fuel. This is described by

$$m(t^{*}) = -bt^{*} + m_0, \qquad m(t^{*}) = m_0 - Xm_0.$$

$k$ is an undefined constant. I need to integrate the acceleration once to find the velocity at a specific time, and then integrate again to find the height at a specific time. I need to prove

$$h(t^{*}) = \frac{km_0}{b}\left[(1-X)\ln(1-X) + X\left(1 - \frac{g}{2kb}Xm_0\right)\right]$$

and then find the velocity at the end of the boost phase, $v(t^{*})$.

2. Originally Posted by mathsismyfriend: we are given the acceleration as above.
Is all of that in the denominator? Is it $\frac{d^2h}{dt^2}= \frac{kb}{m_0- bt}$? Okay, what have you done so far?

3. I have tried rearranging formulas: $bt^{*} = Xm_0$ and $m_0 - bt^{*} = m_0 - Xm_0$, so that

$$v(t) = \int \left(\frac{kb}{m_0 - Xm_0} - g\right) dt,$$

or I tried

$$v(t) = \int \left(\frac{-k\,(dm/dt)}{m_0 + t\,(dm/dt)} - g\right) dt.$$

4. As a really complex problem, I didn't imagine anyone would be able to help.
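To close the loop, here is a sketch of the requested integration (my own derivation, assuming the equation of motion during the boost phase includes the gravity term, $\ddot h = \frac{kb}{m_0-bt}-g$, which the target formula requires):

$$v(t)=\int_0^t\!\left(\frac{kb}{m_0-bs}-g\right)ds=-k\ln\!\left(1-\frac{bt}{m_0}\right)-gt,$$

$$h(t)=\int_0^t v(s)\,ds=\frac{km_0}{b}\Bigl[u\ln u+(1-u)\Bigr]-\frac{g t^{2}}{2},\qquad u=1-\frac{bt}{m_0}.$$

The fuel runs out when $bt^{*}=Xm_0$, i.e. $t^{*}=Xm_0/b$ and $u=1-X$, giving

$$v(t^{*})=-k\ln(1-X)-\frac{gXm_0}{b},\qquad h(t^{*})=\frac{km_0}{b}\left[(1-X)\ln(1-X)+X\left(1-\frac{g}{2kb}Xm_0\right)\right],$$

which matches the expression to be proved.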
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871165752410889, "perplexity": 1798.1706080091626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718284.75/warc/CC-MAIN-20161020183838-00030-ip-10-171-6-4.ec2.internal.warc.gz"}
https://link.springer.com/chapter/10.1007/978-1-4757-3878-0_16
# The $\overline{\partial}$-Operator in Smoothly Bounded Domains

• John Wermer

Part of the Graduate Texts in Mathematics book series (GTM, volume 35)

## Abstract

Let $\Omega$ be a bounded open subset of $\mathbb{C}^n$. We are essentially concerned with the following problem: given a form $f$ of type $(0,1)$ on $\Omega$ with $\overline{\partial} f = 0$, find a function $u$ on $\Omega$ such that $\overline{\partial} u = f$.

## Keywords

Hilbert Space; Complex Variable; Banach Algebra; Dense Subspace; Smooth Positive Function
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9933678507804871, "perplexity": 1554.2532380639634}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645604.18/warc/CC-MAIN-20180318091225-20180318111225-00156.warc.gz"}
http://ilcdoc.linearcollider.org/collection/Theses?ln=ja
# Theses

2016-07-12 08:14
New Physics at the TeV Scale / Chakdar, Shreyashi [arXiv:1604.07358] The Standard Model of particle physics is assumed to be a low-energy effective theory with new physics theoretically motivated to be around the TeV scale. [...] External link: http://arxiv.org/pdf/1604.07358.pdf

2015-09-23 03:49
Hadron Collider Tests of Neutrino Mass-Generating Mechanisms / Ruiz, Richard E [arXiv:1509.06375] The Standard Model of particle physics (SM) is presently the best description of nature at small distances and high energies. [...] External link: http://arxiv.org/pdf/1509.06375.pdf

2013-12-02 18:07
Effective Models for Dark Matter at the International Linear Collider / Schmeier, Daniel [arXiv:1308.4409] Weakly interacting massive particles (WIMPs) form a promising solution to the dark matter problem and many experiments are now searching for these particles. [...] External link: http://arxiv.org/pdf/1308.4409.pdf

2011-06-25 00:01
Phenomenology of the minimal $B-L$ Model: the Higgs sector at the Large Hadron Collider and future Linear Colliders / Pruna, Giovanni Marco [arXiv:1106.4691] This Thesis is devoted to the study of the phenomenology of the Higgs sector of the minimal $B-L$ extension of the Standard Model at present and future colliders. [...] External link: http://arxiv.org/pdf/1106.4691.pdf

2010-05-21 19:55
Measurements and simulations of MAPS (Monolithic Active Pixel Sensors) response to charged particles - a study towards a vertex detector at the ILC / Maczewski, Lukasz [arXiv:1005.3710] The International Linear Collider (ILC) is a project of an electron-positron (e+e-) linear collider with a centre-of-mass energy of 200-500 GeV. [...] External link: 1005.3710.pdf

2009-02-20 23:31
Flavour Changing at Colliders in the Effective Theory Approach / Guedes, Renato Batista [arXiv:0811.2136] In this thesis we discuss the combined effects of strong and electroweak FCNC effective operators in top quark physics at the CERN LHC and lepton flavour violation at the ILC with dimension-six effective operators. External link: 0811.2136.pdf

2009-02-20 22:18
Electroweak Contributions to Thermal Gravitino Production / Pradler, Josef [MPP-2006-257] [arXiv:0708.2786] At high temperatures, gravitinos are generated in inelastic scattering processes with particles that are in thermal equilibrium with the hot primordial plasma. [...] External link: 0708.2786.pdf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9406203031539917, "perplexity": 3167.3036208546105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898577.79/warc/CC-MAIN-20201028132718-20201028162718-00633.warc.gz"}
http://www.physicsforums.com/showthread.php?t=551826
# Focal length eyepiece for telescope

by grouper
Tags: eyepiece, focal, length, telescope

P: 52
1. The problem statement, all variables and given/known data
Suppose that you wish to construct a telescope that can resolve features 7.0 km across on the Moon, 384,000 km away. You have a 2.2 m-focal-length objective lens whose diameter is 10.5 cm. What focal-length eyepiece is needed if your eye can resolve objects 0.10 mm apart at a distance of 25 cm?

2. Relevant equations
$M = f_{objective}/f_{eyepiece}$
Resolving power: $s = f\theta = (1.22\,\lambda\, f)/D$, where s = distance between objects, f = focal length, D = diameter, and θ = angle between objects; equivalently $\theta = (1.22\,\lambda)/D$
$1/f = 1/d_o + 1/d_i$
$m = h_i/h_o = -d_i/d_o$

3. The attempt at a solution
I'm sure this problem uses the resolution equations somehow, but I'm not sure how to go through two lenses with that. I tried to figure it out another way using the lens and magnification equations above. I used $d_o = 3.84\times 10^8$ m and f = 2.2 m to figure out that $d_i \approx 2.2$ m, and then used the magnification equation $h_i/h_o = -d_i/d_o$ to get $h_i = -4.0104\times 10^{-5}$ m. This image can then be used as the object for the eyepiece, and since we want $h_i = 0.0001$ m (the distance apart for the final image is supposed to be 0.10 mm) and $d_i = -0.25$ m (25 cm from the person, negative because the image will be on the same side as the object if $d_o$ is positive), we can again use $h_i/h_o = -d_i/d_o$ to get $d_o = -0.10026$ m. With $1/f = 1/d_o + 1/d_i$, I got f = -0.07156 m. Not only did I go about this in a roundabout way that I'm sure the problem writer did not intend, my method also yielded the wrong answer. (I tried a positive value for f as well, but that's not correct either.) I'm not concerned with correcting what I did wrong above, because I'm sure there's a simpler and more direct way to get the correct answer, but I'm not seeing it. Any help would be appreciated, thanks.

HW Helper Thanks P: 10,674
Quote by grouper: This image can then be used as the object for the eyepiece... we can again use $h_i/h_o = -d_i/d_o$ to get $d_o = -0.10026$ m.
The object distance is positive for the eyepiece: $d_o = 0.10026$ m. ehild

P: 52 Using $d_o = 0.10026$ m and $d_i = -0.25$ m yields f = 0.167 m, which is also incorrect. Our AI hinted that all of the homework problems in this assignment use Rayleigh's criterion, though, so I think this is the wrong way to go about it. I'm not sure how to apply Rayleigh to this situation, though.

HW Helper Thanks P: 10,674
Focal length eyepiece for telescope
You can use the resolution formula for the objective lens to find out if those two points 7 km apart on the Moon are resolved in the first image. As the angle of view, $(7/3.84)\times 10^{-5}$, is greater than the resolution angle of the objective lens for visible light, $\theta = (1.22\,\lambda)/D$, it is resolved and can be magnified further. (For a wavelength of 600 nm the resolution angle is $7\times 10^{-6}$.) The problem asks for the focal length of the eyepiece, and the correct result is f = 0.167 m, or 16.7 cm. You would need the resolution formula to find the proper diameter of the eyepiece lens. The angle of view of the image is about $6\times 10^{-4}$ radian, so $\theta = (1.22\,\lambda)/D$ must be smaller than that: $D > (1.22\,\lambda)/(6\times 10^{-4}) = 1.22\times 10^{-3}$ m; that is, the diameter of the eyepiece should be greater than 1.22 mm, but that was not the question. I wonder why your teacher said your result was wrong.
There can be other (approximate) equations for a telescope, but you used the basic equations, which are true and give a correct result. ehild

P: 52 Ok, I understand how you did that, and I agree the answer should be f = 0.167 m either way. It's an online program, though, and it keeps telling me that that answer is incorrect. It's due tonight and I'm not sure what else to do with this problem.

HW Helper Thanks P: 10,674 It might also be the number of significant digits or the unit. Try 0.17 m or 16.7 cm... ehild

P: 52 No, it's in meters, and it automatically only grades the first two significant figures. I'm not sure what's wrong. We both must be making an assumption that isn't correct, since we both got the same answer. I have until tonight; I'll look around elsewhere on the web for some hints, but if you think of anything else don't hesitate to drop a note here. Thanks for all the help so far as well.

P: 52 Luckily we're allowed to guess as many times as we want, so I just started plugging in numbers because I figured it had to be around 17 cm. I eventually got the right answer with 10 cm. Not sure how they got that, though; I'll have to bring it up with my professor. This online thing we use hasn't always been 100% correct.

HW Helper Thanks P: 10,674
http://en.wikipedia.org/wiki/Magnification
Angular magnification — For optical instruments with an eyepiece, the linear dimension of the image seen in the eyepiece (a virtual image at infinite distance) cannot be given, so size means the angle subtended by the object at the focal point (angular size). Strictly speaking, one should take the tangent of that angle (in practice, this makes a difference only if the angle is larger than a few degrees). Thus, angular magnification is given by
$$\mathrm{MA}=\frac{\tan \varepsilon}{\tan \varepsilon_0},$$
where $\varepsilon_0$ is the angle subtended by the object at the front focal point of the objective and $\varepsilon$ is the angle subtended by the image at the rear focal point of the eyepiece.
Telescope: the angular magnification is given by
$$M= \frac{f_o}{f_e},$$
where $f_o$ is the focal length of the objective lens and $f_e$ is the focal length of the eyepiece. (It is assumed that the focal point of the objective is at the same place as the focal point of the eyepiece, and one views the image from the rear focal point of the eyepiece. See the picture at http://en.wikipedia.org/wiki/File:Kepschem.png.)

The angle subtended by the object is $7/(3.84\times 10^{5})$ and the angle subtended by the image at the rear focal point of the eyepiece is $0.01/25 = 4\times 10^{-4}$. So the angular magnification is $f_o/f_e = 22$, that is, $f_e = 0.10$ m.

Uhhhh... I never look into a telescope from the focal point of the eyepiece. I do not understand why the magnification is defined this way. ehild

P: 52 Yeah, I don't know either. I mean, that explanation makes sense in the context of defining everything that way, but it goes against what we learned in class about telescope design and use. Oh well. At least I got the point for that problem. Thanks

PF Gold P: 12,203
Quote by ehild: Uhhhh... I never look into a telescope from the focal point of the eyepiece. I do not understand why the magnification is defined this way.
Do you not arrange for the image to appear at infinity? It's the least tiring place to put it.

HW Helper Thanks P: 10,674
Quote by sophiecentaur: Do you not arrange for the image to appear at infinity? It's the least tiring place to put it.
Well, you are right that looking at infinity with relaxed eyes is the least tiring.
But I usually want the image at the distance of my clear vision when reading or looking into a microscope or telescope, wanting to see clear details, and I do not mind if my eyes are not relaxed. And I usually put my eyes very near the eyepiece of a telescope, and my glasses are very near my eyes. It is different with a magnifying glass: I put the object near the focal point when I want maximum magnification, with the image at my far sight. When a problem asks "What focal-length eyepiece is needed if your eye can resolve objects 0.10 mm apart at a distance of 25 cm?", I understand it to mean that the image is at 25 cm distance from my eyes, and the distance of the image from the eyepiece lens is the same, since I usually put my eyes right at the lens. I accept that the magnification of a telescope is defined this way, but I do not use the arrangement shown in the picture when I want to see something clearly. ehild

Thanks PF Gold P: 12,203
Quote by ehild: When a problem asks "What focal-length eyepiece is needed if your eye can resolve objects 0.10 mm apart at a distance of 25 cm?", I understand it to mean that the image is at 25 cm distance from my eyes... I accept that the magnification of a telescope is defined this way, but I do not use the arrangement shown in the picture when I want to see something clearly.
Isn't that just a way of describing the angular resolution? I don't think it constitutes a suggestion that anyone would necessarily put the image just 25 cm in front of their eye, given the option of a more relaxing distance (for hours of star gazing). Would you have a better suggestion for specifying angular magnification? I can't think of one involving fewer variables. There is a similar problem in specifying the magnification factor of a magnifying glass (loupe). They stamp a number on the side of a loupe, but it only refers to a particular eye/lens setup - it's limited by just how good your powers of accommodation are.

HW Helper Thanks P: 10,674
You are right; I understood at last how simple it is, and I drew a clearer picture, showing only those rays which go through the centres of the lenses and do not change direction. The angular size of an object is the angle these rays enclose. The focal distance of the objective is $f_o$ and that of the eyepiece is $f_e$. The real image of the distant object appears at the focal point of the objective lens, and the eyepiece is placed with its focal point at the same place, so the virtual image is at infinity. The object of size $S_o$ is at distance $d_o$ from the objective lens. The size of the real image is $S_i$. The angular size of the object is the same as that of the image at distance $f_o$ from the objective lens:
$$\alpha=\frac{S_o}{d_o}=\frac{S_i}{f_o}.$$
The angular size of the image seen from the centre of the eyepiece is
$$\beta=\frac{S_i}{f_e},$$
which is the same as the angle subtended by the virtual image at infinity, so the angular magnification is
$$M_A= \frac{\beta}{\alpha}=\frac{S_i/f_e}{S_i/f_o}=\frac{f_o}{f_e}.$$
The angular size of the final image was given as the angle of an object of size S = 0.1 mm at 0.25 m distance from the eyepiece:
$$\beta=\frac{S}{0.25}=4 \cdot 10^{-4}.$$
The angular size of the object on the Moon was
$$\alpha=\frac{7000}{3.84 \cdot 10^{8}}=1.823 \cdot 10^{-5}.$$
The angular magnification should be
$$M_A=\frac{\beta}{\alpha}=\frac{4 \cdot 10^{-4}}{1.823 \cdot 10^{-5}}=22=\frac{f_o}{f_e},$$
which means $f_e = 0.1$ m.
ehild

Sci Advisor Thanks PF Gold P: 12,203 That simplified diagram shows it all nicely. Takes me back to Mr. Scales, my hero.

HW Helper Thanks P: 10,674
Quote by sophiecentaur: That simplified diagram shows it all nicely. Takes me back to Mr. Scales, my hero.
Thank you. Who is Mr Scales? ehild

Sci Advisor Thanks PF Gold P: 12,203 My legendary A-level Physics teacher. respect
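A quick numeric check of the accepted approach in the thread (my sketch, not from any post; variable names are mine):

```python
import math

# Data from the problem statement
feature = 7.0e3        # resolvable feature on the Moon, m
distance = 3.84e8      # Earth-Moon distance, m
eye_res = 0.10e-3      # eye resolution at 25 cm, m
eye_dist = 0.25        # viewing distance, m
f_objective = 2.2      # objective focal length, m

alpha = feature / distance    # angle subtended by the object
beta = eye_res / eye_dist     # smallest angle the eye resolves
M = beta / alpha              # required angular magnification
f_eyepiece = f_objective / M
print(f"M = {M:.1f}, eyepiece focal length = {f_eyepiece*100:.1f} cm")
# -> M = 21.9, eyepiece focal length = 10.0 cm
```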
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8610343337059021, "perplexity": 508.6561356186334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535925433.20/warc/CC-MAIN-20140901014525-00413-ip-10-180-136-8.ec2.internal.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=53469
## Zero Order and k [ENDORSED]

$\frac{d[R]}{dt}=-k; \quad [R]=-kt + [R]_{0}; \quad t_{\frac{1}{2}}=\frac{[R]_{0}}{2k}$

Michelle_Nguyen_3F Posts: 40 Joined: Wed Sep 21, 2016 2:59 pm

### Zero Order and k

Hello! So I know that if we plot [A] vs. time and get a straight line, then the zero-order reaction has a slope of −k. However, is it possible for k to be positive as well? Thank you!

Timothy_Yu_Dis3A Posts: 11 Joined: Wed Sep 21, 2016 2:58 pm Been upvoted: 1 time

### Re: Zero Order and k  [ENDORSED]

For [A] vs. time in a zero-order reaction, I think k is always equal to the negative of the slope. We only get a positive slope when we have a second-order reaction, in the plot of 1/[A] vs. time.

Ariana de Souza 4C Posts: 99 Joined: Wed Sep 21, 2016 2:56 pm

### Re: Zero Order and k

But the slope is −k, so wouldn't k have to be positive for there to be a negative slope?
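The last question answers itself numerically: fitting [A] vs. t gives a negative slope, and k is its negative, hence positive. A small sketch (my own illustration with made-up numbers, not from the thread):

```python
import numpy as np

# Zero-order kinetics: [A](t) = [A]0 - k t, so a plot of [A] vs t is a
# line with slope -k. The fitted slope comes out negative; the rate
# constant k is its negative, hence positive.
k_true, A0 = 0.5, 10.0            # mol/(L s), mol/L (illustrative)
t = np.linspace(0, 10, 11)
A = A0 - k_true * t

slope, intercept = np.polyfit(t, A, 1)
k = -slope
print(f"slope = {slope:.2f}, k = {k:.2f}")   # slope = -0.50, k = 0.50
```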
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429349303245544, "perplexity": 1934.8152013529725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522150.18/warc/CC-MAIN-20210121004224-20210121034224-00403.warc.gz"}
https://indico.bnl.gov/event/10782/
# David Kaplan "[HET seminar] Relaxing the Cosmological Constant and Dark Energy Radiation"

US/Eastern
https://bnl.zoomgov.com/j/1616949369?pwd=dXJzMnlDS0ZPWDJvM0Zyb2ppbjc0UT09

Description
The smallness of the cosmological constant has yet to be understood in our current theories of nature. I will argue that a dynamical (and non-anthropic) explanation suggests that today's dark energy has a dynamical component. I will show that if dark energy evolves in time, its dynamical component could be dominated by a bath of dark radiation. Within current constraints this radiation could have up to $\sim 10^3$ times more energy density than the cosmic microwave background. I will show models that produce different forms of dark radiation, such as hidden photons, milli-charged particles, and even Standard Model neutrinos. I will also show that the late-time cosmology is potentially distinguishable from a cosmological constant or normal quintessence. If the radiation couples to the Standard Model, it may be directly testable in laboratory experiments!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8057659268379211, "perplexity": 1573.6175975583956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00489.warc.gz"}
https://rd.springer.com/article/10.1007%2Fs11139-018-0034-7
The Ramanujan Journal, Volume 47, Issue 2, pp 427–433

# On the sum of the reciprocals of the differences between consecutive primes

• Nian Hong Zhou

Article

## Abstract

Let $p_n$ denote the $n$-th prime number, and let $d_n=p_{n+1}-p_{n}$. Under the Hardy–Littlewood prime-pair conjecture, we prove

$$\sum_{n\le X}\frac{\log^{\alpha} d_n}{d_n}\sim \begin{cases} \dfrac{X\log\log\log X}{\log X}, & \alpha=-1,\\[1.5ex] \dfrac{X}{\log X}\,\dfrac{(\log\log X)^{1+\alpha}}{1+\alpha}, & \alpha>-1, \end{cases}$$

and establish asymptotic properties for some series of $d_n$ without the Hardy–Littlewood prime-pair conjecture.

## Keywords

Differences between consecutive primes; Hardy–Littlewood prime-pair conjecture; Applications of sieve methods

## Mathematics Subject Classification

Primary 11N05; Secondary 11N36, 11A41

## Notes

### Acknowledgements

The author would like to thank the anonymous referees and the editors for their very helpful comments and suggestions. The author also thanks Min-Jie Luo for offering many useful suggestions and help.

## Authors and Affiliations

1. Department of Mathematics, East China Normal University, Shanghai, People's Republic of China
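A rough numeric look at the $\alpha=0$ case of the asymptotic above (my own sketch, not from the paper): $\sum_{n\le X} 1/d_n$ should grow like $(X/\log X)\log\log X$. The statement is conditional on Hardy–Littlewood and converges slowly, so at modest $X$ one only expects the right order of magnitude.

```python
import math
from sympy import primerange

X = 10**5
# First X+1 primes (the bound 2_000_000 comfortably covers them)
primes = list(primerange(2, 2_000_000))[: X + 1]
s = sum(1.0 / (q - p) for p, q in zip(primes, primes[1:]))
rhs = X / math.log(X) * math.log(math.log(X))
print(f"empirical sum = {s:.0f}, predicted order = {rhs:.0f}")
```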
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9404762387275696, "perplexity": 3872.627387346037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826715.45/warc/CC-MAIN-20181215035757-20181215061757-00087.warc.gz"}
https://www.physicsforums.com/threads/forces-applying-to-antimatter.38969/
Forces Applying to Antimatter

1. Aug 10, 2004 — IooqXpooI
I don't know if this is true or not, and I have a feeling that it has already been proven otherwise, but I think that the forces applying to antimatter are flipped. For instance, a positron and an electron would repel, and a positron and a proton would attract (don't mind my use of the electron; it was the only negatively charged particle of the matter family that I could think of). If this is so, then two like pairs of different matter families would attract, which would be quite interesting (so interesting that it has probably been proven wrong, since I would have heard of it no matter how much Physics news I miss). Imagine this: you have a positron on one side of a box, with that side charged positively with matter so it is attracted, and a proton in the same state on the other (wall charged negatively, etc.). Now imagine that you insert an electron. If you do so, it will be repelled from the antimatter and attracted to the proton with $$\frac{kQq}{r^2} + \frac{r^2}{kQq}$$ — the repulsion term coming from my (admittedly skewed) logic that the inverted formula is the antimatter version (thus stating that antimatter uses reverse gravity as we use gravity, etc.). So, evaluate this and try not to give too much criticism, for I know that it is hard to hold back with something like this... ;)
Last edited: Aug 10, 2004

2. Aug 10, 2004 — mathman
Don't read too much into the name anti-matter. Positrons are positively charged and act that way. They are attracted to electrons. Collisions between electrons and positrons are quite common, producing two gamma rays (511 keV).

3. Aug 10, 2004 — Vern
Mathman is correct IMHO. Antimatter acts just like matter until it gets close to matter; then everything comes unglued. Antimatter has normal gravity, same as regular matter. Vern

4. Aug 10, 2004 — marlon
Besides, all these things on matter and anti-matter come from Dirac and are well covered by QFT, I guess. marlon

5. Aug 10, 2004 — Chronos
Anti-matter does not produce anti-gravity, just the ordinary attractive version.

6. Aug 10, 2004 — sparkster
Photons are their own anti-particles, and gravity affects them in the normal way.

7. Aug 12, 2004 — IooqXpooI
Yes yes, I know this, but I wasn't sure about the forces applying to them... Well, now that it has been confirmed, you may ignore this theory... ;)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9122134447097778, "perplexity": 969.5748002553999}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121121.5/warc/CC-MAIN-20170423031201-00270-ip-10-145-167-34.ec2.internal.warc.gz"}
https://en.citizendium.org/wiki/Cauchy-Schwarz_inequality
Cauchy-Schwarz inequality

In mathematics, the Cauchy-Schwarz inequality is a fundamental and ubiquitously used inequality that relates the absolute value of the inner product of two elements of an inner product space to the magnitudes of the two vectors. It is named in honor of the French mathematician Augustin-Louis Cauchy and the German mathematician Hermann Amandus Schwarz[1].

The inequality for real numbers

The simplest form of the inequality, and the first one considered historically, states that

$$(x_1y_1 + x_2y_2 + \cdots + x_ny_n)^2 \leq (x_1^2 + x_2^2 + \cdots + x_n^2)(y_1^2 + y_2^2 + \cdots + y_n^2)$$

for all real numbers $x_1,\dots,x_n,\ y_1,\dots,y_n$ (where $n$ is an arbitrary positive integer). Furthermore, the inequality is in fact an equality,

$$(x_1y_1 + x_2y_2 + \cdots + x_ny_n)^2 = (x_1^2 + x_2^2 + \cdots + x_n^2)(y_1^2 + y_2^2 + \cdots + y_n^2),$$

if and only if there is a number $C$ such that $x_i = Cy_i$ for all $i$.

The inequality for inner product spaces

Let $V$ be a complex inner product space with inner product $\langle\cdot,\cdot\rangle$. Then for any two elements $x, y \in V$ it holds that

$$|\langle x,y\rangle| \leq \|x\|\,\|y\|, \quad (1)$$

where $\|a\| = \langle a,a\rangle^{1/2}$ for all $a \in V$. Furthermore, equality in (1) holds if and only if the vectors $x$ and $y$ are linearly dependent (in this case, proportional one to the other).

If $V$ is the Euclidean space $\mathbf{R}^n$, whose inner product is defined by

$$\langle x,y\rangle = \sum_{i=1}^{n} x_i y_i,$$

then (1) yields the inequality for real numbers mentioned in the previous section.

Another important example is where $V$ is the space $L^2([a,b])$. In this case, the Cauchy-Schwarz inequality states that

$$\left(\int_a^b f(x)g(x)\,\mathrm{d}x\right)^2 \leq \int_a^b \bigl(f(x)\bigr)^2\,\mathrm{d}x \cdot \int_a^b \bigl(g(x)\bigr)^2\,\mathrm{d}x$$

for all real functions $f$ and $g$ in $L^2([a,b])$.

Proof of the inequality

A standard yet clever idea for a proof of the Cauchy-Schwarz inequality for inner product spaces is to exploit the fact that the inner product induces a quadratic form on $V$. Let $x, y$ be some fixed pair of vectors in $V$ and let $\phi(x,y)$ be the argument of the complex number $\langle x,y\rangle$. Now, consider the expression

$$f(t) = \langle x + te^{i\phi(x,y)}y,\; x + te^{i\phi(x,y)}y\rangle$$

for any real number $t$, and notice that, by the properties of a complex inner product, $f$ is a quadratic function of $t$. Moreover, $f$ is non-negative: $f(t)\geq 0$ for all $t$. Expanding the expression for $f$ gives the following:

$$f(t) = \|x\|^2 + te^{i\phi(x,y)}\langle y,x\rangle + te^{-i\phi(x,y)}\langle x,y\rangle + t^2\|y\|^2 = \|x\|^2 + 2t|\langle x,y\rangle| + t^2\|y\|^2.$$

Since $f$ is a non-negative quadratic function of $t$, it follows that the discriminant of $f$ is non-positive.
That is,

$$4|\langle x,y\rangle|^2 - 4\|x\|^2\|y\|^2 = 4\left(|\langle x,y\rangle|^2 - \|x\|^2\|y\|^2\right) \leq 0,$$

from which (1) follows immediately.

References

1. Biography at MacTutor History of Mathematics, John J. O'Connor and Edmund F. Robertson, School of Mathematics and Statistics, University of St Andrews, Scotland.
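A quick numeric sanity check of the inequality and its equality case for complex vectors (my own illustration; random vectors with the standard Hermitian inner product):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5) + 1j * rng.normal(size=5)
y = rng.normal(size=5) + 1j * rng.normal(size=5)

lhs = abs(np.vdot(x, y))                      # |<x, y>| (vdot conjugates x)
rhs = np.linalg.norm(x) * np.linalg.norm(y)   # ||x|| ||y||
print(lhs <= rhs + 1e-12)                     # True

# Equality holds exactly when x and y are proportional:
print(abs(np.vdot(x, 3j * x)) - 3 * np.linalg.norm(x)**2)  # ~0
```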
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 20, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9879714846611023, "perplexity": 304.8419941949385}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00366.warc.gz"}
https://en.wikipedia.org/wiki/Bulk_modulus
# Bulk modulus

(Figure: illustration of uniform compression.)

The bulk modulus ($K$ or $B$) of a substance is a measure of how compressible that substance is. It is defined as the ratio of an infinitesimal pressure increase to the resulting relative decrease of the volume. [1]

## Definition

The bulk modulus $K > 0$ can be formally defined by the equation

$$K = -V \frac{\mathrm{d}P}{\mathrm{d}V},$$

where $P$ is pressure, $V$ is volume, and $\mathrm{d}P/\mathrm{d}V$ denotes the derivative of pressure with respect to volume. Equivalently,

$$K = \rho \frac{\mathrm{d}P}{\mathrm{d}\rho},$$

where $\rho$ is density and $\mathrm{d}P/\mathrm{d}\rho$ denotes the derivative of pressure with respect to density (i.e., the rate of change of pressure with density).

The inverse of the bulk modulus gives a substance's compressibility. Other moduli describe the material's response (strain) to other kinds of stress: the shear modulus describes the response to shear, and Young's modulus describes the response to linear stress. For a fluid, only the bulk modulus is meaningful. For an anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized Hooke's law.

## Thermodynamic relation

Strictly speaking, the bulk modulus is a thermodynamic quantity, and in order to specify a bulk modulus it is necessary to specify how the temperature varies during compression: constant-temperature (isothermal $K_T$), constant-entropy (isentropic $K_S$), and other variations are possible. Such distinctions are especially relevant for gases. For an ideal gas, the isentropic bulk modulus $K_S$ is given by

$$K_S = \gamma\,p$$

and the isothermal bulk modulus $K_T$ is given by

$$K_T = p,$$

where $\gamma$ is the heat capacity ratio and $p$ is the pressure. When the gas is not ideal, these equations give only an approximation of the bulk modulus. In a fluid, the bulk modulus $K$ and the density $\rho$ determine the speed of sound $c$ (pressure waves), according to the Newton-Laplace formula

$$c = \sqrt{\frac{K}{\rho}}.$$

In solids, $K_S$ and $K_T$ have very similar values. Solids can also sustain transverse waves: for these materials one additional elastic modulus, for example the shear modulus, is needed to determine wave speeds.

## Measurement

It is possible to measure the bulk modulus using powder diffraction under applied pressure. The bulk modulus of a fluid characterizes how strongly its volume responds to pressure.

## Selected values

Approximate bulk modulus (K) for common materials:

| Material | Bulk modulus |
|---|---|
| Steel | 160 GPa (23.2×10⁶ psi) |
| Diamond (at 4 K) [2] | 443 GPa (64×10⁶ psi) |
| Water | 2.2×10⁹ Pa (value increases at higher pressures) |
| Methanol | 8.23×10⁸ Pa (at 20 °C and 1 atm) |
| Air | 1.42×10⁵ Pa (adiabatic bulk modulus) |
| Air | 1.01×10⁵ Pa (constant-temperature bulk modulus) |
| Solid helium | 5×10⁷ Pa (approximate) |

A material with a bulk modulus of 35 GPa loses one percent of its volume when subjected to an external pressure of 0.35 GPa (~3500 bar).

(Figure: influences of selected glass component additions on the bulk modulus of a specific base glass.[3])

## References

1. "Bulk Elastic Properties". hyperphysics. Georgia State University.
2. Page 52 of "Introduction to Solid State Physics, 8th edition" by Charles Kittel, 2005, ISBN 0-471-41526-X.
3. Fluegel, Alexander. "Bulk modulus calculation of glasses". glassproperties.com.
## Conversion formulas

Homogeneous isotropic linear elastic materials have their elastic properties uniquely determined by any two moduli among these; thus, given any two, any other of the elastic moduli can be calculated according to these formulas.

| Given | $K$ | $E$ | $\lambda$ | $G$ | $\nu$ | $M$ | Notes |
|---|---|---|---|---|---|---|---|
| $(K,\,E)$ | $K$ | $E$ | $\tfrac{3K(3K-E)}{9K-E}$ | $\tfrac{3KE}{9K-E}$ | $\tfrac{3K-E}{6K}$ | $\tfrac{3K(3K+E)}{9K-E}$ | |
| $(K,\,\lambda)$ | $K$ | $\tfrac{9K(K-\lambda)}{3K-\lambda}$ | $\lambda$ | $\tfrac{3(K-\lambda)}{2}$ | $\tfrac{\lambda}{3K-\lambda}$ | $3K-2\lambda$ | |
| $(K,\,G)$ | $K$ | $\tfrac{9KG}{3K+G}$ | $K-\tfrac{2G}{3}$ | $G$ | $\tfrac{3K-2G}{2(3K+G)}$ | $K+\tfrac{4G}{3}$ | |
| $(K,\,\nu)$ | $K$ | $3K(1-2\nu)$ | $\tfrac{3K\nu}{1+\nu}$ | $\tfrac{3K(1-2\nu)}{2(1+\nu)}$ | $\nu$ | $\tfrac{3K(1-\nu)}{1+\nu}$ | |
| $(K,\,M)$ | $K$ | $\tfrac{9K(M-K)}{3K+M}$ | $\tfrac{3K-M}{2}$ | $\tfrac{3(M-K)}{4}$ | $\tfrac{3K-M}{3K+M}$ | $M$ | |
| $(E,\,\lambda)$ | $\tfrac{E+3\lambda+R}{6}$ | $E$ | $\lambda$ | $\tfrac{E-3\lambda+R}{4}$ | $\tfrac{2\lambda}{E+\lambda+R}$ | $\tfrac{E-\lambda+R}{2}$ | $R=\sqrt{E^2+9\lambda^2+2E\lambda}$ |
| $(E,\,G)$ | $\tfrac{EG}{3(3G-E)}$ | $E$ | $\tfrac{G(E-2G)}{3G-E}$ | $G$ | $\tfrac{E}{2G}-1$ | $\tfrac{G(4G-E)}{3G-E}$ | |
| $(E,\,\nu)$ | $\tfrac{E}{3(1-2\nu)}$ | $E$ | $\tfrac{E\nu}{(1+\nu)(1-2\nu)}$ | $\tfrac{E}{2(1+\nu)}$ | $\nu$ | $\tfrac{E(1-\nu)}{(1+\nu)(1-2\nu)}$ | |
| $(E,\,M)$ | $\tfrac{3M-E+S}{6}$ | $E$ | $\tfrac{M-E+S}{4}$ | $\tfrac{3M+E-S}{8}$ | $\tfrac{E-M+S}{4M}$ | $M$ | $S=\pm\sqrt{E^2+9M^2-10EM}$; there are two valid solutions: the plus sign leads to $\nu\geq 0$, the minus sign to $\nu\leq 0$ |
| $(\lambda,\,G)$ | $\lambda+\tfrac{2G}{3}$ | $\tfrac{G(3\lambda+2G)}{\lambda+G}$ | $\lambda$ | $G$ | $\tfrac{\lambda}{2(\lambda+G)}$ | $\lambda+2G$ | |
| $(\lambda,\,\nu)$ | $\tfrac{\lambda(1+\nu)}{3\nu}$ | $\tfrac{\lambda(1+\nu)(1-2\nu)}{\nu}$ | $\lambda$ | $\tfrac{\lambda(1-2\nu)}{2\nu}$ | $\nu$ | $\tfrac{\lambda(1-\nu)}{\nu}$ | Cannot be used when $\nu=0\Leftrightarrow\lambda=0$ |
| $(\lambda,\,M)$ | $\tfrac{M+2\lambda}{3}$ | $\tfrac{(M-\lambda)(M+2\lambda)}{M+\lambda}$ | $\lambda$ | $\tfrac{M-\lambda}{2}$ | $\tfrac{\lambda}{M+\lambda}$ | $M$ | |
| $(G,\,\nu)$ | $\tfrac{2G(1+\nu)}{3(1-2\nu)}$ | $2G(1+\nu)$ | $\tfrac{2G\nu}{1-2\nu}$ | $G$ | $\nu$ | $\tfrac{2G(1-\nu)}{1-2\nu}$ | |
| $(G,\,M)$ | $M-\tfrac{4G}{3}$ | $\tfrac{G(3M-4G)}{M-G}$ | $M-2G$ | $G$ | $\tfrac{M-2G}{2M-2G}$ | $M$ | |
| $(\nu,\,M)$ | $\tfrac{M(1+\nu)}{3(1-\nu)}$ | $\tfrac{M(1+\nu)(1-2\nu)}{1-\nu}$ | $\tfrac{M\nu}{1-\nu}$ | $\tfrac{M(1-2\nu)}{2(1-\nu)}$ | $\nu$ | $M$ | |
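A quick check of the Newton-Laplace formula above, $c=\sqrt{K/\rho}$, using the bulk modulus of water quoted in the table (my own illustration, not from the article):

```python
import math

# Speed of sound in water from its bulk modulus: c = sqrt(K / rho).
K_water = 2.2e9      # Pa, from the selected-values table
rho_water = 1000.0   # kg/m^3, nominal density of water
c = math.sqrt(K_water / rho_water)
print(f"{c:.0f} m/s")  # ~1483 m/s, the familiar speed of sound in water
```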
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 139, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9583122730255127, "perplexity": 552.1109485218344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661155.56/warc/CC-MAIN-20160924173741-00142-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-2-solving-equations-chapter-review-page-155/38
## Algebra 1: Common Core (15th Edition)

The 7-ounce part is already in ounces, so we do not have to worry about that yet. We first need to put 4 pounds into ounces, so we need a unit conversion. There are 16 ounces in a pound. Thus, we obtain:

$\frac{4\ \text{pounds}}{1} \times \frac{16\ \text{ounces}}{1\ \text{pound}} = 64\ \text{ounces}.$

Note that we need to cancel out the unit of pounds, so we multiply 4 pounds by a fraction with pounds in the denominator. We add the additional 7 ounces to obtain a total of 71 ounces.
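The same dimensional analysis as a one-line computation (an illustrative sketch, not part of the solution text):

```python
# 4 pounds converted to ounces, plus the 7 ounces already in ounces.
OUNCES_PER_POUND = 16
total_ounces = 4 * OUNCES_PER_POUND + 7
print(total_ounces)  # 71
```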
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9631851315498352, "perplexity": 589.803918299948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662588661.65/warc/CC-MAIN-20220525151311-20220525181311-00434.warc.gz"}
https://astarmathsandphysics.com/ib-maths-notes/matrices/1007-simultaneous-equations-with-non-unique-solutions.html
Simultaneous Equations With Non-Unique Solutions

The set of simultaneous equations … has the solutions … and …; in fact, many values of … and … satisfy both equations. Notice that if the second equation is divided by 2 throughout, we obtain the first equation, so these two equations are in fact just one independent equation. In general, we need two independent linear equations to solve simultaneous equations with two unknowns and find unique solutions. If we only have one independent linear equation with two unknowns, then we will be able to find an infinite number of solutions.

In general, if we have n equations with n unknowns, we can only find unique solutions if the equations are independent, so that none of the equations can be expressed in terms of the others.

Consider the equations … The third equation is the sum of the first two, so this system of equations is not independent and does not have a unique solution. In fact, we can ignore one of the equations and still solve the system. If we ignore the third equation, we have the system …

Set … to give (1), (2). (1)−(2) gives (3). Substitute this expression into (1) to give (4). Then the set of solutions is given by (3) and (4) together. These are parametric coordinates for a line. We can make the solution more obvious: eliminate t between (3) and (4) by finding (3)×6 + (4); the result is obviously the equation of a line.

If we have three equations in three unknowns and each is a multiple of the others, then we can discard two, and the result will be one equation, which must be the equation of a plane.

Example: … The second is twice the first, and the third is three times the first, so two are redundant. If we discard the last two, the solution set is the set of solutions to the first equation, and this is a plane.
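The specific equations did not survive extraction, so here is a hypothetical example of the same phenomenon (my own numbers, not the page's):

$$x + 2y = 4,\qquad 2x + 4y = 8.$$

Dividing the second equation by 2 gives the first, so there is only one independent equation. Setting $y = t$ gives the infinite solution set $(x, y) = (4 - 2t,\; t)$, which is exactly the parametric form of the line $x + 2y = 4$.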
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9566430449485779, "perplexity": 359.5036323891187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823284.50/warc/CC-MAIN-20171019122155-20171019142155-00802.warc.gz"}
http://aas.org/archives/BAAS/v36n5/aas205/1618.htm
AAS 205th Meeting, 9-13 January 2005
Session 145 Intergalactic Media
Poster, Thursday, January 13, 2005, 9:20am-4:00pm, Exhibit Hall

## [145.10] The Internal Dynamics of Tidal Features Arising from Galaxy Collisions

N. Hearn, G. Lake (Washington State Univ.), S. Lamb (Univ. of Illinois)

Collisions and mergers between massive galaxies can produce tidal tails and bridges that span several galactic radii in length. Observations of a number of such systems have revealed the presence of H II regions, H$\alpha$ emission, and molecular gas concentrations at the extremities of the tidal features despite their $10^8$–$10^9$ year dynamical timescales. Typically, prodigious star formation in colliding galaxies occurs within large-scale density enhancements in the ISM that develop throughout the disks over a period of $10^8$ years. However, the molecular gas and ionized regions in the tidal structures provide evidence that enhanced star formation is taking place in lower-density regions long after the initial collision. We present a set of galaxy collision models that have been generated by the N-body SPH code Tillamook, focusing on the dynamics of material in the tidal structures. The effects of shock waves and radiative cooling in these regions are studied, and the kinematics of the tidal material are compared with velocity observations. Such investigations could have implications for our understanding of the efficiency of star formation in these limits.

Bulletin of the American Astronomical Society, 36, no. 5. © 2004 The American Astronomical Society.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9053905010223389, "perplexity": 3171.31678881396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936469305.48/warc/CC-MAIN-20150226074109-00328-ip-10-28-5-156.ec2.internal.warc.gz"}
https://mathzsolution.com/trying-to-define-mathbbr0-5-topologically-duplicate/
# Trying to define $\mathbb{R}^{0.5}$ topologically [duplicate]

A few days ago, I was trying to generalize the definition of Euclidean spaces by trying to define $\mathbb{R}^{0.5}$.

Question: Is there a metric space $A$ such that $A\times A$ is homeomorphic to $\mathbb{R}$? I am also interested in seeing examples where $A$ is only a topological space.

Edit: If there exists a topological space $A$ such that $A\times A\cong \Bbb R$, then $A\times \{a\}$ is a subspace of $A\times A$ ($a\in A$). Hence $A\times\{a\}$ can be embedded in $\mathbb{R}$, since $A\cong A\times \{a\}$. Thus $A$ can be embedded in $\mathbb{R}$. Therefore $A$ is metrizable. Thank you

No, no such space exists. Suppose $A$ is a topological space such that $A\times A\cong \mathbb R$, with the Euclidean topology on $\mathbb R$. If $A$ is disconnected then so is $A\times A$, but that would contradict the connectedness of $\mathbb R$, so it follows that $A$ is connected. But since $A$ is connected (and has more than one point, as $A\times A$ is not a single point), it follows that $A\times A$ with a single point removed is still connected: any two remaining points can be joined by a chain of slices of the form $\{x\}\times A$ and $A\times\{y\}$ that avoid the removed point. However, $\mathbb R$ with a single point removed is not connected. Contradiction.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9936684966087341, "perplexity": 65.17926676170481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00207.warc.gz"}
https://k12.libretexts.org/Bookshelves/Mathematics/Geometry/06%3A_Circles/6.13%3A_Segments_from_Chords
# 6.13: Segments from Chords

Apply the Intersecting Chords Theorem.

When we have two chords that intersect inside a circle, as shown below, the two triangles that result are similar. This makes the corresponding sides in each triangle proportional and leads to a relationship between the segments of the chords, as stated in the Intersecting Chords Theorem.

Intersecting Chords Theorem: If two chords intersect inside a circle so that one is divided into segments of length $$a$$ and $$b$$ and the other into segments of length $$c$$ and $$d$$, then $$ab=cd$$.

What if you were given a circle with two chords that intersect each other? How could you use the length of some of the segments formed by their intersection to determine the lengths of the unknown segments?

Example $$\PageIndex{1}$$

Find $$x$$. Simplify any radicals.

Solution

Use the Intersecting Chords Theorem.

$$15\cdot 4=5\cdot x$$

$$60=5x$$

$$x=12$$

Example $$\PageIndex{2}$$

Find $$x$$. Simplify any radicals.

Solution

Use the Intersecting Chords Theorem.

\begin{aligned} 18\cdot x&=9\cdot 3 \\ 18x&=27 \\ x&=1.5\end{aligned}

Example $$\PageIndex{3}$$

Find $$x$$ in each diagram below.

Solution

1. Use the formula from the Intersecting Chords Theorem.

\begin{aligned}12\cdot 8 &=10\cdot x \\ 96&=10x \\ 9.6&=x\end{aligned}

2. Use the formula from the Intersecting Chords Theorem.

\begin{aligned} x\cdot 15&=5\cdot 9 \\ 15x&=45 \\ x&=3 \end{aligned}

Example $$\PageIndex{4}$$

Solve for $$x$$ in each diagram below.

Solution

1. Use the Intersecting Chords Theorem.

\begin{aligned} 8\cdot 24&=(3x+1)\cdot 12 \\192&=36x+12 \\ 180&=36x \\ 5&=x\end{aligned}

2. Use the Intersecting Chords Theorem.

\begin{aligned} (x−5)\cdot 21&=(x−9)\cdot 24 \\ 21x−105&=24x−216 \\ 111&=3x \\ 37&=x \end{aligned}

Example $$\PageIndex{5}$$

Ishmael found a broken piece of a CD in his car. He places a ruler across two points on the rim, and the length of the chord is 8.5 cm. The distance from the midpoint of this chord to the nearest point on the rim is 1.75 cm. Find the diameter of the CD.

Solution

Think of this as two chords intersecting each other. If we were to extend the 1.75 cm segment, it would be a diameter. So, if we find $$x$$ in the diagram below and add it to 1.75 cm, we would find the diameter. Each half of the chord is 4.25 cm long.

\begin{aligned} 4.25\cdot 4.25&=1.75\cdot x \\ 18.0625&=1.75x \end{aligned}

$$x\approx 10.3$$ cm, making the diameter $$10.3+1.75\approx 12$$ cm, which is the actual diameter of a CD.

## Review

Fill in the blanks for each problem below and then solve for the missing segment.
$$20x=\text{_______}$$

$$\text{_______}\cdot 4=\text{_______}\cdot x$$

Find $$x$$ in each diagram below. Simplify any radicals.

Find the value of $$x$$.

1. Suzie found a piece of a broken plate. She places a ruler across two points on the rim, and the length of the chord is 6 inches. The distance from the midpoint of this chord to the nearest point on the rim is 1 inch. Find the diameter of the plate.

2. Fill in the blanks of the proof of the Intersecting Chords Theorem.

Given: Intersecting chords $$\overline{AC}$$ and $$\overline{BE}$$.

Prove: $$ab=cd$$

| Statement | Reason |
|---|---|
| 1. Intersecting chords $$\overline{AC}$$ and $$\overline{BE}$$ with segments $$a$$, $$b$$, $$c$$, and $$d$$. | 1. _______ |
| 2. _______ | 2. Congruent Inscribed Angles Theorem |
| 3. $$\Delta ADE\sim \Delta BDC$$ | 3. _______ |
| 4. _______ | 4. Corresponding parts of similar triangles are proportional |
| 5. $$ab=cd$$ | 5. _______ |

## Vocabulary

| Term | Definition |
|---|---|
| central angle | An angle formed by two radii and whose vertex is at the center of the circle. |
| chord | A line segment whose endpoints are on a circle. |
| circle | The set of all points that are the same distance away from a specific point, called the center. |
| diameter | A chord that passes through the center of the circle. The length of a diameter is two times the length of a radius. |
| inscribed angle | An angle with its vertex on the circle and whose sides are chords. |
| intercepted arc | The arc that is inside an inscribed angle and whose endpoints are on the angle. |
| radius | The distance from the center to the outer rim of a circle. |
| Intersecting Chords Theorem | If two chords intersect inside a circle so that one is divided into segments of length $$a$$ and $$b$$ and the other into segments of length $$c$$ and $$d$$, then $$ab = cd$$. |
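As a quick numeric check of the theorem (an addition, not part of the original CK-12 lesson), Example 1 can be reproduced in a couple of lines of Python:

```python
# Intersecting Chords Theorem: if one chord is split into segments a, b and
# the other into c, d by their intersection point, then a*b = c*d.
def fourth_segment(a: float, b: float, c: float) -> float:
    """Solve a*b = c*d for the unknown segment d."""
    return a * b / c

# Example 1 from the lesson: 15 * 4 = 5 * x  =>  x = 12
x = fourth_segment(15, 4, 5)
assert x == 12
print(x)
```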
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.998372495174408, "perplexity": 654.9884333998093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00780.warc.gz"}
https://math.stackexchange.com/questions/1567292/generic-factorization-of-a-one-parameter-family-of-degree-four-polynomials
# Generic factorization of a one-parameter family of degree four polynomials

Let $P$ be a monic polynomial of degree four with integer coefficients. Is it true that there are only finitely many $a$ such that $P(x)-a$ factors (over $\mathbb Q$) as a product of two irreducible quadratic polynomials? An optimistic guess would be that there is even a universal constant $C$ such that there are at most $C$ such values for $a$.

No: for arbitrary natural $n$,
$$x^4+4n^4=(x^2+2nx+2n^2)(x^2-2nx+2n^2),$$
and both quadratic factors are irreducible over $\mathbb Q$, since each has negative discriminant $-4n^2$. So for $P(x)=x^4$, every $a=-4n^4$ is such a value of $a$.
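A quick computational check of the identity (my addition; it assumes the SymPy library is available):

```python
# Verify that x^4 + 4n^4 splits into the two quadratics for several n,
# and that SymPy's factorization over Q agrees.
from sympy import symbols, factor, expand

x = symbols('x')
for n in range(1, 5):
    p = x**4 + 4 * n**4
    lhs = (x**2 + 2*n*x + 2*n**2) * (x**2 - 2*n*x + 2*n**2)
    assert expand(lhs - p) == 0          # the identity holds
    print(n, factor(p))                  # two quadratic factors over Q
```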
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8553012013435364, "perplexity": 58.73773381860448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704792131.69/warc/CC-MAIN-20210125220722-20210126010722-00191.warc.gz"}
https://socratic.org/questions/with-the-reaction-determine-the-masses-of-hso-3nh-2-required-to-produce-0-135-g-
Chemistry Topics

# With the reaction, determine the masses of HSO_3NH_2 required to produce 0.135 g of N_2(g) collected in a burette above water. How do you calculate the volume that this mass of N_2(g) would occupy at 23.4°C and $7.10\times 10^2$ mmHg barometric pressure?

## $HSO_3NH_2(aq) + NaNO_2(aq) \to NaHSO_4(aq) + N_2(g) + H_2O(l)$

Apr 9, 2017

Calculate the masses from the related molar amounts. Calculate the volume of $N_2$ from the ideal gas law.

#### Explanation:

The balanced equation shows that every mole of $N_2$ requires one mole of $HSO_3NH_2$.

0.135 g of $N_2$ is $\frac{0.135}{28} = 0.00482$ moles of $N_2$.

This requires $0.00482 \cdot 97 = 0.468$ g of $HSO_3NH_2$ and $0.00482 \cdot 69 = 0.333$ g of $NaNO_2$ (the molar mass of $NaNO_2$ is $23 + 14 + 2\cdot 16 = 69$ g/mol).

The volume of $N_2$ produced is $V = \frac{nRT}{P}$, with P = 710/760 = 0.934 atm, R = 0.0821 L·atm/(K·mol), T = 23.4 + 273.2 = 296.6 K, n = 0.00482 moles, and V in liters:

$V = \frac{0.00482 \cdot 0.0821 \cdot 296.6}{0.934}$; $V = 0.126$ L
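A short Python sketch (my addition, not part of the original answer) reproducing the arithmetic end to end; like the original solution, it neglects the water-vapor partial pressure for gas collected over water:

```python
# 1:1 stoichiometry: moles of HSO3NH2 = moles of NaNO2 = moles of N2,
# then the gas volume follows from PV = nRT.
M_N2, M_SULFAMIC, M_NANO2 = 28.0, 97.0, 69.0    # molar masses, g/mol
mass_N2 = 0.135                                  # g of N2 collected

n = mass_N2 / M_N2                   # mol of N2 (= mol of each reactant)
mass_sulfamic = n * M_SULFAMIC       # g of HSO3NH2 required
mass_nano2 = n * M_NANO2             # g of NaNO2 required

R = 0.0821                           # L*atm/(K*mol)
T = 23.4 + 273.15                    # K
P = 710.0 / 760.0                    # atm
V = n * R * T / P                    # L

print(f"HSO3NH2: {mass_sulfamic:.3f} g, NaNO2: {mass_nano2:.3f} g, V: {V:.3f} L")
```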
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 15, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9596852660179138, "perplexity": 3105.274984432063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525094.53/warc/CC-MAIN-20190717061451-20190717083451-00199.warc.gz"}
https://abdul-quader.com/author/ama2115/
# Decidability and Definability

This post is an extension of the previous one; I want to address Tarski's Undefinability Theorem using computability-theoretic methods.

# Arithmetic, Self-Reference and Truth

There is a remarkable theorem in mathematical logic that "Truth is not definable". This is known as Tarski's Undefinability Theorem. This result (along with Gödel's Incompleteness Theorems) has fascinated me ever since I learned about it. This theorem (in a sense) shows us the limits of the ability to study truth within a given formal system.

# Ramsey's Theorem and Ultrafilters

In this post, I will go through a proof of one of my favorite results in combinatorics using a technique that is not necessarily well known outside of logic.

# Ramsey's Theorem

There are many ways to describe (Infinite) Ramsey's Theorem. One description uses graph theory. A graph is a set of vertices (nodes) and edges (connections between the nodes). A clique in a graph is a subset X of the set of vertices such that all vertices in X have edges between them. An anti-clique is a subset X of the set of vertices such that no two vertices in X have edges between them.

Theorem: Every infinite graph has an infinite clique or an infinite anti-clique.

We can formalize Ramsey's Theorem in another way. A two-coloring of a set X is a function $f : X \to \{ 0, 1 \}$; in other words, a partition of X into two sets (the elements 0 and 1 are the "colors", often representing blue and red). Given a set X, the set $[X]^2$ is the set of pairs of elements of X. For example, the set $[\mathbb{N}]^2$ is the set of (unordered) pairs of natural numbers; its elements include {0, 1}, {0, 2}, {1, 2}, {0, 3}, etc.

Theorem (Ramsey's Theorem for Pairs): If f is a two-coloring of $[\mathbb{N}]^2$, there is an infinite set H such that all pairs of elements of H get the same color.

These two theorems are equivalent: given a two-coloring of the set of pairs of natural numbers, you can form an infinite graph by letting the set of vertices just be $\mathbb{N}$, and by putting an edge between numbers n and m if and only if the color $f(n, m) = 1$. You can also go the other way: given a graph, you can form a two-coloring of the set of pairs of vertices of the graph in a canonical way. This second statement, while more difficult to parse, is the one we will focus on for this post.

First, let's prove an easier statement: if $f$ is a two-coloring of $\mathbb{N}$, there is an infinite set X such that every element of X gets the same color. (This would be referred to as "Ramsey's Theorem for Singletons".) This result is fairly easy to prove: let $A = \{ n : f(n) = 0 \}$ and $B = \{ n : f(n) = 1 \}$. One of these two sets must be infinite, and every element of A gets color 0 (blue), while every element of B gets 1 (red).

If we try to generalize this proof to Ramsey's Theorem for Pairs, we can look at the sets $A = \{ \{ x, y \} : f(x, y) = 0 \}$ and $B = \{ \{ x, y \} : f(x, y) = 1 \}$; clearly one of these is infinite. But these sets are sets of pairs, and Ramsey's Theorem states that there is an infinite set of numbers where all the pairs of numbers get the same color (in graph-theoretic terms, it's easy to find an infinite set of edges or non-edges in an infinite graph; we want an infinite set of vertices which are either all mutually connected or mutually disconnected). The reason Ramsey's Theorem for Singletons is easy to prove is that we know the color of each number; we know $f(0), f(1),$ etc.
But if we are coloring pairs, then perhaps $f(0, 1) = 0$, $f(0, 2) = 1$, $f(0, 3) = 0$, and so on. In other words, the color of 0 might be blue infinitely often, and it might be red infinitely often. We need a way to decide the color of each natural number "on average". That is, we would like to be able to say, given a natural number n, "for most numbers m, $f(n, m) = 0$", or, "for most numbers m, $f(n, m) = 1$". Of course, it is not obvious that it is possible to make such a claim for each number n. Sometimes it's clear: if, for example, the color "stabilizes"; i.e., maybe $f(0, 1) = 0, f(0, 2) = 1, f(0, 3) = 0,$ and for all $n > 3$, $f(0, n) = 1$. In that case, it is clear that the color of 0 is "usually" 1. But perhaps the color does not stabilize: maybe there are infinitely many numbers m such that $f(0, m) = 0$ and infinitely many n such that $f(0, n) = 1$. So in that case, how would you decide what the color of 0 is on average?

# Ultrafilters: "averaging" over infinity

This idea of averaging over an infinite set can be studied formally with the concept of an ultrafilter. An ultrafilter is a way of choosing which sets are "large".

Definition: A filter on the natural numbers is a family of sets $\mathcal{F}$ with the following properties:

• for any sets A and B, if $A \in \mathcal{F}$ and $A \subseteq B \subseteq \mathbb{N}$, then $B \in \mathcal{F}$
• for any sets $A, B \in \mathcal{F}$, the intersection $A \cap B \in \mathcal{F}$
• $\emptyset \not \in \mathcal{F}$.

An ultrafilter is a filter $\mathcal{U}$ with the additional property that for all $X \subseteq \mathbb{N}$, either $X \in \mathcal{U}$ or $\mathbb{N} \setminus X \in \mathcal{U}$.

Again, the idea is that the sets in an ultrafilter are considered "large". Each of these properties represents some "largeness" principle. If a set is large, any set that contains it should also be large; if two sets are large, their intersection is large; the empty set is not large; and if a set is not large, its complement should be large. We say that a property "almost always" happens if it happens on a large set, and it "almost never" happens if it happens on a set which is not large.

There are some easy examples of ultrafilters: take any number n, and let $\mathcal{U}_n = \{ A \subseteq \mathbb{N} : n \in A \}$. It's not hard to verify that all the properties are satisfied. Ultrafilters like these (the ones generated in some sense by a single number) are called principal. Non-principal ultrafilters are harder to construct, but given some amount of set theory it is possible to show that they also exist. Non-principal ultrafilters have a crucial property: they contain no finite sets. That means that if A is the complement of a finite set ("cofinite"), then A is in every non-principal ultrafilter.

# Proof of Ramsey's Theorem

Let $f : [\mathbb{N}]^2 \to \{0, 1\}$. Let $\mathcal{U}$ be a non-principal ultrafilter. We use the ultrafilter $\mathcal{U}$ to assign colors to each number as follows: $g : \mathbb{N} \to \{ 0, 1 \}$ is defined by $g(n) = 0$ if and only if $\{ x : f(n, x) = 0 \} \in \mathcal{U}$, and $g(n) = 1$ otherwise. Notice that $g(n) = 1$ if and only if $\{ x : f(n, x) = 1 \} \in \mathcal{U}$. In other words, think of $f$ as assigning an infinite sequence of colors to $n$. Then, using the ultrafilter, we pick out the color of $n$ "almost always", and call that color $g(n)$.

We will define a sequence by induction. Let $a_0 = 0$. Given $a_0, \ldots, a_n$, let $a_{n+1}$ be the least $a > a_n$ such that $f(a_i, a) = g(a_i)$ for each $i \leq n$.
We must show that such an $a$ exists. The idea here is that the function $g$ assigns the "correct" color according to the ultrafilter; that is, for each $i$, the set of those $x$ such that $f(a_i, x) = g(a_i)$ is large. Since the intersection of finitely many large sets is also large, the set $X = \{ x : f(a_i, x) = g(a_i)$ for all $i \leq n \}$ is large. Furthermore, in a non-principal ultrafilter, large sets are always infinite, so there must be an $a \in X$ greater than $a_n$.

Let $Y = \{ a_n : n \in \mathbb{N} \}$, $B = \{ a \in Y: g(a) = 0 \}$ and $R = \{ a \in Y : g(a) = 1 \}$. Clearly $Y$ is infinite, so one of $B$ or $R$ is infinite. Further, for all $x < y \in B, f(x, y) = 0$ and for all $x < y \in R, f(x, y) = 1$, so one of $B$ or $R$ is the infinite set required by the statement of Ramsey's Theorem.

# Other applications of ultrafilters

Ultrafilters have applications all throughout mathematics, including in model theory, social choice, and non-standard analysis. I hope to explore non-standard analysis, in particular, in a future post, where I will discuss ideas like formalizing the notion of a limit using infinitesimals (instead of using epsilons and deltas).

# Classifying Enayat Models of Peano Arithmetic – ASL Winter Meeting (JMM)

Update (1/15): See slides here.

I will be contributing a session at the 2018 ASL Winter Meeting at the Joint Mathematics Meetings on Saturday, January 13. This talk is based on a recent paper of mine which can be found on the arXiv here.

Abstract: Simpson used arithmetic forcing to show that every countable model $\mathcal{M} \models \mathrm{PA}$ has an undefinable, inductive subset $X \subseteq M$ such that the expansion $(\mathcal{M}, X)$ is pointwise definable. Enayat later showed that there are many models with the property that every expansion upon adding a predicate for an undefinable class is pointwise definable. We refer to models with this property as Enayat models. That is, a model $\mathcal{M} \models \mathrm{PA}$ is Enayat if for each undefinable class $X \subseteq M$, the expansion $(\mathcal{M}, X)$ is pointwise definable. In this talk we show that a model is Enayat if it is countable, has no proper cofinal submodels and is a conservative extension of each of its elementary cuts.

# What is Peano Arithmetic?

I recently defended my dissertation and I have had a number of non-mathematician friends and family asking me to try to explain what it is I study. In a nutshell, I study models of Peano Arithmetic and their elementary extensions. This post is meant to give an introduction to Peano Arithmetic and in a later post I will go into more details about the problems I have been working on.

# Peano Axioms

My research is primarily in the first-order theory of Peano Arithmetic (PA). "First-order" refers to first-order logic, which I like to think of as providing a framework for formalizing mathematics. In first-order logic, we can express statements like "multiplication distributes over addition" as "$\forall a \forall b \forall c (a \times (b + c) = (a \times b) + (a \times c))$". That is, we allow ourselves to use symbols for addition and multiplication, and can build up statements using logical connectives ("and", "or" and "not") and quantifiers ("$\forall$", meaning "for all", or "$\exists$", meaning "there exists"). Peano Arithmetic is a list of axioms written in this first-order language.
These axioms include the above statement that multiplication distributes over addition, as well as other elementary statements about arithmetic, including that addition and multiplication are commutative and associative. The other main axioms are the induction schema: an infinite list of axioms stating, essentially, for any property $\phi(x)$ that can be expressed in first-order logic, if $\phi(0)$ holds and $\forall x\, (\phi(x) \rightarrow \phi(x+1))$ holds, then $\forall x\, \phi(x)$ holds. In other words, for any property expressible by a formula of first-order logic $\phi(x)$, if the number 0 has that property, and, whenever n has that property, n + 1 also has that property, then every number will have that property. The idea behind this axiom is that many facts about numbers can be proved using "proof by induction". It was originally hoped that all number-theoretic facts could be proved in this way, but Peano Arithmetic is famously incomplete.

A model of Peano Arithmetic is a set M, with operations $+$ and $\times$ (that is, M has to be closed under addition and multiplication, so if we add or multiply two elements of M, we get another element of M), satisfying these axioms. That means that addition and multiplication will need to be commutative and associative, and multiplication should distribute over addition. Every model of PA will have the natural numbers $\mathbb{N}$ as an initial segment, but there are also non-standard models, which contain numbers greater than any standard natural number. In fact, first-order logic is not strong enough to categorically axiomatize the standard model $\mathbb{N}$; that is, there is no set of first-order axioms for which the only model satisfying those axioms is the standard model.

A careful reader may wonder how it is possible for a non-standard model of arithmetic to satisfy the induction schema. The argument goes: 0 is a natural number, and if n is a natural number then n + 1 is also, so by induction, every element of a model of arithmetic should be a natural number! Of course, this presupposes the idea that the property "x is a natural number" can be expressed in first-order logic, and in fact this is not possible (the proof that this is impossible is actually just the proof that there exist non-standard models).

## Arithmetic and Set Theory

A nice result about PA is that it interprets finite set theory. In a model M of PA, if n and m are elements of M, we can define the relation $n \in m$ to mean that the n-th bit of the binary representation of m is 1. This is expressible in the first-order language of arithmetic; that is, there is a formula $\phi(x,y)$ expressing that the x-th bit of the binary representation of y is 1. So in any model M of PA, the set coded by an element m of M is $\{ i : \phi(i, m) \}$ (the set of all those indices at which the binary representation of m has a 1). For example, the number 13, represented in binary as 1101, will code the set { 0, 2, 3 } (count the places with a 1 in them from right to left, starting at 0). It's not hard to see that every finite set can be coded by some number.

Forgetting about arithmetic for a second, we can now think of our model of PA as just containing sets, which are all related to each other by this $\in$ relation. It turns out that for any model of PA, if we think of it as a model of set theory using this relation, it will satisfy all the axioms of set theory with one notable exception: the axiom of infinity.
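As a quick illustration (my addition, not from the original post) of this bit-coding of finite sets:

```python
# n "is a member of" m exactly when bit n of the binary representation of m is 1.
def decode(m):
    """Return the finite set of bit positions at which m has a 1."""
    return {i for i in range(m.bit_length()) if (m >> i) & 1}

def encode(s):
    """Return the number whose binary representation codes the finite set s."""
    return sum(1 << i for i in s)

assert decode(13) == {0, 2, 3}   # 13 is 1101 in binary
assert encode({0, 2, 3}) == 13
```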
Such a model, then, really will look like a model of set theory, except that it will not have any infinite sets. Set theory enjoys a special place in the foundations of mathematics: often everything (all mathematical objects, functions, spaces, etc.) is defined in terms of sets, and in particular numbers are just particular kinds of sets. That is, number theory can be formalized in terms of set theory. Here we have the opposite idea: we define sets in terms of numbers, and formalize set theory in terms of number theory (Peano Arithmetic). Anything that can be proven from finite set theory can be formalized and proven within PA. We might hope that any question about finite mathematical objects which has an answer can be answered within PA, but it turns out this isn't true: some arguments about finite objects really do require infinity. A video from the excellent YouTube channel PBS Infinite Series does a great job explaining a problem where this phenomenon occurs.

# Lattices and Coded Sets – AMS Spring Eastern Sectional Meeting

I will be speaking at the Special Session on Model Theory at the 2017 AMS Spring Eastern Sectional Meeting on Saturday, May 6, 2017.

Abstract: Given an elementary extension $\mathcal{M} \prec \mathcal{N}$ of models of Peano Arithmetic (PA), the set of all $\mathcal{K}$ such that $\mathcal{M} \preceq \mathcal{K} \preceq \mathcal{N}$ forms a lattice under inclusion. If $\mathcal{N}$ is an elementary end extension of $\mathcal{M}$ and $X \subseteq M$, we say $X$ is coded in $\mathcal{N}$ if there is $Y \in \mathrm{Def}(\mathcal{N})$ such that $X = Y \cap M$. In this talk, I will discuss the relationship between interstructure lattices and coded sets. Recent work by Schmerl determined those collections of subsets of a model which could be coded in a minimal extension; in this talk, we explore the same question for elementary extensions whose interstructure lattices form a finite distributive lattice.

# Ramsey Quantifiers – MOPA Seminar

I will be speaking at the Models of Peano Arithmetic seminar on Wednesday, September 21, 2016 on "Ramsey Quantifiers". The abstract is listed on the NYLogic site, but for context I wanted to provide some thoughts on why I am digging up this older topic.

This theory piqued my interest while I was studying the lattice problem for models of PA. In studying this problem, it became apparent that certain combinatorial properties of representations of lattices were important. Let me preface this by saying that much of this information is in The Structure of Models of Peano Arithmetic, by Kossak and Schmerl, in Chapter 4 on Substructure Lattices.

A representation of a lattice $L$ on a set $A$ is an injection $\alpha : L \to \textrm{Eq}(A)$ (where Eq(A) is the set of equivalence relations on the set A), such that for each $r, s \in L$, $\alpha(r \vee s) = \alpha(r) \wedge \alpha(s)$, $\alpha(0)$ is the trivial relation, and $\alpha(1)$ is the discrete relation. Given $\alpha : L \to \textrm{Eq}(A)$ and a set $B \subseteq A$, the function $\alpha | B$ is defined by $(\alpha | B)(r) = \alpha(r) \cap B^2$ for each $r \in L$. Two representations of the same lattice, $\alpha : L \to \textrm{Eq}(A), \beta : L \to \textrm{Eq}(B)$, are called isomorphic if there is a bijection $f : A \to B$ respecting the equivalence relations; that is, for each $r \in L$ and $x, y \in A$, $(x, y) \in \alpha(r) \leftrightarrow (f(x), f(y)) \in \beta(r)$.
## The Lattice $B_2$

The lattice $B_2 = \{0, a, b, 1\}$ is the Boolean algebra on a two-element set, with $0 < a < 1$ and $0 < b < 1$.

Gaifman proved that every model $\mathcal{M} \models \textsf{PA}$ has an elementary end extension $\mathcal{N}$ such that the interstructure lattice $\textrm{Lt}(\mathcal{N} / \mathcal{M}) = \{ \mathcal{K} : \mathcal{M} \preceq \mathcal{K} \preceq \mathcal{N} \}$ is isomorphic to $B_2$ (in fact, Gaifman's proof works for every finite Boolean algebra). But we can also form such an elementary extension by studying the appropriate representation of the lattice $B_2$.

Given a model $\mathcal{M} \models \textsf{PA}$, there is a particularly simple representation on the set of pairs of elements of $M$, denoted $[M]^2$. This representation is defined by letting $\alpha(a)$ be the equivalence relation determined by equality on the first coordinate, and $\alpha(b)$ be the equivalence relation determined by equality on the second coordinate. This representation is definable in $\mathcal{M}$, by using the normal coding of pairs of numbers (Cantor's pairing function) and the induced projection functions. The key lemma we need to construct the elementary extension is that for any definable equivalence relation $\Theta$ on $[\mathcal{M}]^2$, there is a definable subset $A$ of $[\mathcal{M}]^2$ such that $\Theta \cap A^2$ is either discrete, trivial, $\alpha(a) \cap A^2$ or $\alpha(b) \cap A^2$, and $\alpha | A \cong \alpha$. The underlying combinatorics here is a generalization of Ramsey's theorem for pairs, first proved by Erdős and Rado: given any equivalence relation $\Theta$ on $[\mathcal{M}]^2$, there is an infinite set $X \subseteq \mathcal{M}$ such that $\Theta \cap [X]^2$ is either discrete, trivial, $\alpha(a) \cap [X]^2$ or $\alpha(b) \cap [X]^2$. Note that this $X$ is just an infinite set of numbers, not of pairs.

This is similar to the key lemma needed when constructing minimal extensions. If $\mathcal{M}$ is a model, a minimal extension $\mathcal{M} \prec \mathcal{N}$ is an elementary extension such that there are no proper intermediate elementary structures (that is, if $\mathcal{M} \preceq \mathcal{K} \preceq \mathcal{N}$, then $\mathcal{M} = \mathcal{K}$ or $\mathcal{K} = \mathcal{N}$). In that case, we consider any infinite definable set $A$ and show that for any definable equivalence relation $\Theta$ on $A$, there is an infinite definable $B \subseteq A$ such that $\Theta \cap B^2$ is either discrete or trivial.

The main difference between these two cases is the first-order expressibility of these statements. Stating that there is an infinite subset $B \subseteq A$ on which $\Theta$ is discrete or trivial can be expressed in the language of first-order arithmetic: $[\exists x \in A \forall w \exists y \in A (y > w \wedge (x, y) \in \Theta)] \vee [\forall w \exists y > w (y \in A \wedge \forall x < y (x \in A \rightarrow (x, y) \not \in \Theta))]$. Even though it appears, at first, that we want to say "there is an infinite set" where something holds (which would appear to be a second-order quantifier), we can state this in first order (because by "infinite" we really mean "unbounded"). In the Erdős–Rado result, something appears to be significantly different, however.
We must state: "there is an infinite set $X$ such that either (i) for all $x_1 < x_2, y_1 < y_2 \in X$, $((x_1, x_2), (y_1, y_2)) \in \Theta$; (ii) for all $x_1 < x_2, y_1 < y_2 \in X$, $((x_1, x_2), (y_1, y_2)) \in \Theta$ if and only if $x_1 = y_1$; (iii) for all $x_1 < x_2, y_1 < y_2 \in X$, $((x_1, x_2), (y_1, y_2)) \in \Theta$ if and only if $x_2 = y_2$; or (iv) for all $x_1 < x_2, y_1 < y_2 \in X$, $((x_1, x_2), (y_1, y_2)) \in \Theta$ if and only if $(x_1, x_2) = (y_1, y_2)$." In this case, we cannot replace the second-order quantifier with first-order ones asserting unboundedness of some property, because we wish to quantify over pairs of elements from that (unbounded) set.

After thinking about this for a while, my advisor mentioned a section in Chapter 10 of The Structure of Models of Peano Arithmetic, which discusses an extra quantifier called a "Ramsey quantifier", denoted $Q^2$. This quantifier extends the language of first-order logic by binding two variables. The intended interpretation of $Q^2 x, y\, \phi(x, y)$ is "There is an infinite set $X$ such that for all $x \neq y \in X$, $\phi(x, y)$ holds." This is exactly the kind of extension to the language that I needed, and I hope to talk about some of the basic results in the theory of Peano Arithmetic in this augmented language (with induction for all formulas in the extended language).

Yesterday I attended the Google I/O Extended event at Google NYC. I used to work there prior to graduate school, so it was great to be back. I got to see some interesting tech talks, meet up with old friends from Google, and watch a live stream of the keynote. When I got there, they had coffee and breakfast for everyone, so I happily helped myself to some while checking out the swag they gave us and snapping some pictures.

# Enayat Models – ASL 2016

I will be contributing a session at the ASL 2016 North American Annual Meeting, on Wednesday, May 25 at 4:45 PM.

Abstract: Simpson used arithmetic forcing to show that every countable model $\mathcal{M} \models \textsf{PA}$ has an expansion $(\mathcal{M}, X) \models \textsf{PA}^*$ that is pointwise definable. The natural question then is whether this method can be used to obtain expansions of (countable models) with the property that the definable elements of the expansion coincide with the definable elements of the original model. Enayat later showed that this is impossible. He proved that there are models with the property that every expansion upon adding a predicate for an undefinable class is pointwise definable. We call models with this property Enayat models. It is easy to iterate Enayat's construction and obtain other models with this property. Elementary submodels of any Enayat model formed in this way are well-ordered by inclusion. I will present a construction of an Enayat model whose elementary substructures form an infinite descending chain.

# Pi5NY Math Competition

This past weekend, I had the opportunity to volunteer at the AMS Pi5NY Math Tournament. This tournament is held annually, open to middle schoolers in any of the 5 boroughs of NYC. Students worked in teams of 5 on a set of 40 problems. I was a judge for two 8th grade teams. When one of the students finished a problem, they would come to me and I would check their answer, and then send them off to the scorers if it was correct. It was a lot of fun and great to see so many young people enjoying math on a Saturday morning!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 183, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9229509234428406, "perplexity": 215.82880019183622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00534.warc.gz"}
https://www.physicsforums.com/threads/cantilever-beam-question.159706/
# Cantilever beam question!

1. Mar 7, 2007

### jhwatts

I need some help. I'm writing up a lab report and I'm not sure how to define the equation I took down in my notes: deflection = (4F/Ebh^3) * L. I'm not sure if this is the right equation. If so, can someone please tell me what the variables E, b, F and h are? Thanks

2. Mar 7, 2007

### Staff: Mentor

I imagine F = force or load, E = Elastic or Young's modulus. L is probably length, but the maximum deflection is proportional to the cube of the length. b might be thickness (lateral), and h might be height. What is the cross-section and how does one calculate the moment of inertia for this cross-section?

3. Mar 7, 2007

### Mentz114

You're in the wrong place. Delete this and put it in the homework section.

4. Mar 7, 2007

### Cyrus

Come on... jesus, b = base, h = height, L = length. If you don't know what F and E are..... sad.

b.t.w. bh^3 looks a lot like MOAI (moment of inertia) to me.

Last edited: Mar 7, 2007
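To close the loop on the formula (my addition, not part of the thread): for an end-loaded cantilever of rectangular cross-section, the standard result is $\delta = FL^3/(3EI)$ with $I = bh^3/12$, which simplifies to $\delta = 4FL^3/(Ebh^3)$, so the OP's equation should have $L^3$ rather than $L$. A quick numeric check with made-up inputs:

```python
# Tip deflection of an end-loaded cantilever with rectangular cross-section:
#   delta = F L^3 / (3 E I), with I = b h^3 / 12, equivalently 4 F L^3 / (E b h^3).
F = 10.0      # end load, N (illustrative value)
L = 0.30      # beam length, m
E = 69e9      # Young's modulus (aluminium), Pa
b = 0.025     # cross-section base (width), m
h = 0.003     # cross-section height (thickness), m

I = b * h**3 / 12                  # second moment of area, m^4
delta = F * L**3 / (3 * E * I)     # tip deflection, m
assert abs(delta - 4 * F * L**3 / (E * b * h**3)) < 1e-9
print(f"I = {I:.3e} m^4, tip deflection = {delta * 1000:.1f} mm")
```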
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9396504163742065, "perplexity": 4118.8973713928635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00321-ip-10-171-10-108.ec2.internal.warc.gz"}
https://zbmath.org/?q=an%3A0336.46001
Geometric functional analysis and its applications. (English) Zbl 0336.46001
Graduate Texts in Mathematics. 24. New York-Heidelberg-Berlin: Springer-Verlag. X, 246 p. DM 39.10; $ 16.80 (1975).
MSC:
46-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to functional analysis
46Axx Topological linear spaces and related structures
46Bxx Normed linear spaces and Banach spaces; Banach lattices
46E10 Topological linear spaces of continuous, differentiable or analytic functions
52A05 Convex sets without dimension restrictions (aspects of convex geometry)
41A65 Abstract approximation theory (approximation in normed linear spaces and other abstract spaces)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8773190379142761, "perplexity": 2913.365132964171}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00091.warc.gz"}
https://en.wikipedia.org/wiki/Autoregressive_fractionally_integrated_moving_average
# Autoregressive fractionally integrated moving average

In statistics, autoregressive fractionally integrated moving average models are time series models that generalize ARIMA (autoregressive integrated moving average) models by allowing non-integer values of the differencing parameter. These models are useful in modeling time series with long memory—that is, in which deviations from the long-run mean decay more slowly than an exponential decay. The acronyms "ARFIMA" or "FARIMA" are often used, although it is also conventional to simply extend the "ARIMA(p,d,q)" notation for models, by allowing the order of differencing, d, to take fractional values.

## Basics

In an ARIMA model, the integrated part of the model includes the differencing operator (1 − B) (where B is the backshift operator) raised to an integer power. For example $(1-B)^2=1-2B+B^2 \,,$ where $B^2X_t=X_{t-2} \, ,$ so that $(1-B)^2X_t = X_t -2X_{t-1} + X_{t-2}.$

In a fractional model, the power is allowed to be fractional, with the meaning of the term identified using the following formal binomial series expansion

\begin{align} (1 - B)^d &= \sum_{k=0}^{\infty} \; {d \choose k} \; (-B)^k \\ & = \sum_{k=0}^{\infty} \; \frac{\prod_{a=0}^{k-1} (d - a)\ (-B)^k}{k!}\\ &=1-dB+\frac{d(d-1)}{2!}B^2 -\cdots \, . \end{align}

## ARFIMA(0,d,0)

The simplest autoregressive fractionally integrated model, ARFIMA(0,d,0), is, in standard notation, $(1 - B)^d X_t= \varepsilon_t,$ where this has the interpretation $X_t-dX_{t-1}+\frac{d(d-1)}{2!}X_{t-2} -\cdots = \varepsilon_t .$

ARFIMA(0,d,0) is similar to fractional Gaussian noise (fGn): with $d = H - \tfrac{1}{2}$, their covariances have the same power-law decay. The advantage of fGn over ARFIMA(0,d,0) is that many asymptotic relations hold for finite samples.[1] The advantage of ARFIMA(0,d,0) over fGn is that it has an especially simple spectral density $f(\lambda) = \frac{1}{2\pi}\bigl(2\sin(\lambda/2)\bigr)^{-2d}$—and it is a particular case of ARFIMA(p,d,q), which is a versatile family of models.[1]

## General form: ARFIMA(p,d,q)

An ARFIMA model shares the same form of representation as the ARIMA(p,d,q) process, specifically: $\left( 1 - \sum_{i=1}^p \phi_i B^i \right) \left( 1-B \right)^d X_t = \left( 1 + \sum_{i=1}^q \theta_i B^i \right) \varepsilon_t \, .$

In contrast to the ordinary ARIMA process, the "difference parameter", d, is allowed to take non-integer values.
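To make the expansion concrete, here is a short NumPy sketch (my addition, not part of the article) generating the weights of $(1-B)^d$ from the binomial series above via the recursion $w_0 = 1$, $w_k = -w_{k-1}(d-k+1)/k$:

```python
# Coefficients of the formal expansion (1 - B)^d = sum_k w_k B^k, truncated
# at nlags lags; applying them to a series gives fractional differencing.
import numpy as np

def frac_diff_weights(d: float, nlags: int) -> np.ndarray:
    w = np.empty(nlags + 1)
    w[0] = 1.0
    for k in range(1, nlags + 1):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return w

# Sanity check: d = 2 recovers the integer case 1 - 2B + B^2.
assert np.allclose(frac_diff_weights(2, 4), [1.0, -2.0, 1.0, 0.0, 0.0])

# For fractional d the weights decay slowly, reflecting long memory.
print(frac_diff_weights(0.4, 5))
```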
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9859088659286499, "perplexity": 2415.8263948853255}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645293619.80/warc/CC-MAIN-20150827031453-00291-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.researchgate.net/publication/278829050_A_new_approach_to_improve_ill-conditioned_parabolic_optimal_control_problem_via_time_domain_decomposition
# A new approach to improve ill-conditioned parabolic optimal control problem via time domain decomposition

Mohamed Kamel Riahi (Department of Mathematical Sciences, New Jersey Institute of Technology, University Heights, Newark, New Jersey, USA; [email protected], http://web.njit.edu/~riahi)

HAL Id: hal-00974285, https://hal.inria.fr/hal-00974285v2 (submitted on 14 Jan 2015)

January 14, 2015

Abstract. In this paper we present a new steepest-descent type algorithm for convex optimization problems. Our algorithm pieces the unknown into sub-blocs of unknowns and considers a partial optimization over each sub-bloc. In quadratic optimization, our method involves a Newton technique to compute the step-lengths for the sub-blocs' resulting descent directions. Our optimization method is fully parallel and easily implementable; we first present it in a general linear algebra setting, then we highlight its applicability to a parabolic optimal control problem, where we consider the blocs of unknowns with respect to the time dependency of the control variable. The parallel tasks, in the last problem, turn "on" the control during a specific time-window and turn it "off" elsewhere. We show that our algorithm significantly improves the computational time compared with recognized methods. Convergence analysis of the new optimal control algorithm is provided for an arbitrary choice of partition.
Numerical experiments are presented to illustrate the efficiency and the rapid convergence of the method.

Keywords: steepest descent method, Newton method, ill-conditioned optimal control, time domain decomposition.

1. Introduction

Typically the improvement of iterative methods is based on an implicit transformation of the original linear system in order to get a new system whose condition number is ideally close to one; see [11,13,25] and references therein. This technique is known as preconditioning. Modern preconditioning techniques such as algebraic multilevel methods (e.g. [20,24]) and domain decomposition methods (e.g. [23,27,4,15]) attempt to produce efficient tools to accelerate convergence. Other techniques have introduced a different definition of the descent directions, for example the CG method, GMRES, FGMRES, BFGS, or its limited-memory version l-BFGS; see for instance [25]. Other approaches (e.g. [5], [12] and [28], without being exhaustive) propose different formulas for the line-search in order to enhance the optimization procedure.

The central investigation of this paper is the enhancement of the iterations of the steepest descent algorithm via the introduction of a new formulation for the line-search. Indeed, we show how to achieve an optimal vectorized step-length for a given set of descent directions. Steepest descent methods [7] are usually used for solving, for example, optimization problems, control problems with partial differential equation (PDE) constraints, and inverse problems. Several approaches have been developed in the cases of constrained and unconstrained optimization. It is well known that the algorithm has a slow convergence rate on ill-conditioned problems because the number of iterations is proportional to the condition number of the problem. The method of J. Barzilai and J. Borwein [2], based on a two-point step-length for the steepest-descent method approximating the secant equation, avoids this handicap. Our method is very different: first, it is based on a decomposition of the unknown and proposes a set of bloc descent directions; second, it is general, in that it can be coupled with any least-square-like optimization procedure.

The theoretical basis of our approach is presented and applied to the optimization of a positive definite quadratic form. Then we apply it to a complex engineering problem involving the control of a system governed by PDEs. We consider the optimal heat control, which is known to be ill-posed in general (and well-posed under some assumptions) and presents some particular theoretical and numerical challenges. We handle the ill-posedness degree of the heat control problem by varying the regularization parameter, and we apply our method to this problem to show the efficiency of our algorithm. The distributed- and boundary-control cases are both considered.

This paper is organized as follows: In Section 2, we present our method in a linear algebra framework to highlight its generality. Section 3 is devoted to the introduction of the optimal control problem with constrained PDE on which we will apply our method. We present the Euler–Lagrange system associated to the optimization problem and give the explicit formulation of the gradient in both cases of distributed- and boundary-control. Then, we present and explain the parallel setting for our optimal control problem.
In Section 4, we perform the convergence analysis of our parallel algorithm. In Section 5, we present the numerical experiments that demonstrate the efficiency and the robustness of our approach. We make concluding remarks in Section 6. For completeness, we include calculus results in the Appendix.

Let $\Omega$ be a bounded domain in $\mathbb{R}^3$ and $\Omega_c \subset \Omega$; the boundary of $\Omega$ is denoted by $\partial\Omega$. We denote by $\Gamma \subset \partial\Omega$ a part of this boundary. We denote by $\langle\cdot,\cdot\rangle_2$ (respectively $\langle\cdot,\cdot\rangle_c$ and $\langle\cdot,\cdot\rangle_\Gamma$) the standard $L^2(\Omega)$ (respectively $L^2(\Omega_c)$ and $L^2(\Gamma)$) inner product, which induces the $L^2(\Omega)$-norm $\|\cdot\|_2$ on the domain $\Omega$ (respectively $\|\cdot\|_c$ on $\Omega_c$ and $\|\cdot\|_\Gamma$ on $\Gamma$). In the case of the finite-dimensional vector space $\mathbb{R}^m$, the scalar product $a^T b$ of $a$ and $b$ (where $a^T$ stands for the transpose of $a$) is denoted by $\langle\cdot,\cdot\rangle_2$ too. The scalar product with respect to the matrix $A$, i.e. $\langle x, Ax\rangle_2$, is denoted by $\langle x, x\rangle_A$, and its induced norm is denoted by $\|x\|_A$. The transpose of the operator $A$ is denoted by $A^T$. The Hilbert space $L^2(0,T;L^2(\Omega_c))$ (respectively $L^2(0,T;L^2(\Gamma))$) is endowed with the scalar product $\langle\cdot,\cdot\rangle_{c,I}$ (respectively $\langle\cdot,\cdot\rangle_{\Gamma,I}$), which induces the norm $\|\cdot\|_{c,I}$ (respectively $\|\cdot\|_{\Gamma,I}$).

2. Enhanced steepest descent iterations

The steepest descent algorithm minimizes at each iteration the quadratic function $q(x) = \|x - x^\star\|_A^2$, where $A$ is assumed to be a symmetric positive definite (SPD) matrix and $x^\star$ is the minimum of $q$. The vector $-\nabla q(x)$ is locally the descent direction that yields the fastest rate of decrease of the quadratic form $q$. Therefore all vectors of the form $x + \theta\nabla q(x)$, where $\theta$ is a suitable negative real value, minimize $q$. The choice of $\theta$ is found by looking for $\min_{s<0} q(x + s\nabla q(x))$ with the use of a line-search technique. In the case where $q$ is a quadratic form, $\theta$ is given by $-\|\nabla q(x)\|_2^2 / \|\nabla q(x)\|_A^2$. We recall in Algorithm 1 the steepest descent algorithm; Convergence is a boolean variable based on the estimate $r_k < \epsilon$ of the residual vector, where $\epsilon$ is the stopping criterion.

Algorithm 1: Steepest descent.
  Input: $x_0$; $k = 0$;
  while Convergence do
    $r_k = \nabla q_k := \nabla q(x_k)$;
    compute $A r_k$;
    compute $\theta_k = -\|r_k\|_2^2 / \|r_k\|_A^2$;
    $x_{k+1} = x_k + \theta_k r_k$;
    $k = k + 1$;
  end

Our method proposes to modify step 5 of Algorithm 1. It considers the step-length $\theta \in \mathbb{R}\setminus\{0\}$ as a vector in $\mathbb{R}^{\hat n}\setminus\{0\}$, where $\hat n$ is an integer such that $1 \le \hat n \le \operatorname{size}(x)$; we shall denote this new vector by $\Theta_{\hat n}$. In the following, it is assumed that for a given vector $x \in \mathbb{R}^m$ the integer $\hat n$ divides $m$ with null rest. In this context, let us introduce the identity operator $I_{\mathbb{R}^m}$, which is an $m$-by-$m$ matrix, and its partition (partition of unity) given by the projection operators $\{\pi_n\}_{n=1}^{\hat n}$, projectors from $\mathbb{R}^m$ onto the span of a set of canonical basis vectors $\{e_i\}_i$. These operators are defined for $1 \le n \le \hat n$ by
$$\pi_n : \mathbb{R}^m \to \mathbb{R}^{\frac{m}{\hat n}}, \qquad x \mapsto \pi_n(x) = \sum_{i=(n-1)\frac{m}{\hat n}+1}^{n\frac{m}{\hat n}} \langle e_i, x\rangle_2\, e_i.$$
For reading convenience, we define $\tilde x_n$ as the vector in $\mathbb{R}^m$ such that $\tilde x_n := \pi_n(x)$. The concatenation of the $\tilde x_n$ for all $1 \le n \le \hat n$ is denoted by
$$\hat x_{\hat n} = \bigoplus_{n=1}^{\hat n} \pi_n(x) = \bigoplus_{n=1}^{\hat n} \tilde x_n \in \mathbb{R}^m.$$
We remark that the $\pi_n$ satisfy $\bigoplus_{n=1}^{\hat n} \pi_n = I_{\mathbb{R}^m}$. Recall the gradient $\nabla_x = (\partial_{x_1},\dots,\partial_{x_m})^T$, and define the bloc gradient $\nabla_{\hat x_{\hat n}} = (\nabla^T_{\tilde x_1},\dots,\nabla^T_{\tilde x_{\hat n}})^T$, where obviously $\nabla^T_{\tilde x_n} = (\partial_{x_{(n-1)\frac{m}{\hat n}+1}},\dots,\partial_{x_{n\frac{m}{\hat n}}})^T$. In the spirit of this decomposition we investigate, in the sequel, the local descent directions as the bloc partial derivatives with respect to the bloc variables $(\tilde x_n)_{n=1}^{\hat n}$. We aim, therefore, at finding $\Theta_{\hat n} = (\theta_1,\dots,\theta_{\hat n})^T \in \mathbb{R}^{\hat n}$ that ensures $\min_{(\theta_n)_n<0} q\big(\hat x^k_{\hat n} + \bigoplus_{n=1}^{\hat n} \theta_n \nabla_{\tilde x_n} q(\hat x^k_{\hat n})\big)$. We state hereafter a motivating result, whose proof is straightforward because the spaces are embedded.
Let us first denote by
$$\Phi_{\hat n} : \mathbb{R}^{\hat n} \to \mathbb{R}^+, \qquad \Theta_{\hat n} \mapsto q\Big(\hat x_{\hat n} + \bigoplus_{n=1}^{\hat n} \theta_n \nabla_{\tilde x_n} q(\hat x_{\hat n})\Big), \tag{1}$$
which is quadratic because $q$ is.

Theorem 2.1. According to the definition of $\Phi_{\hat n}(\Theta_{\hat n})$ (see Eq. (1)) we immediately have
$$\min_{\mathbb{R}^p} \Phi_p(\Theta_p) \le \min_{\mathbb{R}^q} \Phi_q(\Theta_q) \qquad \text{for } q < p.$$

The new algorithm we discuss in this paper proposes to define a sequence $(\hat x^k_{\hat n})_k$ of vectors that converges to $x^\star$, the unique minimizer of the quadratic form $q$. The update formula reads
$$\tilde x^{k+1}_n = \tilde x^k_n + \theta^k_n \nabla_{\tilde x_n} q(\hat x^k_{\hat n}),$$
where we recall that $\hat n$ is an arbitrarily chosen integer. Then $\hat x^{k+1}_{\hat n} = \bigoplus_{n=1}^{\hat n} \tilde x^{k+1}_n$.

We shall now explain how one can accurately compute the vector step-length $\Theta^k_{\hat n}$ at each iteration $k$. It is assumed that $q$ is a quadratic form. From Eq. (1), using the chain rule, we obtain the Jacobian vector $\Phi'_{\hat n}(\Theta_{\hat n}) \in \mathbb{R}^{\hat n}$ given by
$$\big(\Phi'_{\hat n}(\Theta_{\hat n})\big)_j = \nabla_{\tilde x_j} q(\hat x^k_{\hat n})^T\, \nabla_{\tilde x_j} q\Big(\hat x^k_{\hat n} + \bigoplus_{n=1}^{\hat n} \theta_n \nabla_{\tilde x_n} q(\hat x^k_{\hat n})\Big) \in \mathbb{R}, \tag{2}$$
and the Hessian matrix $\Phi''_{\hat n}(\Theta_{\hat n}) \in \mathbb{R}^{\hat n \times \hat n}$ given by
$$\big(\Phi''_{\hat n}(\Theta_{\hat n})\big)_{i,j} = \nabla_{\tilde x_i} q(\hat x^k_{\hat n})^T\, \nabla_{\tilde x_i}\nabla_{\tilde x_j} q\Big(\hat x^k_{\hat n} + \bigoplus_{n=1}^{\hat n} \theta_n \nabla_{\tilde x_n} q(\hat x^k_{\hat n})\Big)\, \nabla_{\tilde x_j} q(\hat x^k_{\hat n}).$$
It is worth noticing that the matrix $\nabla_{\tilde x_i}\nabla_{\tilde x_j} q(\cdot)$ is a bloc portion of the Hessian matrix $A$. However, if the gradient $\nabla_{\tilde x_n} q \in \mathbb{R}^{m/\hat n}$ assumes an extension by zero (denoted by $\widetilde{\nabla}_{\tilde x_i} q$) to $\mathbb{R}^m$, then the matrix $\Phi''_{\hat n}(\Theta_{\hat n})$ has the simplest implementable form
$$\big(\Phi''_{\hat n}(\Theta_{\hat n})\big)_{i,j} = \big(\widetilde{\nabla}_{\tilde x_i} q(\hat x^k_{\hat n})\big)^T A\, \widetilde{\nabla}_{\tilde x_j} q(\hat x^k_{\hat n}). \tag{3}$$
We thus have the expansion
$$\Phi_{\hat n}(\Theta^k_{\hat n}) = \Phi_{\hat n}(\mathbf 0) + (\Theta^k_{\hat n})^T \nabla\Phi_{\hat n}(\mathbf 0) + \tfrac12 (\Theta^k_{\hat n})^T \Phi''_{\hat n}(\mathbf 0)\, \Theta^k_{\hat n},$$
with $\mathbf 0 := (0,\dots,0)^T \in \mathbb{R}^{\hat n}$. Then the vector $\Theta^k_{\hat n}$ that annuls the gradient writes
$$\Theta^k_{\hat n} = -\Phi''_{\hat n}(\mathbf 0)^{-1}\, \Phi'_{\hat n}(\mathbf 0). \tag{4}$$

Algorithm 1 has therefore a bloc structure which can be solved in parallel. This is due to the fact that the partial derivatives can be computed independently. The new algorithm is thus as follows (see Algorithm 2).

Algorithm 2: Enhanced steepest descent.
  Input: $\hat x^0_{\hat n} \in \mathbb{R}^m$; $k = 0$;
  while Convergence do
    forall $1 \le n \le \hat n$ do
      $\tilde x^k_n = \pi_n(\hat x^k_{\hat n})$;
      $r_n = \nabla_{\tilde x^k_n} q(\hat x^k_{\hat n})$;
      resize($r_n$) (i.e. extension by zero, which simply means projecting onto $\mathbb{R}^m$);
    end
    assemble $\Phi'_{\hat n}(\mathbf 0)$ with elements $(\Phi'_{\hat n}(\mathbf 0))_j = r_j^T r_j$, according to Eq. (2);
    assemble $\Phi''_{\hat n}(\mathbf 0)$ with elements $(\Phi''_{\hat n}(\mathbf 0))_{i,j} = r_i^T A r_j$, according to Eq. (3);
    compute $\Theta^k_{\hat n}$, the solution of Eq. (4);
    update $\hat x^{k+1}_{\hat n} = \hat x^k_{\hat n} + \bigoplus_n \theta_n \nabla_{\tilde x_n} q(\hat x^k_{\hat n})$;
    $k = k + 1$;
  end
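To make the bloc construction concrete, the following is a minimal NumPy sketch (an illustration, not code from the paper) of Algorithm 2 applied to a small SPD quadratic. The factor of 2 coming from the Hessian of $q(x) = \|x - x^\star\|_A^2$ is kept explicit, the blocs are contiguous slices of the unknown, and a least-squares solve stands in for the inverse in Eq. (4) to tolerate nearly converged blocs:

```python
# Bloc steepest descent on q(x) = (x - x_star)^T A (x - x_star), A SPD,
# so grad q(x) = 2 A (x - x_star).  Each bloc direction r_n is the gradient
# restricted to bloc n and extended by zero (the "resize" step).
import numpy as np

def bloc_steepest_descent(A, x_star, n_blocs, tol=1e-10, max_iter=1000):
    m = A.shape[0]
    assert m % n_blocs == 0, "n_blocs must divide the problem size"
    s = m // n_blocs
    x = np.zeros(m)
    for _ in range(max_iter):
        g = 2.0 * A @ (x - x_star)             # full gradient of q at x
        if np.linalg.norm(g) < tol:
            break
        R = np.zeros((m, n_blocs))             # bloc directions as columns
        for n in range(n_blocs):
            R[n * s:(n + 1) * s, n] = g[n * s:(n + 1) * s]
        D = R.T @ g                            # (Phi')_j = r_j^T grad q = r_j^T r_j
        H = 2.0 * R.T @ A @ R                  # Hessian of q is 2A here
        theta = -np.linalg.lstsq(H, D, rcond=None)[0]   # bloc step-lengths, Eq. (4)
        x = x + R @ theta
    return x

A = np.diag([1.0, 2.0, 50.0, 100.0])           # ill-conditioned SPD test matrix
x_star = np.array([1.0, -2.0, 3.0, 0.5])
print(bloc_steepest_descent(A, x_star, n_blocs=2))   # approaches x_star
```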
3. Application to a parabolic optimal control problem

In this part we are interested in applying Algorithm 2 to a finite element computational engineering problem involving PDE-constrained optimization. In particular, we deal with the optimal control of a system governed by the heat equation. We present two types of control problems: the first concerns distributed optimal control and the second Dirichlet boundary control. The main difference from the linear-algebra setting just presented is that the decomposition is applied to the time domain of the control. This technique is not classical; similar approaches have been proposed for time domain decomposition applied to control problems, for instance [19, 17, 18], which basically use a variant of the parareal-in-time algorithm [15].

3.1. Distributed optimal control problem. Let us briefly present the steepest descent method applied to the following optimal control problem: find $v^\star$ such that
(5) $J(v^\star) = \min_{v \in L^2(0,T;L^2(\Omega_c))} J(v)$,
where $J$ is a quadratic cost functional defined by
(6) $J(v) = \frac12\|y(T) - y_{\mathrm{target}}\|_2^2 + \frac{\alpha}{2}\int_I \|v\|_c^2\,dt$,
where $y_{\mathrm{target}}$ is a given target state and $y(T)$ is the state variable at time $T > 0$ of the heat equation controlled by the variable $v$ over $I := [0,T]$. The Tikhonov regularization parameter $\alpha$ is introduced to penalize the control's $L^2$-norm over the time interval $I$. The optimality system of our problem reads:
(7) $\partial_t y - \sigma\Delta y = Bv$ on $I\times\Omega$, $\quad y(t=0) = y_0$;
(8) $\partial_t p + \sigma\Delta p = 0$ on $I\times\Omega$, $\quad p(t=T) = y(T) - y_{\mathrm{target}}$;
(9) $\nabla J(v) = \alpha v + B^T p = 0$ on $I\times\Omega$.
In the above equations, $B$ is a linear operator that distributes the control in $\Omega_c$; in fact $B$ stands for the indicator function of $\Omega_c \subset \Omega$. The variable $p$ stands for the Lagrange multiplier (adjoint state), the solution of the backward heat equation Eq. (8); Eq. (7) is called the forward heat equation.

3.2. Dirichlet boundary optimal control problem. In this subsection we are concerned with the PDE-constrained Dirichlet boundary optimal control problem, where we aim at minimizing the cost functional $J_\Gamma$ defined by
(10) $J_\Gamma(v_\Gamma) = \frac12\|y_\Gamma(T) - y_{\mathrm{target}}\|_2^2 + \frac{\alpha}{2}\int_I \|v_\Gamma\|_\Gamma^2\,dt$,
where the control variable $v_\Gamma$ acts only on the boundary part $\Gamma \subset \partial\Omega$. Here too, $y_{\mathrm{target}}$ is a given target state (not necessarily equal to the one defined in the previous subsection) and $y_\Gamma(T)$ is the state variable at time $T > 0$ of the heat equation controlled by $v_\Gamma$ during the time interval $I := [0,T]$. As before, $\alpha$ is a regularization parameter. The involved optimality system reads
(11) $\partial_t y_\Gamma - \sigma\Delta y_\Gamma = f$ on $I\times\Omega$, $\quad y_\Gamma = v_\Gamma$ on $I\times\Gamma$, $\quad y_\Gamma = g$ on $I\times\{\partial\Omega\setminus\Gamma\}$, $\quad y_\Gamma(0) = y_0$;
(12) $\partial_t p_\Gamma + \sigma\Delta p_\Gamma = 0$ on $I\times\Omega$, $\quad p_\Gamma = 0$ on $I\times\partial\Omega$, $\quad p_\Gamma(T) = y_\Gamma(T) - y_{\mathrm{target}}$;
(13) $\nabla J_\Gamma(v_\Gamma) = \alpha v_\Gamma - (\nabla p_\Gamma)^T\vec n = 0$ on $I\times\Gamma$,
where $f \in L^2(\Omega)$ is a source term, $g \in L^2(\Gamma)$, and $\vec n$ is the outward unit normal on $\Gamma$. The variable $p_\Gamma$ stands for the Lagrange multiplier (adjoint state), the solution of the backward heat equation Eq. (12). Both functions $f$ and $g$ will be given explicitly for each numerical test considered in the experiments section.

3.3. Steepest descent algorithm for optimal control constrained by a PDE. In the optimal control problem, the evaluation of the gradient, as is clear from Eq. (9) (respectively Eq. (13)), requires the evaluation of the time-dependent Lagrange multiplier $p$ (respectively $p_\Gamma$). This makes the steepest descent optimization algorithm differ slightly from Algorithm 1. Let $k$ denote the current iteration superscript, and suppose that $v^0$ is known. The first-order steepest descent algorithm updates the control variable as
(14) $v^k = v^{k-1} + \theta^{k-1}\nabla J(v^{k-1})$, for $k \ge 1$, for the distributed control,
respectively as
(15) $v^k_\Gamma = v^{k-1}_\Gamma + \theta^{k-1}_\Gamma\nabla J_\Gamma(v^{k-1}_\Gamma)$, for $k \ge 1$, for the Dirichlet control.
The step-length $\theta^{k-1} \in \mathbb{R}\setminus\{0\}$ in the direction of the gradient $\nabla J(v^{k-1}) = \alpha v^{k-1} + B^T p^{k-1}$ (respectively $\nabla J_\Gamma(v^{k-1}_\Gamma) = \alpha v^{k-1}_\Gamma - (\nabla p^{k-1}_\Gamma)^T\vec n$) is computed as
$$\theta^{k-1} = -\|\nabla J(v^{k-1})\|_{c,I}^2\,/\,\|\nabla J(v^{k-1})\|_{\nabla^2 J}^2\quad\text{for the distributed control},$$
respectively as
$$\theta^{k-1}_\Gamma = -\|\nabla J_\Gamma(v^{k-1}_\Gamma)\|_{\Gamma,I}^2\,/\,\|\nabla J_\Gamma(v^{k-1}_\Gamma)\|_{\nabla^2 J_\Gamma}^2\quad\text{for the Dirichlet control}.$$
The above step-length $\theta^{k-1}$ (respectively $\theta^{k-1}_\Gamma$) is optimal (see e.g. [8]) in the sense that it minimizes the function $\theta \mapsto J(v^{k-1} + \theta\nabla J(v^{k-1}))$ (respectively $\theta \mapsto J_\Gamma(v^{k-1}_\Gamma + \theta\nabla J_\Gamma(v^{k-1}_\Gamma))$). The rate of convergence of this technique is $\big(\frac{\kappa-1}{\kappa+1}\big)^2$, where $\kappa$ is the condition number of the quadratic form, namely of the Hessian of the cost functional $J$ (respectively $J_\Gamma$).
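To make the loop concrete, the following Scilab-style sketch implements the update (14) with the PDE solves treated as black boxes. The routines solve_forward, solve_adjoint, restrict_c, hess_apply and ip, as well as the sizes nc, nt and the data alpha, ytarget, kmax, are hypothetical placeholders standing in for a full space-time discretization; they are not routines from the paper's code.

```scilab
// Sketch of the steepest-descent control loop (14), under the assumption that:
//   solve_forward(v)  -> final state y(T) of the forward heat equation (7);
//   solve_adjoint(d)  -> adjoint p solving (8) backward from p(T) = d;
//   restrict_c(p)     -> restriction of p to the control region Omega_c;
//   hess_apply(w)     -> Hessian-vector product (nabla^2 J) w;
//   ip(u, w)          -> discrete L2(0,T; L2(Omega_c)) inner product.
// All of the above, plus nc, nt, kmax, alpha, ytarget, are assumed given.
v = zeros(nc, nt);                          // control dof on Omega_c x time grid
for k = 1:kmax
    yT = solve_forward(v);                  // forward solve, Eq. (7)
    p  = solve_adjoint(yT - ytarget);       // backward solve, Eq. (8)
    g  = alpha*v + restrict_c(p);           // gradient of J, Eq. (9)
    theta = -ip(g, g) / ip(g, hess_apply(g)); // optimal scalar step length
    v = v + theta*g;                        // update (14)
end
```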
3.4. Time-domain decomposition algorithm. Consider $\hat n$ subdivisions of the time interval $I = \cup_{n=1}^{\hat n} I_n$, and consider the following convex cost functionals:
(16) $J(v_1,v_2,\ldots,v_{\hat n}) = \frac12\|Y(T) - y_{\mathrm{target}}\|_2^2 + \frac{\alpha}{2}\sum_{n=1}^{\hat n}\int_{I_n}\|v_n\|_c^2\,dt$,
(17) $J_\Gamma(v_{1,\Gamma},v_{2,\Gamma},\ldots,v_{\hat n,\Gamma}) = \frac12\|Y_\Gamma(T) - y_{\mathrm{target}}\|_2^2 + \frac{\alpha}{2}\sum_{n=1}^{\hat n}\int_{I_n}\|v_{n,\Gamma}\|_\Gamma^2\,dt$,
where the $v_n$, $n = 1,\ldots,\hat n$, are control variables with time support included in $I_n$. The state $Y(T)$ (respectively $Y_\Gamma(T)$) stands for the sum of the states $Y_n$ (respectively $Y_{n,\Gamma}$), the time-dependent state variables solving the heat equation controlled by $v_n$ (respectively $v_{n,\Gamma}$). Obviously, because the control enters linearly, the state $Y$ depends on the concatenation of the controls $v_1,v_2,\ldots,v_{\hat n}$, namely on $v = \sum_{n=1}^{\hat n} v_n$.

Let us define $\Theta_{\hat n} := (\theta_1,\theta_2,\ldots,\theta_{\hat n})^T$, where $\theta_n \in \mathbb{R}\setminus\{0\}$. For any admissible control $w = \sum_{n=1}^{\hat n} w_n$, we also define $\varphi_{\hat n}(\Theta_{\hat n}) := J(v + \sum_{n=1}^{\hat n}\theta_n w_n)$, which is quadratic. We have
(18) $\varphi_{\hat n}(\Theta_{\hat n}) = \varphi_{\hat n}(\mathbf 0) + \Theta_{\hat n}^T\nabla\varphi_{\hat n}(\mathbf 0) + \frac12\Theta_{\hat n}^T\nabla^2\varphi_{\hat n}(\mathbf 0)\,\Theta_{\hat n}$,
with $\mathbf 0 = (0,\ldots,0)^T$. Therefore the gradient $\nabla\varphi_{\hat n}(\Theta_{\hat n}) \in \mathbb{R}^{\hat n}$ can be written as $\nabla\varphi_{\hat n}(\Theta_{\hat n}) = D(v,w) + H(v,w)\,\Theta_{\hat n}$, where the Jacobian vector and the Hessian matrix are given respectively by
$$D(v,w) := \big(\langle\nabla J(v),\pi_1(w)\rangle_c,\ \ldots,\ \langle\nabla J(v),\pi_{\hat n}(w)\rangle_c\big)^T \in \mathbb{R}^{\hat n},$$
$$H(v,w) := (H_{n,m})_{n,m},\qquad H_{n,m} = \langle\pi_n(w),\pi_m(w)\rangle_{\nabla^2 J}.$$
Here $\pi_n$ is the restriction to the time interval $I_n$; indeed $\pi_n(w)$ has support in $I_n$ and is extended by zero to $I$. The solution $\Theta^\star_{\hat n}$ of $\nabla\varphi_{\hat n}(\Theta_{\hat n}) = \mathbf 0$ can then be written as
(19) $\Theta^\star_{\hat n} = -H^{-1}(v,w)\,D(v,w)$.

In the parallel distributed control problem, we are concerned with the following optimality system:
(20) $\partial_t Y_n - \sigma\Delta Y_n = Bv_n$ on $I\times\Omega$, $\quad Y_n(t=0) = \delta^0_n\,y_0$;
(21) $Y(T) = \sum_{n=1}^{\hat n} Y_n(T)$;
(22) $\partial_t P + \sigma\Delta P = 0$ on $I\times\Omega$, $\quad P(t=T) = Y(T) - y_{\mathrm{target}}$;
(23) $\nabla J\big(\sum_{n=1}^{\hat n} v_n\big) = B^T P + \alpha\sum_{n=1}^{\hat n} v_n = 0$ on $I\times\Omega$,
where $\delta^0_n$ stands for the Kronecker symbol, so that only the first subproblem carries the initial condition $y_0$. The Dirichlet control problem we are concerned with reads:
(24) $\partial_t Y_{n,\Gamma} - \sigma\Delta Y_{n,\Gamma} = f$ on $I\times\Omega$, $\quad Y_{n,\Gamma} = v_{n,\Gamma}$ on $I\times\Gamma$, $\quad Y_{n,\Gamma} = g$ on $I\times\{\partial\Omega\setminus\Gamma\}$, $\quad Y_{n,\Gamma}(0) = \delta^0_n\,y_0$;
(25) $Y_\Gamma(T) = \sum_{n=1}^{\hat n} Y_{n,\Gamma}(T)$;
(26) $\partial_t P_\Gamma + \sigma\Delta P_\Gamma = 0$ on $I\times\Omega$, $\quad P_\Gamma = 0$ on $I\times\partial\Omega$, $\quad P_\Gamma(T) = Y_\Gamma(T) - y_{\mathrm{target}}$;
(27) $\nabla J_\Gamma\big(\sum_{n=1}^{\hat n} v_{n,\Gamma}\big) = -(\nabla P_\Gamma)^T\vec n + \alpha\sum_{n=1}^{\hat n} v_{n,\Gamma} = 0$ on $I\times\Gamma$.
The resolutions of Eqs. (20) and (24) with respect to $n$ are fully performed in parallel over the time interval $I$. It is recalled that the superscript $k$ denotes the iteration index. The update formula for the control variable $v^k$ is given by
$$v^k_n = v^{k-1}_n + \theta^{k-1}_n\Big(B^T P^{k-1} + \alpha\sum_{n=1}^{\hat n} v^{k-1}_n\Big),$$
respectively
$$v^k_{n,\Gamma} = v^{k-1}_{n,\Gamma} + \theta^{k-1}_{n,\Gamma}\Big(-(\nabla P^{k-1}_\Gamma)^T\vec n + \alpha\sum_{n=1}^{\hat n} v^{k-1}_{n,\Gamma}\Big).$$
In the following we drop the index $\Gamma$ of the cost functional $J$, using it only when it is necessary to specify which cost functional is in consideration; otherwise the derivation applies to the distributed as well as to the boundary control.

We now show how to assemble the vector step-length $\Theta^k_{\hat n}$ at each iteration. For notational purposes we denote by $H_k$ the $k$-th iterate of the Hessian matrix and by $D_k$ the $k$-th iterate of the Jacobian vector, both evaluated at $w = \nabla J(v^k)$. The line search is performed with a quasi-Newton technique that uses at each iteration $k$ the Hessian matrix $H_k$ and Jacobian vector $D_k$ defined respectively by
(28) $D_k := \big(\langle\nabla J(v^k),\pi_1\nabla J(v^k)\rangle_c,\ \ldots,\ \langle\nabla J(v^k),\pi_{\hat n}\nabla J(v^k)\rangle_c\big)^T$,
(29) $(H_k)_{n,m} := \langle\pi_n\nabla J(v^k),\pi_m\nabla J(v^k)\rangle_{\nabla^2 J}$.
The spectral condition number of the Hessian matrix $\nabla^2 J$ is denoted by $\kappa = \kappa(\nabla^2 J) := \lambda_{\max}\lambda_{\min}^{-1}$, with $\lambda_{\max} := \lambda_{\max}(\nabla^2 J)$ the largest eigenvalue of $\nabla^2 J$ and $\lambda_{\min} := \lambda_{\min}(\nabla^2 J)$ its smallest eigenvalue. According to Eq. (19) we have
(30) $\Theta^k_{\hat n} = -H_k^{-1} D_k$.
From Eq. (18) we have
(31) $J(v^{k+1}) = J(v^k) + (\Theta^k_{\hat n})^T D_k + \frac12(\Theta^k_{\hat n})^T H_k\,\Theta^k_{\hat n}$.
Our parallel algorithm for minimizing the cost functionals of Eq. (16) and Eq. (17) is stated as Algorithm 3 below.
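Before stating the algorithm, here is a minimal Scilab sketch of the step-length assembly (28)-(30) in a discrete setting where the control gradient is stored as an nc-by-nt array and time restriction is a column mask. The array g, the sizes nc, nt, nhat and the routine hessJ_apply (the action of $\nabla^2 J$) are assumed given; hessJ_apply in particular is a hypothetical placeholder, not a routine from the paper.

```scilab
// Sketch of assembling D_k, H_k and Theta (Eqs. (28)-(30)) for a control grid
// with nt time steps split into nhat windows; g is the (nc x nt) gradient array.
lt = nt / nhat;                          // time steps per window (exact division)
D = zeros(nhat, 1); H = zeros(nhat, nhat);
for n = 1:nhat
    gn = zeros(g); cols = (n-1)*lt+1 : n*lt;
    gn(:, cols) = g(:, cols);            // pi_n(g): restriction to I_n, zero elsewhere
    D(n) = sum(gn .* g);                 // Eq. (28), discrete inner product
    Hgn = hessJ_apply(gn);               // hypothetical Hessian action on pi_n(g)
    for m = 1:nhat
        gm = zeros(g); colsm = (m-1)*lt+1 : m*lt;
        gm(:, colsm) = g(:, colsm);
        H(n, m) = sum(gm .* Hgn);        // Eq. (29)
    end
end
Theta = -H \ D;                          // Eq. (30)
```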
Algorithm 3: Enhanced steepest descent algorithm for the optimal control problem.
Input: $v^0$;
1: while not Convergence do
2:   forall $1 \le n \le \hat n$ do
3:     solve for $Y_n(T)(v^k_n)$ from Eq. (20) (respectively Eq. (24)) in parallel;
4:   end
5:   compute $P(t)$ with the backward problem according to Eq. (22) (respectively Eq. (26));
6:   forall $1 \le n \le \hat n$ do
7:     compute $(D_k)_n$ of Eq. (28) in parallel;
8:   end
9:   gather $(D_k)_n$ from processors $n$, $2 \le n \le \hat n$, on the master processor;
10:  assemble the Hessian matrix $H_k$ according to Eq. (29) on the master processor;
11:  invert $H_k$ and compute $\Theta^k_{\hat n}$ using Eq. (30);
12:  broadcast the $\theta^k_n$ from the master processor to all slave processors;
13:  update the time-window control variables $v^{k+1}_n$ in parallel as $v^{k+1}_n = v^k_n + \theta^k_n\,\pi_n\nabla J(v^k)$ for all $1 \le n \le \hat n$, and go to step 2;
14:  $k = k + 1$;
15: end

Since the $(v_n)_n$ have disjoint time supports, by linearity the notation $\pi_n\nabla J(v^k)$ is nothing but $\nabla J(v^k_n)$, where $v^k$ is the concatenation of $v^k_1,\ldots,v^k_{\hat n}$. In Algorithm 3, steps 9, 10, 11, 12 and 13 are trivial tasks in terms of computational effort.

4. Convergence analysis of Algorithm 3

This section provides the proof of convergence of Algorithm 3. In the sequel, we suppose that $\|\nabla J(v^k)\|_c$ does not vanish; otherwise the algorithm has already converged.

Proposition 4.1. The decrease in value of the cost functional $J$ between two successive controls $v^k$ and $v^{k+1}$ is bounded below by
(32) $J(v^k) - J(v^{k+1}) \ge \dfrac{1}{2\kappa(H_k)}\,\dfrac{\|\nabla J(v^k)\|_c^4}{\|\nabla J(v^k)\|_{\nabla^2 J}^2}$.

Proof. Using Eq. (30) and Eq. (31), we can write
(33) $J(v^k) - J(v^{k+1}) = \frac12 D_k^T H_k^{-1} D_k$.
Preliminaries: from the definition of the Jacobian vector $D_k$ we have
$$\|D_k\|_2^2 = \sum_{n=1}^{\hat n}\langle\nabla J(v^k),\pi_n(\nabla J(v^k))\rangle_c^2 = \sum_{n=1}^{\hat n}\langle\pi_n(\nabla J(v^k)),\pi_n(\nabla J(v^k))\rangle_c^2 = \sum_{n=1}^{\hat n}\|\pi_n(\nabla J(v^k))\|_c^4 = \|\nabla J(v^k)\|_c^4.$$
Furthermore, since $H_k$ is an SPD matrix we have $\lambda_{\min}(H_k^{-1}) = 1/\lambda_{\max}(H_k)$, from which we deduce
$$\frac{1}{\lambda_{\min}(H_k)} \ge \frac{1}{\frac{1}{\hat n}\,\mathbf 1_{\hat n}^T H_k\,\mathbf 1_{\hat n}}.$$
Moreover, we have
$$D_k^T H_k^{-1} D_k = \frac{D_k^T H_k^{-1} D_k}{\|D_k\|_2^2}\,\|D_k\|_2^2 \ge \lambda_{\min}(H_k^{-1})\,\|D_k\|_2^2 \ge \frac{\lambda_{\min}(H_k)}{\lambda_{\max}(H_k)}\cdot\frac{\|\nabla J(v^k)\|_c^4}{\frac{1}{\hat n}\,\mathbf 1_{\hat n}^T H_k\,\mathbf 1_{\hat n}} = \frac{\hat n}{\kappa(H_k)}\cdot\frac{\|\nabla J(v^k)\|_c^4}{\|\nabla J(v^k)\|_{\nabla^2 J}^2},$$
where the last equality uses $\mathbf 1_{\hat n}^T H_k\,\mathbf 1_{\hat n} = \langle\sum_n\pi_n\nabla J(v^k),\sum_m\pi_m\nabla J(v^k)\rangle_{\nabla^2 J} = \|\nabla J(v^k)\|_{\nabla^2 J}^2$. Since the partition number $\hat n$ is greater than or equal to 1, we conclude that
(34) $D_k^T H_k^{-1} D_k \ge \dfrac{1}{\kappa(H_k)}\,\dfrac{\|\nabla J(v^k)\|_c^4}{\|\nabla J(v^k)\|_{\nabla^2 J}^2}$.
Hence, using Eq. (33), we get the stated result.

Theorem 4.2. For any partition into $\hat n$ subintervals, the control sequence $(v^k)_{k\ge1}$ of Algorithm 3 converges to the optimal control $v^\star$, the unique minimizer of the quadratic functional $J$. Furthermore we have
$$\|v^k - v^\star\|_{\nabla^2 J}^2 \le r^k\,\|v^0 - v^\star\|_{\nabla^2 J}^2,$$
where the rate of convergence $r := 1 - \frac{4\kappa}{\kappa(H_k)(\kappa+1)^2}$ satisfies $0 \le r < 1$.

Proof. We denote by $v^\star$ the optimal control that minimizes $J$. The equality
$$J(v) = J(v^\star) + \tfrac12\langle v - v^\star, v - v^\star\rangle_{\nabla^2 J} = J(v^\star) + \tfrac12\|v - v^\star\|_{\nabla^2 J}^2$$
holds for any control $v$; in particular we have
$$J(v^{k+1}) = J(v^\star) + \tfrac12\|v^{k+1} - v^\star\|_{\nabla^2 J}^2,\qquad J(v^k) = J(v^\star) + \tfrac12\|v^k - v^\star\|_{\nabla^2 J}^2.$$
Consequently, by subtracting the equations above, we obtain
(35) $J(v^{k+1}) - J(v^k) = \frac12\|v^{k+1} - v^\star\|_{\nabla^2 J}^2 - \frac12\|v^k - v^\star\|_{\nabla^2 J}^2$.
Since $J$ is quadratic, we have $\nabla^2 J\,(v^k - v^\star) = \nabla J(v^k)$, that is $v^k - v^\star = (\nabla^2 J)^{-1}\nabla J(v^k)$. Therefore we deduce
(36) $\|v^k - v^\star\|_{\nabla^2 J}^2 = \langle v^k - v^\star, v^k - v^\star\rangle_{\nabla^2 J} = \langle\nabla J(v^k), (\nabla^2 J)^{-1}\nabla J(v^k)\rangle_c = \|\nabla J(v^k)\|_{(\nabla^2 J)^{-1}}^2$.
Because of Eq. (33), we also have $J(v^{k+1}) - J(v^k) = -\frac12 D_k^T H_k^{-1} D_k$. Using Eq. (35) and the above, we find that
$$\|v^{k+1} - v^\star\|_{\nabla^2 J}^2 = \|v^k - v^\star\|_{\nabla^2 J}^2 - D_k^T H_k^{-1} D_k.$$
Moreover, according to Eqs. (34) and (36), we obtain the upper bound
(37) $\|v^{k+1} - v^\star\|_{\nabla^2 J}^2 \le \|v^k - v^\star\|_{\nabla^2 J}^2\Big(1 - \dfrac{1}{\kappa(H_k)}\,\dfrac{\|\nabla J(v^k)\|_c^4}{\|\nabla J(v^k)\|_{\nabla^2 J}^2\,\|\nabla J(v^k)\|_{(\nabla^2 J)^{-1}}^2}\Big)$.
Using the Kantorovich inequality [14, 1] (see also the Appendix):
(38) $\dfrac{\|\nabla J(v^k)\|_c^4}{\|\nabla J(v^k)\|_{\nabla^2 J}^2\,\|\nabla J(v^k)\|_{(\nabla^2 J)^{-1}}^2} \ge \dfrac{4\lambda_{\max}\lambda_{\min}}{(\lambda_{\max}+\lambda_{\min})^2}$.
Then
$$1 - \frac{1}{\kappa(H_k)}\,\frac{\|\nabla J(v^k)\|_c^4}{\|\nabla J(v^k)\|_{\nabla^2 J}^2\,\|\nabla J(v^k)\|_{(\nabla^2 J)^{-1}}^2} \le 1 - \frac{4\kappa}{\kappa(H_k)(\kappa+1)^2}.$$
Finally, we obtain the desired result for any partition into $\hat n$ subintervals, namely
$$\|v^k - v^\star\|_{\nabla^2 J}^2 \le \Big(1 - \frac{4\kappa}{\kappa(H_k)(\kappa+1)^2}\Big)^k\,\|v^0 - v^\star\|_{\nabla^2 J}^2.$$
The proof is therefore complete.

Remark 4.1. The proof remains correct for the boundary control; one only needs to replace the subscript "c", indicating the distributed control region $\Omega_c$, by "$\Gamma$" to indicate the boundary control on $\Gamma$.

Remark 4.2. For $\hat n = 1$ we immediately get the condition number $\kappa(H_k) = 1$, and we recognize the serial steepest descent method, whose convergence rate is $\big(\frac{\kappa-1}{\kappa+1}\big)^2$. It is difficult to estimate a priori the spectral condition number $\kappa(H_k)(\hat n)$ (it is a function of $\hat n$), which plays an important role in the value of the theoretical rate of convergence stated above.

We present in what follows numerical results that demonstrate the efficiency of our algorithm; the tests consider examples of well-posed and ill-posed control problems.

5. Numerical experiments

We present the numerical validation of our method in two stages. In the first stage, we consider a linear algebra framework in which we construct a random matrix-based quadratic cost function that we minimize using Algorithm 2. In the second stage, we consider the two optimal control problems presented in Sections 3.1 and 3.2 for the distributed and Dirichlet boundary control respectively. In both cases we minimize a quadratic cost function properly defined for each control problem.

5.1. Linear algebra program. This subsection deals with the implementation of Algorithm 2. The program was implemented using the scientific programming language Scilab [26]. We consider the minimization of a quadratic form $q$, where the matrix $A$ is an SPD $m$-by-$m$ matrix and $b \in \mathbb{R}^m$ is a real vector; both are generated by hand (see below for their construction). We aim at solving iteratively the linear system $Ax = b$ by minimizing
(39) $q(x) = \frac12 x^T A x - x^T b$.
Let us denote by $\hat n$ the partition number of the unknown $x \in \mathbb{R}^m$. The partition is supposed to be uniform, and we assume that $\hat n$ divides $m$ with null rest. In practice we randomly generate a sparse SPD matrix $A = (\alpha + \gamma m)\,I_{\mathbb{R}^m} + R$, where $0 < \alpha < 1$, $\gamma > 1$, $I_{\mathbb{R}^m}$ is the $m$-by-$m$ identity matrix and $R$ is a symmetric random $m$-by-$m$ matrix. In this way the matrix $A$ is symmetric and diagonally dominant, hence SPD. It is worth noticing that the role of $\alpha$ is regularizing when rapidly vanishing eigenvalues of $A$ are generated randomly; this technique lets us manipulate the coercivity of the handled problem, hence its spectral condition number.
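A minimal Scilab driver in the spirit of this experiment (compare Figure 1 below) can be sketched as follows; all the parameters are illustrative, not the paper's actual settings.

```scilab
// Minimal driver in the spirit of the Section 5.1 experiment: compare iteration
// counts of Algorithm 2 for several partitions nhat on a random SPD system.
m = 64; alpha = 0.5; gamma_ = 2;          // gamma_ avoids clashing with gamma()
R = rand(m, m); R = (R + R')/2;
A = (alpha + gamma_*m)*eye(m, m) + R;     // SPD, diagonally dominant
b = rand(m, 1); xstar = A \ b;            // exact solution, for the relative error
for nhat = [1 2 4 8]
    l = m / nhat; x = zeros(m, 1); it = 0;
    while norm(A*x - b) > 1e-8 & it < 5000
        g = A*x - b;
        rr = zeros(m, nhat);              // zero-extended block gradients
        for n = 1:nhat
            rr((n-1)*l+1 : n*l, n) = g((n-1)*l+1 : n*l);
        end
        D = rr' * g;                      // (Phi'(0))_n = r_n' r_n
        H = rr' * A * rr;                 // (Phi''(0))_{n,m} = r_n' A r_m
        Theta = -H \ D;                   // Eq. (4)
        x = x + rr * Theta;               // block update
        it = it + 1;
    end
    mprintf("nhat = %2d : %4d iterations, rel. err = %e\n", ...
            nhat, it, norm(x - xstar)/norm(xstar));
end
```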
For such a matrix $A$ we proceed to minimize the quadratic form defined in Eq. (39) for several $\hat n$-subdivisions. The improvement of the algorithm over the serial case $\hat n = 1$ in terms of iteration count is presented in Figure 1: the left-hand side of Figure 1 presents the cost function minimization versus the iteration number for several choices of the partition $\hat n$, while the right-hand side gives the logarithmic representation of the relative error $\|x^k - x^\star\|_2/\|x^\star\|_2$, where $x^\star$ is the exact solution of the linear system at hand. We give in Table 1 the Scilab function that builds the vector step-length $\Theta^k_{\hat n}$ as stated in Eq. (4).

```scilab
function [P] = BuildHk(n, A, b, xk, dJk)
    // Builds the block step-length operator of Eq. (4) from the gradient dJk.
    m = size(A, 1); l = m/n;
    if modulo(m, n) ~= 0 then
        printf("Please choose another n!");
        abort;
    end
    dJkn = zeros(m, n); Dk = [];
    for i = 1:n
        dJkn((i-1)*l+1 : i*l, i) = dJk((i-1)*l+1 : i*l);  // zero-extended block gradients
        Dk(i) = dJkn(:, i)' * (A*xk - b);                 // Jacobian entries, Eq. (2)
    end
    Hk = [];
    for i = 1:n
        for j = i:n
            Hktmp = A * dJkn(:, j);
            Hk(i, j) = dJkn(:, i)' * Hktmp;               // Hessian entries, Eq. (3)
            Hk(j, i) = Hk(i, j);                          // symmetry
        end
    end
    theta = Hk \ Dk;    // as in the original listing; the sign convention of
                        // Eq. (4) is applied where P enters the update
    P = eye(m, m);
    for i = 1:n
        P((i-1)*l+1 : i*l, (i-1)*l+1 : i*l) = theta(i) .* eye(l, l);
    end
endfunction
```

Table 1. Scilab function that builds the vector step-length for the linear algebra program.

Figure 1. Performance in terms of iteration count for several decompositions $\hat n$; results from the linear algebra Scilab program.

5.2. Heat optimal control program. We discuss in this subsection the implementation results of Algorithm 3 for the optimization problems presented in Section 3. Our tests deal with the 2D heat equation on the bounded domain $\Omega = [0,1]\times[0,1]$. We consider three types of test problems, in both the distributed and the Dirichlet control cases. The tests vary according to the theoretical difficulty of the control problem [6, 3, 10]. Indeed, we vary the regularization parameter $\alpha$ and also change the initial and target solutions in order to handle more severe control problems, as has been tested for instance in [6]. The numerical tests concern the minimization of the quadratic cost functionals $J(v)$ and $J_\Gamma(v_\Gamma)$ using Algorithm 3. It is well known that when $\alpha$ vanishes the control problem becomes an "approximate" controllability problem: the control variable then tries to produce a solution that reaches the target solution as closely as it can. With this strategy, we accentuate the degree of ill-conditioning of the handled problem. We also consider an improperly posed problem for the controllability approximation, where the target solution does not belong to the space of reachable solutions; no solution then exists for the optimization problem, i.e. no control enables reaching the given target.

For reading convenience, and in order to emphasize the role of the parameter $\alpha$ in the numerical tests, we tag the problems we consider as $P^\alpha_i$, where the index $i$ refers to the problem among $\{1,2,3,4\}$. The table below summarizes all the numerical tests we carry out:

                               | Minimize $J(v)$, distributed control          | Minimize $J_\Gamma(v_\Gamma)$, boundary control
Moderate $\alpha = 10^{-2}$    | well-posed problem; data in $(P^\alpha_1)$, $(P^\alpha_2)$ | ill-posed problem; data in $(P^\alpha_3)$
Vanishing $\alpha = 10^{-8}$   | ill-posed problem; data in $(P^\alpha_1)$, $(P^\alpha_2)$  | severely ill-posed problem; data in $(P^\alpha_3)$
Solution does not exist        | severely ill-posed problem; data in $(P^\alpha_4)$         | severely ill-posed problem; data in $(P^\alpha_4)$

We suppose from now on that the computational domain $\Omega$ is a polygonal domain of the plane $\mathbb{R}^2$. We then introduce a triangulation $\mathcal T_h$ of $\Omega$; the subscript $h$ stands for the largest length of the edges of the triangles that constitute $\mathcal T_h$. The solution of the heat equation at a given time $t$ belongs to $H^1(\Omega)$; the source terms and other variables are elements of $L^2(\Omega)$. These infinite-dimensional spaces are approximated by the finite-dimensional space $V_h$ characterized by $P_1$, the space of polynomials of degree 1 in the two variables $(x_1,x_2)$:
$$V_h := \{u_h \mid u_h \in C^0(\overline\Omega),\ u_h|_K \in P_1\ \text{for all}\ K \in \mathcal T_h\}.$$
In addition, Dirichlet boundary conditions (where the solution is in $H^1_0(\Omega)$, i.e.
vanishing on the boundary $\partial\Omega$) are taken into account via penalization of the vertices on the boundary. The time dependence of the solution is approximated via the implicit Euler scheme. The matrix inversion operations are performed by the UMFPACK solver. We use the trapezoidal rule to approximate integrals defined on the time interval. The numerical experiments were run on a parallel machine with 24 AMD CPUs at 800 MHz in a Linux environment. We wrote two FreeFem++ [22] scripts for the distributed and the Dirichlet control, and we use the MPI library to achieve parallelism. The tests concerning the distributed control problem are produced with a control acting on $\Omega_c \subset \Omega$, with $\Omega_c = [0,\frac13]\times[0,\frac13]$, whereas for the Dirichlet boundary control problem the control acts on $\Gamma \subset \partial\Omega$, with $\Gamma = \{(x_1,x_2) \mid x_2 = 0\}$. The time horizon of the problem is fixed to $T = 6.4$ and the time step is $\tau = 0.01$. In order to have better control of the time evolution we set the diffusion coefficient to $\sigma = 0.01$.

5.2.1. First test problem: moderate Tikhonov regularization parameter $\alpha$. We consider an optimal control problem for the heat equation; the control is first distributed and then Dirichlet. For the distributed optimal control problem we first use the functions
$$y_0(x_1,x_2) = \exp\big(-2\gamma\pi((x_1-0.7)^2 + (x_2-0.7)^2)\big),\qquad y_{\mathrm{target}}(x_1,x_2) = \exp\big(-2\gamma\pi((x_1-0.3)^2 + (x_2-0.3)^2)\big)\qquad (P^\alpha_1)$$
as initial condition and target solution respectively. The real value $\gamma$ is introduced to force the Gaussians to have support strictly included in the domain and to verify the boundary conditions. The aim is to minimize the cost functional defined in Eq. (6). The decay of the cost function with respect to the iterations of our algorithm is presented in Figure 2 on the left side, and the same results are given with respect to the computational CPU time (in seconds) on the right side. We show that the algorithm accelerates with respect to the partition number $\hat n$ and also preserves the accuracy of the resolution: all tests, independently of $\hat n$, converge to the unique solution. This is in agreement with Theorem 4.2, which proves the convergence of the algorithm to the optimal control (unique if it exists [16]) for an arbitrary choice of the partition $\hat n$.

Figure 2. First test problem, for $P^\alpha_1$: normalized and shifted cost functional values versus iteration number (left) and versus computational time (right) for several values of $\hat n$ (i.e. the number of processors used).

Figure 3. Snapshots for $\hat n = 1, 16$ of the distributed optimal control (left columns) and of the corresponding controlled final state $y(T)$ (right columns). The test case corresponds to the control problem $P^\alpha_1$ with $\alpha = 10^{-2}$; similar results are obtained for the other choices of $\hat n$.

We test a second problem with an a priori known solution of the heat equation. The considered problem has
$$y_0(x_1,x_2) = \sin(\pi x_1)\sin(\pi x_2),\qquad y_{\mathrm{target}}(x_1,x_2) = \exp(-2\pi^2\sigma T)\sin(\pi x_1)\sin(\pi x_2)\qquad (P^\alpha_2)$$
as initial condition and target solution respectively. Remark that the target solution is taken as the solution of the (uncontrolled) heat equation at time $T$. The results of this test are presented in Figure 4, which shows the decay in values of the cost functional versus the iterations of the algorithm on the left side and versus the computational CPU time (in seconds) on the right side.

Figure 4. First test problem, for $P^\alpha_2$: normalized cost functional values versus computational CPU time for several values of $\hat n$ (i.e. the number of processors used).
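As a quick sanity check (ours, not the paper's), one can verify by hand that the target of $P^\alpha_2$ is indeed the free heat solution at time $T$. With $y(x,t) = e^{-2\pi^2\sigma t}\sin(\pi x_1)\sin(\pi x_2)$ we have
$$\partial_t y = -2\pi^2\sigma\,y,\qquad \Delta y = -2\pi^2\,y\;\Longrightarrow\;\partial_t y - \sigma\Delta y = (-2\pi^2\sigma + 2\pi^2\sigma)\,y = 0,$$
with $y(\cdot,0) = y_0$ and $y(\cdot,T) = y_{\mathrm{target}}$. Hence the zero control $v \equiv 0$ attains $J(v) = 0$ for this problem, which is consistent with the very small errors reported for $P^\alpha_2$ in Table 2 below.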
We give in Figure 3 and Figure 5 several snapshot rows (varying $\hat n$) of the control and of its corresponding controlled final solution $y(T)$. Notice the stability and the accuracy of the method for any choice of $\hat n$: in particular, the shape of the resulting optimal control is unique, and the controlled solution $y(T)$ does not depend on $\hat n$.

For the Dirichlet boundary control problem we choose the following source term, initial condition and target solution:
$$f(x_1,x_2,t) = 3\pi^3\sigma\exp(-2\pi^2\sigma t)\,(\sin(\pi x_1) + \sin(\pi x_2)),$$
$$y_0(x_1,x_2) = \pi(\sin(\pi x_1) + \sin(\pi x_2)),$$
$$y_{\mathrm{target}}(x_1,x_2) = \pi\exp(-2\pi^2\sigma)\,(\sin(\pi x_1) + \sin(\pi x_2)),\qquad (P^\alpha_3)$$
respectively. Because of the ill-posed character of this problem, its optimization leads to results with high contrast in scale; we therefore preferred to summarize the optimization results in Table 3 instead of figures.

Remark 5.1. Because of the linearity and the superposition property of the heat equation, it can be shown that the problems $P^\alpha_2$ and $P^\alpha_3$ mentioned above are equivalent to a control problem with a null target solution.

Figure 5. Snapshots for $\hat n = 1, 16$ of the distributed optimal control (left columns) and of the corresponding controlled final state $y(T)$ (right columns). The test case corresponds to the control problem $P^\alpha_2$ with $\alpha = 10^{-2}$; similar results are obtained for the other choices of $\hat n$.

Table 2. Summary of the results of Algorithm 3 applied to the distributed control problems $P^\alpha_1$ and $P^\alpha_2$.

$P^\alpha_1$, $\alpha = 10^{-2}$:
Quantity                                           | $\hat n=1$ | $\hat n=2$ | $\hat n=4$ | $\hat n=8$ | $\hat n=16$
Number of iterations $k$                           | 100        | 68         | 63         | 49         | 27
Walltime (s)                                       | 15311.6    | 15352.3    | 14308.7    | 10998.2    | 6354.56
$\|Y^k(T)-y_{\mathrm{target}}\|_2/\|y_{\mathrm{target}}\|_2$ | 0.472113 | 0.472117 | 0.472111 | 0.472104 | 0.472102
$\int_{(0,T)}\|v^k\|_c^2\,dt$                      | 0.0151685  | 0.0151509  | 0.0151727  | 0.0152016  | 0.015214

$P^\alpha_2$, $\alpha = 10^{-2}$:
Quantity                                           | $\hat n=1$ | $\hat n=2$ | $\hat n=4$ | $\hat n=8$ | $\hat n=16$
Number of iterations $k$                           | 60         | 50         | 45         | 40         | 35
Walltime (s)                                       | 3855.21    | 3726.28    | 4220.92    | 3778.13    | 3222.78
$\|Y^k(T)-y_{\mathrm{target}}\|_2/\|y_{\mathrm{target}}\|_2$ | 8.26e-08 | 8.26e-08 | 8.15e-08 | 8.15e-08 | 8.14e-08
$\int_{(0,T)}\|v^k\|_c^2\,dt$                      | 1.68e-07   | 1.68e-07   | 1.72e-07   | 1.72e-07   | 1.72e-07

$P^\alpha_2$, $\alpha = 10^{-8}$:
Quantity                                           | $\hat n=1$ | $\hat n=2$ | $\hat n=4$ | $\hat n=8$ | $\hat n=16$
Number of iterations $k$                           | 60         | 50         | 40         | 30         | 20
Walltime (s)                                       | 3846.23    | 4654.34    | 3759.98    | 2835.31    | 1948.4
$\|Y^k(T)-y_{\mathrm{target}}\|_2/\|y_{\mathrm{target}}\|_2$ | 3.93e-08 | 1.14e-08 | 5.87e-09 | 2.04e-09 | 1.76e-09
$\int_{(0,T)}\|v^k\|_c^2\,dt$                      | 5.42e-07   | 4.13e-06   | 2.97e-04   | 3.64e-03   | 2.51e-03

5.2.2. Second test problem: vanishing Tikhonov regularization parameter $\alpha$. In this section we are concerned with the "approximate" controllability of the heat equation, where the regularization parameter $\alpha$ vanishes; in practice we take $\alpha = 10^{-8}$. In this case, the problems $P^\alpha_2$ and $P^\alpha_3$ are, in the continuous setting, supposed to be well posed (see for instance [9, 21]). This may, however, not be the case in the discretized setting; we refer for instance to [10] (and the references therein) for more details.

Figure 6. Several snapshot rows (varying $\hat n$) of the Dirichlet optimal control (left columns) and of the corresponding controlled final state $y(T)$ (right columns). The test case corresponds to the control problem $P^\alpha_3$ with $\alpha = 10^{-2}$.

Figure 7. Normalized and shifted cost functional values versus computational CPU time for several values of $\hat n$ (i.e. the number of processors used); distributed control problem $P^\alpha_2$ with $\alpha = 10^{-8}$.
Table 3. Summary of the results of Algorithm 3 applied to the Dirichlet boundary control problem $P^\alpha_3$.

$P^\alpha_3$, $\alpha = 10^{-2}$:
Quantity                                           | $\hat n=1$ | $\hat n=2$ | $\hat n=4$ | $\hat n=8$ | $\hat n=16$
Number of iterations                               | 40         | 40         | 30         | 18         | 10
Walltime (s)                                       | 12453.9    | 12416.1    | 9184.28    | 5570.54    | 3158.97
$\|Y_\Gamma(T)-y_{\mathrm{target}}\|_2/\|y_{\mathrm{target}}\|_2$ | 8.54e+06 | 0.472488 | 0.0538509 | 0.0533826 | 0.0534024
$\int_{(0,T)}\|v\|_\Gamma^2\,dt$                   | 2.79e+08   | 1.96e+07   | 31.4193    | 138.675    | 275.08

$P^\alpha_3$, $\alpha = 10^{-8}$:
Quantity                                           | $\hat n=1$ | $\hat n=2$ | $\hat n=4$ | $\hat n=8$ | $\hat n=16$
Number of iterations                               | 40         | 40         | 30         | 27         | 10
Walltime (s)                                       | 1248.85    | 1248.97    | 916.232    | 825.791    | 325.16
$\|Y_\Gamma(T)-y_{\mathrm{target}}\|_2/\|y_{\mathrm{target}}\|_2$ | 8.85e+06 | 0.151086 | 0.0292072 | 0.0278316 | 0.0267375
$\int_{(0,T)}\|v\|_\Gamma^2\,dt$                   | 7.92e+08   | 2.30e+07   | 1.27e+07   | 1.47e+07   | 1.58e+06

Table 2 contains the summarized results for the convergence of the distributed control problem. On the one hand, we are interested in the error given by our algorithm for several choices of the partition number $\hat n$; on the other hand, we give the $L^2(0,T;L^2(\Omega_c))$ norm of the control. We notice the improvement in the quality of the algorithm in terms of both execution time and control energy consumption, namely the quantity $\int_{(0,T)}\|v^k\|_c^2\,dt$. In fact, for the optimal control framework ($P^\alpha_1$ and $P^\alpha_2$ with $\alpha = 10^{-2}$), we see that, for a fixed stopping criterion, the algorithm is faster and consumes the same energy independently of $\hat n$. In the approximate controllability framework ($P^\alpha_2$ with vanishing $\alpha = 10^{-8}$), we note first that the general accuracy of the controlled solution (see the error $\|Y^k(T)-y_{\mathrm{target}}\|_2/\|y_{\mathrm{target}}\|_2$) is improved for $\alpha = 10^{-8}$ compared with $\alpha = 10^{-2}$. Second, we note that the error diminishes when $\hat n$ increases, while the energy consumption rises. The scalability in CPU time and iteration count shows the enhancement brought by our method when it is applied (i.e. for $\hat n > 1$).

Table 3 contains the summarized results at convergence of the Dirichlet boundary control problem. This problem is known in the literature for its ill-posedness, and it may be singular in several cases; see [3] and the references therein. In fact, it is very sensitive to noise in the data. We show in Table 3 that for a large value of the regularization parameter $\alpha$ our algorithm already behaves as the distributed optimal control does for a vanishing $\alpha$, in the sense that it consumes more control energy to produce a more accurate solution within a smaller execution CPU time. It is worth noting that the serial case $\hat n = 1$ fails to reach an acceptable solution, whereas the algorithm behaves well as $\hat n$ rises.

Figure 8. Several snapshot rows (varying $\hat n$) of the distributed optimal control (left columns) and of the corresponding controlled final state $Y(T)$ (right columns). The test case corresponds to the control problem $P^\alpha_2$ with $\alpha = 10^{-8}$.

Figure 9. Several snapshot rows (varying $\hat n$) of the Dirichlet optimal control (left columns) and of the corresponding controlled final state $Y_\Gamma(T)$ (right columns). The test case corresponds to the control problem $P^\alpha_3$ with $\alpha = 10^{-8}$.

We give in Figure 6 and Figure 9 several snapshot rows (varying $\hat n$) of the Dirichlet control on $\Gamma$. We present in the first column its evolution during $[0,T]$ and in the second column the corresponding controlled final solution $y(T)$ at time $T$; we scaled the $z$-range of the plots to that of the target solution in both Figures 6 and 9. Each row shows the control and its solution for a specific partition $\hat n$.
The serial case $\hat n = 1$ leads to a controlled solution that does not have the same range as $y_{\mathrm{target}}$, whereas as $\hat n$ rises the behavior of the algorithm improves. It is worth noting that the control is generally active only around the final time horizon $T$; this is very clear in Figure 6 and Figure 9 (see the first row, i.e. the case $\hat n = 1$). The nature of our algorithm, which is based on time domain decomposition, obliges the control to act in subintervals. Hence the control acts more often and earlier in time (before $T$), which leads to a better controlled solution $y(T)$.

5.2.3. Third test problem: severely ill-posed problem (no solution). In this test case, we consider a severely ill-posed problem. In fact, the target solution is piecewise Lipschitz continuous, so it is not regular enough compared with a solution of the heat equation. This implies that our control problem, in both the distributed and the Dirichlet boundary cases, has no solution. The initial condition and the target solution are given by
$$y_0(x_1,x_2) = \pi(\sin(\pi x_1) + \sin(\pi x_2)),\qquad y_{\mathrm{target}}(x_1,x_2) = \min\big(x_1,\ x_2,\ (1-x_1),\ (1-x_2)\big),\qquad (P^\alpha_4)$$
respectively. Plots of the initial condition and of the target solution are given in Figure 10.

Figure 10. Graphs of the initial and target solutions for both the distributed and the Dirichlet boundary control problems.

In Figures 11 and 12 we plot the controlled solution at time $T$ for the distributed and Dirichlet control problems respectively. We remark that for the distributed control problem the controlled solution is smooth except in $\Omega_c$, where the control is able to fit the target solution.

Figure 11. Several snapshots (varying $\hat n$) of the final state $Y(T)$. The test case corresponds to the severely ill-posed distributed control problem $P^\alpha_4$.

Table 4. Summary of the results of Algorithm 3 applied to both the distributed and the Dirichlet boundary control for the third test problem $P^\alpha_4$.

Distributed control, $P^\alpha_4$, $\alpha = 10^{-8}$:
Quantity                                           | $\hat n=1$ | $\hat n=2$ | $\hat n=4$ | $\hat n=8$ | $\hat n=16$
Number of iterations                               | 100        | 68         | 60         | 50         | 40
Walltime (s)                                       | 6381.43    | 6303.67    | 5548.16    | 4676.83    | 3785.97
$\|Y(T)-y_{\mathrm{target}}\|_2/\|y_{\mathrm{target}}\|_2$ | 8.16e-03 | 5.3e-03 | 4.74e-03 | 3.95e-03 | 3.76e-03
$\int_{(0,T)}\|v\|_c^2\,dt$                        | 0.34       | 3.01       | 52.87      | 52.77      | 2660.87

Dirichlet control, $P^\alpha_4$, $\alpha = 10^{-8}$:
Quantity                                           | $\hat n=1$ | $\hat n=2$ | $\hat n=4$ | $\hat n=8$ | $\hat n=16$
Number of iterations                               | 25         | 25         | 20         | 4          | 1
Walltime (s)                                       | 848.58     | 655.40     | 655.40     | 146.19     | 62.87
$\|Y_\Gamma(T)-y_{\mathrm{target}}\|_2/\|y_{\mathrm{target}}\|_2$ | 2.85e+10 | 3055 | 39.3 | 0.2 | 0.067
$\int_{(0,T)}\|v\|_\Gamma^2\,dt$                   | 6.73e+08   | 2.17e+07   | 141.62     | 17.84      | 26758.5

Figure 12. Several snapshots (varying $\hat n$) of the final state $Y_\Gamma(T)$. The test case corresponds to the severely ill-posed Dirichlet control problem $P^\alpha_4$.

Remark 5.2. Out of curiosity, we tested the case where the control is distributed over the whole domain. We see that the control succeeds in fitting the controlled solution to the target even though the latter is not $C^1(\Omega)$. This is impressive and shows the impact on the results of the regions over which the control is distributed.

We note the stability of the method in the distributed test case. The Dirichlet test case, however, exhibits hypersensitivity. In fact, in the case $\hat n = 1$ the algorithm manages to fit an acceptable shape of the controlled solution, although still far off in scale. We note that the time domain decomposition leads to a control that gives a good scale for the controlled solution. In this severely ill-posed problem, we see that some partitions may fail to produce a control that fits the controlled solution to the target.
There is an exception for the case of $\hat n = 8$ partitions, where we obtain a good reconstruction of the target. The summarized results are given in Table 4.

5.2.4. Regularization based on the optimal choice of the partition. The next discussion concerns the kind of situation where the partition leads to multiple solutions, which is common for ill-posed problems. In fact, we discuss a regularization procedure used as an exception-handling tool to choose the best partition, i.e. the one giving the best solution of the control problem at hand. It is well known that ill-posed problems are very sensitive to noise, which may be present because of the numerical approximation or of physical phenomena; in that case, a numerical algorithm may blow up and fail. We presented several numerical tests for the Dirichlet boundary control, which is a numerically non-trivial problem. The results show that, in general, time domain decomposition may improve the results in several cases, but scalability is not guaranteed as it is for the distributed control. We propose a regularization procedure in order to avoid blow-up and also to guarantee the optimal choice of the partition of the time domain. This procedure is based on a test of the monotonicity of the cost function. In fact, suppose that we possess 64 processors to run the numerical problem. Once we have assembled the Hessian $H_k$ and the Jacobian $D_k$ for the partition $\hat n = 64$, we are actually able to get for free the Hessians and Jacobians of all partitions $\hat n$ that divide 64. Hence we can use the quadratic property of the cost functional to predict and test the value of the cost function at the next iteration without any additional computation; the formula is given by
$$J(v^{k+1}) = J(v^k) - \tfrac12 D_k^T H_k^{-1} D_k.$$
We present in Algorithm 4 the technique that enables us to reduce the rank of the partition and compute a series of Hessians and Jacobians for any partition $\hat n$ that divides the available number of processors. An example of the application of this technique to a 4-by-4 SPD matrix is given in the Appendix.

Algorithm 4: Reduction in rank of the partition $\hat n$.
Input: $\hat n$, $H^k_{\hat n}$, $D^k_{\hat n}$;
1: $n = \hat n$;
2: $J^{k+1}_{n/2} = J^{k+1}_n$;
3: while $J^{k+1}_{n/2} > J^k_n$ do
4:   for $i = 0;\ i \le n;\ i = i+2$ do
5:     $(D^k_{n/2})_{i/2} = (D^k_n)_i + (D^k_n)_{i+1}$;
6:     for $j = 0;\ j \le n;\ j = j+2$ do
7:       $(H^k_{n/2})_{i/2,\,j/2} = (H^k_n)_{i,j} + (H^k_n)_{i,j+1} + (H^k_n)_{i+1,j} + (H^k_n)_{i+1,j+1}$;
8:     end
9:   end
10:  estimate the cost $J^k_{n/2}$;
11:  $n = n/2$;
12: end

(Step 7 sums the 2-by-2 block of merged windows, as the worked 4-by-4 example in the Appendix makes explicit.)
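The pairwise reduction can be sketched in a few lines of Scilab. The 4-by-4 matrix below is the worked example from the Appendix; the Jacobian vector D is an illustrative placeholder, and the halving assumes n is even.

```scilab
// Sketch of the reduction step of Algorithm 4: merge adjacent time windows by
// summing the corresponding entries of H and D.
function [H2, D2] = reduce_rank(H, D)
    n = size(H, 1);                  // assumed even
    H2 = zeros(n/2, n/2); D2 = zeros(n/2, 1);
    for i = 1:n/2
        D2(i) = D(2*i-1) + D(2*i);
        for j = 1:n/2
            H2(i, j) = H(2*i-1, 2*j-1) + H(2*i-1, 2*j) + H(2*i, 2*j-1) + H(2*i, 2*j);
        end
    end
endfunction

H4 = [6 1 2 3; 1 8 2 4; 2 2 12 7; 3 4 7 16];   // Appendix example matrix
D4 = ones(4, 1);                                // placeholder Jacobian
[H2, D2] = reduce_rank(H4, D4);                 // H2 = [16 11; 11 42]
[H1, D1] = reduce_rank(H2, D2);                 // H1 = [80]
// Predicted cost decrease at each level: 0.5 * D' * H^{-1} * D
mprintf("predicted decrease, n=4: %f\n", 0.5 * D4' * (H4 \ D4));
mprintf("predicted decrease, n=2: %f\n", 0.5 * D2' * (H2 \ D2));
```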
6. Conclusion

We have presented in this article a new optimization technique to enhance the steepest descent algorithm via domain decomposition in general, and we have applied the new method in particular to time-parallelizing the simulation of an optimal heat control problem. We presented its performance (in CPU time and number of iterations) against the traditional steepest descent algorithm on several and varied test problems. The key idea of our method is a quasi-Newton technique that produces an efficient real vector step-length for a set of descent directions arising from the domain decomposition. The originality of our approach consists in enabling parallel computation in which the vector step-length achieves the optimal descent direction in a higher-dimensional space. A convergence property of the presented method is provided. These results are illustrated by several numerical tests using parallel resources with an MPI implementation.

Appendix A. Kantorovich matrix inequality

For the sake of completeness, we give in this appendix the matrix Kantorovich inequality, which justifies a step of our convergence proof. Assume that $\nabla^2 J$ is symmetric positive definite with smallest and largest eigenvalues $\lambda_{\min}$ and $\lambda_{\max}$ respectively. The matrix version of the famous Kantorovich inequality reads:

Theorem A.1 (see [14] for more details). Assume that $\sum_{n=1}^{\hat n}\alpha_n = 1$, where $\alpha_n \ge 0$ and $\lambda_n > 0$ for all $n$. Then
$$\Big(\sum_{n=1}^{\hat n}\alpha_n\lambda_n\Big)\Big(\sum_{n=1}^{\hat n}\frac{\alpha_n}{\lambda_n}\Big) \le \frac{(\lambda_{\max}+\lambda_{\min})^2}{4\lambda_{\max}\lambda_{\min}}.$$

By diagonalizing the symmetric positive definite operator $H := \nabla^2 J$ we obtain $H = P\Lambda P^{-1}$, where $P$ is an orthonormal operator (i.e. $P^T = P^{-1}$). Recall Eq. (38), which we rewrite as
$$\frac{\|\nabla J(v^k)\|_{\nabla^2 J}^2\,\|\nabla J(v^k)\|_{(\nabla^2 J)^{-1}}^2}{\|\nabla J(v^k)\|_c^4} \le \frac{(\lambda_{\max}+\lambda_{\min})^2}{4\lambda_{\max}\lambda_{\min}}.$$
In order to simplify the expressions, we write $d_k$ instead of $\nabla J(v^k)$, so that the left-hand side reads
$$\frac{d_k^T(\nabla^2 J)d_k\;\; d_k^T(\nabla^2 J)^{-1}d_k}{(d_k^T d_k)^2} = \frac{d_k^T P^T\Lambda P d_k}{d_k^T P^T P d_k}\cdot\frac{d_k^T P^T\Lambda^{-1} P d_k}{d_k^T P^T P d_k}.$$
Let us define $\bar d_k := P d_k$; the above expression then becomes
$$\frac{\bar d_k^T\Lambda\bar d_k}{\bar d_k^T\bar d_k}\cdot\frac{\bar d_k^T\Lambda^{-1}\bar d_k}{\bar d_k^T\bar d_k} = \Big(\sum_{n=1}^{\hat n}\frac{(\bar d_k)_n^2}{\bar d_k^T\bar d_k}\,\lambda_n\Big)\Big(\sum_{n=1}^{\hat n}\frac{(\bar d_k)_n^2}{\bar d_k^T\bar d_k}\,\frac{1}{\lambda_n}\Big).$$
Denoting $\alpha_n = (\bar d_k)_n^2/(\bar d_k^T\bar d_k)$, so that $\sum_{n=1}^{\hat n}\alpha_n = 1$, we finally recognize
$$\frac{d_k^T A d_k\;\; d_k^T A^{-1} d_k}{(d_k^T d_k)^2} = \Big(\sum_{n=1}^{\hat n}\alpha_n\lambda_n\Big)\Big(\sum_{n=1}^{\hat n}\frac{\alpha_n}{\lambda_n}\Big),$$
to which Theorem A.1 applies.

Example 1. A 4-by-4 SPD matrix reduced in rank using the regularization procedure described in Algorithm 4. In order to illustrate the steps of Algorithm 4, we choose a simple example: a 4-by-4 matrix that is recursively reduced to a 2-by-2 and then a 1-by-1 matrix as follows:
$$\begin{pmatrix} 6&1&2&3\\ 1&8&2&4\\ 2&2&12&7\\ 3&4&7&16 \end{pmatrix} \mapsto \begin{pmatrix} 7&5\\ 9&6\\ 4&19\\ 7&23 \end{pmatrix} \mapsto \begin{pmatrix} 16&11\\ 11&42 \end{pmatrix} \mapsto \begin{pmatrix} 27\\ 53 \end{pmatrix} \mapsto (80),$$
where each arrow sums adjacent pairs of entries, alternately along rows and along columns.
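As a quick numerical illustration (ours, not part of the paper), one can check Theorem A.1 on the SPD matrix of Example 1 with random vectors:

```scilab
// Numerical check of the Kantorovich inequality on the matrix of Example 1:
// (d'Ad)(d'A^{-1}d)/(d'd)^2 <= (lmax+lmin)^2/(4*lmax*lmin) for every d <> 0.
A = [6 1 2 3; 1 8 2 4; 2 2 12 7; 3 4 7 16];
lam = spec(A);                            // eigenvalues of A
lmin = min(lam); lmax = max(lam);
bound = (lmax + lmin)^2 / (4*lmax*lmin);
for trial = 1:5
    d = rand(4, 1) - 0.5;
    lhs = (d'*A*d) * (d'*(A\d)) / (d'*d)^2;
    mprintf("ratio = %f <= bound = %f\n", lhs, bound);
end
```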
References

[1] J. K. Baksalary and S. Puntanen. Generalized matrix versions of the Cauchy-Schwarz and Kantorovich inequalities. Aequationes Mathematicae, 41(1):103-110, 1991.
[2] J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8(1):141-148, 1988.
[3] F. Ben Belgacem and S. M. Kaber. On the Dirichlet boundary controllability of the one-dimensional heat equation: semi-analytical calculations and ill-posedness degree. Inverse Problems, 27(5):055012, 2011.
[4] P. Bjørstad and W. Gropp. Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations. Cambridge University Press, 2004.
[5] R. H. Byrd, G. Lopez-Calva, and J. Nocedal. A line search exact penalty method using steering rules. Mathematical Programming, 133(1-2):39-73, 2012.
[6] C. Carthel, R. Glowinski, and J.-L. Lions. On exact and approximate boundary controllabilities for the heat equation: a numerical approach. Journal of Optimization Theory and Applications, 82(3):429-484, 1994.
[7] A.-L. Cauchy. Méthode générale pour la résolution des systèmes d'équations simultanées. Comptes Rendus des Séances de l'Académie des Sciences, 25:536-538, 1847.
[8] P. G. Ciarlet. Introduction à l'analyse numérique matricielle et à l'optimisation. Collection Mathématiques Appliquées pour la Maîtrise. Masson, Paris, 1982.
[9] J.-M. Coron and E. Trélat. Global steady-state controllability of one-dimensional semilinear heat equations. SIAM Journal on Control and Optimization, 43(2):549-569, 2004.
[10] S. Ervedoza and E. Zuazua. The wave equation: control and numerics. In Control of Partial Differential Equations, Lecture Notes in Mathematics, pages 245-339. Springer, Berlin Heidelberg, 2012.
[11] D. J. Evans. Preconditioning Methods: Theory and Applications. Gordon and Breach Science Publishers, 1983.
[12] L. Grippo, F. Lampariello, and S. Lucidi. A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis, 23(4):707-716, 1986.
[13] M. J. Grote and T. Huckle. Parallel preconditioning with sparse approximate inverses. SIAM Journal on Scientific Computing, 18(3):838-853, 1997.
[14] L. V. Kantorovich. Functional Analysis and Applied Mathematics. NBS Report 1509, U.S. Department of Commerce, National Bureau of Standards, Los Angeles, 1952. Translated by C. D. Benster.
[15] J.-L. Lions, Y. Maday, and G. Turinici. A "parareal" in time discretization of PDEs. Comptes Rendus de l'Académie des Sciences, Série I, 332(7):661-668, 2001.
[16] J.-L. Lions. Optimal Control of Systems Governed by Partial Differential Equations. Die Grundlehren der mathematischen Wissenschaften, Band 170. Springer-Verlag, New York, 1971. Translated by S. K. Mitter.
[17] Y. Maday, J. Salomon, and G. Turinici. Monotonic parareal control for quantum systems. SIAM Journal on Numerical Analysis, 45(6):2468-2482, 2007.
[18] Y. Maday, M.-K. Riahi, and J. Salomon. Parareal in time intermediate targets methods for optimal control problems. In Control and Optimization with PDE Constraints, volume 164 of International Series of Numerical Mathematics, pages 79-92. Springer, Basel, 2013.
[19] Y. Maday and G. Turinici. A parareal in time procedure for the control of partial differential equations. Comptes Rendus Mathématique, Académie des Sciences, Paris, 335(4):387-392, 2002.
[20] T. Malas and L. Gürel. Incomplete LU preconditioning with the multilevel fast multipole algorithm for electromagnetic scattering. SIAM Journal on Scientific Computing, 29(4):1476-1494, 2007.
[21] S. Micu and E. Zuazua. Regularity issues for the null-controllability of the linear 1-d heat equation. Systems & Control Letters, 60(6):406-413, 2011.
[22] O. Pironneau, F. Hecht, and J. Morice. FreeFem++, www.freefem.org/, 2013.
[23] A. Quarteroni and A. Valli. Domain Decomposition Methods for Partial Differential Equations. Numerical Mathematics and Scientific Computation. The Clarendon Press, Oxford University Press, New York, 1999.
[24] U. Rüde. Mathematical and Computational Techniques for Multilevel Adaptive Methods. SIAM, 1993.
[25] Y. Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2003.
[26] Scilab Enterprises. Scilab: le logiciel open source gratuit de calcul numérique. Scilab Enterprises, Orsay, France, 2012.
[27] A. Toselli and O. Widlund. Domain Decomposition Methods: Algorithms and Theory. Springer Series in Computational Mathematics. Springer, 2005.
[28] G. Yuan and Z. Wei. The Barzilai and Borwein gradient method with nonmonotone line search for nonsmooth convex optimization problems. Mathematical Modelling and Analysis, 17(2):203-216, 2012.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9100653529167175, "perplexity": 2688.0833136894953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501555.34/warc/CC-MAIN-20230209081052-20230209111052-00794.warc.gz"}
http://mathhelpforum.com/trigonometry/93917-trig-problem.html
# Math Help - Trig problem

1. ## Trig problem

Hi guys. I need some help to solve this problem.

(a) A body of mass m kg is attached to a point by a string of length 1.25 m. If the mass is rotating in a horizontal circle 0.75 m below the point of attachment, calculate its angular velocity.

(b) If the mass rotates on a table, calculate the force on the table when the speed of rotation is 25 rpm and the mass is 6 kg.

2. Originally Posted by jo74

Hi guys. I need some help to solve this problem. (a) A body of mass m kg is attached to a point by a string of length 1.25 m. If the mass is rotating in a horizontal circle 0.75 m below the point of attachment, calculate its angular velocity. (b) If the mass rotates on a table, calculate the force on the table when the speed of rotation is 25 rpm and the mass is 6 kg.

part (a)

$r = \sqrt{1.25^2 - 0.75^2}$

$\theta = \arcsin\left(\frac{0.75}{1.25}\right)$

If $T$ is the tension in the string, then

$T\sin{\theta} = mg$

$T\cos{\theta} = F_c$, where $F_c$ is the centripetal force.

Eliminate $T$ between the two equations above and solve for $F_c$, then use the equation below to determine $\omega$:

$F_c = mr\omega^2$

part (b) is too easy ... think about the net force in the vertical direction.

3. Hi, it's me again. I tried to find the answer, but I just got my paper back and got it completely wrong again. I already had the first part answered; it's the second part I'm struggling with.

4. Originally Posted by jo74

Hi guys. I need some help to solve this problem. (a) A body of mass m kg is attached to a point by a string of length 1.25 m. If the mass is rotating in a horizontal circle 0.75 m below the point of attachment, calculate its angular velocity. (b) If the mass rotates on a table, calculate the force on the table when the speed of rotation is 25 rpm and the mass is 6 kg.

Is the string still attached as in part (a)?

5. Yeah, it is still attached to the string at the same angle as in part 1.

6. $T\cos{\theta} = mr\omega^2$

$T = \frac{mr\omega^2}{\cos{\theta}}$

Substitute in your known values and calculate $T$.

Let $F$ be the force that the table exerts upward on the mass:

$T\sin{\theta} + F = mg$

$F = mg - T\sin{\theta}$

Calculate $F$.

7. Thanks for that one, much appreciated.

8. Hi, I am also having problems with this exact question. I have the value for the radius and the angle, but I am completely lost about how to find the other values: all the formulas contain m, but it is an unknown value?
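A quick numeric check of the formulas in this thread, assuming $g = 9.81\ \text{m/s}^2$ (a value the thread never fixes) and measuring $\theta$ from the horizontal, as in post 2. It also shows why the unknown $m$ in post 8 is harmless for part (a): it cancels when $T$ is eliminated.

```python
# Numeric check of the thread's formulas; g = 9.81 m/s^2 is an assumed value.
import math

L, d = 1.25, 0.75                 # string length, depth below attachment (m)
r = math.sqrt(L**2 - d**2)        # horizontal radius = 1.0 m
theta = math.asin(d / L)          # angle of the string from the horizontal
g = 9.81

# Part (a): eliminate T from T*sin(theta) = m*g and T*cos(theta) = m*r*omega^2.
# The mass m cancels, so part (a) needs no numeric value of m.
omega_a = math.sqrt(g / (r * math.tan(theta)))
print(f"(a) omega = {omega_a:.2f} rad/s")           # about 3.62 rad/s

# Part (b): on the table, T*cos(theta) = m*r*omega^2 fixes T, and the
# table reaction is F = m*g - T*sin(theta), as in post 6.
m, rpm = 6.0, 25.0
omega_b = rpm * 2 * math.pi / 60
T = m * r * omega_b**2 / math.cos(theta)
F = m * g - T * math.sin(theta)
print(f"(b) T = {T:.1f} N, F = {F:.1f} N")          # about 51.4 N and 28.0 N
```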
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9758698344230652, "perplexity": 450.19126032211494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645315227.83/warc/CC-MAIN-20150827031515-00210-ip-10-171-96-226.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/72891/computing-the-pdf-of-a-product-of-the-sum-of-2-nakagmi-m-r-v-s-with-a-normal-r-v
# Computing the PDF of a product of the sum of 2 Nakagami-m R.V.s with a Normal R.V

I really have two questions: one is about computing a PDF, and the second is about how to sum a series involving $K_v(x)$ that the PDF in question seems to contain. I have come across the following problem related to my research: compute the PDF of $(X_1+X_2)\times N$, where $X_1$ and $X_2$ are Nakagami-$m$ R.V.'s and $N$ is a zero-mean Gaussian R.V. with variance $\sigma^2$,
$$f_{X_i}(x) = \frac {2 m^m x^{2m-1}}{\Gamma(m)\Omega^m} \exp \left( - \frac{mx^2}{\Omega} \right) \quad\text{for}~i=1,2$$
and
$$f_N(x)=\frac{1}{\sqrt{2 \pi}\sigma}\exp\left(-\frac{x^2}{2\sigma^2} \right).$$
All random variables are independent. A number of research articles discuss the probability distribution of $X_1+X_2$, for example here. The PDF is given by
$$f_{X_1+X_2}(x) = \frac {4\sqrt{\pi}\Gamma(m)m^{2m}x^{4m-1}}{\Gamma^2(m)\Gamma\left(2m+\frac12\right)2^{4m-1}\Omega^{2m}} \exp \left( - \frac{mx^2}{\Omega} \right)\times{}_1F_1\left(2m;2m+\frac12;\frac{mx^2}{2\Omega}\right)$$
I do know the general strategy for computing the PDF/CDF of the product of two independent R.V.'s; it's just that the computation here becomes very tedious. I have obtained one expression but am not sure of its validity. It has an infinite series containing the modified Bessel function of the second kind:
$$\sum_{n=0}^\infty\frac{(2m)_n m^n}{(2m+1/2)_n n!}\left[\left(\frac{m}{2\Omega}\right)^{3/2}\frac{|z|}{\sigma}\right]^n K_{2m+n-1/2}\left(\sqrt{\frac{2m}\Omega}\frac{|z|}\sigma\right)$$
I strongly suspect that the above series can be simplified to the generalized hypergeometric function or some expression thereof. I searched a number of handbooks of special functions, but none have quite the same expression. I would greatly appreciate any pointers or identities to attack the problem of summing this series. And to begin with, does the PDF look like this? - Just to demystify things a little bit, I obtained the last expression through a straightforward application of $\int_0^\infty\frac1x f_{X_1+X_2}(x)f_{N}(\frac{z}{x})dx$, a well-known result. Bessel functions enter the picture by replacing ${}_1F_1(\cdot)$ with its series representation and interchanging integration and summation (assuming it's valid). The identity $\Gamma_b(\alpha)=2b^{\alpha/2}K_\alpha(2\sqrt{b})$, where $\Gamma_b(\alpha)=\int_0^\infty t^{\alpha-1} e^{-t-b/t}\,dt$, was used. – Iconoclast Oct 16 '11 at 3:52
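One way to sanity-check a candidate density like this is to compare a Monte Carlo histogram of $(X_1+X_2)N$ against a direct numerical evaluation of the product formula $\int_0^\infty\frac1x f_{X_1+X_2}(x)f_N(z/x)\,dx$ cited in the comment. The sketch below assumes the quoted $f_{X_1+X_2}$ is correct and uses arbitrary test parameters $m=2$, $\Omega=\sigma=1$; it does not evaluate the Bessel series itself.

```python
# Monte Carlo sanity check of the candidate product density; m, Omega, sigma
# are arbitrary test values chosen only for this check.
import numpy as np
from scipy import stats, integrate
from scipy.special import gamma, hyp1f1

m, Omega, sigma = 2.0, 1.0, 1.0
rng = np.random.default_rng(0)

# Samples of Z = (X1 + X2) * N; scipy's nakagami with scale sqrt(Omega)
# has E[X^2] = Omega, matching the density in the question.
nak = stats.nakagami(m, scale=np.sqrt(Omega))
x1 = nak.rvs(10**6, random_state=rng)
x2 = nak.rvs(10**6, random_state=rng)
z = (x1 + x2) * rng.normal(0.0, sigma, 10**6)

def f_sum(x):
    """The quoted density of X1 + X2."""
    c = (4 * np.sqrt(np.pi) * gamma(m) * m**(2*m)
         / (gamma(m)**2 * gamma(2*m + 0.5) * 2**(4*m - 1) * Omega**(2*m)))
    return (c * x**(4*m - 1) * np.exp(-m * x**2 / Omega)
            * hyp1f1(2*m, 2*m + 0.5, m * x**2 / (2 * Omega)))

def f_Z(t):
    """Product density via the formula int_0^inf (1/x) f_sum(x) f_N(t/x) dx;
    the integrand is negligible beyond x = 12 for these test values."""
    g = lambda x: (f_sum(x) * np.exp(-(t/x)**2 / (2 * sigma**2))
                   / (x * np.sqrt(2 * np.pi) * sigma))
    return integrate.quad(g, 0.0, 12.0)[0]

hist, edges = np.histogram(z, bins=80, range=(-8.0, 8.0), density=True)
for t in (0.5, 1.0, 2.0, 4.0):
    i = np.searchsorted(edges, t) - 1
    print(f"z = {t}: MC {hist[i]:.4f} vs quadrature {f_Z(t):.4f}")
```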
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9607558250427246, "perplexity": 249.47402117857558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927245.60/warc/CC-MAIN-20150521113207-00114-ip-10-180-206-219.ec2.internal.warc.gz"}