https://researchportal.hw.ac.uk/en/publications/boundary-value-problems-for-the-elliptic-sine-gordon-equation-in-
# Boundary value problems for the elliptic sine-Gordon equation in a semi-strip A. S. Fokas, J. Lenells, Beatrice Pelloni Research output: Contribution to journal › Article 7 Citations (Scopus) ### Abstract We study boundary value problems posed in a semistrip for the elliptic sine-Gordon equation, which is the paradigm of an elliptic integrable PDE in two variables. We use the method introduced by one of the authors, which provides a substantial generalization of the inverse scattering transform and can be used for the analysis of boundary- as opposed to initial-value problems. We first express the solution in terms of a 2 by 2 matrix Riemann-Hilbert problem whose "jump matrix" depends on both the Dirichlet and the Neumann boundary values. For a well-posed problem one of these boundary values is an unknown function. This unknown function is characterised in terms of the so-called global relation, but in general this characterisation is nonlinear. We then concentrate on the case where the prescribed boundary conditions are zero along the unbounded sides of the semistrip and constant along the bounded side. This corresponds to a case of so-called linearisable boundary conditions; however, a major difficulty for this problem is the existence of non-integrable singularities of the function q_y at the two corners of the semistrip; these singularities are generated by the discontinuities of the boundary condition at these corners. Motivated by the recent solution of the analogous problem for the modified Helmholtz equation, we introduce an appropriate regularisation which overcomes this difficulty.
Furthermore, by mapping the basic Riemann-Hilbert problem to an equivalent modified Riemann-Hilbert problem, we show that the solution can be expressed in terms of a 2 by 2 matrix Riemann-Hilbert problem whose jump matrix depends explicitly on the width L of the semistrip, on the constant value d of the solution along the bounded side, and on the residues at the given poles of a certain spectral function denoted by h. The determination of the function h remains open. Original language: English. Pages: 241-282 (42 pages). Journal of Nonlinear Science, volume 23, issue 2. DOI: https://doi.org/10.1007/s00332-012-9150-5. Published: Apr 2013.
http://www.payne.org/index.php/Calculating_Voronoi_Nodes
# Calculating Voronoi Nodes This note documents a general approach to calculating the location of Voronoi nodes for point, segment, and arc geometric entities. This is part of my work developing CAM software for my CNC machine. In a Voronoi diagram, nodes (or vertices) are points that are equidistant from three or more entities (points, lines, arcs, etc.). In most Voronoi literature, these entities are typically called "sites" or "generators". A bisector edge connecting two nodes separates two sites, and all points on that bisector are equidistant from the two sites. In implementations, nodes may be restricted to 3 bisector edges. In cases where a node actually has more bisectors, the diagram may be represented by nodes connected with zero-length segments. At output time, the implementation may then collapse these coincident nodes into nodes with an arbitrary number of edges. For best numerical stability, Voronoi nodes are calculated from the three defining geometric sites, not by attempting to intersect bisectors. For implementing a node solver, there are four cases to consider: • Line, line, line • Line, line, arc • Line, arc, arc • Arc, arc, arc (Note that points are merely zero-radius arcs.) However, there are a number of degenerate cases to consider. When each degenerate case is considered for each of the four cases above, the implementation combinatorics can get daunting. This note describes a general node-solving approach that handles all degenerate cases. ## Line and Arc Equations For segments, we use the approach of first adding all segment end-points; then the supporting line is added and used to calculate node distances. In this case, a line can be defined as: $a_i x + b_i y + c_i = 0$ And an offset of the line may be defined as: $a_i x + b_i y + c_i + k_i t = 0$ Where $k_i$ is the offset direction (1 or -1) and $t$ is the offset distance.
Note that the line coefficients must be normalized such that $a_i^2 + b_i^2 = 1$ or $k_i t$ will not represent the correct offset distance. Similarly, an arc (circle) of radius $r$, centered on $(x_i, y_i)$, is defined by: $\left(x - x_i\right)^2 + \left( y - y_i \right)^2 - r^2 = 0$ Note that a point is just a zero-radius arc. Given this equation, an offset arc can be defined as: $\left(x - x_i\right)^2 + \left( y - y_i \right)^2 - (r + k_i t)^2 = 0$ Where $k_i$ is the offset direction (1 or -1) and $t$ is the offset distance. ## Generalized Equation System Given the line and circle equations above, any site (segment, arc, or point) can be represented in a general form: $q_0 (x^2 + y^2 - t^2) + a_0 x + b_0 y + k_0 t + c_0 = 0$ Where $q_0$ is 0 or 1. For lines, the generalized coefficients are simply the line coefficients: \begin{align} q_0 &= 0 \\ a_0 &= a_i \\ b_0 &= b_i \\ k_0 &= k_i \\ c_0 &= c_i \\ \end{align} For arcs, the generalized coefficients are: \begin{align} q_0 &= 1 \\ a_0 &= -2x_i \\ b_0 &= -2y_i \\ k_0 &= -2 k_i r_i \\ c_0 &= x_i^2 + y_i^2 - r_i^2 \\ \end{align} And points are merely zero-radius arcs: \begin{align} q_0 &= 1 \\ a_0 &= -2x_i \\ b_0 &= -2y_i \\ k_0 &= 0 \\ c_0 &= x_i^2 + y_i^2 \\ \end{align} Given this, a Voronoi node is found by solving a three-equation quadratic system of the form: \begin{align} q_0 (x^2 + y^2 - t^2) + a_0 x + b_0 y + k_0 t + c_0 &= 0 \\ q_1 (x^2 + y^2 - t^2) + a_1 x + b_1 y + k_1 t + c_1 &= 0 \\ q_2 (x^2 + y^2 - t^2) + a_2 x + b_2 y + k_2 t + c_2 &= 0 \\ \end{align} Where $q_0$, $q_1$, and $q_2$ are each 0 or 1. This system can be stored in a 3x5 numeric array. ## Solving the System The generalized system can be solved as follows: 1. If the system is linear (all q values are zero), solve the 3x3 system using linear methods 2. Otherwise, reduce the system to a three-equation system with one quadratic equation and two linear equations 3. Check determinants of that system, and select a substitution 4.
Evaluate the resulting system (two linear, one quadratic) for the solution The general three-equation system above can be transformed to a system with one quadratic equation and two linear equations. In one case, the system is already in that form. If there are two or three quadratic equations, one can be subtracted from the other one or two to yield a quadratic-linear-linear system. At this point, the two linear equations form a 2-equation system over 3 variables, and we can solve the system for any two variables in terms of the third, using one of three possible cases: • x and y in terms of t • y and t in terms of x • t and x in terms of y However, this is where we need to consider the degeneracies: any degeneracies in the original system will cause degeneracies in the 2-equation linear system. We need to check the determinants of the 2-equation linear system to pick one of the three substitutions above. Now, we can substitute the linear variables and combine that with the quadratic equation to form a system of three variables (here, u, v and w), with u and v solved for in terms of w: \begin{align} a_0 u^2 + b_0 u + c_0 v^2 + d_0 v + e_0 w^2 + f_0 w + g_0 = 0 \\ u = a_1 w + b_1 \\ v = a_2 w + b_2 \\ \end{align} This system has a closed-form solution: \begin{align} a &= a_0 a_1^2 + c_0 a_2^2 + e_0 \\ b &= 2 a_0 a_1 b_1 + 2 a_2 b_2 c_0 + a_1 b_0 + a_2 d_0 + f_0 \\ c &= a_0 b_1^2 + c_0 b_2^2 + b_0 b_1 + b_2 d_0 + g_0 \\ \end{align} where w can be found as the roots of the quadratic equation: \begin{align} a w^2 + b w + c = 0 \end{align} Finally, u and v can be calculated from w. Substituting back to x, y and t yields the one or two solutions.
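The closed-form step above can be sketched in code (my own sketch; the function name `solve_qll` and the three-points test geometry are illustrative assumptions, not from the note):

```python
import math

def solve_qll(quad, lin_u, lin_v):
    """Solve the quadratic-linear-linear system described above:
        a0*u^2 + b0*u + c0*v^2 + d0*v + e0*w^2 + f0*w + g0 = 0
        u = a1*w + b1
        v = a2*w + b2
    Returns a list of (u, v, w) solutions (zero, one, or two of them)."""
    a0, b0, c0, d0, e0, f0, g0 = quad
    a1, b1 = lin_u
    a2, b2 = lin_v
    # Coefficients of the resulting quadratic a*w^2 + b*w + c = 0
    a = a0 * a1**2 + c0 * a2**2 + e0
    b = 2*a0*a1*b1 + 2*a2*b2*c0 + a1*b0 + a2*d0 + f0
    c = a0 * b1**2 + c0 * b2**2 + b0*b1 + b2*d0 + g0
    if abs(a) < 1e-12:                 # degenerate: equation is linear in w
        roots = [-c / b] if abs(b) > 1e-12 else []
    else:
        disc = b*b - 4*a*c
        if disc < 0:
            roots = []
        else:
            s = math.sqrt(disc)
            roots = [(-b - s) / (2*a), (-b + s) / (2*a)]
    return [(a1*w + b1, a2*w + b2, w) for w in roots]

# Voronoi node of the three points (0,0), (2,0), (0,2): subtracting the first
# generalized equation from the other two gives x = 1, y = 1, and the
# remaining quadratic x^2 + y^2 - t^2 = 0, i.e. (u, v, w) = (x, y, t).
sols = solve_qll((1, 0, 1, 0, -1, 0, 0), (0, 1), (0, 1))
node = [s for s in sols if s[2] > 0][0]
print(node)   # -> (1.0, 1.0, 1.414...), equidistant at distance sqrt(2)
```

The negative-`w` root is discarded because the offset distance `t` must be positive; the remaining root is the node.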
https://www.physicsforums.com/threads/cauchy-schwarz-equality-implies-parallel.842107/
# Cauchy Schwarz equality implies parallel 1. Nov 8, 2015 ### Bipolarity I'm learning about Support Vector Machines and would like to recap some basic linear algebra. More specifically, I'm trying to prove the following, which I'm pretty sure is true: Let $v_1$ and $v_2$ be two vectors in an inner product space over $\mathbb{C}$. Suppose that $\langle v_1 , v_2 \rangle = \|v_1\| \cdot \|v_2\|$, i.e. the special case of Cauchy-Schwarz when it is an equality. Then prove that $v_1$ is a scalar multiple of $v_2$, assuming neither vector is $0$. I've tried using the triangle inequality and some other random stuff to no avail. I believe there's some algebra trick involved; could someone help me out? I really want to prove this and get on with my machine learning. Thanks! BiP 2. Nov 8, 2015 ### Staff: Mentor How is $\langle v_1, v_2 \rangle$ defined? 3. Nov 8, 2015 ### Bipolarity Proving this should not require the definition of the inner product, only the properties. 4. Nov 8, 2015 ### Staff: Mentor What's the difference? Which properties do you mean? 5. Nov 8, 2015 ### Bipolarity Conjugate symmetry, linearity in the first argument, and positive-definiteness. 6. Nov 8, 2015 ### Staff: Mentor Looks to me like another version of the cosine formula if applied to $v_1+v_2$ 7. Nov 9, 2015 ### rs1n By definition, $\langle v_1, v_2 \rangle = \| v_1 \| \cdot \| v_2 \| \cdot \cos(\theta)$ where $\theta$ is the angle between vectors $v_1$ and $v_2$. If you additionally know that $\langle v_1, v_2 \rangle = \| v_1 \| \cdot \| v_2 \|$, then the angle between the two vectors must be either 0 or 180 degrees. So they are parallel; hence one is a scalar multiple of the other. 8. Nov 9, 2015 ### zinq That's the definition? It would be true in a real inner product space, but this one is over ℂ. 9. Nov 9, 2015 ### rs1n You are absolutely right! My eyes failed me, somehow. 10.
Nov 9, 2015 ### PeroK One way to do it is to consider the vector $u = v_2 - \frac{\langle v_1, v_2 \rangle}{\langle v_1, v_1 \rangle} v_1$ Look at $\langle u, u \rangle$ and show that it's zero when you have C-S equality. This also leads to a proof of the C-S inequality. 11. Nov 9, 2015 ### rs1n To get back to the problem, though... over the complex numbers, the inner product is presumably a Hermitian inner product. So \begin{align*} \| u + v \|^2 & = \langle u + v, u+v \rangle = \langle u,u \rangle + \langle u,v \rangle + \langle v,u \rangle + \langle v, v \rangle\\ & = \langle u,u \rangle + \langle u,v \rangle + \overline{\langle u,v \rangle} + \langle v, v \rangle \\ & = \langle u,u \rangle + 2 \mathrm{Re}(\langle u,v \rangle) + \langle v, v \rangle\\ & = \| u\|^2 + 2 \mathrm{Re}(\langle u,v \rangle) + \| v\|^2 \end{align*} Similarly, $0 \le \| u + \lambda v \|^2 = \| u\|^2 + 2 \mathrm{Re}(\overline{\lambda} \langle u,v \rangle) + |\lambda|^2 \| v\|^2$ Let $$\lambda = -\frac{\langle u, v\rangle }{\|v \|^2}$$ and the right hand side (above) will simplify to the C-S inequality. Equality occurs if $$\| u + \lambda v \| = 0$$ 12. Nov 9, 2015 ### Hawkeye18 There are a few possible ways of doing that. The first one is just to follow the proof of the Cauchy--Schwarz inequality. Namely, for real $t$ consider $$\|v_1 - t v_2\|^2 = \|v_1\|^2 +t^2\|v_2\|^2 - 2t (v_1, v_2) = \|v_1\|^2 +t^2\|v_2\|^2 - 2t \|v_1\|\cdot \|v_2\| = (\|v_1\|-t\|v_2\|)^2.$$ The right hand side of this chain of equations is $0$ when $t=\|v_1\|/\|v_2\|$. So for this $t$ you get that $v_1-tv_2=0$, which is exactly what you need. Another way is more geometric and probably more intuitive. You define $w$ to be the orthogonal projection of $v_2$ onto the one-dimensional subspace spanned by $v_1$, $w= \|v_1\|^{-2} (v_2, v_1) v_1$. Then $(v_1, v_2)= (v_1, w)$ (checked by direct calculation) and $v_2-w$ is orthogonal to $v_1$ (and so to $w$). Therefore $\|v_2\|^2 =\|w\|^2+\|v_2-w\|^2$.
By Cauchy--Schwarz $(v_1, w) \le \|v_1\|\cdot \|w\|$, but on the other hand $(v_1, w) = (v_1, v_2) = \|v_1\|\cdot \|v_2\|$, so $\|v_1\|\cdot \|v_2\| \le \|v_1\|\cdot \|w\|$ and therefore $\|v_2\|\le \|w\|$. Comparing this with $\|v_2\|^2 =\|w\|^2+\|v_2-w\|^2$ we conclude that $v_2-w=0$. The second proof is a bit longer, but it is more intuitive, in the sense that it is a pretty standard reasoning used when one works with orthogonal projections. 13. Nov 9, 2015 ### PeroK The second method is what I suggested in post #10. And, in fact, you can prove Cauchy-Schwarz more intuitively this way. 14. Nov 9, 2015 ### Bipolarity I see! Thank you all for your replies! I knew I had seen it somewhere, little did I know it was right there in the proof of the C-S inequality itself! BiP
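For what it's worth, the equality-case argument from post #12 can be sanity-checked numerically (my own sketch, using the convention that the inner product is conjugate-linear in the second slot):

```python
import math
import random

def inner(u, v):
    # Hermitian inner product on C^n, conjugate-linear in the second slot
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u).real)

random.seed(1)
v2 = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]
v1 = [2.5 * z for z in v2]          # positive real multiple: the equality case
# Equality in Cauchy-Schwarz holds for this pair:
assert abs(inner(v1, v2) - norm(v1) * norm(v2)) < 1e-9

# Post #12's argument: with t = ||v1|| / ||v2||, the norm of v1 - t*v2
# vanishes, forcing v1 = t*v2.
t = norm(v1) / norm(v2)
print(norm([a - t * b for a, b in zip(v1, v2)]))   # ~0
```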
https://proofwiki.org/wiki/Subtract_Half_is_Replicative_Function
# Subtract Half is Replicative Function ## Theorem Let $f: \R \to \R$ be the real function defined as: $\forall x \in \R: f \left({x}\right) = x - \dfrac 1 2$ Then $f$ is a replicative function. ## Proof \begin{align} \sum_{k \mathop = 0}^{n - 1} f \left({x + \frac k n}\right) &= \sum_{k \mathop = 0}^{n - 1} \left({x - \frac 1 2 + \frac k n}\right) \\ &= n x - \frac n 2 + \frac 1 n \sum_{k \mathop = 0}^{n - 1} k \\ &= n x - \frac n 2 + \frac 1 n \cdot \frac {n \left({n - 1}\right)} 2 && \text{Closed Form for Triangular Numbers} \\ &= n x - \frac n 2 + \frac n 2 - \frac 1 2 \\ &= n x - \frac 1 2 \\ &= f \left({n x}\right) \end{align} Hence the result by definition of replicative function. $\blacksquare$
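The replicative property proved above can also be verified exactly with rational arithmetic (a quick check I added, not part of the ProofWiki page):

```python
from fractions import Fraction

def f(x):
    return x - Fraction(1, 2)

# Replicative property: sum_{k=0}^{n-1} f(x + k/n) = f(n*x), checked exactly
# for several sample points x and moduli n.
for n in range(1, 8):
    for x in [Fraction(0), Fraction(1, 3), Fraction(-7, 5)]:
        lhs = sum(f(x + Fraction(k, n)) for k in range(n))
        rhs = f(n * x)
        assert lhs == rhs
print("replicative property verified")
```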
https://mathoverflow.net/questions/70022/sh-sh-map-represents-the-category-of-sheaves-on-a-stack
# (Sh,Sh-map) represents the category of sheaves on a stack. I'm trying to understand the following theorem, but I don't think I'm reading it correctly. Let $(\mathcal{C},J)$ be a site (with a subcanonical topology). Write $\mathcal{C}/X$ for the groupoid of objects over $X\in \mathcal{C}$. Let $\mbox{Sh}:\mathcal{C}^{op} \rightarrow \mbox{Gpds}$ be the functor taking $X$ to the category of sheaves on $\mathcal{C}/X$ and isomorphisms of sheaves, and let $\mbox{Sh-map}:\mathcal{C}^{op} \rightarrow \mbox{Gpds}$ be the functor taking $X$ to the category whose objects are sheaf morphisms $\mathscr{F} \rightarrow \mathscr{G}$ and whose morphisms are commuting squares of sheaves determined by isomorphisms $\mathscr{F}_1 \stackrel{\sim}{\rightarrow} \mathscr{F}_2$ and $\mathscr{G}_1 \stackrel{\sim}{\rightarrow} \mathscr{G}_2$. These are in fact both stacks on $\mathcal{C}$, and moreover they determine a category-object $(\mbox{Sh},\mbox{Sh-map})$ in the category of stacks. Theorem: The category of sheaves on a stack $\mathscr{M}$ is equivalent to the category of morphisms of stacks $\mathscr{M} \rightarrow (\mbox{Sh,Sh-map})$. That is, the objects are the 1-morphisms and the morphisms are the 2-morphisms. I'd like to interpret this to mean that the objects of $Shv(\mathscr{M})$ are associated to 1-morphisms $\mathscr{M} \rightarrow \mbox{Sh}$, and that the morphisms of $Shv(\mathscr{M})$ are associated to 2-morphisms in $Hom_{Stacks}(\mathscr{M},\mbox{Sh})$, which in turn should be the same as 1-morphisms $\mathscr{M} \rightarrow \mbox{Sh-map}$. But there are a number of problems with this. First, given a sheaf $\mathcal{F} \in Shv(\mathscr{M})$ I'm having trouble constructing a natural transformation $\mathscr{M} \rightarrow \mbox{Sh}$. Perhaps I shouldn't, but to check this I'm using a test object $X\in \mathcal{C}$. By Yoneda, an object of $\mathscr{M}(X)$ is the same as a 1-morphism of stacks $f:X\rightarrow \mathscr{M}$, and so I obtain an object of $Sh(X)$ (i.e.
a sheaf on $\mathcal{C}/X$) via $(\alpha:Y\rightarrow X) \mapsto \mathcal{F}(f\alpha:Y \rightarrow X \rightarrow \mathscr{M})$. That's natural enough. Again by Yoneda, a morphism in $\mathscr{M}(X)$ is a 2-morphism between maps $f,g:X\rightarrow \mathscr{M}$ of stacks, i.e. a section $s:X\rightarrow X\times_\mathscr{M} X$ of the projection from the 2-category fiber product. Out of this, I'm supposed to construct a natural transformation from the sheaf $(\alpha:Y\rightarrow X) \mapsto \mathcal{F}(f\alpha:Y \rightarrow X \rightarrow \mathscr{M})$ to the sheaf $(\alpha:Y\rightarrow X) \mapsto \mathcal{F}(g\alpha:Y \rightarrow X \rightarrow \mathscr{M})$. But the only structure in place to give me such a thing is a morphism in $Stacks/\mathscr{M}$ between $f\alpha$ and $g\alpha$, and I don't see how to construct this. Second, a 2-morphism between 1-morphisms $f,g\in Hom_{Stacks}(\mathscr{M},\mbox{Sh})$ is a section $s:\mathscr{M} \rightarrow \mathscr{M} \times_{\mbox{Sh}} \mathscr{M}$. Thus for any $(\alpha:X\rightarrow \mathscr{M})\in \mathscr{M}(X)$, we get an object $(\alpha,\beta:X \rightarrow \mathscr{M},\varphi:f\alpha \stackrel{\sim}{\rightarrow} g\alpha)\in (\mathscr{M}\times_{\mbox{Sh}}\mathscr{M})(X)$. On the other hand, a 1-morphism $\mathscr{M} \rightarrow \mbox{Sh-map}$ is for each $\alpha:X \rightarrow \mathscr{M}$ an arbitrary morphism on sheaves on $\mathcal{C}/X$. These can't be the same. By the way, I've tried to do (what I think is) the right thing and work out the sheaf in $Shv(\mbox{Sh})$ associated to the 1-morphism $\mbox{Id}:\mbox{Sh} \rightarrow \mbox{Sh}$, following Yoneda and all. From the above, it's easy to see what this sheaf should do to morphisms $X\rightarrow \mbox{Sh}$ from a representable stack. But it appears that I need to make choices if I want to say what it does to arbitrary morphisms of stacks $\mathscr{N} \rightarrow \mbox{Sh}$. 
Perhaps instead I should take a limit or colimit over its application to the full subcategory of representable stacks over $\mathscr{N}$?

The notes you are reading seem to disagree with more commonly accepted language (cf. SGA1 Exp 13, Vistoli's notes, or the Stacks project). Some of this seems to be an attempt at expository ease, e.g., the parenthetical remark in example 8.2 ("We will mention the following technical difficulties but will ignore them for now:") where "for now" really means forever. Oddly enough, one of the mentioned technical difficulties is more or less what prevents $\text{Sh}$ and $\text{Sh-map}$ from having natural stack structures in the sense of the notes - pullback is not strictly functorial. This un-naturality is why the common definition of stack is different - the notion of stack in the notes corresponds to the usual notion of stack in groupoids equipped with a splitting (or cleavage). The use of the category object $(\text{Sh}, \text{Sh-map})$ is a kludge to replace the usual stack $Sh/\mathcal{C}$ (in categories rather than groupoids) whose objects are sheaves over comma categories, and whose morphisms over any $f: U \to V$ in $\mathcal{C}$ are $f$-maps of sheaves - see Examples 3.20 and 4.11 in Vistoli. The author of the notes employs $\text{Sh-map}$ in order to add non-invertible sheaf maps, because the 2-morphisms in $Hom_{Stacks}(\mathcal{M}, \text{Sh})$ are all invertible. In other words, you have to throw away the 2-morphisms that are given to you by $\text{Sh}$, and use the larger collection of possibly non-invertible 2-morphisms afforded by $\text{Sh-map}$. Once you have done that, I think your main problems are resolved. You've already worked out the object part of getting from a sheaf on $Stacks/\mathcal{M}$ to a natural transformation from $\mathcal{M}$ to $\text{Sh}$. If you have a morphism $\beta: X \to Z$ in $\mathcal{C}$, and $f: Z \to \mathcal{M}$, then $\beta$ induces a morphism of stacks over $\mathcal{M}$.
If I'm not mistaken, the sheaf $\mathcal{F}$ takes this to the map in $\text{Sh}$ given by base change: $$\left( (\alpha: Y \to Z) \mapsto \mathcal{F}(f \circ \alpha) \right) \mapsto \left( (\beta^* \alpha: Y \times_Z X \to X) \mapsto \mathcal{F}(f \circ \beta \circ \beta^*\alpha) \right)$$ Similarly, you can get from a sheaf map on $Stacks/\mathcal{M}$ to a natural transformation from $\mathcal{M}$ to $\text{Sh-map}$. There seems to be a lot of additional checking necessary for proving the equivalence, which I don't feel like doing for you (sorry).

Let me try to strip off all the stack language, which confuses me, and recast what I think is your question just in terms of category theory. (If I have misinterpreted which part is your question, I apologize. The only question mark in your post is in the very last line, but I don't think that's the main question.) You are, I believe, in the following situation. You have some ambient Cartesian category ($Stacks(M)$). You have a test object $M$ in your category, which happens to be the terminal object, if I'm not mistaken, but I don't think this matters. You have a category object $C_1 \rightrightarrows C_0$ internal to your category. Then in particular for any test object $M$, there is a category (in SET) whose morphisms are the arrows $M \to C_1$, and whose objects are the arrows $M \to C_0$ --- indeed, being a category object is equivalent to being a representable presheaf valued in categories. Then you also have a theorem recognizing this category as some other more interesting category ($Sheaves(M)$). This almost sums up your paragraph after the theorem. But complicating the matter is that you don't have some ambient Cartesian category. Rather, $Stacks(M)$ is a (Cartesian) 2-category. So now we have two separate notions. Indeed, as a 2-category, it is among other things enriched in categories, so that you have in fact a category of morphisms $M \to C_0$, for example.
In the previous paragraph, I was only considering this as a set. So then perhaps your question is the following. You have an ambient (Cartesian) 2-category, and let's assume it to be strict, a test object $M$, and an object $C_0$. Then $\hom(M,C_0)$ is a 1-category. The objects of this 1-category are the unenriched hom, which I will denote as $|\hom(M,C_0)|$. You'd like to recognize the morphisms of $\hom(M,C_0)$ as the set $|\hom(M,C_1)|$ for some particular $C_1$. Suppose that you have good Cartesian-closedness conditions, and an "inner hom" $\underline{\hom}$. Then what you'd like is a "walking arrow object" $Arr$ (which you do have with even stronger conditions, with buzzwords like "tensored over CAT"), and then $C_1 = \underline{\hom}(Arr,C_0)$. My guess is that the category of stacks on $M$ does have all of these strong closedness and tensored conditions. Moreover, my guess is that the correctly-implemented inner-hom construction in the previous paragraph, applied when $C_0 = Sh(M)$, yields $C_1 = ShMap(M)$. Maybe these are originally the versions with target CAT, not GPD, but then the trick is to post-compose with the 2-functor CAT$\to$GPD that keeps only the invertible morphisms, and make sure that this doesn't change too much. I hope this recasting helps, and that I didn't utterly misinterpret your question. This is about the limit of my category theory, and well past what I know about stacks properly.
https://math.stackexchange.com/questions/2956529/problem-defining-root-automorphisms-on-galois-group
# Problem defining root automorphisms on Galois Group I'm trying to find the Galois group of $$f(x)=x^4+5x^2+5$$. I've found the roots: $$\alpha_1=i \sqrt{\frac{5-\sqrt5}{2}}$$; $$\alpha_2=-i \sqrt{\frac{5-\sqrt5}{2}}$$; $$\alpha_3=i \sqrt{\frac{5+\sqrt5}{2}}$$; $$\alpha_4=-i \sqrt{\frac{5+\sqrt5}{2}}$$; And I've found that: $$\alpha_1=i \sqrt{\frac{5-\sqrt5}{2}}$$; $$\alpha_2=- \alpha_1$$; $$\alpha_3=\frac{\sqrt5}{\alpha_1}$$; $$\alpha_4=-\frac{\sqrt5}{\alpha_1}$$; But the problem is defining the automorphisms. If I define the automorphisms like this: $$\sigma_1(\alpha_1)=\alpha_1$$; $$\sigma_2(\alpha_1)=-\alpha_1$$; $$\sigma_3(\alpha_1)=\frac{\sqrt5}{\alpha_1}$$; $$\sigma_4(\alpha_1)=-\frac{\sqrt5}{\alpha_1}$$; The solution of the exercise says that I should get $$\mathbb{Z_4}$$, but none of the automorphisms gives me a generator of the whole group. Am I defining the automorphisms correctly?

The automorphisms are well defined, but I ignored that $$\sqrt5 \in \mathbb{Q}(\alpha_1)$$ and $$\sqrt5=2 \alpha_1^2+5$$; then the automorphisms $$\sigma_3, \sigma_4$$ have order $$4$$.
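As a numeric sanity check of the answer (my own addition, not part of the original post): since $\sqrt5 = 2\alpha_1^2 + 5$ in $\mathbb{Q}(\alpha_1)$, the automorphism $\sigma_3$ acts on $\alpha_1$ by the map $x \mapsto (2x^2+5)/x$, and iterating it on a floating-point root should exhibit order 4:

```python
import cmath

# alpha_1 as a complex number
a1 = 1j * cmath.sqrt((5 - 5 ** 0.5) / 2)

# sigma_3 sends alpha_1 to sqrt(5)/alpha_1 = (2*alpha_1^2 + 5)/alpha_1
sigma3 = lambda x: (2 * x ** 2 + 5) / x

orbit = [a1]
for _ in range(3):
    orbit.append(sigma3(orbit[-1]))

# sigma_3^2(alpha_1) = -alpha_1 and sigma_3^4(alpha_1) = alpha_1,
# so sigma_3 has order 4 and generates Z_4
assert abs(orbit[2] + a1) < 1e-9
assert abs(sigma3(orbit[-1]) - a1) < 1e-9
```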
https://jp.maplesoft.com/support/help/maple/view.aspx?path=StringTools/Sentences&L=J
StringTools - Maple Programming Help StringTools Sentences approximate segmentation of a string of English text into sentences Calling Sequence Sentences( s ) Parameters s - Maple string Description • The Sentences(s) command attempts to split a string, presumed to be composed of English language text, into its constituent sentences. It does this by recognizing sentence boundaries. The beginning and the end of the input string are regarded as sentence boundaries in all cases. Internal sentence boundaries are recognized by the presence of a sentence terminator, which is one of the following: period (.), exclamation point (!), question mark (?), or colon (:). • A small number of built-in patterns are used to recognize some exceptions. • Note that you can also use the RegSplit command with the fixed string "\n\n" as the splitting pattern to segment English text into paragraphs. • All of the StringTools package commands treat strings as (null-terminated) sequences of 8-bit (ASCII) characters. Thus, there is no support for multibyte character encodings, such as Unicode encodings. Examples > with(StringTools): > Sentences("This is a\nsentence. Can we have another? Yes, here's one more.") "This is a sentence.", "Can we have another?", "Yes, here's one more." (1)
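A rough Python approximation of the splitting behaviour described above (this is my own sketch, not Maple's actual algorithm; it handles none of the built-in exception patterns):

```python
import re

def sentences(s):
    # Normalize internal whitespace, then break after a terminator
    # (. ! ? :) that is followed by whitespace.
    normalized = " ".join(s.split())
    parts = re.split(r"(?<=[.!?:])\s+", normalized)
    return [p for p in parts if p]

print(sentences("This is a\nsentence. Can we have another? Yes, here's one more."))
# ['This is a sentence.', 'Can we have another?', "Yes, here's one more."]
```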
http://math.stackexchange.com/questions/182889/two-problems-with-prime-numbers
# Two problems with prime numbers

Problem 1. Prove that there exists $n\in\mathbb{N}$ such that the interval $(n^2, \ (n+1)^2)$ contains at least $1000$ prime numbers.

Problem 2. Let $s_n=p_1+p_2+\dots+p_n$, where $p_i$ is the $i$-th prime number. Prove that for every $n$ there exists $k\in\mathbb{N}$ such that $s_n<k^2<s_{n+1}$.

I found these two a while ago and they interested me, but I don't have any ideas.

---

Problem 2: For any positive real $x$, there is a square between $x$ and $x+2\sqrt{x}+2$. Therefore it will suffice to show that $p_{n+1}\geq 2\sqrt{s_n}+2$. We have $s_{n}\leq np_n$ and $p_{n+1}\geq p_n+2$, so we just need to show $p_n\geq 2\sqrt{np_n}$, i.e., $p_n\geq 4n$. That this holds for all sufficiently large $n$ follows either from a Chebyshev-type estimate $\pi(x)\asymp\frac{x}{\log x}$ (we could also use the PNT, but we don't need its full strength), or by noting that fewer than $\frac{1}{4}$ of the residue classes mod $210=2\cdot3\cdot5\cdot7$ are coprime to $210$. The statement can be checked by hand for small $n$.

There have already been a couple of answers, but here is my take on Problem 1: Suppose the statement is false, so every interval $(n^2,(n+1)^2)$ contains fewer than $1000$ primes. It follows that $\pi(x)\leq 1000\sqrt{x}$ for all $x$, which contradicts Chebyshev's estimate $\pi(x)\asymp \frac{x}{\log x}$.

---

For the first one, you can prove that there is a positive integer $n$ such that $\pi((n+1)^2 - 1) - \pi(n^{2}) \geqslant 1000$, where $\pi$ is the prime-counting function, using the Prime Number Theorem. For the second one, I believe Bertrand's postulate may be useful.

---

Solution to the first: By inspection of the primes, pick $n = 8715$. Note that $n^2 = 75951225 < 75951233$ and $(n+1)^2 = 75968656$. Now $75951233$ and $75968723$ are primes with at least $1000$ primes between them, so we're done. [1]

[1] The $4446857$th prime is $75951233$ and the $4447859$th prime is $75968723$ (source: http://primes.utm.edu/nthprime/index.php). Further, $4447859 - 4446857 = 1002$.
Exercise: Show that $n = 8715$ is the minimum $n$ satisfying the claim.

---

Here's my attempt at the first part of the question. I'm not sure how valid it is, so I'd appreciate feedback/corrections.

It is known that $\pi(n)\sim\frac{n}{\ln{n}}$, where $\pi(n)$ is the prime-counting function (i.e. the number of primes less than or equal to $n$). Since $\pi(m^2)\approx\frac{m^2}{2\ln m}$, we are looking to show that there exists some $n$ such that: $$\frac{(n+1)^{2}}{2\ln{(n+1)}}-\frac{n^{2}}{2\ln{n}}\geqslant1000$$ Let us call $\pi_{d}(n)=\frac{(n+1)^{2}}{2\ln{(n+1)}}-\frac{n^{2}}{2\ln{n}}$. Then we have: $$\frac{d\pi_{d}}{dn}=\left(\frac{n}{2\left(\ln{n}\right)^{2}}+\frac{n+1}{\ln{(n+1)}}\right)-\left(\frac{n}{\ln{n}}+\frac{n+1}{2\left(\ln{(n+1)}\right)^{2}}\right)$$ Therefore $\frac{d\pi_{d}}{dn}\gt0$, $\forall n\gt0$. Therefore, as $\pi_{d}(n)$ is monotonically increasing (and unbounded), $\exists n:\pi((n+1)^{2})-\pi(n^{2})\geqslant1000$.
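Problem 2 can be spot-checked numerically. The sketch below (plain Python, simple Sieve of Eratosthenes) verifies that a square lies strictly between every pair of consecutive prime partial sums built from the primes up to $10^5$. This is a sanity check of the statement, not a proof:

```python
from math import isqrt

def primes_upto(limit):
    # Simple Sieve of Eratosthenes.
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(range(p*p, limit + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_upto(10**5)
ok = True
s = primes[0]                      # s_1 = 2
for i in range(1, len(primes)):
    s_next = s + primes[i]         # s_{i+1}
    k = isqrt(s) + 1               # smallest k with k^2 > s_i
    ok = ok and (k * k < s_next)   # require s_i < k^2 < s_{i+1}
    s = s_next
print(ok)
```

If any pair of consecutive sums lacked a square strictly between them, `ok` would come out False.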
https://www.teachask.com/2020/11/cbse-class-10-maths-exam-2020-chapter.html
# CBSE Class 10 Maths Exam 2020: Chapter-wise Important Formulas, Theorems & Properties for Last Minute Revision

We have collated all important formulas from all chapters of CBSE Class 10 Maths. Important terms and properties which are useful in Class 10 Maths calculations are also provided in this article for quick revision before the exam. Mathematics is a subject that often gives students nightmares. While Maths is a little tricky, it is not difficult. It just requires a thorough understanding of the concepts, regular practice and a good hold on all important formulas to score high in Maths. To help students get all important formulas, theorems and properties in one place, we have collated the chapter-wise formulas along with important terms and properties occurring in Class 10 Maths. Students must grasp all the formulas and theorems included in chapters like Triangles, Polynomials, Coordinate Geometry, Trigonometry and Mensuration, as these chapters carry high weightage for the Maths Board Exam 2020. Check below the important formulas, terms and properties for the CBSE Class 10 Maths Exam 2020: 1. Real Numbers: Euclid's Division Algorithm (lemma): According to Euclid's Division Lemma, if we have two positive integers a and b, then there exist unique integers q and r such that a = bq + r, where 0 ≤ r < b. (Here, a = dividend, b = divisor, q = quotient and r = remainder.) 2.
Polynomials:
(i) (a + b)² = a² + 2ab + b²
(ii) (a – b)² = a² – 2ab + b²
(iii) a² – b² = (a + b)(a – b)
(iv) (a + b)³ = a³ + b³ + 3ab(a + b)
(v) (a – b)³ = a³ – b³ – 3ab(a – b)
(vi) a³ + b³ = (a + b)(a² – ab + b²)
(vii) a³ – b³ = (a – b)(a² + ab + b²)
(viii) a⁴ – b⁴ = (a²)² – (b²)² = (a² + b²)(a² – b²) = (a² + b²)(a + b)(a – b)
(ix) (a + b + c)² = a² + b² + c² + 2ab + 2bc + 2ca
(x) (a + b – c)² = a² + b² + c² + 2ab – 2bc – 2ca
(xi) (a – b + c)² = a² + b² + c² – 2ab – 2bc + 2ca
(xii) (a – b – c)² = a² + b² + c² – 2ab + 2bc – 2ca
(xiii) a³ + b³ + c³ – 3abc = (a + b + c)(a² + b² + c² – ab – bc – ca)

3. Linear Equations in Two Variables: For the pair of linear equations a₁x + b₁y + c₁ = 0 and a₂x + b₂y + c₂ = 0, the nature of the solutions is determined as follows:
(i) If a₁/a₂ ≠ b₁/b₂, then there is a unique solution and the pair of linear equations is consistent. Here, the graph consists of two intersecting lines.
(ii) If a₁/a₂ = b₁/b₂ ≠ c₁/c₂, then there exists no solution and the pair of linear equations is said to be inconsistent. Here, the graph consists of parallel lines.
(iii) If a₁/a₂ = b₁/b₂ = c₁/c₂, then there exist infinitely many solutions and the lines are coincident and therefore dependent and consistent. Here, the graph consists of coincident lines.

4. Quadratic Equations: For a quadratic equation ax² + bx + c = 0 (a ≠ 0):
• Sum of roots = –b/a
• Product of roots = c/a
• If the roots of a quadratic equation are given, then the equation can be written as: x² – (sum of the roots)x + (product of the roots) = 0
• If Discriminant (b² – 4ac) > 0, then the roots of the quadratic equation are real and unequal.
• If Discriminant = 0, then the roots of the quadratic equation are real and equal.
• If Discriminant < 0, then the roots of the quadratic equation are imaginary (not real).
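The sum-of-roots and product-of-roots relations are easy to confirm numerically. A small Python check (the coefficients below are illustrative values, not from any exam paper):

```python
import cmath

def quadratic_roots(a, b, c):
    # Roots of a*x^2 + b*x + c = 0 via the quadratic formula.
    # cmath.sqrt handles a negative discriminant (imaginary roots) too.
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

a, b, c = 2, -7, 3                 # 2x^2 - 7x + 3 = 0, roots 3 and 1/2
r1, r2 = quadratic_roots(a, b, c)
print(r1 + r2, r1 * r2)            # sum should be -b/a = 3.5, product c/a = 1.5
```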
5. Arithmetic Progression:
• nth term of an arithmetic progression: for a given AP, where a is the first term, d is the common difference and n is the number of terms, the nth term is aₙ = a + (n − 1)d
• Sum of the first n terms of an arithmetic progression: Sₙ = (n/2)[2a + (n − 1)d]

6. Similarity of Triangles:
• If two triangles are similar, then the ratios of their corresponding sides are equal.
• Theorem on the area of similar triangles: If two triangles are similar, then the ratio of their areas equals the square of the ratio of their corresponding sides.

7. Coordinate Geometry:
• Distance Formula: For a line segment with endpoints A(x₁, y₁) and B(x₂, y₂), the distance between the points is AB = √[(x₂ − x₁)² + (y₂ − y₁)²]
• Section Formula: If a point P divides the segment AB with A(x₁, y₁) and B(x₂, y₂) in the ratio m : n, then P = ((mx₂ + nx₁)/(m + n), (my₂ + ny₁)/(m + n))
• Midpoint Formula: The midpoint of AB is ((x₁ + x₂)/2, (y₁ + y₂)/2)
• Area of a Triangle: For the triangle with vertices A(x₁, y₁), B(x₂, y₂) and C(x₃, y₃), Area = ½ |x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)|

8.
Trigonometry: In a right-angled triangle, the Pythagoras theorem states (perpendicular)² + (base)² = (hypotenuse)².
Important trigonometric ratios (with P = perpendicular, B = base and H = hypotenuse):
• sin A = P/H • cos A = B/H • tan A = P/B • cot A = B/P • cosec A = H/P • sec A = H/B
Trigonometric identities:
• sin²A + cos²A = 1
• tan²A + 1 = sec²A
• cot²A + 1 = cosec²A
Relations between the trigonometric ratios:
• cosec A = 1/sin A, sec A = 1/cos A, cot A = 1/tan A, tan A = sin A/cos A
Trigonometric ratios of complementary angles:
• sin (90° – A) = cos A • cos (90° – A) = sin A • tan (90° – A) = cot A • cot (90° – A) = tan A • sec (90° – A) = cosec A • cosec (90° – A) = sec A
Values of the trigonometric ratios at 0° and 90°: sin 0° = 0, cos 0° = 1, tan 0° = 0; sin 90° = 1, cos 90° = 0, tan 90° is undefined.

9. Circles: Important properties related to circles:
• Equal chords of a circle are equidistant from the centre.
• The perpendicular drawn from the centre of a circle bisects the chord.
• The angle subtended at the centre by an arc is double the angle subtended at any point on the remaining part of the circumference.
• Angles subtended by the same arc in the same segment are equal.
• If a tangent is drawn to a circle and a chord is drawn from the point of contact, then the angle between the chord and the tangent equals the angle in the alternate segment.
• The sum of opposite angles of a cyclic quadrilateral is always 180°.
Important formulas related to circles:
• Area of a segment of a circle: If AB is a chord which divides the circle into two parts, the bigger part is known as the major segment and the smaller one is called the minor segment. Here, Area of segment APB = Area of sector OAPB – Area of ∆OAB

10. Mensuration: Key formulas for areas and volumes of solids: cylinder (CSA = 2πrh, volume = πr²h), cone (CSA = πrl, volume = (1/3)πr²h), sphere (surface area = 4πr², volume = (4/3)πr³), hemisphere (CSA = 2πr², volume = (2/3)πr³), cuboid (volume = lbh), cube (volume = a³).

11. Statistics: For Ungrouped Data:
Mean: The mean value of a variable is defined as the sum of all the values of the variable divided by the number of values.
Median: The median of a set of data values is the middle value of the data set when it has been arranged in ascending order, that is, from the smallest value to the highest value. For n values, the median is the value at position (n + 1)/2. If the number of values in the data set is even, then the median is the average of the two middle values.
Mode: The mode of a statistical data set is the value of the variable which has the maximum frequency.

For Grouped Data:
Mean: If x₁, x₂, …, xₙ are observations with respective frequencies f₁, f₂, …, fₙ, then Mean = (Σfᵢxᵢ)/(Σfᵢ).
Median: For the given data, we need the class intervals, the frequency distribution and the cumulative frequency distribution. Then, Median = l + ((n/2 − cf)/f) × h, where l = lower limit of the median class, n = number of observations, cf = cumulative frequency of the class preceding the median class, f = frequency of the median class, h = class size (assuming equal class sizes).
Mode: The class interval having the highest frequency is called the modal class, and the mode is obtained using the modal class: Mode = l + ((f₁ − f₀)/(2f₁ − f₀ − f₂)) × h, where l = lower limit of the modal class, h = size of the class interval (assuming equal class sizes), f₁ = frequency of the modal class, f₀ = frequency of the class preceding the modal class, f₂ = frequency of the class succeeding the modal class.

12. Probability: For an event E, P(E) = (number of favourable outcomes)/(total number of outcomes), with 0 ≤ P(E) ≤ 1.

Understanding the basic concepts and learning all the important formulas goes a long way towards passing the Maths exam with flying colours. If you know the formulas well, it will not take much time to solve the questions in the exam paper. So, keep practising with the list of important formulas given above.
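The grouped-data median formula can be turned into a short Python routine. The class intervals below are made-up illustration data, not from any CBSE paper:

```python
def grouped_median(classes):
    """classes: list of (lower_limit, frequency) with equal class size h."""
    h = classes[1][0] - classes[0][0]           # class size
    n = sum(f for _, f in classes)              # total number of observations
    cum = 0                                     # cumulative frequency of preceding classes
    for l, f in classes:
        if cum + f >= n / 2:                    # first class whose cum. freq. reaches n/2
            return l + (n / 2 - cum) * h / f    # Median = l + ((n/2 - cf)/f) * h
        cum += f

# Illustrative frequency table: classes 0-10, 10-20, ..., 50-60.
data = [(0, 5), (10, 8), (20, 20), (30, 15), (40, 7), (50, 5)]
print(grouped_median(data))
```

Here n = 60, the median class is 20-30 (cumulative frequency 13 before it), so the formula gives 20 + ((30 − 13)/20) × 10 = 28.5.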
http://math.stackexchange.com/questions/883890/can-all-integration-be-thought-of-as-projections
# Can all integration be thought of as projections?

For example, the integral of the function f(x) could be thought of as the projection of f onto the function g, where g is identically 1. Following this logic, can we think of the integral of the product of f and g as the "area between" f and g, no matter how complicated f or g is? This gets murkier if we think of the area between f and g as the projection of f on g: how do we then explain negative area, e.g. when integrating sin(x)? I don't know if I was taught the right calculus, but I've never seen a textbook that introduces the integral of f times g as the projection of f on g. Why is that? And if possible, can someone point me to good notes on this subject, treating integration from a projection perspective?

Comments:
• This is one of the basic ideas behind Riesz representation. – Adam Hughes, Jul 31 '14
• I won't do that. Integrals are more basic than projections. I mean, if somebody wakes me up at 3 a.m. and abruptly asks me what an integral is, I would probably blabber something about areas under curves, not about projections. This said, it surely is a nice train of ideas that is interesting to explore once. – Giuseppe Negro, Jul 31 '14

Answer: Just to expand a little, the Riesz representation theorem has these ideas embedded in it. Generically, on a Hilbert space of $\Bbb C$-valued functions, you will see $$\int f(x)\overline{g}(x)\,d\mu(x)$$ (if the functions take values in $\Bbb R$, the complex conjugation over $g$ disappears) as an inner product of two functions, which (as you may remember from basic vector calculus) was how you talked about projecting one vector onto another: $$\text{proj}_{\mathbf{v}}(\mathbf{u})={\mathbf{u}\cdot\mathbf{v}\over\lVert\mathbf{v}\rVert^2}\mathbf{v}$$ And if you go back even further, you may remember that the inner product first came in by drawing triangles with the law of cosines. Albeit, there is no notion of "area" going on with this sort of thing.
It measures, rather, the "amount of one vector in the direction of another." However, you do have a nice interpretation of negatives, since we have the formula $$\cos(\theta)={\mathbf{u}\cdot\mathbf{v}\over \lVert\mathbf{u}\rVert\lVert\mathbf{v}\rVert}$$ where $\theta$ is the angle between the two vectors, so that negative numbers just mean the angle is not in the range $-{\pi\over 2}\le\theta\le {\pi\over 2}$. It's quite natural that you would not have found this in a calculus class: it requires a good amount of linear algebra, and proving that this formula gives an inner product on a space of square-integrable functions is much harder mathematics than just calculus. Any book on Hilbert spaces which mentions a suitably generalized version of the Riesz representation theorem should be sufficiently satisfying for someone seeking to pursue this line of thought, and any good mathematics library should have $n+1$ books on the subject if you just look in the catalog for "Hilbert spaces." A quick googling of "Hilbert space" yields many results, such as these notes. A proof that $L^2$ of a measure space is a Hilbert space is a classical result in any functional analysis textbook, and follows from the Minkowski inequality.
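The projection picture in this answer is easy to play with numerically. The sketch below approximates the $L^2$ inner product with a midpoint Riemann sum and projects sin onto the constant function 1 on $[0, \pi]$; the projection coefficient $\langle f,g\rangle/\lVert g\rVert^2$ should come out near $2/\pi$:

```python
from math import sin, pi

def inner(f, g, a, b, n=20000):
    # Midpoint-rule approximation of the inner product: integral of f*g over [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n))

def one(x):
    return 1.0

# Projection coefficient of sin onto the constant function 1 on [0, pi]:
# <sin, 1> = 2 and ||1||^2 = pi, so this should be close to 2/pi.
coeff = inner(sin, one, 0.0, pi) / inner(one, one, 0.0, pi)
print(coeff)
```

A negative coefficient (e.g. projecting sin onto 1 over $[\pi, 2\pi]$) corresponds exactly to the "angle greater than $\pi/2$" interpretation above.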
https://community.jmp.com/t5/Discussions/Augmenting-a-screening-DoE/td-p/11984
## Augmenting a screening DoE

Question: I'd like to explore beyond the lower limit of a previous DoE – is it acceptable to augment that old DoE with a new one that has all the original factors, but where one or more of the factors has a new limit?

Accepted solution: Yes. In JMP, select DOE > Augment Design. Identify the responses and factors from the initial experiment that you want to carry forward into the next design. Click Augment (the last of five buttons at the bottom of the dialog). At this point, you can change the factor range, among other things. Learn it once, use it forever!

Reply: Thanks, MarkBailey!
https://cerncourier.com/a/neutrino-physics-shines-bright-in-heidelberg/
# Neutrino physics shines bright in Heidelberg

9 July 2018

The 28th International Conference on Neutrino Physics and Astrophysics took place in Heidelberg, Germany, on 4–9 June. It was organised by the Max Planck Institute for Nuclear Physics and the Karlsruhe Institute of Technology. With 814 registrations, 400 posters and the presence of Nobel laureates Art McDonald and Takaaki Kajita, it was the most attended of the series to date, showcasing many new results. Several experiments presented their results for the first time at Neutrino 2018. T2K in Japan and NOvA in the US updated their results, strengthening their indication of leptonic CP violation and normal neutrino-mass ordering, and improving their precision in measuring the atmospheric oscillation parameters. Taken together with the Super-Kamiokande results on atmospheric neutrino oscillations, these experiments provide a 2σ indication of leptonic CP violation and a 3σ indication of normal mass ordering. In particular, NOvA presented the first 4σ evidence of ν̅μ → ν̅e transitions compatible with three-neutrino oscillations. The next-generation long-baseline experiments DUNE and Hyper-Kamiokande, in the US and Japan respectively, were discussed in depth. These experiments have the capability to measure CP violation and mass ordering in the neutrino sector with a sensitivity of more than 5σ, with great potential in other searches such as proton decay, supernovae, solar and atmospheric neutrinos, and indirect dark-matter searches. All the reactor experiments – Daya Bay, Double Chooz and RENO – have improved their results, providing precision measurements of the oscillation parameter θ13 and of the reactor antineutrino spectrum. The Daya Bay experiment, integrating 1958 days of data taking, with more than four million antineutrino events on tape, is capable of measuring the reactor mixing angle and the effective mass splitting with a precision of 3.4% and 2.8%, respectively.
The next-generation reactor experiment JUNO, aiming to take data in 2021, was also presented. The third day of the conference focused on neutrinoless double-beta decay (NDBD) experiments and neutrino telescopes. EXO, KamLAND-Zen, GERDA, Majorana Demonstrator, CUORE and SNO+ presented their latest NDBD search results, which probe whether neutrinos are Majorana particles, and their plans for the short-term future. The new GERDA results pushed their NDBD lifetime limit based on germanium detectors to 0.9 × 10²⁶ years (90% CL), currently the strongest such measured limit and a step towards a zero-background next-generation NDBD experiment. CUORE also updated its results, based on tellurium, to 0.15 × 10²⁶ years. Neutrino telescopes are of great interest for multi-messenger studies of astrophysical objects at high energies. Both IceCube in Antarctica and ANTARES in the Mediterranean were discussed, together with their follow-up IceCube Gen2 and KM3NeT facilities. IceCube has already collected 7.5 years of data, selecting 103 events (60 of which have an energy of more than 60 TeV), with a best-fit power law of E^(−2.87). IceCube does not provide any evidence for neutrino point sources, and the measured νe:νμ:ντ neutrino-flavour composition is 0.35:0.45:0.2. A recent development in neutrino physics has been the first observation of coherent elastic neutrino–nucleus scattering, as discussed by the COHERENT experiment (CERN Courier October 2017 p8), which opens the possibility of searches for new physics. A very welcome development at Neutrino 2018 was the presentation of preliminary results from the KATRIN collaboration on the tritium beta-decay end-point spectrum measurement, which allows a direct measurement of neutrino masses. The experiment has just been inaugurated at KIT in Germany and aims to start data taking in early 2019 with a sensitivity of about 0.24 eV after five years. The strategic importance of a laboratory measurement of neutrino masses cannot be overestimated.
A particularly lively session at this year's event was the one devoted to sterile-neutrino searches. Five short-baseline nuclear reactor experiments (DANSS, NEOS, STEREO, PROSPECT and SoLid) presented their latest results and plans regarding the so-called reactor antineutrino anomaly. These experiments aim to detect the oscillation effects of sterile neutrinos at reactors, free from any assumption about antineutrino fluxes. There was no reported evidence for sterile oscillations, with the exception of the DANSS experiment reporting a 2.8σ effect, which is not in good agreement with previous measurements of this anomaly. These experiments are only at the beginning of data taking and more refined results are expected in the near future, even though it is unlikely that any of them will be able to provide a final sterile-neutrino measurement with a sensitivity much greater than 3σ. Further discussion was raised by the results reported by MiniBooNE at Fermilab, which reports a 4.8σ excess of electron-like events by combining its neutrino and antineutrino runs. The result is compatible with the 3.8σ excess reported by the LSND experiment about 20 years ago in a neutrino beam created by pion decays at rest at Los Alamos. Concerns are raised by the fact that even sterile-neutrino oscillations do not fit the data very well, while backgrounds potentially do (and the MicroBooNE experiment is taking data at Fermilab with the specific purpose of precisely measuring the MiniBooNE backgrounds). Furthermore, as discussed by Michele Maltoni in his talk on the global picture of sterile neutrinos, no sterile-neutrino model can, at the same time, accommodate the presumed evidence of νμ → νe oscillations from MiniBooNE and the null results reported by several different experiments (among which is MiniBooNE itself) regarding νμ disappearance at the same Δm².
The lively sessions at Neutrino 2018, summarised in the two beautiful closing talks by Francesco Vissani (theory) and Takaaki Kajita (experiment), reinforce the vitality of this field at this time (see A golden age for neutrinos).
https://www.jiskha.com/questions/1821849/a-radio-tower-is-located-450-feet-from-a-building-from-a-window-in-the-building-a-person
# precalculus

A radio tower is located 450 feet from a building. From a window in the building, a person determines that the angle of elevation to the top of the tower is 24° and that the angle of depression to the bottom of the tower is 21°. How tall is the tower?

Answer 1: height = 450 tan(24°) + 450 tan(21°)

Answer 2:
tan(21°) = h₁/450, so h₁ = 450·tan(21°) ≈ 172.7 ft (the drop from the window to the base of the tower).
tan(24°) = h₂/450, so h₂ = 450·tan(24°) ≈ 200.4 ft (the rise from the window to the top of the tower).
Tower height = h₁ + h₂ ≈ 172.7 + 200.4 ≈ 373.1 ft.
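The tower-height expression from the first answer evaluates directly in Python:

```python
from math import tan, radians

d = 450.0                                    # horizontal distance from the window to the tower (ft)
# Height above the window plus drop below the window.
height = d * tan(radians(24)) + d * tan(radians(21))
print(round(height, 1))                      # total tower height in feet, about 373.1
```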
http://www.ams.org/mathscinet-getitem?mr=915726
MathSciNet bibliographic data MR915726 32A10 (30E99 43A85)
Grinberg, Eric L. A boundary analogue of Morera's theorem in the unit ball of ${\bf C}^n$. Proc. Amer. Math. Soc. 102 (1988), no. 1, 114–116.
https://stacks.math.columbia.edu/tag/0DW5
Lemma 47.20.3. Let $A$ be a Noetherian ring. If there exists a finite $A$-module $\omega _ A$ such that $\omega _ A[0]$ is a dualizing complex, then $A$ is Cohen-Macaulay.

Proof. We may replace $A$ by the localization at a prime (Lemma 47.15.6 and Algebra, Definition 10.104.6). In this case the result follows immediately from Lemma 47.20.2. $\square$
https://fabricebaudoin.wordpress.com/category/uncategorized/
# Category Archives: Uncategorized

## Lecture 6. Rough paths Fall 2017
In the previous lecture we defined the Young integral when and with . The integral path then has bounded -variation. Now, if is a Lipschitz map, then the integral is only defined when , that is for .

## MA5311. Take home exam
Exercise 1. Solve Exercise 44 in Chapter 1 of the book.
Exercise 2. Solve Exercise 3 in Chapter 1 of the book.
Exercise 3. Solve Exercise 39 in Chapter 1 of the book.
Exercise 4. The heat kernel on is given by …

## MA5161. Take home exam
Exercise 1. The Hermite polynomial of order is defined as … Compute . Show that if is a Brownian motion, then the process is a martingale. Show that …
Exercise 2. (Probabilistic proof of Liouville's theorem) By using martingale methods, prove that if …

## MA5311. Take home exam due 03/20
Solve Problems 1, 2, 8, 9, 10, 11 in Milnor's book. (Extra credit for Problem 6)

## MA5161. Take home exam. Due 03/20
Exercise 1. Let . Let be a continuous Gaussian process such that for , … Show that for every , there is a positive random variable such that , for every and such that for every . Hint: You may use …
https://codedump.io/share/bZ3lTYbsbdkT/1/php-pregmatch-exact-match-and-get-whats-inside-brackets
# PHP preg_match exact match and get what's inside brackets

Let's say I have the following string:

``````
"**link(http://google.com)*{Google}**"
``````

And I want to use preg_match to find the EXACT text `**link(http://google.com)`, but the text inside the brackets changes all the time. I used to use:

``````
preg_match('#\((.*?)\)#', $text3, $match2);
``````

which would get what is inside the brackets, which is good, but if I had `*hwh(http://google.com)**` it would get what's inside of that too. So how can I get what's inside the brackets only when the brackets are preceded by `**link`?

`~(?:\*\*link\(([^\)]+)\))~` will match the contents of the brackets for all inputs that look like `**link(URL)`, as long as the URL does not itself contain an extra `)`. See the example on Regexr: http://regexr.com/3en33 . The whole example:

``````
$text = '"**link(http://google.com)*{Google}**"';
preg_match('~(?:\*\*link\(([^\)]+)\))~', $text, $match);
``````
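The accepted pattern is plain PCRE-style regex and ports directly to other engines; as a quick sanity check, here is an equivalent pattern in Python's `re` module (my own test strings, not from the original thread):

```python
import re

text = '"**link(http://google.com)*{Google}**"'

# Require the literal **link( prefix; capture everything up to the next ).
match = re.search(r'\*\*link\(([^)]+)\)', text)
print(match.group(1))  # -> http://google.com

# A bare *hwh(...) prefix no longer matches:
print(re.search(r'\*\*link\(([^)]+)\)', '*hwh(http://google.com)**'))  # -> None
```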
http://www.obs-hp.fr/www/preprints/pp133/PP133.HTML
# THE RICH SPECTROSCOPY OF REFLECTION NEBULAE

## C. Moutou 1, L. Verstraete 2, K. Sellgren 3, A. Léger 2

1 Observatoire de Haute Provence, St Michel, France
2 Institut d'Astrophysique Spatiale, Orsay, France
3 Ohio State University, Columbus, USA

### Abstract:

The ISO-SWS spectra of two bright reflection nebulae, NGC 7023 and NGC 2023, are presented. We discuss the emission of molecular hydrogen from these photodissociated interfaces. Details of the aromatic infrared band profiles as well as the continuum emission are also analysed. The ISO-LWS spectrum of NGC 7023 is also presented, at two positions in the nebula. The dust temperature at the brightest far-infrared position of NGC 7023 is estimated to be 45 K.

Key words: ISO; reflection nebula; dust emission bands; molecular hydrogen emission.

# INTRODUCTION

ISO is providing us with high-resolution spectra of the Aromatic Infrared Bands between 3 and 13 µm (hereafter AIBs) in a wide variety of galactic environments. At the spectral resolution of ISOCAM-CVF (λ/Δλ ≈ 40), the AIBs have been shown to be similar in interstellar regions with effective radiation fields ranging from 1 to 10^4 times the interstellar radiation field (Boulanger et al. 1998 and references therein). Using SWS data at higher spectral resolution, we show here that there are differences between the AIB spectra of two reflection nebulae, NGC 2023 and NGC 7023. The physical conditions of the gas associated with the AIBs are discussed through the analysis of the molecular hydrogen emission.

# COMPARING THE SWS SPECTRA

## Observations

The SWS spectrum of NGC 7023 was obtained with the AOT01 observing mode in 1996 and is fully described in Moutou et al. (1998) and Sellgren et al. (1999, in preparation). Its mean spectral resolution is λ/Δλ = 1000 (speed 4). A restricted part of this spectrum has been observed at higher spectral resolution (Moutou et al. 1999).
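The resolving powers quoted in the Introduction translate directly into wavelength resolution; a small sketch (the λ/Δλ notation is my reconstruction of symbols lost in extraction):

```python
# Wavelength resolution for a given resolving power R = lambda / d_lambda.
def d_lambda(lam_um, R):
    return lam_um / R

print(d_lambda(10.0, 1000))  # SWS at speed 4: 0.01 um at 10 um
print(d_lambda(10.0, 40))    # ISOCAM-CVF: 0.25 um at 10 um
```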
The SWS spectrum of NGC 2023 was observed at a spectral resolution of λ/Δλ = 500 (AOT01, speed 3) in 1998. Data reduction has been done within the SWS Interactive Analysis environment. Our observations have higher spectral resolution than previously published mid-IR spectra of NGC 7023 and NGC 2023 from the Kuiper Airborne Observatory (Sellgren et al. 1985) and of NGC 7023 with ISOPHOT-S and ISOCAM (Laureijs et al. 1996; Cesarsky et al. 1996). NGC 7023 and NGC 2023 are two bright reflection nebulae irradiated by hot stars. The SWS aperture for NGC 7023 was centered 27'' W, 34'' N (position 1) of HD 200775 (Teff = 17,000 K). For NGC 2023, the SWS aperture was 60'' S of HD 37903 (Teff = 22,000 K). Figure 1 shows the mid-IR spectra of the two nebulae, with an offset added to NGC 7023 for clarity (see caption of Fig. 1). The relative contribution of continuous emission and AIBs to the total emitted energy is comparable in both objects. More than 60% of the 3–20 µm energy is emitted in the AIBs. Reflection nebulae have a high feature-to-continuum ratio: this makes the study of faint details in the AIB profiles much easier than in more strongly irradiated sources where the AIBs are drowned by a strong mid-IR continuum (e.g., planetary nebulae, H II region interfaces).

## Molecular hydrogen emission

We detect many pure H2 rotational lines in both nebulae as well as some ro-vibrational (or fluorescent) lines in NGC 7023. This emission comes from photodissociated gas lumped into filaments (Lemaire et al. 1996, Field et al. 1998). We derive excitation diagrams (Fig. 2) from the line fluxes. In the case of NGC 7023, our upper-level column densities (from the S(0) line at 28.22 µm through to the S(4) line at 8.02 µm) are well fitted by a single rotational temperature of 411 K if one adopts an ortho-to-para ratio (hereafter Rop) of 1. These results, although derived from observations with a much larger beamsize, are consistent with the findings of Lemaire et al. (1996).
Indeed, the warm gas emission we detect is probably dominated by the filaments shown in the Lemaire et al. map. For NGC 2023, using the S(1) line at 17.03 µm through to the S(3) line at 9.66 µm, we find a rotational temperature of 333 K, also with Rop = 1. While determining the rotational temperature, we have left aside higher levels (Tu > 5000 K) because their populations are strongly affected by the UV pumping (and subsequent radiative decay) of the H2 molecule. Densities of about 10^5 cm^-3 have been inferred for both nebulae (Lemaire et al. 1996; Field et al. 1998; Martini et al. 1997). At these high densities, the low rotational transitions (from upper levels with Tu < 5000 K) we are discussing here are collisionally thermalized, and the rotational temperature is equal to the gas temperature, Tgas. Our low ortho-to-para ratio confirms earlier work (Chrysostomou et al. 1993 and references therein) on photodissociated interfaces. The H2 molecule forms on the surface of dust grains and is expected to be released into the gas with a high vibration-rotation energy content and an Rop value close to 3. After formation, Rop can be changed by gas-phase spin-exchange reactions of H2 with atomic hydrogen and protons for Tgas > 300 K, or by H2-grain collisions for Tgas < 300 K. A value of Rop = 1 corresponds to an equilibrium temperature Teq ≈ 80 K (Burton et al. 1992). The fact that the dust temperature (≈ 40 K, see §3) and the gas temperature (300–400 K) are both very different from Teq points to the importance of out-of-equilibrium effects; this point is also highlighted by the high Rop values (between 2 and 3) predicted by recent stationary models (Draine & Bertoldi 1996). We note that H2 rotational populations in equilibrium at 40 K and 300–400 K correspond to Rop = 0.15 and 3, respectively. Two scenarios have been proposed by Chrysostomou et al. to explain the observed low Rop values.
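The quoted equilibrium ortho-to-para ratios (about 0.15 at 40 K, 1 at Teq ≈ 80 K, 3 at 300–400 K) can be reproduced from the H2 rotational ladder alone. A minimal sketch, treating H2 as a rigid rotor with B0 ≈ 85.3 K (my simplification; exact level energies deviate at high J, and the rigid rotor gives ≈0.13 rather than 0.15 at 40 K):

```python
import math

B0 = 85.3  # H2 rotational constant in kelvin (rigid-rotor approximation)

def rop(T, jmax=10):
    """Equilibrium ortho-to-para ratio of H2 at gas temperature T [K]."""
    E = lambda J: B0 * J * (J + 1)   # level energy in K
    g = lambda J: 2 * J + 1          # rotational degeneracy
    # Factor 3 is the ortho nuclear-spin statistical weight.
    ortho = sum(3 * g(J) * math.exp(-E(J) / T) for J in range(1, jmax, 2))
    para = sum(g(J) * math.exp(-E(J) / T) for J in range(0, jmax, 2))
    return ortho / para

print(rop(80))   # close to 1, matching Rop = 1 <-> Teq ~ 80 K
print(rop(40))   # ~0.13, close to the cold-grain value 0.15 quoted
print(rop(350))  # ~3, the high-temperature limit
```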
First, if the newly formed molecular hydrogen resides long enough on the surface of the dust grain, Rop will lie between 3 and the equilibrium value of 0.15 set by the grain temperature Tdust. To get Rop = 1, the residence time of H2 on the dust grain should be approximately half the time required for the H2 rotational populations to reach equilibrium at Tdust after formation (see Fig. 9 in Chrysostomou et al.). Alternatively, in cold gas, Rop is fixed at a low value by gas-grain interactions (0.15 for Tdust = 40 K). As the photodissociation front propagates into the molecular cloud at velocities of the order of 1 km/s, cold gas is advected through the hot (Tgas of a few 100 K) interface. In the hot gas, spin-exchange reactions can drive the low Rop to the observed value. The residence time of H2 on the grain goes as exp(450 K/Tdust) (Tielens & Allamandola 1987), while the rate of spin-exchange reactions goes as exp(-3200 K/Tgas). From photodissociation region models, variations of Tgas are expected to be much larger than those of Tdust: Rop should thus vary much more rapidly in the second scenario (cold gas advection) than in the former (modified H2 formation). High spatial resolution observations yielding Rop profiles across photodissociated interfaces should help discriminate between the two scenarios.

## Band Profiles

We compare here qualitatively the AIB profiles of both nebulae. More quantitative results will be presented in a forthcoming paper (Sellgren et al. 1999). We remark that the AIB spectrum of NGC 2023 multiplied by 2.2 superimposes nicely on that of NGC 7023. As the AIB flux is proportional to the radiation field intensity (Sellgren et al. 1985), this suggests that the radiation field in NGC 7023 is twice as strong as that of NGC 2023 (at the positions given in §1). The two AIB spectra look very similar (width, position of the AIBs), as expected in view of the similar physical conditions (density, radiation field) derived for NGC 7023 (Lemaire et al.
1996) and NGC 2023 (Steinman-Cameron et al. 1997; Field et al. 1998). There are, however, significant spectral differences in the AIB profiles, which we detail below. These differences reflect changes in the physico-chemical state of the AIB carriers. Figure 3a shows that the 3.3 and the 3.4 µm AIBs have identical profiles in both nebulae. The 3.4/3.3 µm band ratio thus remains the same while the radiation field intensity is multiplied by a factor of 2. Figure 3b shows that the 6.2 µm AIB is asymmetrical and varies slightly between the two nebulae. Both profiles show a pronounced wing towards long wavelengths, which is interpreted as the consequence of anharmonic couplings during the cooling of the molecule (Barker et al. 1987; Joblin et al. 1995). In NGC 2023 the local continuum is more important with respect to the 6.2 µm band, and has a steeper rise than in NGC 7023. This suggests that the underlying continuum has a different origin from the AIBs. Figure 3c demonstrates that the 7–9 µm range is the most complex. The main AIBs are at 7.6, 7.8, and 8.6 µm. The intensity of the 7.6 µm component is stronger in NGC 7023 than in NGC 2023. Furthermore, the blue shoulder at 7.45 µm observed in NGC 7023 (see also Moutou et al. 1998) is not seen in NGC 2023. Roelfsema et al. (1996) and Verstraete et al. (1996) have shown that the profile of the "7.7 µm" AIB and the distribution of energy between the 7.6, 7.8 and 8.6 µm AIBs vary with the radiation field. In the PAH model, the "7.7 µm" AIB falls in the spectral range where the effects of ionisation are the most dramatic (Pauzat et al. 1997; Langhoff 1996). Such a prominent 7.6 µm feature as well as the 7.45 µm blue wing are unusual and have only been seen towards compact H II regions (Roelfsema et al. 1996) and towards the post-AGB star HR 4049 (Molster et al. 1996). These latter authors attributed the new 7.45 and 7.6 µm features to small ionized PAHs. Ionized PAHs have a strong 7.7/11.3 µm band ratio (Langhoff 1996).
Since the AIB spectrum in NGC 2023 is scalable to that of NGC 7023 (see Fig. 3), the 7.7/11.3 µm band ratio is the same for both nebulae, implying that the fraction of ionized PAHs is the same in both cases. From the absence of the 7.45 µm band and the weaker 7.6 µm feature in NGC 2023, we conclude that the carriers of these bands are produced by processes other than photoionization (photochemical evolution, fragmentation, ...). Figure 3d shows that the 11.3 µm AIB has a profile similar to the 6.2 µm band. A red asymmetry is observed, which again suggests strong anharmonic effects. We note that the red wing of the 11.3 µm band is more pronounced in NGC 2023. The 11.3 µm band also shows some weak sub-features within its profile. In particular, the 11.0 µm band (previously detected by Witteborn et al. 1989 and Roche et al. 1991) is clearly seen in both nebulae. The 11.0 µm band also appears in many objects with a similar ratio to the 11.3 µm band (Molster et al. 1996; Roelfsema et al. 1996; Verstraete et al. 1996), while the 7.7/11.3 µm band ratio presents large variations (compare, for instance, the AIB spectrum of the M17-SW interface and that of NGC 7023). This again rules out photoionization as the cause of the 11.0 µm band in moderately excited sources.

# LWS SPECTRA IN NGC 7023

Finally, we show the complete LWS spectrum (AOT01 observing mode) of NGC 7023 at two nebular positions, labeled 1 and 2. Unfortunately, we did not obtain any LWS spectrum of NGC 2023. The LWS position 1 in NGC 7023 is the same as for SWS (see §1) and is close to the far-infrared peak of Whitcomb et al. (1981). Position 2 is located 100'' N of the star. The LWS beam is 80'' in diameter, so the fields slightly overlap. Position 1 was observed in the guaranteed-time program of J.P. Baluteau, and Position 2 was part of our open-time program on reflection nebulae. Both spectra are shown in Figure 4. The data reduction was done with the LWS Interactive Analysis and ISAP software.
The LWS continuum emission is due to big dust grains in thermal equilibrium with the radiation field. We estimate the dust temperature by fitting a modified blackbody emission curve to the spectra, with a dust emissivity law proportional to the frequency ν. The fits are shown in Figure 4. For Position 1, we also used the SWS spectrum of NGC 7023 (not shown here) to further constrain the fit. The effective temperature of dust grains at Position 1 is Tdust = 45 ± 2 K, while at Position 2 the temperature drops to Tdust = 30 ± 2 K. These temperatures are compatible with the temperature map obtained by Whitcomb et al. (1981). Poorer fits to the LWS spectra were obtained with a dust emissivity proportional to ν² (in this latter case, the above temperatures would drop by approximately 6 K). For a dust emissivity ∝ ν, the radiation field goes as T^5, and so the change in Tdust implies that the radiation field is stronger at Position 1 than at Position 2 by a factor of 8. Atomic emission lines accounting for the cooling of the gas are visible in both spectra, namely CII (158 µm), OI (63 µm) and NI (145 µm). They will be discussed elsewhere.

# ACKNOWLEDGMENTS

We are very grateful to J.P. Baluteau for providing us the LWS spectra of NGC 7023. We acknowledge NATO Collaborative Research Grant 951347.

## References

Barker J.R., Allamandola L.J., Tielens A.G.G.M., 1987, ApJ 315, L61
Boulanger F. et al., 1998, A&A 339, 194
Burton M.G., Hollenbach D.J., Tielens A.G.G.M., 1992, ApJ 399, 563
Cesarsky D. et al., 1996, A&A 315, L305
Chrysostomou A., Brand P.W.J.L., Burton M.G., Moorhouse A., 1993, MNRAS 265, 329
Draine B.T., Bertoldi F., 1996, ApJ 468, 269
Field D., Lemaire J.L., Pineau des Forêts G., Gerin M., Leach S., Rostas F. & Rouan D., 1998, A&A 333, 280
Joblin C., Salama F., Allamandola L., 1995, J. Chem. Phys. 102 (24), 9743
Langhoff S.R., 1996, J. Phys. Chem. 100, 8, 2819
Laureijs R. et al., 1996, A&A 315, L313
Lemaire J.L., Field D., Gerin M., Leach S., Pineau des Forêts G., Rostas F. & Rouan D., 1996, A&A 308, 895
Martini P., Sellgren K. & Hora J.L., 1997, ApJ 484, 296
Molster F.J. et al., 1996, A&A 315, L373
Moutou C. et al., 1998, in Star Formation with the Infrared Space Observatory, ASP Conf. Ser. 132, eds. Joao L. Yun and René Liseau, p. 47
Moutou C., Sellgren K., Verstraete L., Léger A., 1999, submitted to A&A
Pauzat F., Talbi D. & Ellinger Y., 1997, A&A 319, 318
Roche P.F., Aitken D.K. & Smith C.H., 1991, MNRAS 252, 282
Roelfsema P. et al., 1996, A&A 315, L289
Sellgren K., Allamandola L.J., Bregman J.D., Werner M.W. & Wooden D.H., 1985, ApJ 299, 416
Steinman-Cameron T.Y., Haas M.R., Tielens A.G.G.M., 1997, ApJ 478, 261
Tielens A.G.G.M. & Allamandola L.J., 1987, in Interstellar Processes, eds. D.J. Hollenbach & H.A. Thronson (Dordrecht: Reidel), p. 397
Verstraete L., Puget J.L., Falgarone E., et al., 1996, A&A 315, L337
Whitcomb S.E., Gatley I., Hildebrand R.H., Keene J., Sellgren K. & Werner M.W., 1981, ApJ 246, 416
Witteborn F.C., Sandford S.A., Bregman J.D., Allamandola L.J., Cohen M., Wooden D.H. & Graps A.L., 1989, ApJ 341, 270

#### Footnotes

...NEBULAE: ISO is an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA.

...well-fitted: We did not include the S(5) line at 6.91 µm in the fit because its profile is much broader than the other pure H2 rotational lines, suggesting contamination by another line.

...fragmentation: We also note that the 7.7/11.3 µm band ratio in HR 4049 is 4 times that of NGC 7023: such a variation points to very different fractions of ionized PAHs. The fact that the 7.45 and 7.6 µm features are seen in both spectra reinforces our conclusion that ionization cannot be the major way of producing these bands.

...band: This is not true for strongly irradiated regions where the AIB spectrum is deeply modified (strong 8.6 and 11.0 µm bands).
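As a numerical check on the emissivity argument in the LWS section: for a dust emissivity proportional to ν, the integrated dust emission scales as T^5, so the fitted 45 K and 30 K temperatures directly give the quoted radiation-field contrast:

```python
# Dust at 45 K (Position 1) vs 30 K (Position 2); emitted power goes as T**5
# for a dust emissivity proportional to frequency.
ratio = (45 / 30) ** 5
print(round(ratio, 1))  # -> 7.6, i.e. the factor of ~8 quoted in the text
```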
https://www.arxiv-vanity.com/papers/hep-ex/0606034/
# Compact storage ring to search for the muon electric dipole moment

A. Adelmann, K. Kirch, G.J.G. Onderwater, T. Schietinger

Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
Kernfysisch Versneller Instituut and University of Groningen, NL-9747AA Groningen, The Netherlands

###### Abstract

We present the concept of a compact storage ring of less than 0.5 m orbit radius to search for the electric dipole moment of the muon by adapting the “frozen spin” method. At existing muon facilities a statistics-limited sensitivity of 5×10^-23 e cm can be achieved within one year of data taking. Reaching this precision would demonstrate the viability of this novel technique to directly search for charged-particle EDMs and already test a number of Standard Model extensions. At a future, high-power muon facility a substantially better statistical reach seems realistic with this setup.

###### Keywords: Electric and magnetic moments, Muons, Storage rings

###### PACS: 13.40.Em, 14.60.Ef, 29.20.Dh

## 1 Motivation

The observed matter-antimatter asymmetry of the Universe is not accounted for by the known extent of CP violation present in the Standard Model (SM) of particle physics. The search for permanent electric dipole moments (EDMs) of fundamental particles is regarded as one of the most promising avenues for finding manifestations of additional CP violation; see, e.g., [1]. As these EDMs violate time-reversal invariance (T) and parity (P), they also violate CP, if CPT invariance is assumed. Various systems have been under investigation for a long time, with limits becoming more and more restrictive, often challenging models beyond the SM. Of particular interest are the searches for the EDM of the neutron [2], the electron [3] and the Hg atom [4]. The muon is the only elementary particle for which the EDM has been measured directly.
Existing limits have been obtained parasitically at storage rings designed to measure the muon’s anomalous magnetic dipole moment. Currently the best limit is set at 95% confidence level [5]. This leaves the muon EDM as one of the least tested observables in the realm of the SM, which predicts a negligibly small value [6]. For the muon there is no ongoing competitive dedicated search for its EDM. Lepton universality, together with the best current limit on the electron EDM [3], suggests a stringent limit on the muon EDM. There are, however, a number of models in which flavor-violating effects lead to a significant modification of this naive mass scaling (see, e.g., [7, 8, 9, 10, 11, 12, 13]). While the new limits on the branching fraction set by the BABAR and Belle collaborations [14] call for a reappraisal of the predictions of these models, a sizable muon EDM still seems possible. Additional strong motivation to search for a muon EDM arises from the result of the Brookhaven muon (g−2) experiment, which challenges the SM prediction with a deviation of about three standard deviations [15, 16]. First of all, it is well known (see, e.g., [17]) and has been re-emphasized [9, 18] that the (g−2) experiment by itself cannot exclude an EDM contribution to the observed precession frequency. Amazingly, if the experiment observes a beyond-SM effect due to new physics, it could be entirely due to an EDM. Although a muon EDM that large seems very unlikely, there is no solid theoretical argument against it either, and only an improvement of the experimental limit can settle the issue. Secondly, it has been pointed out [9] that if the deviation suggested by the Brookhaven measurement is real, one should generically also expect a muon EDM of comparable new-physics origin. This situation calls for a dedicated experimental search for the muon EDM with significantly improved sensitivity.
In this Letter we introduce an almost table-top setup to perform such a search. It is based on a storage ring with an orbit of less than 0.5 m radius, and employs the so-called “frozen spin” technique introduced by Farley et al. [19]. We show that this experiment would have an intrinsic sensitivity comparable to that of a 7 m radius ring proposed in the past [20, 21]. We address injection into such a small ring and evaluate those systematic errors which depend on the muon momentum. We find that at existing muon beam facilities a sensitivity of 5×10^-23 e cm can be reached in a year with “one-muon-at-a-time”. This experiment would be statistics limited and could therefore be further improved by one or more orders of magnitude once new strong pulsed muon sources become available.

## 2 Method and sensitivity

The basic idea of the “frozen spin” method [19] is to cancel the regular (g−2) spin precession in a magnetic storage ring by the addition of a radial electric field. In the presence of a non-zero EDM, the spin will precess around the direction of the (motional) electric field,

$$\vec{\omega}_e = \frac{\eta}{2}\frac{e}{m}\left(\vec{\beta}\times\vec{B}+\vec{E}\right). \qquad (1)$$

In the absence of longitudinal magnetic fields, the precession due to the anomalous magnetic moment is given by

$$\vec{\omega}_a = \frac{e}{m}\left[a\vec{B}-\left(a-\frac{1}{\gamma^2-1}\right)\vec{\beta}\times\vec{E}\right] \qquad (2)$$

with $a=(g-2)/2$. One can reduce $\vec{\omega}_a$ to zero by choosing

$$E = \frac{aB\beta}{1-(1+a)\beta^2} \simeq aB\beta\gamma^2, \qquad (3)$$

for $a\beta^2\gamma^2 \ll 1$. Then the only precession is the one of Eq. (1) and the spin is “frozen” when $\eta=0$. For $\eta\neq 0$, the muon spin, initially parallel to the muon momentum, moves steadily out of the plane of the orbit. The observable in the experiment is the up–down counting asymmetry due to the muon decay asymmetry. In the following discussion “positron” will refer to both electrons and positrons originating from muon decays. With polarization $P$, decay asymmetry $A$, lifetime $\tau$ and the number of detected decay positrons $N$, the uncertainty in $\eta$ is to good approximation given by

$$\sigma_\eta = \frac{\sqrt{2}}{\gamma\tau\,(e/m)\,\beta B\,A P\sqrt{N}}, \qquad (4)$$

which suggests that to obtain the best accuracy it is desirable to use a high magnetic field and high-energy muons.
But according to Eq. (3) this would require impractically large electric fields. Expressing Eq. (4) in terms of $E$ from Eq. (3),

$$\sigma_\eta = \frac{\sqrt{2}\,a c\,\gamma}{\tau\,(e/m)\,E\,A P\sqrt{N}}, \qquad (5)$$

we see that the boundary condition of a practically limited electric field strength actually favors low values of $\gamma$. Consequently, we consider the use of a low-momentum muon beam and, as a concrete example, the PSI µE1 beamline [22]. Depending on the mode of operation, one obtains a high rate of 125 MeV/c muons from backward decaying pions. The muons arrive in bunches every 19.75 ns (corresponding to the accelerator frequency) with a burst width slightly below 4 ns [23]. The muon polarization of such backward-decay muons is high; together with the decay asymmetry it enters the sensitivity through the product $AP$. We consider a scenario with moderate magnetic and electric fields, corresponding to a ring radius below 0.5 m. Choosing a moderate value for the $B$-field allows the use of a normal-conducting magnet to switch the field polarity reasonably fast. The muon momentum and the strength of the magnetic field fix the electric field strength at a value that can be readily achieved. Our choice of parameters results in an intrinsic sensitivity of

$$\sigma_{d_\mu} \simeq 1.1\times 10^{-16}\ e\,\mathrm{cm}/\sqrt{N}. \qquad (6)$$

This is comparable to that presented in Ref. [19]. The idea for the operation of the experiment at the µE1 beam line is to use one muon at a time in the storage ring and observe its decay before the next muon is injected. This way, the high beam intensity is traded off for beam quality, and muons suitable for injection can be selected. Assuming an injection latency of 1 µs and an average observation time of a few muon lifetimes, the number of decays detected in a year of running leads to

$$\sigma_{d_\mu} \simeq 5\times 10^{-23}\ e\,\mathrm{cm}. \qquad (7)$$

## 3 Storage ring injection

Injection into a compact storage ring is a significant challenge. In our application, the velocity of the 125 MeV/c muons is about 23 cm/ns, corresponding to a revolution time of about 11 ns.
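The kinematic numbers quoted in Sections 2–3 can be cross-checked from the one beam parameter that survives in the text, the 125 MeV/c muon momentum. A sketch; the orbit radius and the B-field below are my assumptions, chosen only to be consistent with the quoted "less than 0.5 m" orbit and ~11 ns revolution time, since the paper's own values did not survive extraction:

```python
import math

p = 125.0              # muon momentum, MeV/c (stated in the text)
m = 105.658            # muon mass, MeV/c^2
c_cm_ns = 29.9792458   # speed of light in cm/ns

E_tot = math.hypot(p, m)   # total energy, MeV
gamma = E_tot / m          # ~1.55
beta = p / E_tot           # ~0.76

v = beta * c_cm_ns         # ~23 cm/ns, as quoted
r = 0.42                   # assumed orbit radius in m (consistent with < 0.5 m)
T_rev = 2 * math.pi * r * 100 / v   # revolution time in ns, ~11 ns as quoted

# Frozen-spin radial E field, Eq. (3) in SI form E ~ a*B*beta*gamma^2*c,
# for an illustrative B of 1 T (assumption, not a value from the text):
a = 1.1659e-3              # muon anomalous magnetic moment
B = 1.0                    # tesla, assumed
E_frozen = a * B * beta * gamma**2 * 299792458.0   # V/m, of order 0.6 MV/m
print(gamma, v, T_rev, E_frozen)
```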
The use of a conventional kicker device faster than the revolution time may not be feasible. Existing devices are at least an order of magnitude slower, although there are promising developments for the International Linear Collider (ILC) [24]. An alternative and viable scheme for particle injection is through a beam resonance. Injection of electrons into small storage rings using 1/2 (and also 2/3) integer resonances has been demonstrated [25, 26] and can be adapted for muons. This injection method is a time reversal of half-integer resonance extraction [27]. In the radial phase space, the separatrix of the half-integer resonance, together with a stable region around the central orbit, is created by a so-called perturbator. The perturbator creates odd multipole fields with field strengths depending on the radial betatron frequency. The muons are injected near the separatrix in the unstable region through an inflector. By ramping down the perturbator field, these muons are captured as the stable region of the phase space expands. The µE1 beam line at PSI delivers muons in a comparatively large transverse phase space when operating within 1% momentum acceptance (FWHM). This phase space can be reduced by a suitable collimation system to fit the acceptance of the resonance injection scheme. We consider a weak focusing storage ring with a radial tune near the half-integer resonance. In Fig. 1 we show the results of a simulation demonstrating twenty-turn injection (red) out of a narrow, i.e., collimated, phase space (blue). The stable part of the phase space, i.e., the observation phase of the muons, is shown in black. The synchronization of the field ramping and the muon injection can be achieved by triggering on an upstream muon entrance telescope, in combination with the accelerator radio frequency and the detection of the previous decay positron (or a suitable time-out). The loss in statistics due to the time needed for ramping up the perturbator is of order 10%.
The decrease of the actual observation time window due to the time needed for reaching the stable orbit is of the same order. Thus a total loss of statistics of about 15% is expected. The necessity to synchronize the perturbator ramping with a “good” muon comes from the low intensity of the existing muon beam (here the chance to have an acceptable muon in a 20 ns time window is only about 2%). Assuming a pulsed muon source of much larger intensity, the ramping of the perturbator can instead be synchronized to the machine frequency, which simplifies the injection. Many muons within one bunch would then be captured into the stable orbit.

## 4 Polarimetry

The muon spin orientation is reconstructed from the distribution of the decay positrons. Due to the magnetic field of the storage ring, the positrons are bent towards the inner side of the ring. Both the efficiency for detecting a decay positron and the analyzing power, i.e., the sensitivity to the muon polarization, must be optimized. The simplest and most straightforward detection system only distinguishes upward- and downward-going positrons. The number of upward versus downward going positrons is independent of the muon energy. In this case, the analyzing power for a vertical muon spin component is modest, while efficiencies can reach several tens of percent. It was shown in [5] that detecting the vertical positron angle is less prone to systematic errors. The additional information on the positron also improves the statistical sensitivity of the experiment. Contrary to the up–down counting asymmetry, the vertical angle does depend on the muon energy and is inversely proportional to the muon momentum. Because the width of the vertical angle distribution has the same dependence, the relative precision does not depend on the muon momentum. As a guard against systematic errors, however, a larger signal and thus a lower muon momentum is preferred.
## 5 Systematic effects and countermeasures

Two categories of systematic errors can be distinguished: (1) those that lead to an actual growth of the polarization into the vertical plane; and (2) those that lead to an apparent vertical polarization component. In [19], the six dominant effects and their countermeasures are discussed. The setup described in this Letter does not introduce additional sources of systematic error, so these countermeasures are applicable to our setup as well. We briefly discuss those systematic errors which are affected by the lower muon momentum. An important source of systematic error is the existence of an average electric field component $E_v$ along the magnetic field. The resulting false EDM is

$$\eta_{\mathrm{false}} \simeq \frac{2a^2\gamma^2}{\beta}\,\frac{E_v}{E_r}. \tag{8}$$

At our momentum, the increased sensitivity due to the lower $\beta$ is more than compensated by the low $\gamma$, as compared to the experiment suggested in, e.g., [21]. Keeping this systematic error at the level of the statistical reach leads to only a modest requirement on the ratio $E_v/E_r$. Furthermore, when switching from clockwise to counter-clockwise injection, the false EDM remains the same, whereas the true EDM signal changes sign [19]. A net longitudinal magnetic field $B_L$, combined with an initial transverse polarization $P_T$, leads to a false EDM of order

$$\eta_{\mathrm{false}} \simeq \frac{2}{\gamma\beta}\,\frac{B_L}{B_T}\,\frac{P_T}{P_L}, \tag{9}$$

with $B_T$ the main (transverse) field of the ring. Assuming an initial transverse-to-longitudinal polarization of 10% leads to a limit on $B_L$ if this effect is to stay below the statistical uncertainty. The latter corresponds to a current of 13 mA flowing through the orbit of the stored muons, or an electric field change of 2.5 GV/m/s perpendicular to the orbit and synchronized to the measuring cycle. Varying the initial transverse polarization, or allowing a slow residual precession, will expose this component. Since the experimental setup is quite small, shimming undesired field components to an acceptable level will be considerably easier than in a larger setup.
Moreover, matching the relatively modest statistical reach leads to rather relaxed requirements on these field perturbations. Systematic errors of the second category include shifts and rotations of the detectors. The optimal measuring cycle is about two (dilated) lifetimes and thus scales with $\gamma$. For $\eta \neq 0$, the growth of the up–down counting asymmetry is linear in time. Static detector and beam displacements therefore cannot lead to a false EDM signal. The effect of random motion, i.e., motion not correlated with the measurement cycle, is reduced by six orders of magnitude (the square root of the number of measuring cycles), and is therefore also not of any concern. Only if the motion is synchronous with the measuring cycle may false EDM signals appear. The detector position relative to the average muon decay vertex determines the systematic error, so both detector and beam motion must be considered. Especially the latter is of concern, because it is intrinsically synchronized with the measurement. For a vertical displacement $\delta l$, the resulting false EDM when using the up–down counting ratio is

$$\eta_{\mathrm{false}} \simeq \frac{10^3\,\gamma\tau}{l}\,\frac{d}{dt}(\delta l), \tag{10}$$

with $l$ the typical scale of the experimental setup. This places a rather stringent limit of order 0.1 µm/s on such synchronous motion. The positron momentum dependences of the true and this false EDM signal are significantly different, so that they can be disentangled.

## 6 Conclusion

We have described a compact muon storage ring based on a novel resonant injection scheme as a viable setup to measure the EDM of the muon using the frozen spin technique. Such a measurement would demonstrate the feasibility of a still unexplored technique for the direct search for EDMs of charged particles and would serve as a stepping stone for future applications of this promising method. At existing muon facilities (PSI µE1 beamline [22]) a sensitivity of $5\times10^{-23}\ e\,\mathrm{cm}$ seems reachable in one year of data taking, an improvement of the existing limit by more than three orders of magnitude.
Already at this level of precision, several interesting physics tests are possible. First, it could unambiguously exclude the EDM as the explanation of the difference between the measured anomalous magnetic moment and its SM prediction. It would furthermore test various SM extensions, in particular those that do not respect lepton universality. In view of the possible advent of new, more powerful pulsed muon sources, the same experimental scheme could be realized with considerably more muons per bunch being injected into the ring. It appears realistic to expect accelerators with repetition rates on the order of 100 kHz and large numbers of muons stored per bunch. The statistical sensitivity of the described approach would then improve by one or more orders of magnitude. Although systematic issues at this level of precision have been discussed in some detail in [19], more detailed studies would be needed.

## Acknowledgements

We are grateful to M. Böge, W. Fetscher, K. Jungmann, S. Ritt, and A. Streun for fruitful discussions. Furthermore, we acknowledge that J.P. Miller independently suggested the use of low-momentum muons to exploit the “frozen-spin” method. The work by C.J.G.O. is funded through an Innovational Research Grant of the Netherlands Organization for Scientific Research (NWO).

## References

• [1] M. Pospelov, A. Ritz, Ann. Phys. 318 (2005) 119. • [2] C.A. Baker, et al., Phys. Rev. Lett. 97 (2006) 131801. • [3] B.C. Regan, et al., Phys. Rev. Lett. 88 (2002) 071805. • [4] M.V. Romalis, W.C. Griffith, J.P. Jacobs, E.N. Fortson, Phys. Rev. Lett. 86 (2001) 2505. • [5] Muon (g−2) Collaboration, G.W. Bennett, et al., submitted to Phys. Rev. D, arXiv:0811.1207. • [6] M.E. Pospelov, I.B. Khriplovich, Sov. J. Nucl. Phys. 53 (1991) 638. • [7] K.S. Babu, S.M. Barr, I. Dorsner, Phys. Rev. D 64 (2001) 053009. • [8] K.S. Babu, B. Dutta, R.N. Mohapatra, Phys. Rev. Lett. 85 (2000) 5064. • [9] J.L. Feng, K.T. Matchev, Y. Shadmi, Nucl. Phys. B 613 (2001) 366. • [10] A. Romanino, A. Strumia, Nucl.
Phys. B 622 (2002) 73. • [11] A. Pilaftsis, Nucl. Phys. B 644 (2002) 263. • [12] K.S. Babu, J.C. Pati, Phys. Rev. D 68 (2003) 035004. • [13] A. Bartl, W. Majerotto, W. Porod, D. Wyler, Phys. Rev. D 68 (2003) 053005. • [14] BABAR Collaboration, B. Aubert, et al., Phys. Rev. Lett. 95 (2005) 041802; Belle Collaboration, K. Hayasaka, et al., Phys. Lett. B 666 (2008) 16. • [15] Muon (g−2) Collaboration, G.W. Bennett, et al., Phys. Rev. D 73 (2006) 072003. • [16] B.L. Roberts, Nucl. Phys. B Proc. Suppl. 155 (2006) 372. • [17] J. Bailey, et al., J. Phys. G: Nucl. Phys. 4 (1978) 345. • [18] J.L. Feng, K.T. Matchev, Y. Shadmi, Phys. Lett. B 555 (2003) 89. • [19] F.J.M. Farley, et al., Phys. Rev. Lett. 93 (2004) 052001. • [20] Y.K. Semertzidis, et al., arXiv:hep-ph/0012087 (2000). • [21] M. Aoki, et al., J-PARC Letter of Intent: Search for a Permanent Muon Electric Dipole Moment at the Level (2003). • [22] The PSI µE1 beamline, see http://aea.web.psi.ch/beam2lines/beam_mue1.html. • [23] I.C. Barnett, et al., Nucl. Instrum. Methods Phys. Res., Sect. A 455 (2000) 329. • [24] T. Naito, H. Hayano, M. Kuriki, N. Terunuma, J. Urakawa, Nucl. Instrum. Methods Phys. Res., Sect. A 571 (2007) 599. • [25] H. Yamada, Nucl. Instrum. Methods Phys. Res., Sect. B 199 (2003) 509. • [26] D. Hasegawa, et al., Proceedings of the 14th Symposium on Accelerator Science and Technology, Tsukuba, Japan, 2003, p. 111. • [27] T. Takayama, Nucl. Instrum. Methods Phys. Res., Sect. B 24/25 (1987) 420.
https://chemistry.stackexchange.com/questions/48916/hplc-peak-area-vs-concentration
# HPLC: peak area vs concentration

Many textbooks say that there is a proportionality between the peak area of the chromatogram and the concentration. However, in my opinion, there should be a proportionality between the concentration and the peak intensity, not the peak area. The peak intensity increases proportionally with the absorbance of the substance. Hence, by the Beer-Lambert law, we can deduce that there is a proportionality between peak intensity and concentration. Is there any mathematical relation between the peak area and the concentration? Any help will be appreciated.

• Welcome to chemistry.SE! If you have any questions about the policies of our community, please visit the help center. Apr 3, 2016 at 18:43
• You asked the wrong question. The question should be "Why isn't peak intensity as good as peak area to determine concentration using HPLC?" – MaxW Apr 3, 2016 at 18:47
• @MaxW That can be an answer to my question, but I wonder if we can deduce the relation between them mathematically, using the fact that the area is the integral of the peak intensity with respect to time. Apr 3, 2016 at 18:51
• Can you present a correlation of concentration and area used to carry out the validations? Jun 17, 2021 at 4:05

> However, in my opinion, there should be a proportionality between the concentration and the peak intensity, not the peak area.

There is a proportionality between both peak area vs. concentration and peak height vs. concentration.

1. Peak height is proportional to the instantaneous amount of analyte that is transiting the detector.
2. Peak area is proportional to the sum of all of the analyte molecules that have transited the detector.
3. From 1 and 2 you might be able to infer the relationship between peak height $h$ and area $A$: $$A = \int{h(t)\;dt}$$
4. People are usually interested in the total amount of substance injected into the column.
(If they know the injection volume, they can calculate the concentration from this value.) The total amount of substance would be calculated from the peak area.

5. The maximal peak height is proportional to the peak area, but only if the peak "shape" is constant.

Here are some scenarios where peak shape will not be constant:

1. You want to compare injection #2 you made on your column two years ago to injection #2000 that you made recently. Due to column degradation, the recent injections have much more noticeable peak tailing. Because the longer tail leads to wider, asymmetric peaks, there will be less total height at the peak maximum relative to "better", symmetric peaks.
2. You change the flow rate of the HPLC. All peaks are narrower, and thus higher.
3. The scan rate of your detector changes, and the detector reports "counts" or "intensity" rather than counts per time. This is actually very common for mass spectrometers, but less common for absorbance detectors. If you scan twice as fast in MS, your peak heights go down by approximately twofold, but you have data points twice as often, so the peak area is relatively unchanged.
4. Your detector undersamples the peak. Say your chromatography is nice and the "real" peak shape is nicely Gaussian, but is ~ten seconds wide and your detector only gives you a data point every two seconds. Say that due to very small random drifts in retention time, on some injections one of the data points coincides exactly with the maximum of the "real" peak, but in others, it is a little bit off. The peak heights in this scenario will vary considerably more than the peak areas. (If the recorded maximum is off-center from the true maximum, there will be two data points that are higher, i.e., "closer" to the maximum than if the data maximum coincided with the true maximum, so integration partially corrects this error.)
Essentially, the peak maximum is a single-point sample from the Gaussian distribution of the peak, while the area is a several-point sample from that distribution, so it has better sampling properties.

> Is there any mathematical relation between the peak area and the concentration?

Yes, there is, but it depends on the peak shape. For perfectly Gaussian peaks, $$h_{max} = \frac{A}{\sigma \sqrt{\tau}}$$ where $\tau$ is $2 \pi$ and $\sigma$ is the width of the peak, which is related to the full width at half maximum by $\mathrm{FWHM} = 2 \sigma \sqrt{2 \ln 2}$. However, for non-Gaussian peaks, this relationship does not hold.

• Does this mean that I can't use absorbance intensity (maximum peak height) to make my concentration calibration curve if I have a peak that's not bb (like vb, bv, or vv)? Oct 25, 2016 at 23:31
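A small numerical sketch (Python, mine rather than from the thread) of the Gaussian case: direct integration reproduces $A = h_{max}\,\sigma\sqrt{2\pi}$, and a peak twice as wide with half the height has the same area.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 60001)   # time axis [s]
t0 = 30.0                           # retention time [s]

def peak(h_max, sigma):
    """Gaussian detector trace h(t) = h_max * exp(-(t - t0)^2 / (2 sigma^2))."""
    return h_max * np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))

def area(h):
    """Trapezoidal integration of the trace."""
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t)))

sigma = 2.0
narrow = peak(1.0, sigma)
wide = peak(0.5, 2.0 * sigma)       # half the height, twice the width

print(area(narrow), sigma * np.sqrt(2.0 * np.pi))  # numerical vs analytic area
print(area(wide))                                  # same area as the narrow peak
```

The two printed areas agree, which is exactly why area is the more robust quantity when peak width varies between runs.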
http://mathhelpforum.com/calculus/87042-error-bound-taylor-polynomial.html
# Math Help - error bound taylor polynomial

1. ## error bound taylor polynomial

Use Taylor's theorem to bound the error in approximating the function $f(x) = e^x$ with the Maclaurin polynomial $M_6(x)$ on the interval $[-1,1]$.

The formula for this type of thing is $|f(x) - P_n(x)| \leq \frac {K_{n + 1}} {(n + 1)!}|x - x_0|^{n + 1}$

the max bound is $K_{n + 1} = e^1$, with $x_0 = 0$, $n = 6$, so

$\frac {e^1} {7!} |-1|^7 \approx 5.39 \times 10^{-4}$

Is this correct?

2. Originally Posted by diroga

> The formula for this type of thing is $|f(x) - P_n(x)| \leq \frac {K_{n + 1}} {(n + 1)!}|x - x_0|^{n + 1}$ the max bound is $K_{n + 1} = e^1$, $x_0 = 0$, $n = 6$, $\frac {e^1} {7!} |-1|^7 \approx 5.39 \times 10^{-4}$. Is this correct?

More or less, but you need to improve your notation and explain what things are. CB
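The bound checks out numerically; here is a short Python sketch comparing it with the actual worst-case error of the degree-6 Maclaurin polynomial on [-1, 1]:

```python
import math

# Taylor bound: |e^x - P_6(x)| <= K_7 / 7! * |x|^7 with K_7 = e on [-1, 1]
bound = math.e / math.factorial(7)

def p6(x):
    """Degree-6 Maclaurin polynomial of e^x."""
    return sum(x ** k / math.factorial(k) for k in range(7))

# Worst actual error on a fine grid over [-1, 1]
worst = max(abs(math.exp(x) - p6(x)) for x in (i / 1000.0 - 1.0 for i in range(2001)))

print(f"bound = {bound:.3e}")   # ~5.39e-04
print(f"worst = {worst:.3e}")   # the true error is somewhat smaller, as expected
```

The observed maximum error (attained near x = 1) stays below the Taylor bound, confirming the calculation.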
https://ta.wikipedia.org/wiki/%E0%AE%8F%E0%AE%B0%E0%AF%8D%E0%AE%9F%E0%AF%8D%E0%AE%9A%E0%AF%81
# Hertz

Lights flash at frequency f = 0.5 Hz (Hz = hertz), 1.0 Hz, or 2.0 Hz, where $x$ Hz means $x$ flashes per second. T is the period, and T = $y$ s (s = seconds) means that $y$ is the number of seconds per flash. T and f are multiplicative inverses: f = 1/T and T = 1/f.
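A one-loop Python illustration of the reciprocal relation for the three flash frequencies in the example:

```python
# f [Hz] and T [s] are multiplicative inverses: f = 1/T and T = 1/f
for f in (0.5, 1.0, 2.0):
    T = 1.0 / f
    print(f"f = {f} Hz  ->  T = {T} s per flash")
```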
http://math.stackexchange.com/questions/137504/prove-that-ri-r-in-r-mid-xr-in-i-text-for-every-x-in-r-is-an-ide
# Prove that $[R:I] =\{r \in R\mid xr \in I\text{ for every }x \in R\}$ is an ideal of $R$ that contains $I$

If $I$ is an ideal in a ring $R$, let $[R:I] =\{r \in R\mid xr \in I\text{ for every }x \in R\}$. How can I show that $[R:I]$ is an ideal of $R$ which contains $I$?

- Welcome to math.SE: since you are a new user, I wanted to let you know a few things about the site. In order to get the best possible answers, it is helpful if you say what your thoughts on it are so far; this will prevent people from telling you things you already know, and help them write their answers at an appropriate level. –  Zev Chonoles Apr 27 '12 at 0:19

- Hint: First show that $I$ is contained in here (use the fact that $I$ is an ideal). Then the ideal part should follow immediately, since for $r\in [R : I]$ and any $x\in R$ we have $xr \in I \subset [R:I]$. –  Deven Ware Apr 27 '12 at 0:39

- You have swapped the "numerator" and the "denominator" in your definition of what is sometimes called a colon ideal. The correct notation is $$(I:R) =\{r \in R\mid xr \in I\text{ for every }x \in R\}$$ –  Georges Elencwajg Apr 27 '12 at 15:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657273650169373, "perplexity": 221.69007816383098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276250.57/warc/CC-MAIN-20140728011756-00413-ip-10-146-231-18.ec2.internal.warc.gz"}
http://advances.sciencemag.org/content/2/6/e1600295.full
Research Article | Condensed Matter Physics

# A strongly robust type II Weyl fermion semimetal state in Ta3S2

Vol. 2, no. 6, e1600295

## Abstract

Weyl semimetals are of great interest because they provide the first realization of the Weyl fermion, exhibit exotic quantum anomalies, and host Fermi arc surface states. The separation between Weyl nodes of opposite chirality gives a measure of the robustness of the Weyl semimetal state. To exploit the novel phenomena that arise from Weyl fermions in applications, it is crucially important to find robust, well-separated Weyl nodes. We propose a methodology to design robust Weyl semimetals with well-separated Weyl nodes. Using this methodology as a guideline, we search the material parameter space and identify by far the most robust and ideal Weyl semimetal candidate in the single-crystalline compound tantalum sulfide (Ta3S2), with new and novel properties beyond TaAs. Crucially, our results show that Ta3S2 has the largest k-space separation between Weyl nodes among known Weyl semimetal candidates, about twice the measured value in TaAs and 20 times the predicted value in WTe2. Moreover, all Weyl nodes in Ta3S2 are of type II. Therefore, Ta3S2 is a type II Weyl semimetal. Furthermore, we predict that increasing the lattice constant by <4% can annihilate all Weyl nodes, driving a novel topological metal-to-insulator transition from a Weyl semimetal state to a topological insulator state. The robust type II Weyl semimetal state and the topological metal-to-insulator transition in Ta3S2 are potentially useful in device applications. Our methodology can be generally applied to search for new Weyl semimetals.

Keywords

• Weyl fermion
• Fermi arc
• Topology

## INTRODUCTION

The rich correspondence between high-energy particle physics and low-energy condensed matter physics has been a constant source of inspiration throughout the history of modern physics (1).
This has led to important breakthroughs in many aspects of fundamental physics, such as the Planck constant and blackbody radiation, the Pauli exclusion principle and magnetism, and the Anderson-Higgs mechanism and superconductivity, which, in turn, helped us understand materials that can lead to important practical applications. Recently, there has been significant interest in realizing high-energy particles in solid-state crystals. The discovery of massless Dirac fermions in graphene and on the surface of topological insulators has taken the center stage of research in condensed matter and materials science for the past decade (2–5). Weyl semimetals (6–21) are crystals whose quasi-particle excitation is the Weyl fermion (6), a particle that played a crucial role in the development of quantum field theory and the Standard Model but has not yet been observed as a fundamental particle in nature. Weyl fermions have a definite left- or right-handed chirality and can be combined in pairs of opposite chirality to generate a massless Dirac fermion. In a Weyl semimetal, the chirality associated with each Weyl node can be understood as a topologically protected charge, thus broadening the classification of topological phases of matter beyond insulators. The presence of parallel electric and magnetic fields can break the apparent conservation of the chiral charge, which results in the condensed matter version of the chiral anomaly, making a Weyl semimetal, unlike ordinary nonmagnetic metals, more conductive with an increasing magnetic field (22, 23). Weyl nodes are extremely robust against imperfections in the host crystal and are protected by the crystal's inherent translational invariance (12). This gives rise to an exceptionally high electron mobility, suggesting that Weyl semimetals may be used to improve electronics by more efficiently carrying electric currents (24).
Because Weyl fermion quasi-particles are naturally spin-momentum locked (12, 14, 15) and superconductivity in these materials may exhibit non-Abelian statistics (25–27), they may also be exploited to realize new applications, such as spintronics and quantum computers. Furthermore, a monolayer [the two-dimensional (2D) limit] of time-reversal breaking Weyl semimetals can host a quantized anomalous Hall (or spin Hall) current without an external magnetic field. To make these novel phenomena experimentally accessible, especially under ambient conditions so that they can be used in device applications, a robust Weyl semimetal with well-separated Weyl nodes is critically needed. Recently, the first Weyl semimetal was discovered in the TaAs (tantalum arsenide) family (17–19, 21, 28–37). However, research progress is still significantly held back because of the lack of robust and ideal material candidates. In a Weyl semimetal, Weyl nodes of opposite chirality are separated in momentum space. The degree of separation between Weyl nodes provides a measure of the “topological strength” of the Weyl phase (38) that one has to overcome to annihilate the Weyl fermions in pairs. A large k-space separation of the Weyl nodes guarantees a robust and stable Weyl semimetal state, which is a prerequisite for observing the many exotic phenomena predicted to be detectable in spectroscopic and transport experiments. Therefore, it is of critical importance to find robust and ideal Weyl semimetals, which have fewer Weyl nodes and, more importantly, whose Weyl nodes are well separated in momentum space and located near the chemical potential in energy. Moreover, in contrast to the Weyl fermions in high-energy physics, which travel exactly at the speed of light and strictly obey Lorentz invariance, the emergent Weyl fermions in a Weyl semimetal are not subject to these restrictions.
It has been recently proposed that the emergent Weyl fermions in a Weyl semimetal can be classified into two types (39). The type I Weyl fermions, which have been realized in TaAs (17–19, 21, 28–37), respect Lorentz symmetry and have a typical conical dispersion. On the other hand, the type II Weyl fermions strongly violate Lorentz symmetry and manifest in a tilted-over cone in energy-momentum space. Such a type II Weyl semimetal state not only provides a material platform for testing exotic Lorentz-violating theories beyond the Standard Model in tabletop experiments but also paves the way for studying novel spectroscopic and transport phenomena specific to type II Weyl fermions, including the chiral anomaly (whose transport response strongly depends on the direction of the electric current), an antichiral effect of the chiral Landau level, a modified anomalous Hall effect, and emergent Lorentz invariant properties due to electron-electron interaction (38–41). To date, the type II Weyl semimetal state has only been suggested in W1−xMoxTe2 (38, 39) and observed in LaAlGe (41). Therefore, it is of importance to search for new type II Weyl semimetals. Here, we propose a methodology to design and search for robust Weyl semimetals with well-separated Weyl nodes. Using this methodology as a guideline, we identify by far the most robust and ideal Weyl semimetal candidate in the single-crystalline compound tantalum sulfide (Ta3S2), with new and novel properties beyond TaAs. Crucially, our results show that Ta3S2 has the largest k-space separation between Weyl nodes among known Weyl semimetal candidates, about twice the measured value in TaAs and 20 times the predicted value in WTe2. Moreover, all Weyl nodes in Ta3S2 are of type II. Therefore, Ta3S2 is a type II Weyl semimetal. We further predict a novel topological metal-to-insulator transition from a Weyl semimetal state to a topological insulator state in Ta3S2.
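The type I/type II distinction can be illustrated with a minimal tilted-cone toy model (a generic sketch, not the actual Ta3S2 band structure): along the tilt direction the two bands of $H(k_z) = w\,k_z\,\sigma_0 + v\,k_z\,\sigma_z$ have slopes $w \pm v$, and the cone tips over into type II once $|w| > |v|$, i.e., when both crossing bands share the same sign of velocity.

```python
def band_slopes(w, v):
    """Slopes of the bands E(kz) = (w + v) kz and (w - v) kz of a tilted cone."""
    return w + v, w - v

def weyl_type(w, v):
    """Type I if the bands disperse in opposite directions, type II otherwise."""
    s_plus, s_minus = band_slopes(w, v)
    return "II" if s_plus * s_minus > 0 else "I"

print(weyl_type(w=0.2, v=1.0))   # I: mild tilt, an upright cone
print(weyl_type(w=1.5, v=1.0))   # II: tilt dominates, a tipped-over cone
```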
The robust type II Weyl semimetal state and the topological metal-to-insulator transition in Ta3S2 are potentially useful in device applications. Our methodology can be generally applied to search for new Weyl semimetals.

## RESULTS

We start by describing our methodology to design robust Weyl semimetals with well-separated Weyl nodes. It has been widely accepted that strong spin-orbit coupling (SOC) is a key ingredient to realizing topological states. Our methodology departs from this commonly accepted point of view. We show that, to design robust Weyl semimetals with well-separated Weyl nodes, one needs to look for materials (i) that break space-inversion symmetry, (ii) that have a small density of states (DOS) at the chemical potential, and (iii) that are already a Weyl semimetal in the absence of SOC. SOC, on the other hand, does not play a significant role in the whole consideration. We elaborate on our methodology in Fig. 1. Figure 1 (A and B) shows the previous way of looking for Weyl semimetals. Specifically, without SOC, the conduction and valence bands show some nodal crossings (which are not Weyl nodes). The inclusion of SOC splits each nodal point into a pair of Weyl nodes of opposite chiralities. In this way, the separation between the Weyl nodes is entirely determined by the SOC strength of the compound. For example, the first and so far only Weyl semimetal realized in experiments, TaAs, belongs to this type (17, 19). TaAs has almost the strongest SOC that one could achieve in real materials. Even then, the separation was only barely resolved in experiments (19). Finding Weyl semimetals with larger separation than TaAs using the previous method is therefore not possible. By contrast, in Fig. 1 (C and D), we present a new methodology. We propose to look for compounds that are already a Weyl semimetal without SOC (Fig. 1C). The inclusion of SOC will split each Weyl node into two nodes of the same chirality. In this way, SOC becomes irrelevant.
The separation between the Weyl nodes of opposite chiralities is instead determined by the magnitude of the band inversion, which is not limited by the SOC strength and can be very large. Therefore, our new methodology can give rise to robust Weyl semimetals with well-separated Weyl nodes. Using this methodology as a guideline, we have searched the material parameter space and identified by far the most robust and ideal Weyl semimetal candidate in the inversion-breaking, single-crystalline compound Ta3S2, with novel properties beyond those of TaAs. Ta3S2 crystallizes in a base-centered orthorhombic structure (42, 43). Single crystals of this compound have been grown (42, 43), and transport experiments have indeed reported semimetallic behavior (42). The lattice constants are a = 5.6051 Å, b = 7.4783 Å, and c = 17.222 Å, and the space group is Abm2 (#39). There are 24 Ta atoms and 16 S atoms in a conventional unit cell (Fig. 2, A and B). It can be seen that the lattice lacks space-inversion symmetry, which is key to realizing the Weyl semimetal state in this time-reversal invariant system. Moreover, the system has two glide mirror symmetries associated with the y and z directions, but it does not have any mirror symmetry along the x direction. This symmetry condition determines the number, energy, and momentum-space configuration of the Weyl nodes in Ta3S2, as discussed below. Figure 2 (D and F) shows the first-principles band structure calculated in the absence of SOC, from which it can be seen that the conduction and valence bands dip into each other, suggesting a semimetallic ground state. In particular, we find that the conduction and valence bands cross each other without opening a gap along the X-Γ-Z-X1 direction. Upon the inclusion of SOC (Fig. 2, E and G), the band structure is found to be fully gapped along all high-symmetry directions.
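The band-inversion mechanism described above can be checked on a minimal two-band toy model. The Hamiltonian below is our own illustrative construction, not a model of Ta3S2: its purpose is only to show that the node separation is fixed by the band-inversion strength m, with no SOC-like scale entering at all.

```python
import numpy as np

# Minimal two-band toy model (an illustration, NOT the Ta3S2 Hamiltonian):
#   H(k) = v*kx*sx + v*ky*sy + (m - kz^2)*sz
# The bands touch at kx = ky = 0, kz = +/-sqrt(m): a pair of Weyl nodes
# whose separation 2*sqrt(m) is set by the band-inversion strength m alone.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def gap(kx, ky, kz, m, v=1.0):
    """Direct gap of H(k) at one momentum point."""
    Hk = v*kx*sx + v*ky*sy + (m - kz**2)*sz
    e = np.linalg.eigvalsh(Hk)
    return e[1] - e[0]

m = 0.25
kz_grid = np.linspace(-1.0, 1.0, 2001)
gaps = np.array([gap(0.0, 0.0, kz, m) for kz in kz_grid])
nodes = kz_grid[gaps < 1e-3]           # momenta where the bands touch
separation = nodes.max() - nodes.min()
print(separation, 2*np.sqrt(m))        # separation tracks 2*sqrt(m) = 1.0
```

Increasing m widens the separation without any SOC-like term; taking m negative removes the crossing entirely and gaps the spectrum.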
To search for the Weyl nodes in Ta3S2, we calculated the band structure throughout its Brillouin zone (BZ). In the absence of SOC (Fig. 3A), we found a line node on the ky = 0 plane, corresponding to the band crossings along the X-Γ-Z-X1 direction shown in Fig. 2D. Because this line node lies on the ky = 0 plane, it is protected by the associated glide mirror symmetry. In addition, we also found two pairs of Weyl nodes located on the kx = 0 plane (Fig. 3A). We determine the chirality of each Weyl node by computing the flux of Berry curvature through a closed two-dimensional manifold enclosing the node. Considering the available symmetries discussed above, Ta3S2 has only one irreducible pair of Weyl nodes; the second pair is obtained by applying the glide mirror operation associated with the y direction. In general, a mirror symmetry operation reflects a Weyl node on one side of the mirror plane to the mirror-reflected location on the other side while also flipping the sign of the chiral charge. Hence, the two pairs of Weyl nodes without SOC are directly related by this mirror operation. Upon the inclusion of SOC, each Weyl node without SOC splits into two spinful Weyl nodes of the same chirality. This is intuitive because each state without SOC should be considered as two states of opposite spins. For this reason, there are four pairs of Weyl nodes in the presence of SOC. Again, there is only one irreducible pair; the others are related by the mirror operations. Also, because of the mirror symmetries, all the Weyl nodes in Ta3S2 have the same energy. We show the dispersion away from a Weyl node along all three momentum-space directions in Fig. 3 (C and D). It can be seen that the Weyl nodes in Ta3S2 are of type II (39), because the two bands that cross to form the Weyl nodes have the same sign of velocity along one momentum direction (in this case, ky). In the presence of SOC (Fig.
3D), the Weyl nodes are approximately 10 meV below the Fermi level, in contrast to the case of the MoxW1−xTe2 systems (38, 39). This makes Ta3S2 more promising than MoxW1−xTe2 (38, 39) for observing the type II Weyl nodes in photoemission experiments. The k-space separation of the Weyl nodes (Fig. 3D) in Ta3S2 is as large as ~0.15 Å−1. This is by far the largest among known Weyl semimetal candidates and, in fact, about twice the measured value in TaAs (about 0.07 to 0.08 Å−1) (19) and 20 times the predicted value in WTe2 (~0.007 Å−1) (39). The fact that the Weyl nodes are well separated in momentum space and located near the chemical potential makes Ta3S2 by far the most robust and ideal Weyl semimetal candidate for observing and realizing novel Weyl physics in both spectroscopic and transport experiments. Another signature of the Weyl semimetal state is the Fermi arc electron states on the surface of the crystal. Figure 4 shows the calculated surface-state electronic structure of Ta3S2. The calculated surface-state Fermi surface (Fig. 4, A and D) shows a rich structure, including both topological Fermi arcs and topologically trivial surface states. Because all the Weyl nodes are type II, there are finite projected bulk Fermi surfaces, shown as the shaded areas in Fig. 4C. It is known that, at the energy of a type II Weyl node, the bulk Fermi surface is not an isolated point but a touching point between an electron and a hole pocket (39). The Fermi surface does not respect mirror symmetry along either surface momentum axis. As shown in Fig. 4A, one axis is the projection of the kx = 0 plane, which is not a mirror plane; the other is the projection of the ky = 0 plane, which is indeed a mirror plane. However, the latter corresponds to a glide mirror operation, and the surface therefore breaks this glide mirror symmetry. To visualize the Fermi arc surface states, Fig. 4B shows the energy dispersion cut along Cut1 (denoted by the red dashed line in Fig.
4A). In both box 1 and box 2, we clearly see that a surface state terminates directly onto a Weyl node, which is the touching point between the shaded areas. This calculation demonstrates the existence of the Fermi arc surface states. Specifically, we label the two surface states in box 1 (upper right panel in Fig. 4B) as β and α from left to right. We see that β is the Fermi arc. In Fig. 4D, we show the high-resolution Fermi surface zoomed in near this region. The Weyl node that corresponds to box 1 is the black dot directly above the bottom-leftmost black dot in Fig. 4D. The two surface states, α and β, are identified, and indeed the left surface state, β, is the Fermi arc terminating onto this Weyl node. Through similar analyses, we can determine that α is the Fermi arc corresponding to the bottom-leftmost Weyl node. On the basis of these analyses, we show the determined Fermi arc connectivity in Fig. 4E. We emphasize that the topological band theory of the Weyl semimetal phase only requires that the number of Fermi arcs terminating on a given projected Weyl node equal the absolute value of its chiral charge. The detailed connectivity pattern can vary based on surface conditions, such as surface potential, surface relaxation, and surface density. Hence, the purpose of Fig. 4E is to show the existence of Fermi arcs, which are a key signature of the Weyl semimetal state in Ta3S2. The details of the surface electronic structure, including the connectivity pattern, depend on the surface conditions and have to be determined by experiments. We now show the topological metal-to-insulator transition in Ta3S2 in Fig. 5. To best visualize the transition, we show the band dispersion along a cut that goes through an irreducible pair of Weyl nodes (as defined by the red dashed line in Fig. 6D). In Fig. 5A, we show this cut for different values of the lattice constant b.
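Two statements made earlier can be illustrated on a toy tilted Weyl Hamiltonian: the chiral charge as the Berry-curvature flux through a small closed surface enclosing a node, and the type II criterion that both crossing bands share the sign of their velocity along one momentum direction. The model below is our own construction, not the Ta3S2 band structure, and the tilt direction is chosen for convenience.

```python
import numpy as np

# Toy tilted Weyl Hamiltonian (an illustration, not the Ta3S2 model):
#   H(k) = w*ky*I + v*(kx*sx + ky*sy + kz*sz)
# The tilt term w*ky*I leaves the eigenvectors (hence the chiral charge)
# unchanged, but for w > v both crossing bands disperse upward along +ky,
# which is exactly the type II criterion quoted in the text.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def H(kx, ky, kz, v=1.0, w=2.0):
    return w*ky*I2 + v*(kx*sx + ky*sy + kz*sz)

def lower_state(kx, ky, kz):
    _, vecs = np.linalg.eigh(H(kx, ky, kz))
    return vecs[:, 0]                  # eigenvector of the lower band

def chirality(r=0.1, n=40):
    """Berry flux of the lower band through a small sphere around the node,
    via the gauge-invariant plaquette (Fukui-Hatsugai) construction."""
    th = np.linspace(1e-3, np.pi - 1e-3, n)
    ph = np.linspace(0, 2*np.pi, n, endpoint=False)
    u = [[lower_state(r*np.sin(t)*np.cos(p),
                      r*np.sin(t)*np.sin(p),
                      r*np.cos(t)) for p in ph] for t in th]
    flux = 0.0
    for i in range(n - 1):
        for j in range(n):
            jp = (j + 1) % n
            link = (np.vdot(u[i][j],    u[i][jp]) *
                    np.vdot(u[i][jp],   u[i+1][jp]) *
                    np.vdot(u[i+1][jp], u[i+1][j]) *
                    np.vdot(u[i+1][j],  u[i][j]))
            flux += np.angle(link)
    return flux / (2*np.pi)            # total flux / 2*pi = chiral charge

print(round(chirality()))              # +1 or -1, depending on convention

# Type II check: slopes of both bands on the +ky side of the node.
e1 = np.linalg.eigvalsh(H(0, 1e-4, 0))
e2 = np.linalg.eigvalsh(H(0, 2e-4, 0))
slopes = (e2 - e1) / 1e-4              # dE/dky = w -/+ v = 1 and 3
print(np.all(slopes > 0))              # True: both velocities share a sign
```

The plaquette sum is gauge invariant, so the arbitrary phases attached to the eigenvectors returned by `eigh` drop out of each closed link product.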
At the original lattice constant b′ = b, we indeed see a pair of type II Weyl nodes, as expected. As we increase the lattice constant by 3.0% (b′ = 1.030b), the Weyl nodes approach each other and their separation decreases by half. As we further increase the lattice constant to b′ = 1.040b, the two Weyl nodes annihilate each other and the band structure becomes fully gapped. By a careful calculation, we determined that the lattice constant corresponding to the critical point is b′ = 1.037b. The resulting fully gapped state for b′ > 1.037b has two possible fates: either a trivial insulator or a topological insulator. We have calculated the Wilson loop of the Wannier charge centers on the kz = 0 plane and on the kz = π plane (Fig. 5B), from which we determined that the gapped state for b′ > 1.037b is a topological insulator with indices (1;000). Therefore, by increasing the lattice constant b, one can realize a topological phase transition from a Weyl semimetal state to a topological insulator state in Ta3S2. The corresponding evolution of the surface electronic structure is shown in Fig. 5C. The projected Weyl nodes of opposite chirality, which are connected by the Fermi arcs, approach each other and eventually meet on a high-symmetry axis, where they annihilate. The resulting surface has a single surface state whose Fermi surface encloses the Kramers point, which also demonstrates the topological insulator state. Besides the topological phase transition, we found that the system exhibits other important tunabilities. Specifically, although the Weyl nodes at the original lattice constant are type II, they become type I at b′ = 1.030b, because the Weyl node is now formed by two bands with opposite signs of velocity (Fig. 5A). Therefore, there is a transition from type II Weyl fermions to type I Weyl fermions as one increases the lattice constant b. Moreover, we note that the energy of the bands in Fig.
5A shifts across the chemical potential as one increases b. Therefore, specific values of the lattice constant b also exist at which important features are moved exactly onto the chemical potential. We focus on two important features: the Weyl nodes and the Van Hove singularities (VHSs) that arise from the Weyl cones (see the middle panel of Fig. 5A). Placing the Weyl nodes at the Fermi level is very meaningful because they are monopoles of Berry curvature. Thus, any novel phenomenon that arises from the chirality of the Weyl fermions, such as the chiral anomaly, becomes most significant when the Weyl nodes are at the Fermi level. Placing the VHS at the Fermi level can also be interesting because the VHS is due to a saddle point in the band structure, which means that the DOS shows a maximum at the energy of the VHS. An enhanced DOS is favorable for inducing correlated physics, such as superconductivity or magnetism. A detailed phase diagram is shown in Fig. 5C. We show that the band structure of Ta3S2 exhibits a new type of critical point as one decreases the lattice constant c or increases the SOC strength λ. As discussed above, under ambient conditions, the conduction and valence bands only touch each other at eight discrete points in the BZ, the eight Weyl nodes. Here, we show that decreasing the lattice constant c or increasing the SOC strength λ leads to the generation of new Weyl nodes. The critical point of this process corresponds to the value of the lattice constant c or the SOC strength λ at which the conduction and valence bands just touch: ccritical = 0.98c or λcritical = 1.027λ. Taking the critical point λcritical = 1.027λ as an example, we show the k-space locations of these newly emerged band touchings by the green dots in Fig. 6D. We find that the critical-point band structure is novel. Specifically, the dispersion along kz near the band touching behaves like two downward-facing parabolas.
These two parabolas touch at their vertices, which forms the band touching point. This is distinct from the critical points associated with any previously known Weyl semimetal candidate. For example, the critical-point band structure of TaAs can be thought of as two parabolas of opposite orientations, one facing up and the other facing down (Fig. 6A). Entering the Weyl phase from the critical point then essentially means "pushing" the two parabolas "into" each other so that they cross to form the two Weyl nodes. The situation in the MoxW1−xTe2 system is very similar, the only difference being that the direction of the parabolas is tilted away from vertical (Fig. 6B). By contrast, in Ta3S2, we have two parabolas that face the same direction (Fig. 6C). A distinct and unique property of the new critical point is that it leads to a saddle point in the band structure, giving rise to a VHS. The saddle-point behavior can be seen from the band dispersions shown in Fig. 6E. If one focuses on the conduction band in Fig. 6E, the touching point is the energy minimum for the dispersions along the kx and ky directions, but it is the energy maximum along the kz direction. The saddle-point band structure brings about a VHS, which generates a maximum in the DOS and a divergence in the first derivative of the DOS at the energy of the VHS, as shown in Fig. 6G.

## DISCUSSION

We elaborate on the meaning of the "robust and ideal" Weyl semimetal candidate as emphasized in our paper. First, we mean that the realization of the candidate is likely to be experimentally feasible.
This involves the following critical conditions: (i) the prediction is based on the realistic crystal structure, which means that the compound does crystallize in the proposed crystal structure under ambient conditions; (ii) the prediction does not require fine-tuning of the chemical composition or the magnetic domains; and (iii) the Weyl nodes are not located at energies far above the chemical potential, so that they can be observed by photoemission. This was the case for our prediction of TaAs (17), which has now been realized (19), and it is also the case here for Ta3S2, which demonstrates the experimental feasibility of our proposal. Second, and more importantly, the term "robust" also refers to a large separation of the Weyl nodes in momentum space because, as discussed above, the separation of the Weyl nodes provides a measure of a Weyl semimetal's topological strength. We again highlight that Ta3S2 has the largest k-space separation between Weyl nodes among known Weyl semimetal candidates, about twice that of TaAs. This will greatly help resolve the Weyl nodes in various spectroscopic measurements, such as photoemission and scanning tunneling spectroscopy. It will also make it easier to probe the chiral anomaly and other Berry-curvature-monopole physics in electrical and optical transport experiments. Finally, we compare the topological metal-to-insulator transition in Ta3S2 with transitions predicted in other Weyl candidates (44, 45). Theoretical work by Nozaki et al. (44) predicted topological phase transitions from a trivial band insulator to a Weyl semimetal and then to a topological insulator by varying the chemical composition x in LaBi1−xSbxTe3 or applying external pressure to BiTeI. However, the composition or pressure range that corresponds to the Weyl semimetal phase is predicted to be extremely narrow (44). Hence, it requires ultra-fine-tuning, which is very difficult in experiments.
Also, LaBi1−xSbxTe3 has never been grown in the crystal structure required by the proposal of Nozaki et al. (44), at least in single-crystal form. The work by Liu et al. (45) proposed similar transitions in β-Bi4Br4 under external pressure. To induce a Weyl semimetal phase in the β-Bi4Br4 crystal structure, which has inversion symmetry, a hypothetical inversion-breaking term was assumed in the calculation. By contrast, Ta3S2 is an inversion-breaking, single-crystalline compound; single-crystalline Ta3S2 samples have been grown (42, 43), and the Weyl semimetal state is stable and does not require fine-tuning. We propose the following three methods for increasing the b lattice constant: (i) It can be achieved by applying external force, as demonstrated by Zheng et al. (46). As an order-of-magnitude estimate, our first-principles calculations give a required pressure of approximately 6 GPa for an approximately 4% increase in the b lattice constant. Pressure in this range is experimentally feasible, and the change of the lattice constant can be monitored by transmission electron microscopy (46). (ii) It may also be achieved by growing a Ta3S2 film on a substrate with lattice mismatch. (iii) It may be achieved by growing samples with isoelectronic chemical substitution, such as Ta3(S1−xSex)2. These facts highlight that Ta3S2 is, to date, the most ideal platform not only for advancing our understanding of Weyl semimetals and Weyl physics but also for facilitating the exploitation of their exotic and novel properties in future device applications.

## MATERIALS AND METHODS

First-principles calculations of Ta3S2 were performed using the OpenMX code based on norm-conserving pseudopotentials generated with multireference energies and optimized pseudoatomic basis functions (47, 48).
The SOC was incorporated through j-dependent pseudopotentials, and the generalized gradient approximation was adopted for the exchange-correlation energy functional (49, 50). For each Ta atom, three, two, two, and one optimized radial functions were allocated for the s, p, d, and f orbitals (s3p2d2f1), respectively; for each S atom, s3p2d2f1 was also adopted. The cutoff radius for both Ta and S basis functions was 7 bohr, and the cutoff energy was 1000 rydberg. A k-point mesh of 13 × 11 × 11 for the primitive unit cell and the experimental lattice parameters were adopted in the calculations. We used the Ta d and S p orbitals to construct the Wannier functions. We calculated the surface spectral weight of a semi-infinite slab using the iterative Green's function method applied to the Wannier-function-based tight-binding model. We did not choose the (001) and (010) surfaces because pairs of Weyl nodes of opposite chirality are projected onto each other on these two surfaces (Fig. 3B); hence, the (001) and (010) surfaces do not carry net projected chiral charge and are not expected to show Fermi arcs. Because the purpose of the surface calculations (Fig. 4) was to demonstrate the existence of Fermi arcs, the surface used is proper and sufficient.

## SUPPLEMENTARY MATERIALS

Supplementary Text
fig. S1. Fermi arc surface states and Weyl nodes on the surface without SOC.
fig. S2. Distribution of Weyl nodes in Ta3S2.

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

## REFERENCES AND NOTES

Acknowledgments: Funding: Work at Princeton University was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under DE-FG-02-05ER46200.
Work at the National University of Singapore was supported by the National Research Foundation (NRF), Prime Minister's Office, Singapore, under its NRF fellowship (NRF award no. NRF-NRFF2013-03). T.-R.C. and H.-T.J. were supported by the National Science Council, Taiwan. H.-T.J. also thanks the National Center for High-Performance Computing, the Computer and Information Network Center of National Taiwan University, and the National Center for Theoretical Sciences, Taiwan, for technical support. Work at Northeastern University was supported by U.S. DOE/BES grant no. DE-FG02-07ER46352 and benefited from Northeastern University's Advanced Scientific Computation Center and the National Energy Research Scientific Computing Center through DOE grant no. DE-AC02-05CH11231. S.-M.H., G.C., and T.-R.C. acknowledge their visiting scholar positions at Princeton University, which were funded by the Gordon and Betty Moore Foundation EPiQS Initiative through grant GBMF4547 (Hasan). Author contributions: Preliminary material search and analysis were performed by S.-Y.X. Theoretical analysis and computations were performed by G.C., S.-M.H., C.-C.L., T.-R.C., H.-T.J., A.B., and H.L. G.C. made the figures with help from S.-Y.X. S.-Y.X. wrote the article with major help from G.C., D.S.S., and H.L. G.B., H.Z., I.B., and N.A. helped in general with the theoretical analysis and the proofreading of the article. H.L. supervised the theoretical part of the work. M.Z.H. was responsible for the overall direction, planning, and integration among different research units. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
https://cs.stackexchange.com/questions/30205/are-there-more-partially-recursive-functions-than-and-recursive-functions
# Are there more partially recursive functions than recursive functions?

Is the cardinality of the set of partially recursive functions greater than the cardinality of the set of recursive functions?

No, they have the same cardinality, namely $\aleph_0$. Both sets are infinite, so we compare them by their level of infiniteness. Both are countably infinite, so they have the same cardinality.

• Maybe a better way is to observe that they are both represented by finite strings. The set of all finite strings is countable, so both classes are at most countable. Seeing also that for any natural number $c$ you have the constant function $f_c : x \mapsto c$ gives that both sets are infinite. Hence both classes are countably infinite. – Pål GD Sep 22 '14 at 20:41
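The counting argument in the comment can be made concrete: finite strings over a fixed finite alphabet can be listed in shortlex order, which gives an explicit bijection with $\mathbb{N}$. A small sketch (the alphabet and ranking scheme are illustrative choices):

```python
from itertools import count, islice, product

ALPHABET = "01"   # any finite alphabet works; program texts use a bigger one

def strings():
    """Yield every finite string over ALPHABET in shortlex order."""
    for n in count(0):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def index_of(s):
    """Inverse direction: the rank of s in the shortlex enumeration
    (uses int(s, k), so it assumes a digit alphabet such as "01")."""
    k = len(ALPHABET)
    shorter = sum(k**i for i in range(len(s)))   # count of strings shorter than s
    return shorter + int(s, k) if s else 0

first = list(islice(strings(), 7))
print(first)    # ['', '0', '1', '00', '01', '10', '11']
assert all(index_of(s) == i for i, s in enumerate(first))
```

Since every (partial or total) recursive function has at least one program text, and program texts are finite strings, both classes inject into $\mathbb{N}$.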
https://oeis.org/wiki/Complete_sequences
# Complete sequences

A sequence of natural numbers is complete when all positive integers can be represented as the sum of some finite subsequence of the sequence.[1] A sequence is weakly complete (often simply complete in recent literature) when all sufficiently large numbers can be so represented. If the set of sums of finite subsequences contains some infinite arithmetic progression, the sequence is called subcomplete. The same terminology is used for sets and multisets.

## Definition

Given a sequence S of positive integers, let P(S) be the set of numbers which can be represented as the sum of a finite subsequence of S. Then S is complete if (and only if) ${\displaystyle P(S)=\mathbb {N} ,}$ weakly complete if (and only if) ${\displaystyle \mathbb {N} \setminus P(S)}$ is finite, and subcomplete if and only if there are positive integers m and n such that ${\displaystyle \{mk+n:k\in \mathbb {N} \}\subset P(S).}$

A necessary and sufficient condition for a nondecreasing sequence of positive integers to be complete is that the smallest term is 1 and that each term exceeds the sum of all earlier terms by at most 1, that is, ${\displaystyle a_{n+1}\leq 1+a_{1}+\cdots +a_{n}}$ for all n.[2] Hence the sequence of the powers of 2 is a term-by-term upper bound on any complete sequence.

## Relationship between complete and subcomplete

For every subcomplete set, the addition of finitely many elements can transform it into a complete set. (The same holds trivially for subcomplete sequences and multisets.) The reverse is also true: removing any finite number of elements from a complete sequence yields a subcomplete sequence. A subcomplete sequence S is weakly complete if and only if, for all n, P(S) has an element in every residue class mod n.
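Brown's criterion can be checked mechanically on an initial segment of a nondecreasing sequence; a small sketch (the function name is ours):

```python
def is_complete_prefix(seq):
    """Check Brown's criterion on a finite nondecreasing prefix:
    a_1 == 1 and a_{n+1} <= 1 + (a_1 + ... + a_n) for every term."""
    if not seq or seq[0] != 1:
        return False
    partial = 0
    for a in seq:
        if a > partial + 1:   # a gap: partial + 1 is not representable
            return False
        partial += a
    return True

powers_of_two = [2**k for k in range(10)]   # 1, 2, 4, 8, ...
fibonacci = [1, 1, 2, 3, 5, 8, 13, 21]
primes_with_1 = [1, 2, 3, 5, 7, 11, 13]
factorials = [1, 2, 6, 24]                  # 6 > 1 + 2 + 1, so 4 is missed

print(is_complete_prefix(powers_of_two))    # True
print(is_complete_prefix(fibonacci))        # True
print(is_complete_prefix(primes_with_1))    # True
print(is_complete_prefix(factorials))       # False
```

For an infinite sequence the criterion must of course hold for every n; the prefix check only certifies that every integer up to the prefix sum is representable.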
## Density

Paul Erdős conjectured that ${\displaystyle \scriptstyle a(n+1)/a(n)\to 1}$ was sufficient for a set to be subcomplete, but Cassels[3] showed that even ${\displaystyle \scriptstyle a(n+1)/a(n)<1+a_{n}^{-1/2+\varepsilon }}$ is not sufficient for subcompleteness. Erdős[4] then proved a minimal density, which was improved by Folkman[5], Hegyvári[6], Łuczak & Schoen[7], and finally Szemerédi & Vu[8], who showed that (with ${\displaystyle \scriptstyle A(x)=\left|\{s\in S:s\leq x\}\right|}$) there is a constant c such that if ${\displaystyle A(n)>c{\sqrt {n}}}$ for all n then S is subcomplete. There are sequences with ${\displaystyle A(n)>{\sqrt {2n}}}$ which are not subcomplete, so the result is best possible up to the constant. The analogous result for multisets and sequences was proved by Folkman[5] and improved by Szemerédi & Vu[9], who show that (with ${\displaystyle \scriptstyle A^{*}(x)}$ being the number of elements of S up to x, counted with multiplicity) there is a constant c such that if ${\displaystyle A^{*}(n)>cn}$ for all sufficiently large n then S is subcomplete.

## Polynomials

Roth & Szekeres[10] showed that, for a polynomial f with real coefficients that maps integers to integers, a necessary and sufficient condition for the range of f to be weakly complete is that the leading coefficient is positive and that for every prime p there is an integer m such that p does not divide f(m). Graham[11] re-proves the theorem by a different technique and gives an alternate necessary and sufficient criterion. Write ${\displaystyle \scriptstyle f(x)=c_{0}+c_{1}{x \choose 1}+\cdots +c_{n}{x \choose n},}$ then ${\displaystyle \scriptstyle f(\mathbb {N} )}$ is weakly complete if and only if ${\displaystyle \scriptstyle c_{n}>0}$ and ${\displaystyle \scriptstyle \operatorname {gcd} (N(c_{i}))=1}$ where N finds the numerator in lowest terms (or gives 0 if the number is irrational).
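Graham's criterion can be tried numerically: the binomial-basis coefficients are the iterated forward differences of f at 0, so the test reduces to a sign check and a gcd. A sketch for integer-valued polynomials (the helper names are ours, and the examples assume the criterion as stated above):

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def binomial_coeffs(f, n):
    """Coefficients c_0..c_n of f in the binomial basis C(x, i),
    obtained as iterated forward differences at 0: c_i = (delta^i f)(0)."""
    vals = [Fraction(f(x)) for x in range(n + 1)]
    coeffs = []
    while vals:
        coeffs.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return coeffs

def graham_weakly_complete(f, n):
    """Graham's criterion for a degree-n polynomial: leading binomial
    coefficient positive and gcd of the numerators equal to 1."""
    c = binomial_coeffs(f, n)
    numerators = [abs(ci.numerator) for ci in c]
    return c[-1] > 0 and reduce(gcd, numerators) == 1

# f(x) = x^2: x^2 = 0 + 1*C(x,1) + 2*C(x,2), so c = (0, 1, 2) with gcd 1.
print(binomial_coeffs(lambda x: x * x, 2))           # c0 = 0, c1 = 1, c2 = 2
print(graham_weakly_complete(lambda x: x * x, 2))    # True
# f(x) = 2x^2: every value is even, so the range cannot be weakly complete.
print(graham_weakly_complete(lambda x: 2 * x * x, 2))  # False
```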
Alternately, if a (rational) polynomial maps positive integers to positive integers, its image is subcomplete. Burr[12] extends this theorem to perturbed polynomials: if f is a polynomial of positive degree, ${\displaystyle \scriptstyle \alpha <1,}$ and ${\displaystyle \scriptstyle s_{n}=f(n)+O(n^{\alpha })}$ is a sequence of positive integers, then ${\displaystyle \scriptstyle \{s_{1},s_{2},\ldots \}}$ is subcomplete. He also proves a version allowing noninteger exponents in the 'polynomial'.

## References

1. V. E. Hoggatt and Charles King, Problem for Solution: E1424, The American Mathematical Monthly 67:6 (Jun.-Jul. 1960), p. 593.
2. J. L. Brown, Jr., Note on complete sequences of integers, The American Mathematical Monthly 68:6 (Jun.-Jul. 1961), pp. 557-560.
3. J. W. S. Cassels, On the representation of integers as sums of distinct summands taken from a fixed set, Acta Scientiarum Mathematicarum (Szeged) 21 (1960), pp. 111-124.
4. Paul Erdős, On the representation of large integers as sums of distinct summands taken from a fixed set, Acta Arithmetica 7:4 (1962), pp. 345-354.
5. J. Folkman, On the representation of integers as sums of distinct terms from a fixed sequence, Canad. J. Math. 18 (1966), pp. 643-655.
6. Norbert Hegyvári, On the representation of integers as sums of distinct terms from a fixed set, Acta Arithmetica 92:2 (2000), pp. 99-104.
7. Tomasz Łuczak and Tomasz Schoen, On the maximal density of sum-free sets, Acta Arithmetica 95:3 (2000), pp. 225-229.
8. E. Szemerédi and V. H. Vu, Finite and infinite arithmetic progressions in sumsets, Annals of Mathematics (2) 163:1 (2006), pp. 1-35.
9. E. Szemerédi and V. Vu, Long arithmetic progressions in sumsets: thresholds and bounds, J. Amer. Math. Soc. 19:1 (2006), pp. 119-169. See also the preprint arXiv:math/0507539, though this does not contain the result in question.
10. K. F. Roth and G. Szekeres, Some asymptotic formulae in the theory of partitions, Quarterly J. Math. 5 (1954), pp. 241-259.
11. R. L. Graham, Complete sequences of polynomial values, Duke Math. J. 31 (1964), pp. 275-285.
12. Stefan A. Burr, On the completeness of sequences of perturbed polynomial values, Pacific J. Math. 85:2 (1979), pp. 355-360.
https://www.physicsforums.com/threads/help-with-complex-frequency.612982/
Help with Complex Frequency

1. Jun 10, 2012 Markel

Hello all, I need some help with this concept. I don't really see how a frequency can be complex. I am using a vector network analyzer over a range of a few GHz, but the model I'm using requires a complex frequency as input. How do I convert an angular frequency to a complex frequency? By searching other posts, I found that the complex frequency is s = σ + jω, where σ is related to the decay rate. But how do I find this decay rate from a machine which is simply operating at X Hz? Thanks,

2. Jun 10, 2012 Bob S

Are you really looking at a vector frequency like F(ωt) = Fo [cos(ωt) + j·sin(ωt)]?

3. Jun 11, 2012 Markel

I'm not entirely sure what you mean, but I don't think so. I simply have a model for the dielectric properties of a test material, and the model is explicit in s, the complex frequency. However, I've only measured some reflection coefficients at specific frequencies, and I have until now only ever heard of real frequencies.

4. Jun 11, 2012 Andy Resnick

I've seen complex frequencies used when computing thermal properties of materials (specifically, the Hamaker constant).

5. Jun 11, 2012 DragonPetter

Complex frequency may be misleading because it includes more information than just frequency. You have to consider it in the context of sinusoidal waves with a certain phase in time. Complex frequency is not a different or special kind of "imaginary frequency"; the only frequency that is physical is real-number frequency, and the phase and magnitude information is affected by this complex component. The s-plane includes magnitude and phase information in addition to frequency information, and complex numbers are a good way of representing the inter-relation of all these quantities.
It is just given the name "complex" because complex numbers are how we represent the same values (magnitude, phase, frequency) in both the frequency and time domains (see Euler's formula). Last edited: Jun 11, 2012

6. Jun 11, 2012 Markel
Ok, that makes sense. So how do I construct the complex frequency from the real frequency? Is there something like a Fourier transform?

7. Jun 11, 2012 DragonPetter
You should think of complex frequency as the s-plane or another complex domain. In fact, I don't really recall ever using the term "complex frequency" as a specific term during my studies, but I am not an expert. It seems a very confusing term to attach to this. Let me be more specific: you will not find complex "imaginary frequency" information in the frequency domain that never shows up in, or hides itself from, the time-domain information that you experience as physically real. Are you familiar with phasors? http://en.wikipedia.org/wiki/Phasor That should help you get to the bottom of your problem. Can you express a phasor as a complex number, and then encode the phase, magnitude, and frequency of your signal into complex form? My guess is that your network analyzer wants to know all of this information rather than just the frequency. There is a guy on here called rbj that can probably give you an accurate and better answer, so maybe you should ask this in the EE forum. Last edited: Jun 11, 2012

8. Jun 11, 2012 Markel
Thanks for the help. At least now I know what to look for. So the article on the s-plane in Wikipedia gives the following Laplace transform:
$$F(s) = \int_0^{\infty} f(t)e^{-st}\,dt$$
So if I have a constant frequency in time, I should get as F(s):
$$F(s) = \int_0^{\infty} \omega e^{-st}\,dt = \frac{\omega}{s}$$
But now what is s?

9.
Jun 11, 2012 rbj
Actually, I never thought that the problem new electrical engineers (and students) usually have was with "complex frequency"; it was with "complex signals" and with "negative frequency". I guess the concept of complex frequency might have some meaning in the context of exponentially damped (or, in an unstable system, increasing) sinusoids. I guess you could say that this signal:
$$x(t) = e^{-\alpha t} \cos( \omega t )$$
has a complex frequency where $\omega$ might be considered the real part and $\alpha$ the imaginary part. Is this what you're having trouble with?

10. Jun 11, 2012 DragonPetter
I think you have confused the Laplace transform. f(t) is a function of time, so ω is just a constant in your attempt. You just transformed a DC signal (with amplitude ω). This will not give you the complex representation of a sine with frequency ω. Look at this, and find sin(ωt) and cos(ωt): http://www.stanford.edu/~boyd/ee102/laplace-table.pdf

11. Jun 11, 2012 the_emi_guy
Markel, can you provide the VNA model that you are using, and the specific setup item that you are trying to input? As mentioned above, you may be confusing complex frequency with complex frequency response. If you can show us exactly what you are trying to do, we can help you sort it out.

12. Jun 12, 2012 Markel
Here is the VNA I'm using: http://www2.rohde-schwarz.com/product/zvt8.html And I'm trying to get the dielectric permittivities in relation to Y, the aperture admittance, from this model:
$$Y = \frac{\sum_{n=1}^{4} \sum_{p=1}^{8}\hat{\alpha}(\sqrt{\epsilon_{r}})^{p}(sa)^{n}}{1+\sum_{m=1}^{4} \sum_{q=1}^{8}\hat{\beta}(\sqrt{\epsilon_{r}})^{q}(sa)^{m}}$$
where $s = \sigma + i\omega$ is the complex frequency, $a$ is the conductor radius, and $\hat{\alpha}$ and $\hat{\beta}$ are modeling parameters. Thanks for the help.

13.
Jun 12, 2012 AlephZero
I would back off a bit from your "real" problem and start by figuring out how to get the admittance parameters for a simple RLC circuit from your VNA. You need some sort of curve-fitting process to convert the measured "amplitude and phase response against frequency" into "poles, zeros and residues", and thus turn the measurements into an admittance function like Y(s) = s / (Ls^2 + Rs + 1/C). Just browsing some of the analyser guides, it's not too clear whether the VNA has the software to do that built in, or whether you need some other signal-processing software (e.g. something supplied with the analyser that runs on a PC). The admittance function in your model is a ratio of low-order polynomials, so it is of this general form, but it represents a network with several "resonant circuits", not just one. I don't think you want to start researching how to do the curve fitting yourself, especially if you are starting from the level of "what is a complex frequency" -- you shouldn't need to understand all the details of how the software works in order to use it. Note: I'm more familiar with this in measuring mechanical vibrations, but the basic math is the same, so the steps in the process must be similar.

14. Jun 14, 2012 the_emi_guy
Markel, it sounds like you are measuring the permittivity of some substance using the open-ended coaxial probe method. Is this correct?

15. Jun 16, 2012 Markel
Yes, that's right. Here's the paper where I found the model I mentioned above. My measurements are not giving physical results, and I think the problem may be how I used the frequency.

16. Jul 24, 2012 Markel
So the transform, according to what you sent, is:
$$\cos{\omega t} \implies \frac{s}{s^{2} + \omega ^2} = \frac{1/2}{s - j\omega} + \frac{1/2}{s + j\omega}$$
But I'm not really sure if the signal generated by the VNA is a sine wave.
Does someone know how to check this? Also, I'm still confused as to what s is. Since it sits in the exponent of e^{-st}, it appears to be the inverse of some time constant of decay. Where do I find a value for s? If my signal is of constant amplitude, can I assume that σ = 0?

17. Jul 24, 2012 Markel
Do you have much experience with this method? If so, I'd really love to pick your brain for a bit.
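The steady-state case asked about in post 16 can be sketched numerically. The snippet below is not from the thread: the 2.4 GHz point and the R, L, C values are made-up illustrations. It assumes that for a constant-amplitude sinusoid there is no decay, so σ = 0 and s reduces to jω, and it evaluates the series-RLC admittance Y(s) = s/(Ls² + Rs + 1/C) that AlephZero mentions:

```python
import numpy as np

# For a steady-state (constant-amplitude) sinusoid there is no decay,
# so sigma = 0 and the complex frequency reduces to s = j*omega.
def complex_frequency(f_hz, sigma=0.0):
    """Complex frequency s = sigma + j*2*pi*f for a sinusoid at f_hz."""
    return sigma + 1j * 2.0 * np.pi * f_hz

def rlc_admittance(s, R, L, C):
    """Admittance Y(s) = s / (L*s**2 + R*s + 1/C) of a series RLC branch."""
    return s / (L * s**2 + R * s + 1.0 / C)

# Hypothetical component values and frequency, purely for illustration.
s = complex_frequency(2.4e9)                    # one 2.4 GHz point of a sweep
Y = rlc_admittance(s, R=50.0, L=1e-9, C=1e-12)

print(s)                                        # purely imaginary: sigma = 0
print(abs(Y), np.degrees(np.angle(Y)))          # magnitude and phase of Y
```

Evaluating a rational Y(s) on the jω axis like this is what a VNA sweep effectively samples; a nonzero σ would only be needed to describe transient (decaying or growing) excitation.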
http://www.physicsforums.com/showpost.php?p=2511980&postcount=4
Changes in planet orbits as a star (e.g. the Sun) decreases in mass.

Quote by S.Vasojevic: Yes, planets are moving their orbits as the Sun loses mass, but the numbers are almost insignificant. When you add both the solar wind and the mass-energy conversion of the Sun, I think you get about one Earth radius of increase in Earth's orbit over the entire life of the Sun so far.

Thanks for the answer. Sorry if it sounded like a stupid question; I didn't realise that the change in orbit would be so insignificant.
https://www.physicsforums.com/threads/closed-set-proof.830944/
# Closed set proof

1. Sep 5, 2015

1. The problem statement, all variables and given/known data
Show that the set of limit points of the set $A \subseteq \mathbb{R}$, denoted $L$, is a closed set.

2. Relevant theorems
Definition: A closed set F (a subset of R) is one that contains its limit points.
Definition: A limit point x of a set A (a subset of R) is such that the intersection of every epsilon-neighborhood of x with A, excluding x itself, is not empty.
Theorem: A number x is a limit point of A if and only if some Cauchy sequence in A converges to x while each term of that sequence is not equal to x.
Theorem: A set F is closed if and only if every Cauchy sequence in F has a limit that is also an element of F.

3. The attempt at a solution
Proof: Since $L$ is the set of limit points of $A \subseteq \mathbb{R}$, if $(a_n)$ is a Cauchy sequence in $A$ then $\lim a_n \in L$. Suppose $x$ is a limit point of $L$; then we can form a Cauchy sequence $(l_n)$ satisfying $l_n \neq x$ for all $n$ and $\lim l_n = x$. However, each term of the sequence $(l_n)$ is a limit reached by some Cauchy sequence in $A$. I will try to construct a Cauchy sequence in $A$ that converges to $x$, hence showing that $x \in L$ and that $L$ is a closed set. We start by choosing $(a_n)$ such that
$$|a_n - l_n| < \frac{1}{n}.$$
For this definition of $(a_n)$, given any $\epsilon > 0$, we choose $N_1$ with $\frac{1}{N_1} < \frac{\epsilon}{2}$, so that for all $n \geq N_1$,
$$|a_n - l_n| < \frac{\epsilon}{2}.$$
Since $(l_n)$ converges to $x$, there is $N_2$ such that for all $n \geq N_2$,
$$|l_n - x| < \frac{\epsilon}{2}.$$
Choosing $N = \max{(N_1,N_2)}$,
$$|a_n - x| = |(a_n - l_n) + (l_n - x)| \leq |a_n - l_n| + |l_n - x| < \epsilon.$$
Hence this construction of $(a_n)$ converges to $x$. Q.E.D.
Is this correct? Last edited: Sep 5, 2015

2. Sep 5, 2015 ### PeroK
I think you have the essence of the proof, but it's not very clear.
For example, the first statement in your proof is: "Since $L$ is the set of limit points of $A \subseteq \mathbb{R}$, then if $(a_n)$ is a Cauchy sequence in $A$ such that $\lim a_n \in L$." This doesn't make sense. Also, you would benefit from writing out what you are trying to prove. You've left the "relevant equations" section blank, but your definitions of a limit point and a closed set are actually relevant here. And, at the end, you omit stating the significance of $(a_n)$ converging to $x$. I would take what you have and seriously tidy things up. For example, I would start: "To show that $L$ is closed, we will show that $L$ contains its limit points. Let $x$ be a limit point of $L$ ..."

3. Sep 5, 2015
My sincerest apologies; it should read "Since $L$ is the set of limit points of $A \subseteq \mathbb{R}$, then if $(a_n)$ is a Cauchy sequence in $A$ THEN $\lim a_n \in L$." I will update the relevant theorems section in a second, and polish some parts of the proof. Thanks for the feedback. Last edited: Sep 5, 2015

4. Sep 9, 2015
I read that the only way to improve at proof writing is to let someone rip apart the proofs you write. Please, any suggestions regarding this proof are most welcome :)

5. Sep 9, 2015 ### andrewkirk
A couple of suggestions, as you requested, Ahmad:
The limit-point theorem can be expressed more clearly and succinctly as: "A number x is a limit point of a set A if and only if there is a Cauchy sequence in A-{x} whose limit is x."
And the corresponding step of the proof can be expressed more clearly and succinctly as: "Suppose $x$ is a limit point of $L$; then we can form a Cauchy sequence of points in L-{x} with limit $x$."

6. Sep 9, 2015
Thank you very much for the suggestions! Also, could you provide some insight into the validity of the proof? I am especially concerned about the part where I construct the Cauchy sequence, since I feel the construction is very artificial.

7. Sep 9, 2015 ### andrewkirk
Are you allowed to use the axiom of choice, Ahmad?
You have used it at least twice: once in choosing the sequence $(l_k)$ and once in choosing $(a_k)$. I'm pretty sure the proof can be done without using the axiom of choice, but generally such proofs are longer and require more care than proofs that use it. Also, the construction step is not valid without some justification being provided. A justification is available, but it requires a few steps and should not be omitted. In any case, I suspect there's a better way of putting this that is easier to justify. But first we need to know whether you're allowed to use Choice.

8. Sep 10, 2015
I am not familiar with the axiom of choice; could you please give me a reference on it? I had as my goal the proof that L is closed. I assumed that x is an arbitrary limit point of L, which allows me to say that there exists a sequence in L-{x} converging to x. This allowed me to say that $(l_n)$ exists. Next I wanted to show that there exists a sequence in A converging to x as well, which would imply that x is in L by the definition of L. Since I am trying to prove a "there exists" statement, I assumed I had the right to show that a certain construction works. There is a justification for the choice of $(a_n)$, which I believe I mentioned in passing: we know that there exists a sequence in A-{l_n} converging to l_n, so, since there is such a sequence for each n, I chose an element from each such sequence that is within 1/n of l_n. There is a much easier proof given in my book, but I am self-studying real analysis and try to guess the reasoning and, if possible, work it out on my own. Last edited: Sep 10, 2015

9. Sep 10, 2015 ### andrewkirk
The Wikipedia article on the axiom of choice is a pretty good start. Basically, the axiom asserts that if you have an infinite collection of non-empty sets, you can choose a set that has exactly one element from each. Most theorems in topology and analysis can be proved without it.
But a surprising number of ordinary-looking theorems do require it. It's generally preferred not to use it if possible, because if we accept it then some really weird consequences follow. My statement that you used it twice was incorrect. You didn't use it when you chose the sequence $(l_k)$, because that is only choosing one sequence from the set of sequences converging to x, which we know to be non-empty because x is a limit point; hence it involves making only one choice. But you did use the axiom when you defined the sequence $(a_k)$, because for every $k\in \mathbb{N}$ you are choosing one value of $a_k$ from among an infinite number of possibilities, and an infinite number of such choices has to be made, one for each $k$.

10. Sep 11, 2015
Hmm, my book, "Understanding Analysis", followed the same argument on many occasions, so I just assumed I had the right to use it as well. So I will postpone my understanding here until I get a good grip on the axiom of choice. Thank you :)

11. Sep 11, 2015 ### andrewkirk
I think the axiom of choice may be needed to prove it using a sequence that converges to x, as you've tried to do, because one has to define that sequence, which may require making an infinite number of choices. Hence I don't think sequences are a good way to approach this proof. I think it would be easier to prove using your definition of a limit point above: "A limit point x of a set A (a subset of R) is such that the intersection of every epsilon-neighborhood with A, excluding x, is not empty." If we assume that x, a limit point of L, is not in L, then it's not a limit point of A, so what does the definition of limit point tell us about how "far away" x must be from A? Can that lead us to a contradiction with the assumption that it's a limit point of L?

12. Sep 11, 2015 ### andrewkirk
I know how you feel.
My topology book, the venerable "Topology" by Munkres, explains the axiom of choice and why it's important, all very nicely and clearly, but then proceeds to freely assume it in all sorts of proofs where it doesn't need to, without even making it clear when he's doing it.* That annoys the heck out of me, because I consider it very important to know whether a theorem requires Choice to prove it, given how contentious the axiom is.
*It's a good exercise to go through a proof and try to work out whether it has been used. I often get that wrong, as I did when I thought that your $(l_n)$ sequence used it.

13. Sep 11, 2015
I browsed through the wiki article to get the essence of the axiom of choice. This particular segment seems relevant here: "In many cases such a selection can be made without invoking the axiom of choice; this is in particular the case if the number of bins is finite, or if a selection rule is available: a distinguishing property that happens to hold for exactly one object in each bin." Since I have prescribed a rule for the selection of $(a_n)$, does that imply that I am not invoking the axiom of choice? Or is it not as simple as wiki makes it seem, and do I need to do yet more reading to come to a concrete conclusion? Thanks.

14. Sep 11, 2015 ### andrewkirk
No, because you have not given a procedure for identifying a unique $a_n$. You've just observed that every $l_n$ has a bunch of sequences that converge to it, and you arbitrarily choose one of those sequences from which to pluck $a_n$. And you have to do that an infinite number of times. I think there's a much quicker route to the proof that doesn't use sequences.

15. Sep 11, 2015
There is a much easier way in my book. I will read about the axiom of choice in depth and then revisit this thread. Thank you :)

16. Sep 11, 2015 ### andrewkirk
If you don't use sequences, you don't have to worry about the axiom of choice for this problem. It can be done in three lines.
If $x\notin L$ then $\exists\epsilon>0$ such that $(x-\epsilon,x+\epsilon)\setminus\{x\} \subseteq\, \sim A$. But $x$ is a limit point of $L$. Can we find a point in $L$ close enough to $x$ that it must be some minimum distance from any point of $A$? Can we get a contradiction from that?

17. Sep 12, 2015 ### verty
If $L$ is defined by $L = \{x \in \mathbb{R}: \phi(x)\}$, one can prove $\phi(x)$ without needing $(a_n)$ to exist.
1. WLOG, for each $n$ let $s_n$ be an arbitrary sequence of $A$ converging to $l_n$.
2. WLOG, for each $n$ let $a_n$ be an arbitrary element of $s_n$ so assigned, such that $|a_n - l_n| < {\epsilon \over 2}$.
3. The $a_n$'s so assigned are such that $\phi(x)$ is satisfied, but no generality was lost; therefore $\phi(x)$.
I confess to not being an expert at rigour, but I personally didn't see a problem with the given sequences proof.
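The sequence-free argument andrewkirk sketches can be written out in full. The following is an editor's sketch filling in the epsilon bookkeeping (a direct argument rather than the contradiction he suggests, but the same idea); it uses only the neighborhood definition of a limit point, so no infinite family of choices is needed:

```latex
% Claim: the set L of limit points of A is closed.
% Sketch of the sequence-free argument (no axiom of choice needed).
\textbf{Claim.} $L$ contains its limit points, i.e.\ $L$ is closed.

\textbf{Proof sketch.} Let $x$ be a limit point of $L$ and let $\epsilon > 0$.
Choose $y \in L$ with $0 < |x - y| < \epsilon/2$.
Since $y$ is a limit point of $A$, choose $a \in A$ with
$0 < |y - a| < \min\!\left(\epsilon/2,\; |x - y|\right)$.
Then $a \neq x$ (because $|y - a| < |x - y|$), and
\[
  |x - a| \le |x - y| + |y - a| < \epsilon .
\]
Thus every $\epsilon$-neighborhood of $x$ meets $A \setminus \{x\}$,
so $x$ is a limit point of $A$; that is, $x \in L$. \qquad $\blacksquare$
```

Only two choices are made per $\epsilon$ (one $y$, one $a$), which is why this route avoids the axiom-of-choice issue discussed in the thread.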
https://www.clutchprep.com/chemistry/practice-problems/86573/boron-nitride-bn-exists-in-two-forms-the-first-is-a-slippery-solid-formed-from-t
# Problem: Boron nitride (BN) exists in two forms. The first is a slippery solid formed from the reaction of BCl3 with NH3, followed by heating in an ammonia atmosphere at 750 ˚C. Subjecting the first form of BN to a pressure of 85,000 atm at 1800 ˚C produces a second form that is the second hardest substance known. Both forms of BN remain solids to 3000 ˚C. Suggest structures for the two forms of BN.
https://www.physicsforums.com/threads/epr-experiment-with-pool-balls.12311/
# EPR Experiment with Pool Balls

1. Jan 9, 2004 ### Tachyon son
First of all, sorry if what I'm going to ask seems crazy stupid, but it is an idea that has been going round my mind since I started reading about the EPR subject. As far as I've read, you can imagine EPR with photon polarization or with particle spin. So, I have imagined it with two complementary pool balls inside a bag: one is red, the other is black. I take one without looking at it. Then I travel, let's say, a distance of 1 light-year. Only now do I look at the ball to see its colour. 50% probability, then. Obviously, if my ball is red, the remaining one in the bag is black. If we apply the non-locality principle, it will say that both balls were of uncertain colour until being looked at. I know this is a very stupid concept, because we certainly know that my ball was red all the time since my selection, and the remaining one of course was black. The point of my question is why we can't apply the same logic to the EPR experiment. If I use two electrons with opposite spins to make the experiment, they were in those states all the time since the separation! There's no communication or information travel!

2. Jan 9, 2004 ### Staff: Mentor
I assume you mean the usual kind of EPR experiment, where you have two spin-1/2 particles in an entangled state in which the total spin is zero. If we just measured the spin in one and the same direction for both particles, then your model would (seem to) work. If we measure spin along the x-axis, we will always get a matching pair of answers: if particle A says up, particle B says down. You could pretend that each particle had its x-axis spin assigned to it, just like the colour of your pool balls. But things get more interesting when we measure the spin at different angles for each particle, say the x-axis for one and the y-axis for the other.
It turns out that these spin measurements are incompatible, meaning that measuring one seems to "destroy" any value of the other. In any given measurement, you can only measure one direction of spin. You would have to extend your pool-ball model to include a new variable to represent the y-axis measurement: let colour be the spin in the x-direction, and (say) shape represent spin along the y-axis (round = up; cube = down). Things get even more complicated when you let the measurement have any angle. A more sophisticated (but still unworkable) model would just assign an "instruction set" to each particle. The instructions would tell it what to do upon encountering a measuring device for any spin direction: essentially, this is a list of spin values for every direction of measurement. All the spin information rides along with each particle, so no funny business about communication or information travel. But the bottom line is this: Nature doesn't seem to work that way! These kinds of models (your pool balls or the "instruction set" model) have been shown to lead to correlations that do not match the results of real experiments. (This is the essential content of Bell's theorem.) Quantum mechanics, on the other hand, predicts the results nicely. This may not seem like much of an answer, since I'm basically saying: it just doesn't work. To go deeper would involve describing the spin correlations and the details of Bell's theorem.

3. Jan 9, 2004 ### Swamp Thing
But suppose particle A encounters a measuring device first. So it follows the instructions and "becomes" a particle with spin Sa. So far so good... but now particle B, as per its version of the instructions, must assume a definite spin Sb: it has no choice. However, Sb is correlated with Sa, which in turn is a function of the kind of measurement that A has encountered, and this could be light-years away at that moment... so the "instruction set" model would have to be nonlocal anyway.
The point being that a nonlocal "instruction set model" can be ruled out without delving into the details of Bell's theorem, etc.... no?

4. Jan 9, 2004 ### Tachyon son
Your answers are too focused on my error concerning the spin version of the experiment; thanks for replying and clarifying that. The whole point of my question rests at the end: "Why can't we apply the same logic (of pool balls) to the EPR experiment?" In other words, does non-locality exist, or is the polarization of the photons already decided from the start of the experiment? Last edited: Jan 9, 2004

5. Jan 9, 2004 ### NateTG
The whole notion of an "instruction set" is what Bell's theorem is about. However, Bell's theorem does make some assumptions that aren't necessarily valid. I haven't studied QM, but I am fairly convinced that the non-locality is a quirk of the theory more than it is a contradiction -- specifically, that it is possible to construct a model of the electron that deals with the EPR paradox without nonlocality, but that otherwise behaves identically to the predictions made by the typical QM model. Proponents of the Consistent Histories approach to QM claim that the EPR paradox is actually like your pool-ball example, but I don't know enough about it, or how it differs from the Copenhagen interpretation, to give you any further insight. Last edited: Jan 9, 2004

6. Jan 9, 2004 ### Swamp Thing
Suppose you had a weird kind of pool ball to which you could put the question "Are you red or black?" and it would randomly reply either "red" or "black". If two balls were correlated, then getting a "red" reply from one would guarantee a "black" reply from the other. But now there is another question you could ask: "Are you new or old?" (forget about the *meaning* of the answer :) If you get "new" from the first ball, you are guaranteed to get "old" from the second.
Finally, a ball that has replied "red" once will stick to this answer as long as you stick to the same question; but if you ask a "red" ball the old-new question, you will randomly get "old" or "new". If you ask "red or black" of a "new" ball, you will get a random reply, either "red" or "black". If you alternate your questions successively, there is a fair chance that a ball that once said "red" will now say "black". NOW, in order to preserve the correlation between the two pool balls, it is clear that the second must know what question you asked the first one, so that it will know whether to randomize or not. As I understand it, it is this information that "travels" nonlocally in the "instruction set" model. Once this information is available, the second ball can follow the instructions to produce the correlated answer. If you discard the instruction-set concept, then the answer itself must travel nonlocally. (Corrections welcome!) Last edited: Jan 9, 2004

7. Jan 20, 2004 ### FZ+
Because that is what the EPR experiment invalidates. In essence, Einstein stated that it is nonsensical to say that observing one ball at the end changes the other, and that each ball must have had a state (red or black) from the moment of your selection; it is simply our knowledge that is lacking. Bell then analysed this and produced the Bell inequalities, which would hold if this sort of "local realistic", hidden-variable logic were true. But the experiments then carried out violated the Bell inequalities, showing this sort of thinking to be invalid. Simple as that.

8. Jan 21, 2004 ### nightlight
Not quite as simple. No experiment, despite three decades of trying, has invalidated local realism.
Although the "Quantum Mystery Cult" euphemistically labels this failure as "loopholes" in the existing experiments, the simple fact is that only "data" adjusted (cherry-picked, for non-QMC members) using various convenient rationalizations (additional "assumptions" such as "fair" sampling, "accidental" coincidence subtractions, non-enhancement, etc.) violate the Bell inequalities. The unadjusted data not only have not violated the inequalities, but there are even plausible classical theories (such as "stochastic electrodynamics" for photons and Barut's self-field electrodynamics for fermions) which reproduce the actual data of all EPR experiments so far. Last edited: Jan 21, 2004

9. Jan 21, 2004 ### DrChinese
Harsh words, and not really accurate. Following the standard Copenhagen interpretation of QM does not qualify one as a member of a "Quantum Mystery Cult". Your spirit is misplaced. The fact is that decades of experiments have soundly supported the predictions of QM, and have failed to indicate the existence of a more complete specification of reality as discussed in EPR. To the vast majority of scientists in the area, the matter is reasonably settled by the experiments of Aspect et al. Local reality is rejected by experiment. What is true is that there have been, and will continue to be, those to whom the experiments leave some amount of room for a "way out". For years, the criticism was leveled at Aspect that the observer and subject systems were in communication. He fixed that criticism. Lately, there has been a criticism on the grounds of counting inefficiency. I could certainly agree that further refinement of the experiments to answer such criticism is warranted. I don't expect anything radical or surprising to occur, but you never know.

10. Jan 21, 2004 Staff Emeritus
He is referring to criticisms of the Aspect and other experiments directed at showing the violation of the Bell inequality by quantum mechanics.
There are some weaknesses that even quantum physicists recognize, and the "reality" partisans have chosen to make a stand on these.

11. Jan 21, 2004

### nightlight

The fact is that decades of experiments have soundly supported the predictions of QM

No one is arguing against the QM statistical predictions. The argument is against the unsubstantiated claims that the experiments exclude local realism. To arrive at that "conclusion" the data has to be cherry-picked based on metaphysical and unverified (or unverifiable) ad hoc rules. For example, in all the experiments there is a "fair sampling" assumption -- an assumption which implies that the local hidden variables do not affect the probability of detection. Under such an assumption, all that the experiment excludes are the local hidden variables which don't affect the probability of a detector trigger. Check, for example, the paper by Emilio Santos which explains why "fair sampling" is an absurd assumption.

To the vast majority of scientists in the area, the matter is reasonably settled by the experiments of Aspect et al. Local reality is rejected by experiment.

The experiments still show only that certain absurdly restricted (as Santos explains in the paper above) types of local realism are excluded. Perfectly plausible local realistic theories, such as stochastic electrodynamics (e.g. check the papers by Emilio Santos & Trevor Marshall for details), fit the actual data as well as QM.

What is true is that there have and will continue to be those to whom the experiments leave some amount of room for a "way out".

I suppose all the past inventors of "perpetuum mobile" machines could claim the same about the non-believers -- except for that little glitch with friction, which is entirely due to present technological imperfections, and which we will fix in the near future, the machine runs forever (even though it actually stops). The doubters are merely looking for "unimportant" loopholes and "wiggle room." Yeah, sure.
It either works or it doesn't.

For years, the criticism was leveled at Aspect that the observer and subject systems were in communication. He fixed that criticism.

That was a fake "criticism" by the supporters of the QMC, not the opponents. No one was proposing models, much less theories, which would explain the optical experiments that way (via distant subluminal communication between the two far-apart sides of the apparatus). Aspect's "fix" was thus like a magician theatrically rolling up his coat sleeves after a "neutral" voice from the audience shouted about the card hiding in the sleeve.

Lately, there has been a criticism on the grounds of counting inefficiency.

The inefficiency problem, better known under the euphemism "detection loophole", had been a known problem well before Aspect did his thesis. It hasn't been fixed.

Last edited: Jan 21, 2004

12. Jan 22, 2004

### DrChinese

Nice paper by Santos, BUT...

1. It is a new paper, and hardly the last word. Certainly it would not be considered authoritative at this point. However, I will accord it the courtesy of addressing it on its merits.

2. Bell's Inequalities: I did not take away from the Santos paper any real criticism of the Bell derivation. In a "perfect world", the Inequality could be used to rule out all LHV theories. I disagree with the notion that Bell's "second part" (per the paper) is confused in some way. All I can see is the criticism that an actual "loophole free" experimental setup was not described. Hardly a reasonable critique of Bell by any common standard. Bell did his job fully.

3. The Aspect-type experimental setup and the "fair sampling" assumption. Santos states:

"In the context of LHV theories the fair sampling assumption is, simply, absurd. In fact, the starting point of any hidden variables theory is the hypothesis that quantum mechanics is not complete, which essentially means that states which are considered identical in quantum theory may not be really identical.
For instance if two atoms, whose excited states are represented by the same wave-function, decay at different times, in quantum mechanics this fact may be attributed to an "essential indeterminacy", meaning that identical causes (identical atoms) may produce different effects (different decay times). In contrast, the aim of introducing hidden variables would be to explain the different effects as due to the atomic states not being really identical, only our information (encapsulated in the wave-function) being the same for both atoms. That is, the essential purpose of hidden variables is to attribute differences to states which quantum mechanics may consider identical. Therefore it is absurd to use the fair sampling assumption -- which rests upon the identity of all photon pairs -- in the test of LHV theories, because that assumption excludes hidden variables a priori.

"For similar arguments it is not allowed to subtract accidental coincidences, but the raw data of the experiments should be used. In fact, what is considered accidental in the quantum interpretation of an experiment might be essential in a hidden variables theory."

There are some pretty big claims here, and I don't think they are warranted. Fair sampling is far from an absurd assumption. There has never been a single experimental test of a quantum variable which has even slightly hinted at the existence of a deeper level of reality than is currently predicted by QM. Hardly what I would call "absurd". You might as well call the notion that the sun will rise tomorrow absurd.

You might say that it is an unwarranted or burdensome requirement. But I don't even follow that line of reasoning. Clearly, the requirement is that a LHV theory otherwise provide identical predictions to QM. Fair sampling fits this naturally.

In the view of Santos, not only are the Bell Inequalities not violated in the Aspect experiments, but a new and previously unknown hidden local quantum observable is rearing its head.
And somehow this observable only shows itself during this type of experiment, and no others. That observable is one in which the photon detection is suppressed or enhanced just enough to appear to match the predictions of QM (i.e. outside of the Bell Inequality), while actually falling within the statistical range of the Inequality. That's a big step, one which I might reasonably expect to have been noticed previously.

4. I have not had time to otherwise analyze the formula logic of the paper. I will take a look at that.

A degree of skepticism is good, and healthy. I don't see the point of insults.

Last edited: Jan 22, 2004

13. Jan 22, 2004

### nightlight

1. It is a new paper, and hardly the last word. Certainly would not be considered authoritative at this point. However, I will accord it the courtesy of addressing it on its merits.

That particular paper is new, but Santos, Marshall, Jaynes and others have been criticizing the EPR-Bell experiment claims since the late 70s (check the listings there; there are at least a couple dozen papers by the Marshall-Santos group). This wasn't merely a critique based on artificial narrow counterexamples to the particular experimental claims, but a full-fledged local realistic theory of quantum optics phenomena (stochastic electrodynamics; it falls short for massive particles, although Barut's self-field electrodynamics covers fermions as well as QED to the orders it was computed). Regardless of the ultimate value of stochastic electrodynamics as an alternative theory (it is incomplete as it stands), the mere existence of a local fields model for the actual EPR-Bell experimental data plainly demonstrates that the claim that any local realistic mechanism is being excluded by the experiments is false.

2. Bell's Inequalities: I did not take away from the Santos paper any real criticism of the Bell derivation.
The Santos-Marshall group makes a distinction between the QM dynamics, which they accept, and the "measurement theory" (the non-dynamical, mystical part -- the projection postulate), which they reject. Bell's theorem needs a collapse of the remote state to achieve its locality violation. They reject such collapse and point out that it hasn't been demonstrated by the experiments.

The problem nowadays with challenging the general state collapse hypothesis (projection postulate) is that it is a key ingredient necessary for Quantum Computing to work. If it is not true in full generality, a QC won't work any better than a classical analog computer. Thus the challenge is not merely against ideas but against the funding draw QC has, a sure recipe to get yourself cut off from the leading journals and conferences. (Before the QC hype, there was a healthy debate, and they were published in every major journal.)

There are some pretty big claims here, and I don't think they are warranted. Fair sampling is far from an absurd assumption.

In any deterministic hidden variable theory, the detection probability must by definition depend on some hidden variable value. The "fair sampling" hypothesis is thus an assumption that the hidden variables affecting the detection probability (the probability of triggering the avalanche, and its timing when coincidence time-windows are used for pair detection) are independent of the hidden variables affecting the detected outcome (i.e. the +/- choice). Therefore that is all that the experiments exclude -- the local theories for which the two sets of hidden variables are independent of each other. That is not true even for the most simple-minded classical electrodynamics models of polarization and detection (or for stochastic electrodynamics, or for Barut's self-field ED). Thus the assumption is absurd, since it helps the experiments exclude something that isn't even included among the proposed alternatives.
This is no different an "exclusion" than the "refinements" of the experiments to use randomly varying polarizer directions (which you brought up earlier) -- it topples its own strawman, not the actual theories being proposed by the opponents.

There has never been a single experimental test of a quantum variable which has even slightly hinted at the existence of a deeper level of reality than is currently predicted by QM. Hardly what I would call "absurd".

QM doesn't offer any "reality", deeper or otherwise. If you believe in any reality, local or not, the quantum phenomena require explanation beyond the prescriptions on how to calculate the probabilities.

You might say that it is an unwarranted or burdensome requirement. But I don't even follow that line of reasoning. Clearly, the requirement is that a LHV theory otherwise provide identical predictions to QM. Fair sampling fits this naturally.

There is no need for "unwarranted" or "burdensome" attributes in order to analyze what exactly it is that "fair sampling" (purely mathematically) excludes -- it is an ad hoc constraint on hidden variables, which hand-waves off the table several proposed alternatives, leaving only the strawman local theories (that no one has proposed) for the experiments to refute. For more discussion of the "fair" sampling hypothesis, and a proposed simple additional experiment to test it on the existing EPR-Bell setups, check the paper by G. Adenier and A. Khrennikov. I haven't yet seen any of the several active quantum optics groups, who are claiming to have established Bell inequality violations, check the assumption on their setup. Since the additional tests proposed are quite simple on the existing setups, it is surprising that no one has yet picked up the clear-cut open challenge of the above paper, especially considering that verification of fair sampling as proposed would eliminate all known plausible LHV theories (they all rely on "unfair" sampling).
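As a toy illustration of why the sampling assumption matters (my own sketch, not a model from the papers cited in the thread), consider a local model in which the same hidden variable controls both the +/- outcome and the probability of detection. Post-selecting on detected coincidences then changes the measured correlation relative to the full ensemble of emitted pairs:

```python
import math
import random

def outcome(lam, setting):
    """Deterministic local +/-1 outcome from shared hidden variable lam."""
    return 1 if math.cos(lam - setting) >= 0 else -1

def detected(lam, setting, rng):
    """Detection probability depends on the same hidden variable --
    exactly what the 'fair sampling' assumption forbids."""
    return rng.random() < abs(math.cos(lam - setting))

rng = random.Random(0)
a, b = 0.0, math.pi / 4
sum_all = n_all = sum_det = n_det = 0
for _ in range(200_000):
    lam = rng.uniform(0, 2 * math.pi)   # shared hidden variable per pair
    A, B = outcome(lam, a), outcome(lam, b)
    sum_all += A * B
    n_all += 1
    if detected(lam, a, rng) and detected(lam, b, rng):
        sum_det += A * B
        n_det += 1

E_all = sum_all / n_all   # correlation over every emitted pair (~0.5 here)
E_det = sum_det / n_det   # correlation over detected coincidences only (~0.88)
```

Nothing here violates a Bell inequality by itself; it only shows that once detection is hidden-variable dependent, statistics on the detected subsample need not match statistics on all emitted pairs, which is the sampling issue under discussion.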
Or maybe some have tried it and the data didn't come out the way they wished, and they didn't want to be the first with the "bad" news. We'll have to wait and see.

PS: After writing the above, I contacted the authors of the cited paper, and the status is that even though they had contacted all the groups which have done or plan to do EPR-Bell experiments, oddly no one was interested in testing the 'fair sampling' hypothesis.

Clearly, the requirement is that a LHV theory otherwise provide identical predictions to QM. Fair sampling fits this naturally.

As pointed out by Santos, QM has two sharply divided components: the dynamics and the measurement theory. They reject the measurement theory (in its full generality) and some of its implications. That is precisely what the Bell EPR tests were supposed to clarify -- does the world behave that way. The results so far have not produced the type of distant collapse (projection of the composite state) that Bell assumed for his inequalities. The "fair sampling" is an assumption outside of QM (or any other theory, or any experiment). The actually proposed alternative theories do not satisfy fair sampling, i.e. the hidden variables do not decouple into independent sets which separately control the detection timing and probability on one hand, and the +/- outcome on the other.

Last edited: Jan 26, 2004

14. Jan 27, 2004

### venkat

epr without pool balls

Hi tachyon son! The problem with thinking of the EPR problem with pool balls is that there is a well-defined colour for the pool ball, whether you measure it or not! But in QM, a particle has a definite value for an observable only when you measure it! In fact, this is what the original EPR paper is about! It doesn't talk at all about pool balls, or about the usual thing with two particles of total spin zero sent in opposite directions (the usual stuff). What the actual EPR paper says is this:
In QM, you can't have a particle in a state of definite momentum and position -- this is the position-momentum uncertainty principle. Now suppose you have an entangled pair (momentum entangled, i.e. total momentum is zero) of particles going off in opposite directions, and you decide to measure. Now, if you measure the position of particle A (let us call them particles A and B), particle B goes to a state (eigenstate) with a well-defined position. (Particle A, on which you perform the measurement, also goes to an eigenstate of position.) But suppose you decide to measure momentum instead; then particle B goes to a state with well-defined momentum! So, in fact, particle B goes to an eigenstate which depends on what you decide to measure! Suppose the particles are light years apart; then your choice of whether to measure position or momentum influences (instantaneously) a particle which is light years away, collapsing it into an eigenstate (of what you measure)! Until you make the measurement, you cannot say that the particles are in a state of definite position or momentum. You can do the EPR experiment with spin as well... that version is due to Bohm or somebody... and in fact the Aspect experiment which confirmed Bell's theorem was performed with the polarization of photons! So it doesn't depend on which variable (or, in the language of QM, observable) you use! That's all.

15. Jan 27, 2004

### nightlight

Re: epr without pool balls

Neither Aspect's experiment, nor any other attempt in over three decades of trying, has confirmed a violation of Bell's inequality. See the above discussion of the "fair" sampling hypothesis (which all such experiments assume upfront) and what it means.

16. Jan 27, 2004

### DrChinese

Re: Re: epr without pool balls

1. The predictions of QM are confirmed by Aspect's experiments; there is no question about this point. Period.

2.
The only question -- as nightlight argues to the negative -- is whether ALL Local Hidden Variable theories are excluded as a result of Aspect's experiments. The reference paper cited (Santos) asserts there ARE at least some LHV theories which would yield predictions in line with the results of the Aspect experiments. (Personally, I question the conclusion but am still reviewing this.) Nightlight is pushing a point of view which is not generally accepted. It may well be right, but that remains to be seen.

17. Jan 27, 2004

### nightlight

Re: Re: Re: epr without pool balls

1. The predictions of QM are confirmed by Aspect's experiments, there is no question about this point. Period.

The QM prediction which violates Bell's inequality has not been confirmed by the measured data, by Aspect or any other experiment. Only the data adjusted under:

a) the "fair" sampling hypothesis
b) subtraction of "accidental" coincidences

violate Bell's inequality. Both of these assumptions are outside of QM, and even though there have been proposals (for over a decade; see refs in Santos & Khrennikov) for experiments to verify them, no group has reported performing them. The theoretical prediction itself requires, among other things, distant collapse of the composite state, a part of the "measurement theory" of QM which is not a generally accepted addition to the dynamical postulates of QM. The groups which reject assumptions (a) and (b) also question the "measurement theory," the distant instantaneous composite-state collapse which Bell assumed. For them there is no such prediction (and everyone agrees that, so far, there is no _measured_ data confirming it).

2. The only question - as nightlight argues to the negative - is whether ALL Local Hidden Variable theories are excluded as a result of Aspect's experiments. The reference paper cited (Santos) asserts there ARE at least some LHV theories which would yield predictions in line with the results of the Aspect experiments.
(Personally, I question the conclusion but am still reviewing this.)

All sides agree that not all LHV theories are excluded by the experiments. What Santos points out in the paper is that the LHVs which are excluded are the most absurd subset of the conceivable LHV theories (there is no actual theory constructed, not even a partial one, which satisfies the "fair" sampling hypothesis); i.e. the experiment topples merely a strawman made up by the experimenter. The actual alternative LHV theories (or QM extensions/completions) which exist (whether they are ultimately right or wrong in their full scope), such as stochastic electrodynamics (SED) and self-field electrodynamics, are not being addressed by these experiments -- these theories are waved off by hand upfront by an ad hoc "fair sampling" assumption, which is outside QM and which somehow no one wants to put to the test. These LHV theories agree perfectly with the EPR-Bell experiments (as Marshall, Santos and their students have shown in numerous papers).

Nightlight is pushing a point of view which is not generally accepted. It may well be right, but remains to be seen.

Among the people doing the experiments and their critics, there is no dispute as to what is being excluded by the experiments themselves. They all know what the assumptions (a) and (b) sweep away upfront, and they know that the actual alternatives from the opposition are not being tested. They all know they could test assumption (a), and that no one wants to report whether they have done it and what the result was. The only disagreement on the experimental side is in the prediction of what will happen as the technology improves -- the state-collapse supporters believe the Bell inequality will ultimately be violated as detectors improve (without "loopholes", i.e. without the need to adjust data via (a), (b) and such). The opponents believe it won't be violated.
On the theoretical side, the contention is the "measurement theory", specifically the postulate on composite-system state collapse, and there is no generally accepted single view on that. Nothing in the day-to-day use of QM/QED depends on that postulate, so the vast majority of physicists ignore the subject altogether -- it doesn't affect their work either way. If it turns out to be falsified, there won't be any experimental consequences for anything anyone has done so far (the only experiment which could confirm it, excluding alternatives, would be a loophole-free EPR-Bell test). The main effect would be on the EPR-Bell storyline and on so-called Quantum Computing (which would lose the non-classical "magic" powers attributed to it by the present state-collapse proponents as being right around the corner, as soon as the 'decoherence' is taken care of and the detectors improve).

In summary, the only disagreement is in what will be measured/found in the future. What has actually been measured is known to those in the field and is not a matter of belief or taste. You only need to read carefully, bracketing out the hype, euphemisms, and the unspoken or footnoted limitations (which have been largely known since the mid-1970s), to see that there is no actual disagreement between the Santos/Marshall group and the EPR-Bell experimenters as to what exactly has been excluded by the data and what by the additional assumptions. It is only about what will happen in the future that they can really disagree, and time is on the skeptics' side.

Last edited: Jan 27, 2004

18. Jan 27, 2004

### DrChinese

Re: Re: Re: Re: epr without pool balls

While I disagree with your characterization of the state of the current evidence, the above is just plain wrong. Bell's Inequality has little or nothing to do with testing the predictions of quantum theory, although Aspect's experiments do confirm the predictions of QM as a by-product.
The Bell Inequality requires only the essential beliefs in local reality and follows classical reasoning. If you accept that the two emitted photons carry a crossed polarization, the inequality can be deduced. Quantum mechanics does not assume that the photons have definite polarization independent of their measurement. Classical reasoning requires this, and that is what leads to the Inequality, which is ultimately a reformulation of the idea that every measured permutation must have a likelihood of occurrence between 0 and 100%.

If this were true (which is the point being debated, and which the Aspect experiments indicate is in fact false) then QM would not be a complete theory. Maybe. But it would not indicate that QM is "wrong". That could never happen, any more than you might consider Newton's gravitational laws "wrong".

On the other hand, the reason some people are so emotional about the Aspect experiments is this: once all "objections" are dealt with, all LHV theories must be excluded from consideration. They would be rendered totally untenable, essentially "wrong". So the issue has different stakes depending on which side you are on. Aspect must be getting rather tired of hearing that his experiments have shown nothing.

At any rate, I can agree that all voices are not in agreement on the interpretation of the results at this time. The most common conclusion I have heard is that locality has been violated, although that is not a strict conclusion from the results. And some, such as yourself, are not comfortable with the experimental procedure. Fine, perhaps there is a flaw. I don't see the angle of attack, but perhaps it is there.

19. Jan 27, 2004

### nightlight

Re: Re: Re: Re: Re: epr without pool balls

Bell's Inequality has little or nothing to do with testing the predictions of quantum theory, although Aspect's experiments do confirm the predictions of QM as a by-product.
The Bell Inequality requires only the essential beliefs in local reality and follows classical reasoning. If you accept that the two emitted photons carry a crossed polarization, the inequality can be deduced.

Of course it has to do with testing the predictions of quantum theory -- the whole point was to produce a prediction of QM which no local deterministic theory would be able to match. The QM prediction asserted by Bell was that QM would violate an inequality that no local deterministic theory could violate. The whole exercise would have been pointless without the QM prediction falling on the opposite side of the Bell inequality from any LHV theory.

Quantum mechanics does not assume that the photons have definite polarization independent of their measurement.

That (the assumption of the lack of definite polarization) by itself doesn't imply violation of the Bell inequality. What does imply the violation is the projection postulate, part of the QM measurement theory, when applied to the entangled state.

Classical reasoning requires this, and that is what leads to the Inequality,

That alone, without also deducing a QM prediction which will violate the inequality, would be pointless.

On the other hand, the reason some people are so emotional about the Aspect experiments is this: once all "objections" are dealt with, all LHV theories must be excluded from consideration.

Emotions have nothing to do with experimental facts. If you study this subject beyond the popular literature and hype, you can find out for yourself which class of LHV theories was excluded by the experimental data and which were excluded upfront (as not being the objective of the experiments). The status is as stated in my earlier posts (or as Santos states). If you find that I have misclassified them (as described in previous posts), I would be glad to see the correction here.

The most common conclusion I have heard is that locality has been violated, although that is not a strict conclusion from the results.
And some, such as yourself, are not comfortable with the experimental procedure.

Again, this is not a discussion of your or my inner "comfort". It is a simple, straightforward question as to what has been excluded by the experimental data and what was taken out of consideration upfront. The plain fact, known to everyone in the field (since the mid-1970s, although not emphasized equally by everyone), is that the "fair sampling" constraint admits only LHV theories in which the local variables determining the detection probabilities are independent of the variables determining the +/- outcome. It just happens that no such theories were ever constructed, and that the actual LHV alternatives/extensions of QM (which can make predictions) do not satisfy the "fair sampling" constraint, and their predictions agree with the experimental data.

You seem to be confusing the LHVs excluded by the experiments with those excluded by Bell's inequality -- indeed all LHVs are "excluded" by the Bell inequality, i.e. all LHVs satisfy the inequality. The only problem is that what Bell claimed to be a QM prediction violating the inequality (deduced via the projection postulate and measurement "theory") has not panned out in the experiments -- no experimental data has violated the inequality despite over three decades of trying. Only the data filtered through the additional ad hoc assumptions (always "fair sampling", and often some others), which are outside QM and are untested on their own, violate the inequalities.

The point I brought up in this thread (along with Santos, Marshall, Barut, Jaynes, Khrennikov... and other skeptics) is that if one looks closer at the experiments and the "fair sampling" assumption, it turns out that all the actual LHV alternatives (those actually constructed and developed, the theories making concrete predictions) are excluded by the "fair sampling" hypothesis all by itself, before any laser was turned on and before any detector counted a single count.
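The claim that every local deterministic model satisfies the Bell (CHSH) inequality can be checked exhaustively for two settings per side. The sketch below is my own illustration, not from the thread's references; it enumerates all deterministic local strategies and verifies the CHSH combination never exceeds 2:

```python
from itertools import product

# A deterministic local strategy assigns +/-1 outcomes in advance:
# A0, A1 for Alice's two settings and B0, B1 for Bob's two settings.
best = 0.0
for A0, A1, B0, B1 in product([-1, 1], repeat=4):
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1  # CHSH combination
    best = max(best, abs(S))

assert best == 2  # no deterministic local strategy exceeds 2
```

Since any stochastic LHV model is a probabilistic mixture of these deterministic strategies, the bound |S| <= 2 extends to all of them by convexity; quantum mechanics predicts up to 2*sqrt(2) for entangled photons, and which side of the bound the raw data falls on is precisely what this thread disputes.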
If you wish to draw some other line between the LHVs excluded and those not excluded by the actual data, please go ahead (without the mixup between the QM prediction asserted by Bell and the actual experimental data). Explain what kind of LHVs the "fair sampling" hypothesis excludes all by itself. Let's hear your version, and how your separation line shows that the experimental data (and not the "fair sampling" hypothesis) exclude the "pool ball logic" which started this thread.

Last edited: Jan 28, 2004

20. Jan 28, 2004

### DrChinese

Nightlight: QM does not violate Bell's Inequality because the Inequality does not apply. QM makes predictions for actual experiments of photon beams with 2 polarizers. The QM prediction for a photon beam passing through both polarizers is a function only of the angle between the polarizers. The same formula applies whether you are talking about photons in an entangled state, such as the Aspect experiment measures, or a single beam passing through consecutive polarizers. In fact, the formula is the same in classical optics too, but only when light is treated like a wave.

The problem from a LHV perspective is that the beam is postulated to have a) an orientation which exists independently of the measurement apparatus, which was b) determined at the time the photon was created. These 2 conditions are too severe to survive. You don't need the Aspect setup to see that something is wrong with that anyway. It follows from experiments anyone can do with 2, 3 and more polarizers in a single beam too. I will explain in a separate post.

The Aspect experiments are simply the logical extension of the measurement-process issues which were quickly evident as QM was being formulated, a la the double slit experiment. Clearly reality does not act as it does in the classical world, and I don't understand why this point is a topic of debate. Next you will be telling me that the double slit experiment does not prove anything, either.
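The consecutive-polarizer observation mentioned above can be illustrated numerically. Using the standard cos^2 transmission rule (Malus's law, read as a single-photon passage probability), inserting a 45-degree polarizer between two crossed ones restores transmission, which is hard to square with a fixed, pre-existing orientation:

```python
import math

def pass_prob(relative_angle_deg):
    """Probability a photon passes a polarizer at the given angle
    relative to its current polarization (cos^2 rule)."""
    return math.cos(math.radians(relative_angle_deg)) ** 2

# Photon beam prepared with polarization at 0 degrees.
# Two crossed polarizers (0 then 90 degrees): essentially nothing passes.
crossed = pass_prob(0) * pass_prob(90)

# Insert a third polarizer at 45 degrees between them. After each
# polarizer the photon's polarization is reset to that polarizer's axis,
# so each remaining step sees a 45-degree relative angle.
with_middle = pass_prob(0) * pass_prob(45) * pass_prob(45)  # -> 0.25
```

A quarter of the photons now reach the far side of what was an opaque pair of crossed filters, purely because a measurement was inserted in between.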
The fact is that, any way you cut it, the Heisenberg uncertainty relations apply and there is no observable deeper level of local reality.
https://www.investopedia.com/articles/fundamental/04/021104.asp
How do you know when a company is at risk of corporate collapse? To detect any signs of looming bankruptcy, investors calculate and analyze all kinds of financial ratios: working capital, profitability, debt levels, and liquidity. The trouble is, each ratio is unique and tells a different story about a firm's financial health. At times they can even appear to contradict each other. Having to rely on a bunch of individual ratios, the investor may find it confusing and difficult to know when a stock is going to the wall.

In a bid to resolve this conundrum, New York University professor Edward Altman introduced the Z-score formula in the late 1960s. Rather than search for a single best ratio, Altman built a model that distills five key performance ratios into a single score. As it turns out, the Z-score gives investors a pretty good snapshot of corporate financial health.

Z-Score Formula

The Z-score formula for manufacturing firms is built out of five weighted financial ratios:

\begin{aligned} &\text{Z-Score} = (1.2 \times A) + (1.4 \times B) + (3.3 \times C) + (0.6 \times D) + (1.0 \times E)\\ &\textbf{where:}\\ &A = \text{Working Capital} \div \text{Total Assets} \\ &B = \text{Retained Earnings} \div \text{Total Assets} \\ &C = \text{Earnings Before Interest \& Tax} \div \text{Total Assets} \\ &D = \text{Market Value of Equity} \div \text{Total Liabilities} \\ &E = \text{Sales} \div \text{Total Assets} \\ \end{aligned}

The lower the score, the higher the odds that a company is heading for bankruptcy. A Z-score lower than 1.8, in particular, indicates that the company is on its way to bankruptcy. Companies with scores above 3 are unlikely to enter bankruptcy. Scores between 1.8 and 3 define a gray area.

The Z-Score Explained

It's helpful to examine why these particular ratios are part of the Z-score. Why is each significant?
Working Capital/Total Assets (WC/TA). This ratio is a good test for corporate distress. A firm with negative working capital is likely to experience problems meeting its short-term obligations because there are simply not enough current assets to cover them. By contrast, a firm with significantly positive working capital rarely has trouble paying its bills.

Retained Earnings/Total Assets (RE/TA). This ratio measures the amount of reinvested earnings or losses, which reflects the extent of the company's leverage. Companies with low RE/TA are financing capital expenditure through borrowings rather than through retained earnings; a high RE/TA suggests a history of profitability and the ability to absorb a bad year of losses.

Earnings Before Interest and Tax/Total Assets (EBIT/TA). A version of return on assets (ROA), this is an effective way of assessing a firm's ability to squeeze profits from its assets before deducting interest and tax.

Market Value of Equity/Total Liabilities (ME/TL). This ratio shows how far a company's market value could decline before liabilities exceeded assets on the financial statements. It adds a market-value dimension to the model that isn't based on pure fundamentals: a durable market capitalization can be interpreted as the market's confidence in the company's solid financial position.

Sales/Total Assets (S/TA). This tells investors how well management handles competition and how efficiently the firm uses assets to generate sales. Failure to grow market share translates into a low or falling S/TA.

WorldCom Test

To demonstrate the power of the Z-score, consider a tricky test case: the infamous collapse of telecommunications giant WorldCom in 2002.
WorldCom's bankruptcy created $100 billion in losses for its investors after management falsely recorded billions of dollars as capital expenditures rather than operating costs. Calculate Z-scores for WorldCom using the annual 10-K financial reports for the years ending December 31, 1999, 2000 and 2001. You'll find that WorldCom's Z-score suffered a sharp fall, moving from the gray area into the danger zone in 2000 and 2001, before the company declared bankruptcy in 2002. But WorldCom management cooked the books, inflating the company's earnings and assets in the financial statements. What impact do these shenanigans have on the Z-score? Overstated earnings would increase the EBIT/Total Assets ratio, but overstated assets would shrink three of the other ratios, which have total assets in the denominator. So the overall impact of the false accounting on the company's Z-score is likely to be downward.
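As a sketch, the weighted formula above translates directly into code; the firm's figures below are invented purely for illustration:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Altman Z-score for a public manufacturing firm (original weights)."""
    a = working_capital / total_assets
    b = retained_earnings / total_assets
    c = ebit / total_assets
    d = market_value_equity / total_liabilities
    e = sales / total_assets
    return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e

# Hypothetical firm (figures in $ millions, purely illustrative):
z = altman_z(working_capital=50, retained_earnings=200, ebit=80,
             market_value_equity=600, sales=900,
             total_assets=1000, total_liabilities=400)
print(round(z, 2))  # 2.4 -> inside the 1.8-3.0 gray area
```

Running the same function on a company's reported 10-K figures for successive years reproduces the kind of trend analysis described for WorldCom.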
http://mathhelpforum.com/number-theory/198286-fermats-little-theorem-lemma-print.html
# Fermat's little theorem (lemma)

• May 3rd 2012, 03:24 AM froodles01
I'm doing Number Theory; I've completed multiplication, division etc., but have now moved on to Fermat's little theorem, and I'm having a bit of trouble seeing how to go about a couple of examples. Generally $a^{p-1} \equiv 1 \pmod p$, where p is a prime number and a is an integer not divisible by p.
i) $3^{18}$ divided by 19: here p = 19 and a = 3, so $3^{18} \equiv 1 \pmod{19}$. Hence the remainder is 1. Fine.
ii) $3^{55}$ divided by 19: but 55 isn't a prime, so how do I do this, please?

• May 3rd 2012, 04:09 AM a tutor
$3^{55}=3(3^{18})^3$

• May 3rd 2012, 04:44 AM froodles01
Yes, the example solution tells me this too, and goes on two steps further: $= 3 \times 1^3 = 3 \pmod{19}$. Why is it $3 \times (3^{18})^3$? The $3\times$ has me confused, sorry.

• May 3rd 2012, 05:23 AM a tutor
I'm not sure what you are confused about. Can you clarify? Can you see that $3^{55}=3(3^{18})^3$?

• May 3rd 2012, 05:47 AM Deveno
The prime number p, in this case, is 19 (what goes in the modulus). The fact that 55 isn't prime isn't relevant. What we DO know is that $3^{18} \equiv 1 \pmod{19}$: this is FLT with p = 19, a = 3, and p - 1 = 18. THEN we use the division algorithm to write 55 = 18q + r (divide 55 by 18 and compute the remainder). Since 18(3) = 54, we get q = 3 and r = 1: 55 = 18(3) + 1. Therefore:

$3^{55} \equiv 3^{18(3) + 1} \equiv (3^{18(3)})(3^1) \equiv (3^{18})^3 (3) \pmod{19}$ (from the rules of exponents)

$\equiv (3^{18} \bmod 19)^3 (3 \bmod 19) \equiv (1)^3(3) \equiv 3 \pmod{19}$ (the usual rules of multiplication still hold mod 19).

• May 23rd 2012, 02:01 AM froodles01
Thanks for a brilliant explanation! However, I have another similar question I don't seem to be able to finish. Could someone help with this please?
$43^{43}$ divided by 17. FLT is $a^{p-1} \equiv 1 \pmod p$, and I know that $43 = 2 \cdot 16 + 11$. I have used the steps in the above explanation:

$43^{43} = 2^{16(2)+11} \pmod{17} = (2^{16(2)})(2^{11}) \pmod{17} = (2^{16})^2 (2)^{11} \pmod{17} = (2^{16} \bmod 17)^2 (2 \bmod 17)^{11}$

Er... not quite sure what to do next.

• May 23rd 2012, 05:50 AM Deveno
That's not the method I wrote. We have, first of all, $43 \equiv 9 \pmod{17}$ (we reduce "a" mod p), so $43^{43} \equiv 9^{43} \pmod{17}$. Now 43 = (16)(2) + 11, so

$9^{43} = 9^{(16)(2)+11} = (9^{16})^2(9^{11}) \equiv (1)^2(9^{11}) = 9^{11} \pmod{17}$ <--- FLT used HERE (with a = 9 and p - 1 = 16).

Note we've done two things here: we've reduced the number being exponentiated from 43 to 9, and we've reduced the exponent from 43 to 11. From here, we have to do things "the hard way":

$9^2 = 81 \equiv 13 \pmod{17}$
$9^4 = (9^2)^2 = 13^2 = 169 \equiv 16 \equiv -1 \pmod{17}$
$9^8 = (9^4)^2 \equiv 1 \pmod{17}$

Therefore $9^{11} = 9^{8+3} = (9^8)(9^3) \equiv (1)(9^3) = 9^3 \pmod{17}$. Finally, $9^3 = (9^2)(9) \equiv (13)(9) = 117 \equiv 15 \pmod{17}$. Hence $43^{43} \equiv 15 \pmod{17}$.
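Both worked examples can be verified directly with Python's three-argument pow, which performs modular exponentiation:

```python
# Direct modular exponentiation confirms both worked examples.
print(pow(3, 55, 19))   # 3
print(pow(43, 43, 17))  # 15

# The intermediate reductions used in the thread:
assert pow(3, 18, 19) == 1    # Fermat: 3^(19-1) == 1 (mod 19)
assert 43 % 17 == 9           # reduce the base first
assert pow(9, 16, 17) == 1    # Fermat: 9^(17-1) == 1 (mod 17)
assert pow(9, 11, 17) == 15   # the "hard way" computation
```

The hand reductions (shrink the base mod p, then shrink the exponent mod p - 1) are exactly what fast modular exponentiation exploits.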
http://math.stackexchange.com/questions/91253/solving-for-the-smallest-x-1-2-cdots-20-equiv-x-pmod-7
# Solving for the smallest $x$: $1! + 2! + \cdots + 20! \equiv x \pmod 7$

I know the smallest $x \in \mathbb{N}$ satisfying $1! + 2! + \cdots + 20! \equiv x \pmod 7$ is $5$. I would like to know methods to get to the answer.

One natural question, I think, is to find the remainder when $1! + 2! + \cdots + (n-1)!$ is divided by $n$. This is oeis.org/A067462. The problem of when this is zero is apparently of at least minor interest, according to Guy, Unsolved Problems in Number Theory: oeis.org/A057245 – Michael Lugo Dec 14 '11 at 1:49

## 1 Answer

Note that each of $7!$, $8!$, $9!,\ldots, 20!$ is congruent to $0$ modulo $7$, since they are all divisible by $7$. Note that $6!\equiv -1\pmod{7}$ by Wilson's Theorem, which cancels $1!$. That leaves $2!+3!+4!+5! = 2! + 3!(1 + 4 + 20)$. But $3!\equiv -1\pmod{7}$, and $20\equiv -1\pmod{7}$, so $2!+3!+4!+5! \equiv 2-(1+4-1) = -2\equiv 5\pmod{7}$.

It is implicit in Arturo's answer, but it is interesting to note that $1! + 2! + \cdots + n! \equiv 5$ for any $n \geq 7$. – Austin Mohr Dec 13 '11 at 22:11

+1. The problem could be solved without Wilson's theorem, but it is always handy to know and use these tools. – Quixotic Dec 13 '11 at 22:23
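A brute-force check is immediate, reducing each factorial mod 7 as it is built up:

```python
total = 0
fact = 1
for n in range(1, 21):
    fact = fact * n % 7          # n! reduced mod 7; stays 0 from n = 7 on
    total = (total + fact) % 7
print(total)  # 5
```

Once fact hits 0 at n = 7 it stays 0, which is the computational shadow of the answer's observation that the terms from 7! onward contribute nothing.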
https://arxiv.org/abs/0910.1181
# Nonsingular Dirac particles in spacetime with torsion

Abstract: We use the Papapetrou method of multipole expansion to show that a Dirac field in the Einstein-Cartan-Kibble-Sciama (ECKS) theory of gravity cannot form singular configurations concentrated on one- or two-dimensional surfaces in spacetime. Instead, such a field describes a nonsingular particle whose spatial dimension is at least on the order of its Cartan radius. In particular, torsion modifies Burinskii's model of the Dirac electron as a Kerr-Newman singular ring of the Compton size, by replacing the ring with a toroidal structure with the outer radius of the Compton size and the inner radius of the Cartan size. We conjecture that torsion produced by spin prevents the formation of singularities from matter composed of quarks and leptons. We expect that the Cartan radius of an electron, ~10^{-27} m, introduces an effective ultraviolet cutoff in quantum field theory for fermions in the ECKS spacetime. We also estimate a maximum density of matter to be on the order of the corresponding Cartan density, ~10^{51} kg m^{-3}, which gives a lower limit for black-hole masses ~10^{16} kg. This limit corresponds to energy ~10^{43} GeV, which is 39 orders of magnitude larger than the maximum beam energy currently available at the LHC. Thus, if torsion exists and the ECKS theory of gravity is correct, the LHC cannot produce micro black holes.

Comments: 8 pages; published version
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Theory (hep-th)
Journal reference: Phys.Lett.B690:73-77,2010; Erratum-ibid.B727:575,2013
DOI: 10.1016/j.physletb.2010.04.073, 10.1016/j.physletb.2013.11.005
Cite as: arXiv:0910.1181 [gr-qc] (or arXiv:0910.1181v2 [gr-qc] for this version)

## Submission history
From: Nikodem Poplawski
[v1] Wed, 7 Oct 2009 08:12:03 GMT (4kb)
[v2] Fri, 29 Oct 2010 14:59:41 GMT (11kb)
http://openstudy.com/updates/4e349f420b8ba7b2da422d20
## How do you find the inverse of h(x) = x + x^0.5?

TransendentialPI: How do you find the inverse of $h(x) = x + x^{0.5}$? I understand how to switch x and y, but how can we isolate the y?

1. malevolence19: The inverse is: $\frac{1}{2}(2y+1)\pm \frac{1}{2}\sqrt{4y+1}$ (the minus sign gives the valid branch).

2. TransendentialPI: Thanks. Did you use software to get this? I need to do this by hand. Looking at the derivative, h(x) appears to be one-to-one. I'll keep looking, thanks.

3. TransendentialPI: I was helping someone with this exercise. They left out the fact that we were looking for $h^{-1}(6)$ given $h(x)=x+\sqrt x$. Here is what I came up with. This is asking: when does h(x) = 6? From $6 = x + x^{0.5}$ we get $0 = x + x^{0.5} - 6$, which factors as $0 = (x^{0.5} + 3)(x^{0.5} - 2)$, giving candidates x = 9 and x = 4. Checking in the original equation and by looking at the graph of h, we see the only place h(x) = 6 is at x = 4 (since $\sqrt x = -3$ has no solution). So $h^{-1}(6)=4$. Maybe this will help someone some time.
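A quick numerical check of the worked answer (a sketch; the bracket [0, 10] and the iteration count are arbitrary choices):

```python
from math import sqrt

def h(x):
    return x + sqrt(x)

# h is strictly increasing on [0, inf), so h(x) = 6 has a unique root.
assert h(4) == 6

# Bisection double-check on an arbitrary bracket [0, 10]:
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if h(mid) < 6:
        lo = mid
    else:
        hi = mid
print(round(lo, 6))  # 4.0
```

Because h is monotone on its domain, bisection against any bracketing interval converges to the same root the factoring produced.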
https://www.physicsforums.com/threads/tangential-speed.288466/
# Tangential speed

1. Jan 29, 2009
### gigglin_horse
1. The problem statement, all variables and given/known data
I thought this would be an easy question for me, but I can't figure it out. "What is the tangential speed of a passenger on a Ferris wheel that has a radius of 10 meters and rotates once every 30 seconds?"
2. Relevant equations
Tangential speed = rotational speed x radial distance
3. The attempt at a solution
Tangential speed = 2 RPM x 10 meters = 20......what units? Not m/sec, not RPM.....
...But... I figured the circumference is 62.83 meters, and 62.83 meters x 2 RPM = 125 meters per minute.

2. Jan 29, 2009
### Hannisch
With the information that it's 2 RPM, you can very easily find the angular velocity, ω. I'd use that as a starting point to finding the tangential velocity.

3. Jan 29, 2009
### gigglin_horse
That doesn't help... thanks though
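The hint about ω can be made concrete. A sketch of the period-to-angular-speed-to-tangential-speed route, using the values from the problem statement:

```python
import math

r = 10.0   # radius in meters
T = 30.0   # period in seconds (one revolution every 30 s)

omega = 2 * math.pi / T   # angular speed in rad/s
v = omega * r             # tangential speed in m/s

print(round(omega, 3))  # 0.209
print(round(v, 2))      # 2.09
```

This agrees with the circumference approach in the attempt: 125.66 meters per minute divided by 60 is about 2.09 m/s. The unit confusion in "2 RPM x 10 meters" disappears once revolutions per minute are converted to radians per second.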
http://math.stackexchange.com/questions/74676/axiom-of-choice-non-measurable-sets-countable-unions
# Axiom of choice, non-measurable sets, countable unions

I have been looking through several MathOverflow posts, especially these: http://mathoverflow.net/questions/32720/non-borel-sets-without-axiom-of-choice and http://mathoverflow.net/questions/73902/axiom-of-choice-and-non-measurable-set, and there are still many questions I would like to ask:

1) According to the first answer of the first post, "It is consistent with ZF without choice that the reals are the countable union of countable sets" (and therefore all sets are Borel, and hence measurable). However, this seems in contrast with the answer to the second post, which states that "the existence of a non-Lebesgue measurable set does not imply the axiom of choice" (and therefore it is possible to construct a ZF model without choice where there exists a non-Lebesgue-measurable set). How can both of these statements be right?

2) I can't understand why the axiom of (countable) choice is necessary to prove that a countable union of countable sets is countable. By saying that the sets are countable, I have already assumed the existence of a bijection from every set to the set of natural numbers; in other words, I have indexed the elements of each set. So what is the problem in choosing elements from each set? This relates to the above topic in that if AC weren't necessary to prove that a countable union of countable sets is countable, then "It is consistent with ZF without choice that the reals are the countable union of countable sets" could no longer be correct, since this would imply that in ZF without choice the reals are countable.

I am only a third-year math student with no background in set theory (only naive), so please excuse the ignorance. I hope someone can answer me, thank you!

I'm glad to see your question worked out and you got some wonderful answers here. For this kind of question, math.SE is better suited than MO. – Arturo Magidin Oct 21 '11 at 23:15

Yes, and thank you for redirecting me!
I suspect I will be using this website a lot in the future... – Emilio Ferrucci Oct 22 '11 at 8:48

For the first question, let us consider the following statement: $x\in\mathbb R$ and $x\ge 0$. It is consistent with this statement that:

1. $x=0$,
2. $x=1$,
3. $x>4301$,
4. $x\in (2345235,45237911+\frac{1}{2345235})$

This list can go on indefinitely. Of course, if $x=0$ then none of the other options are possible. However, if we say that $x>4301$ then the fourth option is still possible. The same holds here. If all sets are measurable, that contradicts the axiom of choice; however, the fact that some set is non-measurable does not imply the axiom of choice, since it is possible to contradict the axiom of choice in other ways. It is perfectly possible that the universe of set theory behaves "as if it has the axiom of choice" up to some rank so far beyond the real numbers that everything you can think of about real numbers is as though the axiom of choice holds; however, in the larger universe itself there are sets which you cannot well order. Things do not end after the continuum.

That being said, of course the two statements "$\mathbb R$ is a countable union of countable sets" and "there are non-measurable sets" are incompatible. However, this is the meaning of consistent relative to ZF: each of them can hold alongside the rest of the axioms of ZF without adding contradictions (insofar as we do not know that ZF itself is contradiction-free to begin with).

As for the second question, of course each set is countable and thus has a bijection with $\mathbb N$. From this, the union of finitely many countable sets is also countable. However, in order to say that the union of countably many countable sets is countable, one must fix a bijection of each set with $\mathbb N$ simultaneously. This is exactly where the axiom of choice comes into play. There are models in which a countable union of pairs is not only not countable, but in fact has no countable subset whatsoever!
Assuming the axiom of countable choice we can do the following. Let $\{A_i\mid i\in\mathbb N\}$ be a countable family of disjoint countable sets. For each $i$ let $F_i$ be the set of injections of $A_i$ into $\mathbb N$. Since we can choose from a countable family, let $f_i\in F_i$. Now define $f\colon\bigcup A_i\to\mathbb N\times\mathbb N$ by $f(a)= (i, f_i(a))$; this is well defined as there is a unique $i$ such that $a\in A_i$. From Cantor's pairing function we know that $\mathbb N\times\mathbb N$ is countable, and so we are done.

So the axiom of choice is necessary to choose a bijection from each set to $\mathbb N$. I might be able to prove particular cases without using AC (by defining the bijections explicitly), but in general I have no such guarantee, correct? And do I need the axiom of choice to prove that a countable union of finite sets is countable (for the same reason)? Where can I find a proof that in ZF without choice the set of real numbers can be a countable union of countable sets? (Perhaps a well-known textbook which I can find in my university library.) Thank you very much for your answer! – Emilio Ferrucci Oct 21 '11 at 21:41

@Emilio: Yes, it is necessary to fix a bijection. However, in some cases you can have the bijection defined explicitly. For example $\bigcup\{n\}\times\mathbb N$ is of course countable. As for finite sets the situation is similar, although you could reduce the axiom of countable choice slightly (choose from finite sets); but as the example I mentioned (union of pairs) shows, this is still not provable. Lastly, the proof is not from ZF, but rather that it is possible to have this. The proof is not very accessible without some extensive preliminaries in set theory such as forcing. – Asaf Karagila Oct 21 '11 at 21:47

Ok, things are definitely clearer now. Looks like it's back to measure theory for me! Thanks again for everyone's answers. – Emilio Ferrucci Oct 21 '11 at 21:53

@Emilio: No problem.
You should thank the other answerers in comments on their own answers, though. :-) – Asaf Karagila Oct 21 '11 at 22:00

For Question $1$, the assertion "the existence of a non-Lebesgue measurable set does not imply the axiom of choice" means that we cannot prove the full Axiom of Choice from the existence of a non-Lebesgue measurable set. Similarly, we cannot prove the full Axiom of Choice from the assumption of Countable Choice. But we cannot prove Countable Choice in ZF (here I should insert "if ZF is consistent," but won't bother). There are many assertions that cannot be proved in ZF, can be proved in ZFC, but do not imply the full Axiom of Choice; that is, they are strictly weaker than the full Axiom of Choice. The existence of a non-Lebesgue measurable set is just one of them. So you can think of an axiom that asserts the existence of a non-measurable set as intermediate in strength between making no "choice" assumptions at all and asserting the full Axiom of Choice.

Added: The following Wikipedia article has a nice list of assertions that are equivalent to the Axiom of Choice, and also a nice list of assertions that cannot be proved in ZF, can be proved in ZFC, but do not imply the Axiom of Choice.

(1) The statement "It is consistent with ZF without choice that the reals are the countable union of countable sets" does not mean that in ZF without choice all subsets of $\mathbb R$ must be measurable. It just says that in that case "all sets are measurable" is one possibility, possibly among many. Therefore it does not conflict with "the existence of a non-Lebesgue measurable set does not imply the axiom of choice". Taken together, these two statements just mean that in ZF without choice there can either be non-measurable sets or not be any non-measurable sets: both possibilities are consistent.

(2) If you have a countable family of countable sets, all you know is that for each set in the family there exist one or more bijections between that set and the natural numbers.
As long as you're only looking at one of them, you can just choose one of these bijections. However, if you want to prove that the union of the family is countable, you need to choose a particular bijection for each of the sets simultaneously, and you need (countable) choice to do that.

The following sci.math post by Abhijit Dasgupta deals with something I had read, which claimed that every subset of $\mathbb R$ is $F_{\sigma \delta \sigma}$ in the Feferman-Levy model: groups.google.com/group/sci.math/msg/1c76fa715eda2302 – Dave L. Renfro Oct 21 '11 at 21:49

Perhaps so, but that is not the only model of ZF$\neg$C. (It cannot be; thanks to Gödel and Rosser, every consistent extension of ZF will have an infinity of essentially different models.) – Henning Makholm Oct 21 '11 at 21:56
http://www.mapleprimes.com/users/Hidious/questions
# Hidious

These are questions asked by Hidious.

### Maple objects display precedence in graph? (April 17, 2011, Maple 14)

Hello all, this is my first time posting here; I usually find answers to all my questions by searching patiently, but I was unsuccessful with this particular issue. I attend an introductory Maple class and I'm building an animated bike as a project. I was already rather comfortable with Maple, so I'm trying to show off a little by adding lots of cheesy details to the scene, even if unnecessary. However, some objects such as the sky, which I would like to...
http://physics.stackexchange.com/questions/61510/solve-the-angular-part-of-schrodinger-equation-numerically
# Solve the angular part of the Schrödinger equation numerically

I would like to solve the angular part (the one for what is usually called the $\theta$ angle) of a time-independent 3D Schrödinger equation,

$$\frac{\mathrm{d}}{\mathrm{d}x}\left[ (1-x^2) \frac{\mathrm{d}P(x)}{\mathrm{d}x} \right]+\left[ l(l+1) - \frac{m^2}{1-x^2} \right]P(x) = 0,$$

where $l=0,1,2,\ldots$ and $m = -l, -l+1, \ldots, l$ as usual, and $x\in[-1,1]$. Now, the complication is that I want to do it numerically. Analytically, one gets a bunch of Legendre polynomials and spherical harmonics. However, it is unclear to me which boundary conditions I should set. One boundary condition will probably be equivalent to the normalization of my solutions. In order to make it compatible with the Legendre polynomials, I can set $$P(1) = 1.$$ However, what about the second one (it is a second-order ODE, after all)? I guess it should somehow encode the fact that my solutions should be bounded. Any comments, including sending me to RTFM (with appropriate links), are more than welcome!

The other solutions are associated Legendre functions of the second kind, which blow up at $x=\pm 1$. – Michael Brown Apr 18 '13 at 14:31

In order to help you, it would be good to know which method you are using to solve this numerically. Maybe this question would be better placed in Computational Science SE. – lomppi Apr 18 '13 at 15:23

Michael Brown: I would consider these solutions unphysical and avoid them. Also, they are excluded by the P(1)=1 condition. sebastian: at this moment I am trying to formulate the problem, because it is not possible to feed it to any algorithm (method) as it is. Therefore, in my opinion, this is a Physics question and not a CS one. – ffc Apr 18 '13 at 16:02

OK, but you should at least give a clue whether you want to take a finite-difference approach or something else; otherwise it will be difficult to help you. I did some googling and found this link. Maybe it helps.
–  lomppi Apr 18 '13 at 17:30 What physical problem are you trying to solve - free particle? hydrogen atom? spherical infinite well? spherical annular well? Boundary conditions depend on the problem, not the ODE. –  Chris White Apr 19 '13 at 1:41 What you are doing is an eigenvalue problem. Eigenvectors are determined by the space you are looking at, and this is why you usually specify some boundary conditions. In your case just the requirement of absence of singularities should do the job (i.e. you want some subspace of $C[-1,1]$). This is the analytical viewpoint. The numerical viewpoint actually depends on your algorithm. First of all, if you really want "to solve the equation numerically", I assume that you are playing the game of not knowing the answer. So you do not actually know that $l$ is integer beforehand. If I were solving the problem, I would put it on a lattice and then write it as a finite-dimensional eigenvalue problem. In deriving the finite difference equations I would use the fact that my solution should be finite at the endpoints of the interval. A way to do this is to introduce homogenious lattice at points (lets call them so) 0,1,2,3,4... Then integrate the equation from $i-1/2$ to $i+1/2$ and use middle-difference fromulae for derivatives (you will need their values at $i\pm1/2$, so the middle difference will return you back to your lattice) and middle-rectangles formula for integrals (it is important to use approximations of the proper order. I believe that I am telling you an algorithm of second-order presicion). Then you will have to do something with the endpoints. For them do the integration from $0$ to $1/2$ and respectively on the other end. In doing so you will use the fact that $(1-x^2)\frac{dP}{dx}$ is $0$ at the endpoints. And this picks up the appropriate space for your solutions. 
Long story made short, I believe that at least for some calculational schemes the conditions should be that $(1-x^2)\frac{dP}{dx}$ vanishes at the endpoints. - This is a good answer, so I will accept it - thank you! However, I have found out that it is possible to circumvent the problem (which is somewhat equivalent to Peter's answer) by writing the equation in the original form, namely $$\frac{1}{\sin (\theta)} \frac{d}{d\theta} \sin \theta \frac{d}{d\theta} f(\theta) = b \theta,$$ where $f$ is the function to be found and the boundary conditions are $f'(\pi) = 0$ and normalization $f(\pi)=1$. Then the eigenvalue $b$ is such that the derivative at the left-hand side also vanishes: $f'(0)=0$. –  ffc May 6 '13 at 9:08 It should have been $$\frac{1}{\sin (\theta)} \frac{d}{d\theta} \sin \theta \frac{d}{d\theta} f(\theta) = b f(\theta)$$ in the previous comment, and I wrote up just the $m=0$ case. –  ffc May 6 '13 at 9:14 Since the coordinate is an angle, you should specify the periodic boundary conditions. -
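The finite-volume scheme described in the accepted answer can be sketched as follows for the $m=0$ case (a minimal illustration, not the asker's actual code; NumPy is assumed, and the grid size $N$ is an arbitrary choice). Integrating over half-cells at the endpoints builds the natural condition that $(1-x^2)\,dP/dx$ vanishes at $x=\pm 1$ directly into the matrix:

```python
# Minimal sketch (assumptions: m = 0, NumPy available, N = 400 cells).
# Discretize -d/dx[(1-x^2) dP/dx] = lam * P on [-1, 1] by integrating the
# equation over each cell [x_{i-1/2}, x_{i+1/2}], as described in the answer.
import numpy as np

N = 400
h = 2.0 / N
x = -1.0 + h * np.arange(N + 1)       # lattice points x_0, ..., x_N
w = 1.0 - (x[:-1] + h / 2.0) ** 2     # (1 - x^2) at the midpoints x_{i+1/2}

# Stiffness matrix: row i is (F_{i-1/2} - F_{i+1/2})/h with the flux
# F_{i+1/2} = w_{i+1/2} (P_{i+1} - P_i)/h; the boundary fluxes are zero.
main = np.empty(N + 1)
main[0], main[-1] = w[0], w[-1]
main[1:-1] = w[:-1] + w[1:]
A = (np.diag(main) - np.diag(w, 1) - np.diag(w, -1)) / h**2

# Cell-width weights: the two endpoint cells are only half as wide.
d = np.ones(N + 1)
d[0] = d[-1] = 0.5

# Generalized problem A P = lam * diag(d) P, symmetrized so eigvalsh applies.
lam = np.linalg.eigvalsh(A / np.sqrt(np.outer(d, d)))
print(lam[:4])   # approximately l(l+1) = 0, 2, 6, 12
```

Note that nothing in the discretization assumes $l$ is an integer: the quantization $\lambda = l(l+1)$ emerges from the eigenvalue problem itself, exactly as the answer suggests.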
http://soscholar.com/domain/detail?domain_id=b0f4acd3-0f99-d49b-c837-30c817eeb0c4
# Differential Algebra

In mathematics, differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with a derivation, which is a unary function that is linear and satisfies the Leibniz product rule. A natural example of a differential field is the field of rational functions C(t) in one variable over the complex numbers, where the derivation is differentiation with respect to t.

Main conferences/journals: CDC, ISSAC, PAC, BIT, CORR, AAECC, NA, JSC
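As a concrete illustration of the definition (a minimal sketch with hypothetical helper names, using the polynomial ring Q[t] in place of C(t)), the formal derivative D(t^n) = n t^(n-1) is a derivation: it is linear and satisfies the Leibniz product rule:

```python
# Sketch: the formal derivative on Q[t] is a derivation.
# Polynomials are {exponent: coefficient} dicts; mul/add/D are ad-hoc helpers.
from fractions import Fraction

def mul(p, q):
    # product of two polynomials
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return {k: v for k, v in r.items() if v != 0}

def add(p, q):
    # sum of two polynomials
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, 0) + v
    return {k: v for k, v in r.items() if v != 0}

def D(p):
    # formal derivative: D(t^n) = n t^(n-1); constants map to zero
    return {k - 1: k * v for k, v in p.items() if k > 0}

p = {3: 1, 0: Fraction(1, 2)}      # t^3 + 1/2
q = {2: 2, 1: -1}                  # 2t^2 - t

# Leibniz product rule: D(pq) = D(p) q + p D(q)
assert D(mul(p, q)) == add(mul(D(p), q), mul(p, D(q)))
# Linearity: D(p + q) = D(p) + D(q)
assert D(add(p, q)) == add(D(p), D(q))
```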
https://searxiv.org/search?author=Peng%20Sun
### Results for "Peng Sun"

Total 16554 results (took 0.12 s). Verbatim duplicate entries (identical title and abstract listed under several revision dates) are collapsed below.

- **On the Entropy of Flows with Reparametrized Gluing Orbit Property** (Sep 27 2018): We show that a flow or a semiflow with a weaker reparametrized form of gluing orbit property is either minimal or of positive topological entropy.
- **Ergodic properties of N-continued fractions** (Apr 26 2017; Apr 29 2018): We discuss some ergodic properties of the generalized Gauss transformation $$T_N(x)=\{\frac{N}{x}\}.$$ We generalize a series of results for the regular continued fractions, such as Khinchin's constant and L\'evy's constant.
- **A Generalization of Gauss-Kuzmin-Lévy Theorem** (May 08 2017; Nov 09 2017): We prove a generalized Gauss-Kuzmin-L\'evy theorem for the $p$-numerated generalized Gauss transformation $$T_p(x)=\{\frac{p}{x}\}.$$ In addition, we give an estimate for the constant that appears in the theorem.
- **A Generalization of the Gauss-Kuzmin-Wirsing constant** (Jun 14 2017): We generalize the result of Wirsing on the Gauss transformation to the generalized transformation $T_p(x)=\{\cfrac{p}{x}\}$ for any positive integer $p$. We give an estimate for the generalized Gauss-Kuzmin-Wirsing constant.
- **Zero-Entropy Dynamical Systems with Gluing Orbit Property** (Oct 21 2018; Apr 06 2019): We show that a dynamical system with gluing orbit property and zero topological entropy is equicontinuous, hence it is topologically conjugate to a minimal rotation. Moreover, we show that a dynamical system with gluing orbit property has zero topological ...
- **Measures of Intermediate Entropies for Skew Product Diffeomorphisms** (Jun 10 2009; Jan 18 2010): In this paper we study a skew product map $F$ with a measure $\mu$ of positive entropy. We show that if on the fibers the maps are $C^{1+\alpha}$ diffeomorphisms with nonzero Lyapunov exponents, then $F$ has ergodic measures of intermediate entropies. ...
- **Exponential Decay of Expansive Constants** (Jan 04 2011): A map $f$ on a compact metric space is expansive if and only if $f^n$ is expansive. We study the exponential rate of decay of the expansive constant of $f^n$. A major result is that this rate times box dimension bounds topological entropy.
- **Minimality and Gluing Orbit Property** (Aug 02 2018; Aug 21 2018): We show that a topological dynamical system is either minimal or has positive topological entropy. Moreover, for equicontinuous systems, we show that topological transitivity, minimality and orbit gluing property are equivalent. These facts reflect the ...
- **Exponential Decay of Lebesgue Numbers** (Feb 02 2010; Nov 20 2010): We study the exponential rate of decay of Lebesgue numbers of open covers in topological dynamical systems. We show that topological entropy is bounded by this rate multiplied by dimension. Some corollaries and examples are discussed.
- **Energy Evolution for the Sivers Asymmetries in Hard Processes** (Apr 18 2013; Aug 12 2013): We investigate the energy evolution of the azimuthal spin asymmetries in semi-inclusive hadron production in deep inelastic scattering (SIDIS) and Drell-Yan lepton pair production in pp collisions. The scale dependence is evaluated by applying an approximate ...
- **On the unsplittable minimal zero-sum sequences over finite cyclic groups of prime order** (Sep 06 2014): Let $p > 155$ be a prime and let $G$ be a cyclic group of order $p$. Let $S$ be a minimal zero-sum sequence with elements over $G$, i.e., the sum of elements in $S$ is zero, but no proper nontrivial subsequence of $S$ has sum zero. We call $S$ unsplittable ...
- **TMD Evolution: Matching SIDIS to Drell-Yan and W/Z Boson Production** (Aug 22 2013): We examine the QCD evolution for the transverse momentum dependent observables in hard processes of semi-inclusive hadron production in deep inelastic scattering and Drell-Yan lepton pair production in $pp$ collisions, including the spin-average cross ...
- **Decentralized Detection with Robust Information Privacy Protection** (Aug 30 2018; Feb 14 2019): We consider a decentralized detection network whose aim is to infer a public hypothesis of interest. However, the raw sensor observations also allow the fusion center to infer private hypotheses that we wish to protect. We consider the case where there ...
- **On the Relationship Between Inference and Data Privacy in Decentralized IoT Networks** (Nov 26 2018; Apr 01 2019; Apr 07 2019): In a decentralized Internet of Things (IoT) network, a fusion center receives information from multiple sensors to infer a public hypothesis of interest. To prevent the fusion center from abusing the sensor information, each sensor sanitizes its local ...
- **Hydrodynamics with conserved current via AdS/CFT correspondence in the Maxwell-Gauss-Bonnet gravity** (Mar 19 2011; Jun 05 2011): Using the AdS/CFT correspondence, we study the hydrodynamics with conserved current from the dual Maxwell-Gauss-Bonnet gravity. After constructing the perturbative solution to the first order based on the boosted black brane solution in the bulk Maxwell-Gauss-Bonnet ...
- **Gluon Distribution Functions and Higgs Boson Production at Moderate Transverse Momentum** (Sep 07 2011): We investigate the gluon distribution functions and their contributions to the Higgs boson production in pp collisions in the transverse momentum dependent factorization formalism. In addition to the usual azimuthal symmetric transverse momentum dependent ...
- **Soft Gluon Resummations in Dijet Azimuthal Angular Correlations at the Collider** (May 05 2014): We derive all order soft gluon resummation in dijet azimuthal angular correlation in hadronic collisions at the next-to-leading logarithmic level. The relevant coefficients for the Sudakov resummation factor, the soft and hard factors, are calculated. ...
- **The role of protection zone on species spreading governed by a reaction-diffusion model with strong Allee effect** (Nov 30 2018): It is known that a species dies out in the long run for small initial data if its evolution obeys a reaction of bistable nonlinearity. Such a phenomenon, which is termed the strong Allee effect, is well supported by numerous evidence from ecosystems, ...
- **Transverse Momentum Resummation for $s$-channel single top quark production at the LHC** (Nov 04 2018; Feb 15 2019): We study the soft gluon radiation effects for the $s$-channel single top quark production at the LHC. By applying the transverse momentum dependent factorization formalism, the large logarithms about the small total transverse momentum ($q_\perp$) of ...
- **AOSO-LogitBoost: Adaptive One-Vs-One LogitBoost for Multi-Class Problem** (Oct 18 2011; Jul 04 2012): This paper presents an improvement to model learning when using multi-class LogitBoost for classification. Motivated by the statistical view, LogitBoost can be seen as additive tree regression. Two important factors in this setting are: 1) coupled classifier ...
- **Scheme dependence and Transverse Momentum Distribution interpretation of Collins-Soper-Sterman resummation** (May 21 2015): Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions ...
- **NLO QCD Corrections to $B_c$-to-Charmonium Form Factors** (Mar 10 2011; Apr 15 2012): The $B_c(^1S_0)$ meson to S-wave charmonia transition form factors are calculated in next-to-leading order (NLO) accuracy of Quantum Chromodynamics (QCD). Our results indicate that the higher order corrections to these form factors are remarkable, and hence ...
- **Long Range Correlation in Higgs Boson Plus Two Jets Production at the LHC** (Apr 30 2016): We study Higgs boson plus two high energy jets production at the LHC in the kinematics where the two jets are well separated in rapidity. The partonic processes are dominated by the t-channel weak boson fusion (WBF) and gluon fusion (GF) contributions. ...
- **Quantum Transport in Topological Semimetals under Magnetic Fields (II)** (Dec 25 2018; May 04 2019): We review our recent works on the quantum transport, mainly in topological semimetals and also in topological insulators, organized according to the strength of the magnetic field. At weak magnetic fields, we explain the negative magnetoresistance in ...
- **Quantum Non-Magnetic state near Metal-Insulator Transition - a Possible Candidate of Spin Liquid State** (Dec 13 2008; Oct 09 2009): In this paper, based on the formulation of an O(3) non-linear sigma model, we study the two-dimensional Pi-flux Hubbard model at half-filling. A quantum non-magnetic insulator is explored near the metal-insulator transition that may be a possible candidate ...
- **Double refraction and spin splitter in a normal-hexagonal semiconductor junction** (Nov 17 2017): In analogy with light refraction at an optical boundary, ballistic electrons also undergo refraction when they propagate across a semiconductor junction. Establishing a negative refractive index in conventional optical materials is difficult, but the realization ...
- **Partitioning Well-Clustered Graphs: Spectral Clustering Works!** (Nov 07 2014; Jan 31 2017): In this paper we study variants of the widely used spectral clustering that partitions a graph into k clusters by (1) embedding the vertices of a graph into a low-dimensional space using the bottom eigenvectors of the Laplacian matrix, and (2) grouping ...
- **Kinematical correlations for Higgs boson plus High Pt Jet Production at Hadron Colliders** (Sep 14 2014): We investigate the effect of QCD resummation to kinematical correlations in the Higgs boson plus high transverse momentum (Pt) jet events produced at hadron colliders. We show that at the complete one-loop order, the Collins-Soper-Sterman resummation ...
- **Deep Reinforcement Learning Based Mode Selection and Resource Management for Green Fog Radio Access Networks** (Sep 15 2018): Fog radio access networks (F-RANs) are seen as potential architectures to support services of the internet of things by leveraging edge caching and edge computing. However, current works studying resource management in F-RANs mainly consider a static system ...
- **Two-qubit controlled-PHASE Rydberg blockade gate protocol for neutral atoms via off-resonant modulated driving within a single pulse** (Dec 10 2018): The neutral atom array serves as an ideal platform to study quantum logic gates, where intense efforts have been devoted to enhancing the two-qubit gate fidelity. We report our recent findings in constructing theoretically a different type of two-qubit ...
- **Anomalous Spin Dynamics of Hubbard Model on Honeycomb Lattices** (Nov 16 2009): In this paper, the honeycomb Hubbard model in optical lattices is investigated using the O(3) non-linear sigma model. A possible quantum non-magnetic insulator in a narrow parameter region is found near the metal-insulator transition. We study the corresponding ...
- **Globally Tuned Cascade Pose Regression via Back Propagation with Application in 2D Face Pose Estimation and Heart Segmentation in 3D CT Images** (Mar 30 2015): Recently, a successful pose estimation algorithm, called Cascade Pose Regression (CPR), was proposed in the literature. Trained over Pose Index Feature, CPR is a regressor ensemble that is similar to Boosting. In this paper we show how CPR can be represented ...
- **Speeding-up Age Estimation in Intelligent Demographics System via Network Optimization** (May 22 2018): Age estimation is a difficult task which requires the automatic detection and interpretation of facial features. Recently, Convolutional Neural Networks (CNNs) have made remarkable improvement on learning age patterns from benchmark datasets. However, ...
- **Transverse Momentum Resummation for Dijet Correlation in Hadronic Collisions** (Jun 19 2015): We study the transverse momentum resummation for dijet correlation in hadron collisions based on the Collins-Soper-Sterman formalism. The complete one-loop calculations are carried out in the collinear factorization framework for the differential cross ...
- **Heavy Quarkonium Production at Low Pt in NRQCD with Soft Gluon Resummation** (Oct 12 2012; Apr 16 2013): We extend the non-relativistic QCD (NRQCD) prediction for the production of heavy quarkonium with low transverse momentum in hadronic collisions by taking into account effects from all order soft gluon resummation. Following the Collins-Soper-Sterman ...
- **Towards Information Privacy for the Internet of Things** (Nov 14 2016; Jun 29 2017): In an Internet of Things network, multiple sensors send information to a fusion center for it to infer a public hypothesis of interest. However, the same sensor information may be used by the fusion center to make inferences of a private nature that the ...
- **CAPRL: Signal Recovery from Compressive Affine Phase Retrieval via Lifting** (Sep 11 2018; Sep 20 2018): In this paper, we consider compressive/sparse affine phase retrieval proposed in [B. Gao, Q. Sun, Y. Wang and Z. Xu, Adv. in Appl. Math., 93 (2018), 121-141]. By the lift technique, and heuristic nuclear norm for convex relaxation of rank and $\ell$ ...
- **Application of gradient descent algorithms based on geodesic distances** (Apr 05 2019): In this paper, the Riemannian gradient algorithm and the natural gradient algorithm are applied to solve descent direction problems on the manifold of positive definite Hermitian matrices, where the geodesic distance is considered as the cost function. ...
- **Testing Charmonium Production Mechanism via Polarized $J/\psi$ Pair Production at the LHC** (Mar 05 2009; Jun 13 2010): At present the color-octet mechanism is still an important and debatable part of non-relativistic QCD (NRQCD). We find in this work that the polarized double charmonium production at the LHC may pose a stringent test on the charmonium production mechanism. ...
- **SCAN+rVV10: A promising van der Waals density functional** (Oct 19 2015; Apr 01 2016): The newly developed "strongly constrained and appropriately normed" (SCAN) meta-generalized-gradient approximation (meta-GGA) can generally improve over the non-empirical Perdew-Burke-Ernzerhof (PBE) GGA not only for strong chemical bonding, but also ...
- **Zero entropy invariant measures for some skew product diffeomorphisms** (Dec 15 2008): In this paper we study some skew product diffeomorphisms with nonuniformly hyperbolic structure along fibers. We show that there is an invariant measure with zero entropy which has atomic conditional measures along fibers.
- **Entropy and Ergodic Measures for Toral Automorphisms** (Mar 06 2011): We show that for every linear toral automorphism, especially the non-hyperbolic ones, the entropies of ergodic measures form a dense set on the interval from zero to the topological entropy.
- **Resummation of High Order Corrections in Higgs Boson Plus Jet Production at the LHC** (Feb 25 2016): We study the effect of multiple parton radiation to Higgs boson plus jet production at the LHC, by applying the transverse momentum dependent (TMD) factorization formalism to resum large logarithmic contributions to all orders in the expansion of the ...
- **Soft Factor Subtraction and Transverse Momentum Dependent Parton Distributions on Lattice** (May 29 2014): We study the transverse momentum dependent (TMD) parton distributions in the newly proposed quasi-parton distribution function framework in Euclidean space. A soft factor subtraction is found to be essential to make the TMDs calculable on lattice. We ...
- **Probing the Conformations of Single Molecule via Photon Counting Statistics** (Nov 12 2014): We suggest an approach to detect the conformation of a single molecule by using the photon counting statistics. The generalized Smoluchowski equation is employed to describe the dynamical process of conformational change of the single molecule. The resonant ...
- **Hunting eta_b through radiative decay into J/psi** (Dec 14 2006; Jan 25 2007): We propose that the radiative decay process, $\eta_b \to J/\psi\gamma$, may serve as a clean searching mode for $\eta_b$ in hadron collision facilities. By a perturbative QCD calculation, we estimate the corresponding branching ratio to be of order $10^{-7}$. ...
- **A lightweight forum-based distributed requirement elicitation process for open source community** (Oct 11 2012): Nowadays, lots of open source communities adopt forums to acquire scattered stakeholders' requirements. But the requirements collection process always suffers from unformatted descriptions and unfocused discussions. In this paper, we establish a framework ...
- **Resource Allocation in Cloud Radio Access Networks with Device-to-Device Communications** (Sep 14 2017): To alleviate the burdens on the fronthaul and reduce the transmit latency, the device-to-device (D2D) communication is presented in cloud radio access networks (C-RANs). Considering dynamic traffic arrivals and time-varying channel conditions, the resource ...
- **Compositional Network Embedding** (Apr 17 2019; Apr 18 2019): Network embedding has proved extremely useful in a variety of network analysis tasks such as node classification, link prediction, and network visualization. Almost all the existing network embedding methods learn to map the node IDs to their corresponding ...
- **Improved Exponential Time Lower Bound of Knapsack Problem under BT model** (Jun 14 2006): M. Alekhnovich et al. recently proposed a model of algorithms, called the BT model, which covers Greedy, Backtrack and Simple Dynamic Programming methods and can be further divided into fixed, adaptive and fully adaptive kinds, and proved exponential ...
- **Driving conditions dependence of magneto-electroluminescence in tri-(8-hydroxyquinoline)-aluminum based organic light emitting diodes** (Jun 17 2011): We investigated the magneto-electroluminescence (MEL) in tri-(8-hydroxyquinoline)-aluminum based organic light-emitting diodes (OLEDs) through the steady-state and transient methods simultaneously. The MELs show greatly different behaviors when we turn ...
- **Energy-Efficient Relaying over Multiple Slots with Causal CSI** (May 22 2012; Nov 29 2012): In many communication scenarios, such as in cellular systems, the energy cost is substantial and should be conserved, yet there is a growing need to support many real-time applications that require timely data delivery. To model such a scenario, in this ...
- **A data-driven adaptive regularization method and its applications** (Jul 17 2018): Regularization methods and Bayesian inverse methods are two dominating ways of solving inverse problems generated in various fields, e.g., seismic exploration and medical imaging. The two methods are related to each other by the MAP estimates of posterior ...
- **Modulated Unit-Norm Tight Frames for Compressed Sensing** (Nov 27 2014): In this paper, we propose a compressed sensing (CS) framework that consists of three parts: a unit-norm tight frame (UTF), a random diagonal matrix and a column-wise orthonormal matrix. We prove that this structure satisfies the restricted isometry property ...
- **Resummation of High Order Corrections in $Z$ Boson Plus Jet Production at the LHC** (Oct 09 2018; Apr 30 2019): We study the multiple soft gluon radiation effects in $Z$ boson plus jet production at the LHC. By applying the transverse momentum dependent factorization formalism, the large logarithms introduced by the small total transverse momentum of the $Z$ boson ...
- **What Do Deep CNNs Learn About Objects?** (Apr 09 2015): Deep convolutional neural networks learn extremely powerful image representations, yet most of that power is hidden in the millions of deep-layer parameters. What exactly do these parameters represent? Recent work has started to analyse CNN representations, ...
- **Learning Deep Object Detectors from 3D Models** (Dec 22 2014; Oct 12 2015): Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net ...
- **Application of Machine Learning in Wireless Networks: Key Techniques and Open Issues** (Sep 24 2018; Mar 01 2019): As a key technique for enabling artificial intelligence, machine learning (ML) is capable of solving complex problems without explicit programming. Motivated by its successful applications to many practical tasks like image recognition, both industry ...
- **Optimizing Network Performance for Distributed DNN Training on GPU Clusters: ImageNet/AlexNet Training in 1.5 Minutes** (Feb 19 2019; Mar 16 2019): It is important to scale out deep neural network (DNN) training for reducing model training time. The high communication overhead is one of the major performance bottlenecks for distributed DNN training across multiple GPUs. Our investigations have shown ...
- **3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles** (Jun 05 2017): In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed ...
- **An Algorithmic Framework of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method** (May 16 2018; Jun 08 2018): We propose a novel algorithmic framework of Variable Metric Over-Relaxed Hybrid Proximal Extra-gradient (VMOR-HPE) method with a global convergence guarantee for the maximal monotone operator inclusion problem. Its iteration complexities and local linear ...
- **Collective effects of multi-scatterer on coherent propagation of photon in a two dimensional network** (Dec 22 2012; Jul 22 2013): We study the collective phenomenon in the scattering of a single photon by one or two layers of two-level atoms. By modeling the photon dispersion with a two-dimensional (2D) coupled cavity array, we analytically derive the scattering probability of a ...
- **Topological Dirac semimetal phases in InSb/$\alpha$-Sn semiconductor superlattices** (Jul 15 2016): We demonstrate theoretically the coexistence of Dirac semimetal and topological insulator phases in InSb/$\alpha$-Sn conventional semiconductor superlattices, based on advanced first-principles calculations combined with low-energy $k\cdot p$ theory. ...
- **Universal Non-perturbative Functions for SIDIS and Drell-Yan Processes** (Jun 11 2014; Mar 20 2015): We update the well-known BLNY fit to the low transverse momentum Drell-Yan lepton pair productions in hadronic collisions, by considering the constraints from the semi-inclusive hadron production in deep inelastic scattering (SIDIS) from HERMES and COMPASS ...
- **Reciprocal Recommendation System for Online Dating** (Jan 26 2015; Jan 27 2015): Online dating sites have become popular platforms for people to look for potential romantic partners. Different from traditional user-item recommendations where the goal is to match items (e.g., books, videos, etc.) with a user's interests, a recommendation ...
- **GraphH: High Performance Big Graph Analytics in Small Clusters** (May 16 2017; Aug 07 2017): It is common for real-world applications to analyze big graphs using distributed graph processing systems. Popular in-memory systems require an enormous amount of resources to handle big graphs. While several out-of-core approaches have been proposed ...
- **Rebuilding of destroyed spin squeezing in noisy environments** (Nov 07 2017): We investigate the process of spin squeezing in a ferromagnetic dipolar spin-1 Bose-Einstein condensate under the driven one-axis twisting scheme, with emphasis on the detrimental effect of noisy environments (stray magnetic fields) which completely destroy ...
- **Efficient generation of many-body singlet states of spin-1 bosons in optical superlattices** (Jun 06 2017): We propose an efficient stepwise adiabatic merging (SAM) method to generate many-body singlet states in antiferromagnetic spin-1 bosons in concatenated optical superlattices with isolated double-well arrays, by adiabatically ramping up the double-well ...
- **Quantum transport through three-dimensional topological insulator p-n junction under magnetic field** (Aug 23 2018): The 3D topological insulator (TI) PN junction under magnetic fields presents a novel transport property which is investigated both theoretically and numerically in this paper. Transport in this device can be tuned by the axial magnetic field. Specifically, ...
- **SFCSD: A Self-Feedback Correction System for DNS Based on Active and Passive Measurement** (Apr 21 2017): The Domain Name System (DNS), one of the important infrastructures of the Internet, is vulnerable to attacks, as the DNS designers did not take security issues into consideration at the beginning. The defects of DNS may lead to users' failure of access to ...
- **Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing** (Jun 28 2017; Feb 07 2018): We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse ...
- **Improving Localization Accuracy in Connected Vehicle Networks Using Rao-Blackwellized Particle Filters: Theory, Simulations, and Experiments** (Feb 19 2017; Mar 26 2017): A crucial function for automated vehicle technologies is accurate localization. Lane-level accuracy is not readily available from low-cost Global Navigation Satellite System (GNSS) receivers because of factors such as multipath error and atmospheric bias. ...
- **Mobile Formation Coordination and Tracking Control for Multiple Non-holonomic Vehicles** (Feb 28 2019): This paper addresses forward motion control for trajectory tracking and mobile formation coordination for a group of non-holonomic vehicles on SE(2). Firstly, by constructing an intermediate attitude variable which involves vehicles' position information ...
- **Joint Channel-Estimation/Decoding with Frequency-Selective Channels and Few-Bit ADCs** (Jul 06 2018; Dec 09 2018): We propose a fast and near-optimal approach to joint channel-estimation, equalization, and decoding of coded single-carrier (SC) transmissions over frequency-selective channels with few-bit analog-to-digital converters (ADCs). Our approach leverages parametric ...
- **ExFuse: Enhancing Feature Fusion for Semantic Segmentation** (Apr 11 2018): Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features ...
- **A Fast Integrated Planning and Control Framework for Autonomous Driving via Imitation Learning** (Jul 09 2017): For safe and efficient planning and control in autonomous driving, we need a driving policy which can achieve desirable driving quality over a long-term horizon with guaranteed safety and feasibility. Optimization-based approaches, such as Model Predictive ...
- **Cross-Layer Adaptive Feedback Scheduling of Wireless Control Systems** (Sep 29 2008): There is a trend towards using wireless technologies in networked control systems. However, the adverse properties of the radio channels make it difficult to design and implement control systems in wireless environments. To attack the uncertainty in available ...
- **MetaFlow: a Scalable Metadata Lookup Service for Distributed File Systems in Data Centers** (Nov 05 2016; Nov 10 2016): In large-scale distributed file systems, efficient metadata operations are critical since most file operations have to interact with metadata servers first. In existing distributed hash table (DHT) based metadata management systems, the lookup service ...
- **Security of a new two-way continuous-variable quantum key distribution protocol** (Oct 09 2011; Aug 27 2012): The original two-way continuous-variable quantum-key-distribution (CV QKD) protocols [S. Pirandola, S. Mancini, S. Lloyd, and S. L. Braunstein, Nature Physics 4, 726 (2008)] give the security against the collective attack on the condition of the tomography ...
- **On Lower Bound of Worst Case Error Probability for Quantum Fingerprinting with Shared Entanglement** (Jun 14 2006): This paper discusses properties of quantum fingerprinting with shared entanglement. Under a certain restriction of the final measurement, a relation is given between the unitary operations of the two parties. Then, by reducing to a spherical coding problem, this paper ...
- **Large Kernel Matters -- Improve Semantic Segmentation by Global Convolutional Network** (Mar 08 2017): One of the recent trends [30, 31, 14] in network architecture design is stacking small filters (e.g., 1x1 or 3x3) in the entire network, because stacked small filters are more efficient than a large kernel, given the same computational complexity. However, ...
- **$B_c$ Exclusive Decays to Charmonium and a Light Meson at Next-to-Leading Order Accuracy** (Sep 26 2012; Dec 02 2013): In this paper the next-to-leading order (NLO) corrections to $B_c$ meson exclusive decays to S-wave charmonia and light pseudoscalar or vector mesons, i.e. $\pi$, $K$, $\rho$, and $K^*$, are performed within the non-relativistic (NR) QCD approach. The non-factorizable ...
- **GraphMP: I/O-Efficient Big Graph Analytics on a Single Commodity Machine** (Oct 09 2018; Feb 18 2019): Recent studies showed that single-machine graph processing systems can be as highly competitive as cluster-based approaches on large-scale problems. While several out-of-core graph processing systems and computation models have been proposed, the high ...
- **GraphMP: An Efficient Semi-External-Memory Big Graph Processing System on a Single Machine** (Jul 09 2017): Recent studies showed that single-machine graph processing systems can be as highly competitive as cluster-based approaches on large-scale problems. While several out-of-core graph processing systems and computation models have been proposed, the high ...
- **Asset Allocation under the Basel Accord Risk Measures** (Aug 06 2013): Financial institutions are currently required to meet more stringent capital requirements than they were before the recent financial crisis; in particular, the capital requirement for a large bank's trading book under the Basel 2.5 Accord more than doubles ...
More Towards Distributed Machine Learning in Shared Clusters: A Dynamically-Partitioned ApproachApr 22 2017Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: ... More A Chunk Caching Location and Searching Scheme in Content Centric NetworkingJan 10 2017Content Centric Networking (CCN) is a new network infrastructure around content dissemination and retrieval, shift from host addresses to named data. Each CCN router has a cache to store the chunks passed by it. Therefore the caching strategy about chunk ... More StaQC: A Systematically Mined Question-Code Dataset from Stack OverflowMar 26 2018Stack Overflow (SO) has been a great source of natural language questions and their code solutions (i.e., question-code pairs), which are critical for many tasks including code retrieval and annotation. In most existing research, question-code pairs were ... More Robust Semantic Segmentation By Dense Fusion Network On Blurred VHR Remote Sensing ImagesMar 07 2019Robust semantic segmentation of VHR remote sensing images from UAV sensors is critical for earth observation, land use, land cover or mapping applications. Several factors such as shadows, weather disruption and camera shakes making this problem highly ... More Hybrid Quantum-Classical Approach to Quantum Optimal ControlAug 02 2016Jan 19 2017A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. ... 
More Constrained Maximum Correntropy Adaptive FilteringOct 06 2016Dec 14 2016Constrained adaptive filtering algorithms inculding constrained least mean square (CLMS), constrained affine projection (CAP) and constrained recursive least squares (CRLS) have been extensively studied in many applications. Most existing constrained ... More The Quasi-normal Modes of Charged Scalar Fields in Kerr-Newman black hole and Its Geometric InterpretationJun 27 2015Nov 01 2015It is well-known that there is a geometric correspondence between high-frequency quasi-normal modes (QNMs) and null geodesics (spherical photon orbits). In this paper, we generalize such correspondence to charged scalar field in Kerr-Newman space-time. ... More Compositional Network EmbeddingApr 17 2019Network embedding has proved extremely useful in a variety of network analysis tasks such as node classification, link prediction, and network visualization. Almost all the existing network embedding methods learn to map the node IDs to their corresponding ... More Domain Agnostic Learning with Disentangled RepresentationsApr 28 2019Unsupervised model transfer has the potential to greatly improve the generalizability of deep models to novel domains. Yet the current literature assumes that the separation of target data into distinct domains is known as a priori. In this paper, we ... More FPGA-based Acceleration System for Visual TrackingOct 12 2018Oct 15 2018Visual tracking is one of the most important application areas of computer vision. At present, most algorithms are mainly implemented on PCs, and it is difficult to ensure real-time performance when applied in the real scenario. In order to improve the ... More
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8064547181129456, "perplexity": 2947.421910805477}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255562.23/warc/CC-MAIN-20190520041753-20190520063753-00377.warc.gz"}
https://demo.formulasearchengine.com/wiki/Ives%E2%80%93Stilwell_experiment
# Ives–Stilwell experiment

Ives–Stilwell experiment (1938). "Canal rays" (a mixture of mostly H2+ and H3+ ions) were accelerated through perforated plates charged from 6,788 to 18,350 volts. The beam and its reflected image were simultaneously observed with the aid of a concave mirror offset 7° from the beam.[1] (The offset in this illustration is exaggerated.)

The Ives–Stilwell experiment tested the contribution of relativistic time dilation to the Doppler shift of light.[1][2] The result was in agreement with the formula for the transverse Doppler effect and was the first direct, quantitative confirmation of the time dilation factor. Since then, many Ives–Stilwell type experiments have been performed with increased precision. Together with the Michelson–Morley and Kennedy–Thorndike experiments, it forms one of the fundamental tests of special relativity theory.[3] Other tests confirming the relativistic Doppler effect are the Mössbauer rotor experiment and modern Ives–Stilwell experiments. For other time dilation experiments, see Time dilation of moving particles. For a general overview, see Tests of special relativity.

Both time dilation and the relativistic Doppler effect were predicted by Albert Einstein in his seminal 1905 paper.[4] Einstein subsequently (1907) suggested an experiment based on the measurement of the relative frequencies of light perceived as arriving from a light source in motion with respect to the observer, and he calculated the additional Doppler shift due to time dilation.[5] This effect was later called the "transverse Doppler effect" (TDE), since such experiments were initially imagined to be conducted at right angles with respect to the moving source, in order to avoid the influence of the longitudinal Doppler shift. Eventually, Herbert E. Ives and G. R. Stilwell (referring to time dilation as following from the theory of Lorentz and Larmor) gave up the idea of measuring this effect at right angles.
They used rays in the longitudinal direction and found a way to separate the much smaller TDE from the much bigger longitudinal Doppler effect. The experiment was performed in 1938[1] and was reprised several times (see, e.g.,[2]). Similar experiments were later conducted with increased precision by Otting (1939),[6] Mandelberg et al. (1962),[7] and Hasselkamp et al. (1979).[8]

## Experiments with "canal rays"

### The experiment of 1938

Ives remarked that it is nearly impossible to measure the transverse Doppler effect with respect to light rays emitted by canal rays at right angles to the direction of motion of the canal rays (as was considered earlier by Einstein), because the influence of the longitudinal effect can hardly be excluded. Therefore, he developed a method to observe the effect in the longitudinal direction of the canal rays' motion. If it is assumed that the speed of light is fixed with respect to the observer ("classical theory"), then the forward and rearward Doppler-shifted frequencies seen on a moving object will be

${\displaystyle {\frac {f_{o}}{f_{s}}}={\frac {c}{c\pm v}},}$

where $v$ is the recession velocity. Under special relativity, the two frequencies also include an additional Lorentz-factor redshift correction, represented by the TDE formula

${\displaystyle {\frac {f_{o}}{f_{s}}}={\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}$

When we invert these relationships so that they relate to wavelengths rather than frequencies, "classical theory" predicts redshifted and blueshifted wavelength factors of 1 + v/c and 1 − v/c, so if all three wavelengths (redshifted, blueshifted, and original) are marked on a linear scale, according to classical theory the three marks should be perfectly evenly spaced:
${\displaystyle |\cdot \cdot \cdot \cdot \cdot |\cdot \cdot \cdot \cdot \cdot |\,}$

But if the light is shifted according to special relativity's predictions, the additional Lorentz offset means that the two outer marks will be offset in the same direction with respect to the central mark:

${\displaystyle |\cdot \cdot \cdot \cdot |\cdot \cdot \cdot \cdot \cdot \cdot |\,}$

Ives and Stilwell found that there was a significant offset of the centre of gravity of the three marks, and therefore the Doppler relationship was not that of "classical theory".

It is difficult to measure the transverse Doppler effect accurately using a transverse beam. The illustration shows the results of attempting to measure the 4861 ångström line emitted by a beam of "canal rays" as they recombine with electrons stripped from the dilute hydrogen gas used to fill the canal ray tube. With v = 0.005 c, the predicted result of the TDE would be a 4861.06 ångström line. On the left, conventional Doppler shift results in broadening the emission line to such an extent that the TDE cannot be observed. In the middle, we see that even if one narrows one's view to the exact center of the beam, very small deviations of the beam from an exact right angle introduce shifts comparable to the predicted effect.

Ives and Stilwell used a concave mirror that allowed them to simultaneously observe a nearly longitudinal direct beam (blue) and its reflected image (red). Spectroscopically, three lines would be observed: an undisplaced emission line, and blueshifted and redshifted lines. The average of the redshifted and blueshifted lines was compared with the undisplaced line. This approach had two advantages:

1. It didn't require a commitment to an exact value for the velocity involved (which might have been theory-dependent).
2. It didn't require an understanding or interpretation of angular aberration effects, as might have been required for the analysis of a "true" transverse test.
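The spacing argument above can be checked numerically. A short sketch, assuming the relativistic longitudinal wavelength relation $\lambda_{obs} = \gamma(1 \pm \beta)\lambda_s$, shows that the classical midpoint of the two shifted marks coincides with the rest wavelength, while the relativistic midpoint is displaced by a factor of roughly $\beta^2/2$ — about 0.06 Å for the 4861 Å line at $v = 0.005c$, matching the figure quoted in the text:

```python
import math

def doppler_marks(lam0, beta):
    """Blue- and redshifted wavelength marks for a source moving at beta = v/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    classical = (lam0 * (1 - beta), lam0 * (1 + beta))  # evenly spaced about lam0
    relativistic = (lam0 * gamma * (1 - beta), lam0 * gamma * (1 + beta))
    return classical, relativistic

lam0 = 4861.0   # Å: the hydrogen line quoted in the text
beta = 0.005    # v/c quoted in the text
cl, rel = doppler_marks(lam0, beta)
print(sum(cl) / 2)   # classical midpoint: 4861.0 Å, i.e. unshifted
print(sum(rel) / 2)  # relativistic midpoint: ~4861.06 Å, the quoted TDE line
```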
A "true transverse test" was run almost 40 years later by Hasselkamp in 1979.[8]

### The experiment of 1941

In the 1938 experiment, the maximum TDE was limited to 0.047 Å. The chief difficulty that Ives and Stilwell encountered in attempts to achieve larger shifts was that when they raised the electric potential between the accelerating electrodes above 20,000 volts, breakdown and sparking would occur that could lead to destruction of the tube. This difficulty was overcome by using multiple electrodes. Using a four-electrode version of the canal ray tube with three gaps, a total potential difference of 43,000 volts could be achieved. A voltage drop of 5,000 volts was used across the first gap, while the remaining voltage drop was distributed between the second and third gaps. With this tube, a maximum shift of 0.11 Å was achieved for H2+ ions.

Other aspects of the experiment were also improved. Careful tests showed that the "undisplaced" particles yielding the central line actually acquired a small velocity in the same direction of motion as the moving particles (no more than about 750 meters per second). Under normal circumstances, this would be of no consequence, since the effect would only result in a slight apparent broadening of the direct and reflected images of the central line. But if the mirror were tarnished, the central line might be expected to shift slightly. Other controls were performed to address various objections of critics of the original experiment. The net result of all of this attention to detail was the complete verification of Ives and Stilwell's 1938 results and the extension of these results to higher speeds.[2]

## Mössbauer rotor experiments

The Kündig experiment (1963). An 57Fe Mössbauer absorber was mounted 9.3 cm from the axis of an ultracentrifuge rotor. A 57Co source was mounted on a piezoelectric transducer (PZT) at the rotor center. Spinning the rotor caused the source and absorber to fall out of resonance.
A modulated voltage applied to the PZT set the source in radial motion relative to the absorber, so that the amount of conventional Doppler shift that would restore resonance could be measured. For example, withdrawing the source at 195 µm/s produced a conventional Doppler redshift equivalent to the TDE resulting from spinning the absorber at 35,000 rpm.

### Relativistic Doppler effect

A more precise confirmation of the relativistic Doppler effect was achieved by the Mössbauer rotor experiments. From a source in the middle of a rotating disk, gamma rays are sent to a receiver at the rim (in some variations this scheme was reversed). Due to the rotation velocity of the receiver, the absorption frequency decreases if the transverse Doppler effect exists. This effect was actually observed using the Mössbauer effect. The maximal deviation from time dilation was $10^{-5}$, so the precision was much higher than that of the Ives–Stilwell experiments ($10^{-2}$). Such experiments were performed by Hay et al. (1960),[9] Champeney et al. (1963, 1965),[10][11] and Kündig (1963).[12]

### Isotropy of the speed of light

Mössbauer rotor experiments were also used to measure a possible anisotropy of the speed of light. That is, a possible aether wind should exert a disturbing influence on the absorption frequency. However, as in all other aether drift experiments (such as the Michelson–Morley experiment), the result was negative, putting an upper limit on the aether drift of 3–4 m/s. Experiments of that kind were performed by Champeney & Moon (1961),[13] Champeney et al. (1963)[14] and Turner & Hill (1964).[15]

## Modern experiments

### Fast moving clocks

A considerably higher precision has been achieved in modern variations of Ives–Stilwell experiments. In heavy ion storage rings, such as the TSR at the MPIK, the Doppler shift of lithium ions traveling at high speeds is evaluated by using saturated spectroscopy.
Due to their emitted frequencies, these ions can be considered optical atomic clocks of high precision.

| Author | Year | Speed | Maximum deviation from time dilation |
| --- | --- | --- | --- |
| Grieser et al.[16] | 1994 | 0.064 c | ≤ … |
| Saathoff et al.[17] | 2003 | 0.064 c | ≤ … |
| Reinhardt et al.[18] | 2007 | 0.064 c | ≤ … |
| Novotny et al.[19] | 2009 | 0.34 c | ≤ … |
| Botermann et al.[20] | 2014 | 0.338 c | ≤ … |

### Slow moving clocks

Meanwhile, the measurement of time dilation at everyday speeds has been accomplished as well. Chou et al. (2010) created two clocks each holding a single 27Al+ ion in a Paul trap. In one clock, the Al+ ion was accompanied by a 9Be+ ion as a "logic" ion, while in the other, it was accompanied by a 25Mg+ ion. The two clocks were situated in separate laboratories and connected with a 75 m long, phase-stabilized optical fiber for the exchange of clock signals. These optical atomic clocks emitted frequencies in the petahertz (1 PHz = $10^{15}$ Hz) range and had frequency uncertainties in the $10^{-17}$ range. With these clocks, it was possible to measure a frequency shift due to time dilation of $\sim 10^{-16}$ at speeds below 36 km/h (< 10 m/s, the speed of a fast runner) by comparing the rates of moving and resting aluminum ions. It was also possible to detect gravitational time dilation from a difference in elevation between the two clocks of 33 cm.[21]

## References

1. Ives, H. E.; Stilwell, G. R. (1938).
2. Ives, H. E.; Stilwell, G. R. (1941).
3.
4. Einstein, A. (1905). English translation: "On the Electrodynamics of Moving Bodies".
5. Einstein, A. (1907).
6. Otting (1939).
7. Mandelberg et al. (1962).
8. Hasselkamp et al. (1979).
9. Hay et al. (1960).
10. Champeney et al. (1963).
11. Champeney et al. (1965).
12. Kündig (1963).
13. Champeney & Moon (1961).
14. Champeney et al. (1963).
15. Turner & Hill (1964).
16. Grieser et al. (1994).
17. Saathoff et al. (2003).
18. Reinhardt et al. (2007).
19. Novotny et al. (2009).
20. Botermann et al. (2014).
21. Chou et al. (2010).
https://research.tue.nl/nl/publications/modeling-and-identification-of-uncertain-input-systems
# Modeling and identification of uncertain-input systems

Riccardo Sven Risuleo (Corresponding author), Giulio Bottegal, Håkan Hjalmarsson

3 Citations (Scopus)

## Abstract

We present a new class of models, called uncertain-input models, that allows us to treat system-identification problems in which a linear system is subject to a partially unknown input signal. To encode prior information about the input or the linear system, we use Gaussian-process models. We estimate the model from data using the empirical Bayes approach: the hyperparameters that characterize the Gaussian-process models are estimated from the marginal likelihood of the data. We propose an iterative algorithm to find the hyperparameters that relies on the EM method and results in decoupled update steps. Because in the uncertain-input setting neither the marginal likelihood nor the posterior distribution of the unknowns is tractable, we develop an approximation approach based on variational Bayes. As part of the contribution of the paper, we show that this model structure encompasses many classical problems in system identification, such as Hammerstein models, blind system identification, and cascaded linear systems. This connection allows us to build a systematic procedure that applies effectively to all the aforementioned problems, as shown in the numerical simulations presented in the paper.

Original language: English
Pages: 130-141
Number of pages: 12
Journal: Automatica
Volume: 105
DOI: https://doi.org/10.1016/j.automatica.2019.03.014
Published: 1 Jul 2019
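The empirical Bayes idea the abstract describes can be illustrated on a much simpler model than the paper's. The sketch below is an illustrative assumption, not the paper's algorithm: for a toy linear-Gaussian model $y = Gu + e$ with a Gaussian-process prior $u \sim N(0, \lambda K)$, the marginal likelihood of $y$ is available in closed form, so the scale hyperparameter $\lambda$ can be estimated by maximizing it (here with a simple grid search; the kernel and dimensions are arbitrary choices):

```python
import numpy as np

# Toy empirical-Bayes sketch (illustrative only, NOT the paper's algorithm):
# y = G u + e, with Gaussian-process prior u ~ N(0, lam * K) and noise
# e ~ N(0, sigma2 * I). The hyperparameter lam is chosen by maximizing the
# closed-form marginal likelihood of y.
rng = np.random.default_rng(0)
n = 40
G = rng.normal(size=(n, n)) / np.sqrt(n)  # known linear map (assumed for the toy)
idx = np.arange(n)
K = np.exp(-0.1 * np.abs(np.subtract.outer(idx, idx)))  # Ornstein-Uhlenbeck kernel
sigma2 = 0.1
u = rng.multivariate_normal(np.zeros(n), 2.0 * K)  # simulated input, true lam = 2
y = G @ u + np.sqrt(sigma2) * rng.normal(size=n)

def neg_log_marglik(lam):
    # Marginal covariance of y after integrating out u analytically.
    S = lam * G @ K @ G.T + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

grid = np.linspace(0.1, 10.0, 100)
lam_hat = grid[np.argmin([neg_log_marglik(l) for l in grid])]
print(lam_hat)  # grid-search estimate of the prior scale
```

In the paper's uncertain-input setting this marginal likelihood is intractable, which is exactly why the authors resort to EM and variational approximations.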
http://physics.stackexchange.com/questions/63449/absorption-extinction-formula-of-nanoparticles
# Absorption/Extinction formula of nanoparticles

I know the absorption/extinction equations in nanoparticle physics should be: $$Q_{abs}=\frac{1}{2}\mathbf{Re}\int \mathbf{J}_{tot}\cdot\mathbf{E}_{tot}^\ast dV=\frac{\omega}{2}\mathbf{Im}(\epsilon)\int|\mathbf{E}_{tot}|^2dV$$ Also, for the extinction: $$Q_{ext}=\frac{1}{2}\mathbf{Re}\int \mathbf{J}_{tot}\cdot\mathbf{E}_0^\ast dV$$ But I see in some papers people use the following equations: $$Q_{abs}=\frac{\omega}{2}\mathbf{Im}(\mathbf{d}\cdot\mathbf{E}_{inside}^\ast)$$ where $\mathbf{d}$ is the total dipole moment of the nanoparticles. Also, for the extinction: $$Q_{ext}=\frac{\omega}{2}\mathbf{Im}(\mathbf{d}\cdot\mathbf{E}_0^\ast)$$ I have failed to derive these two equations. Can anyone give some help? Or, some reference papers would also be very helpful. Thanks a lot for the help.

- I think the difference between the two is that one of them is a spectrum, and in one case the field depends on time while in the other it depends on frequency. It is only a guess; I could be wrong. – freude May 6 '13 at 8:58
- No, both equations depend on frequency, and are the time-averaged results. – Hui Zhang May 6 '13 at 17:20
- What is V? How large is it? – freude May 7 '13 at 7:09
- V is the total volume of the object, and $\int\cdots dV$ is a volume integral. The size of the object is usually assumed to be much smaller than the wavelength of the incident light. – Hui Zhang May 10 '13 at 15:00

This mystery has an easy answer: in the absence of external currents, the total current is the current induced by the electric field, which is just the time derivative of the nanoparticle's polarization. This holds in general in the absence of external currents: $$\mathbf{J}_{pol} = \frac{\partial\mathbf{P}}{\partial t}$$ Now, in the case of a nanoparticle (or a dielectric/metallic sphere), it is known that the response to an external field is dipole-like.
The total electric dipole $\mathbf{d}$ is given by: $$\mathbf{d} = \int \mathbf{P}\, d\mathbf{r}$$ Now, since the total field inside the sphere $\mathbf{E}_{inside}$ and the exciting field $\mathbf{E}_0$ are both uniform (attention: the total field outside the sphere is not uniform!), you can write: \begin{align} Q_{abs} &= \frac{1}{2}\mathbf{Re}\int \mathbf{J}_{pol}\cdot \mathbf{E}_{inside}^*\, d\mathbf{r} = \frac{1}{2}\mathbf{Re}\left\{ \mathbf{E}_{inside}^*\cdot \int \mathbf{J}_{pol}\, d\mathbf{r} \right\}\\ &=\frac{1}{2}\mathbf{Re}\left\{ \mathbf{E}_{inside}^*\cdot \left(\frac{\partial}{\partial t}\int \mathbf{P}\, d\mathbf{r} \right)\right\} = \frac{1}{2}\mathbf{Re}\left\{\left(\frac{\partial}{\partial t}\mathbf{d}\right)\cdot\mathbf{E}_{inside}^*\right\} \end{align} If the external source is time-harmonic, then all time dependences can be assumed to be of the type $e^{-i\omega t}$, thus $$\partial_t \mathbf{d} = -i\omega \mathbf{d}$$ Substituting into the above equation, you get: $$Q_{abs} = \frac{1}{2}\mathbf{Re}\{-i\omega \mathbf{d}\cdot\mathbf{E}^*_{inside}\} = \frac{\omega}{2}\mathbf{Im}\{\mathbf{d}\cdot\mathbf{E}^*_{inside}\}$$ and similarly for $Q_{ext}$.

- Very helpful, thank you :) – Hui Zhang Nov 13 '13 at 18:23
- You're welcome! – Mattia Nov 13 '13 at 18:56
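The final step of the derivation above — that $\frac{1}{2}\mathbf{Re}\{-i\omega\,\mathbf{d}\cdot\mathbf{E}^*\} = \frac{\omega}{2}\mathbf{Im}\{\mathbf{d}\cdot\mathbf{E}^*\}$ — is a pure complex-algebra identity ($\mathrm{Re}\{-iz\} = \mathrm{Im}\,z$) and can be sanity-checked numerically. The vectors below are random placeholders, not physical values:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 2.0
d = rng.normal(size=3) + 1j * rng.normal(size=3)  # complex dipole amplitude (arbitrary)
E = rng.normal(size=3) + 1j * rng.normal(size=3)  # complex field amplitude (arbitrary)

# (1/2) Re{ (d/dt d) · E* } with d/dt -> -i*omega for the e^{-i omega t} convention
lhs = 0.5 * np.real(np.dot(-1j * omega * d, np.conj(E)))
# (omega/2) Im{ d · E* }
rhs = 0.5 * omega * np.imag(np.dot(d, np.conj(E)))
print(np.isclose(lhs, rhs))  # True
```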
http://mathforum.org/mathimages/index.php?title=Law_of_Sines&diff=prev&oldid=32243
# Law of Sines

Field: Geometry
Image Created By: Richard Scott

The law of sines is a tool commonly used to help solve arbitrary triangles. It is a formula that relates the sine of a given angle to its opposite side length.

# Basic Description

In any triangle, there is a relationship between the measures of the angles and the lengths of the sides: the largest angle is opposite the longest side, the second-largest angle is opposite the second-longest side, and the smallest angle is opposite the shortest side. The law of sines is an equation that more precisely expresses this relationship between the angles of a triangle and the lengths of their opposite sides.

The law of sines states that the ratio between the length of one side of a triangle and the sine of its opposite angle is equal for all three sides. Specifically: given a triangle with side lengths $a, b, c$ and opposite angles $A, B, C$,

$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$

The law of sines is used to find all of the lengths of the sides and the angle measures for an arbitrary triangle given only some of this information. This process is called solving a triangle. To use the law of sines in solving triangles, at least three elements of a triangle must be known. Whenever a side length and two angles are given, the law of sines can be used to solve the triangle. In some cases, the law of sines can provide multiple solutions to a triangle.
If two adjacent side lengths are given with one of the opposite angles, the law of sines cannot definitively determine the triangle, but instead offers zero, one, or two possible solutions in what is known as the ambiguous case.

The law of sines does not help with solving a triangle in several cases. With two known side lengths and the measure of the angle between them, there is no way to use the law of sines to solve the triangle, because no pair of opposite angle measure and side length is provided. The law of sines by itself is also not able to provide solutions when three side lengths are provided. Instead, the law of cosines is often used for solving triangles in these cases.

# A More Mathematical Explanation

## Two Derivations

There are at least two different ways to derive the law of sines: using the area formula and using the definition of sine.

### Using Area

The formula for the area of a triangle uses the lengths of the base and height. By using these lengths and the angle measures of a triangle, we can derive the law of sines.

A triangle can be oriented so that any one side can be used as the base. Depending on which side is chosen as the base of the triangle, the height may be different. Let $h_{a}$ be the height when the side of length $a$ is the base. When $a$ is the base, $h_{a}$ is the distance from a vertex to the opposite side, such that $h_{a}$ is perpendicular to the side. When $b$ is oriented as the base of the triangle, $h_{b}$ runs perpendicular to side $b$ and is the distance from side $b$ to the vertex $B$.

First, we must determine the height of the triangle for each orientation of the base.
When $b$ is oriented as the base, $\sin A = \frac{h_{b}}{c}$ $h_{b} = c \sin A$ When $a$ is oriented as the base, $\sin B = \frac{h_{a}}{c}$ $h_{a} = c \sin B$ In any triangle, $\text{Area} = \frac{\text{base} \times \text{height}}{2}$ Since the area of the triangle is the same no matter how the triangle is oriented, the area of the triangle with $b$ as the base is the same as the area of the triangle with $a$ as the base. $\text{Area}_{\text{base} = b} = \text{Area}_{\text{base} = a}$ Substituting the formula for the area of a triangle, $\frac{b h_{b}}{2} = \frac{a h_{a}}{2}$ Both $h_{b}$ and $h_{a}$ can be written in terms of the side lengths $a, b, c$ and angles $A, B, C$ as shown above. Therefore, we can substitute ${(c\sin A)}$ for $h_{b}$ and ${(c\sin B)}$ for $h_{a}$, giving us $\frac{b(c\sin A)}{2} = \frac{a(c\sin B)}{2}$ Multiplying both sides by $2$ and dividing by $c$ gives us $b \sin A = a \sin B$ Then, rearranging once more gives us our equation in its most common form, $\frac{a}{\sin A} = \frac{b}{\sin B}$ Since we can orient the base differently and go through the same process with other variables, we know that $\frac{b}{\sin B} = \frac{c}{\sin C}$, so $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$ which is the law of sines.

### Using the Definition of Sine

We know that, in a right triangle, $\sin A =\frac{\text{opposite}}{\text{hypotenuse}}$ Letting $h$ represent the height and $a, b$ represent the lengths of the sides opposite $A, B$, respectively, plug in the appropriate measures to solve for $\sin A, \sin B$.
$\sin A = \frac{h}{b}$ Clearing the fractions, $b \sin A = h$ $\sin B = \frac{h}{a}$ Clearing the fractions, $a \sin B = h$ Set both equations for $h$ equal to each other to get $b \sin A = a \sin B$ Divide both sides by $\sin A \sin B$ to obtain $\frac{a}{\sin A} = \frac{b}{\sin B}$ Since we can go through the same process using a different angle and different variables, we know that $\frac{b}{\sin B} = \frac{c}{\sin C}$, so $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$

## A Geometric Extension

For every triangle, there is some circle on whose circumference the vertices of the triangle lie. This triangle is known as an inscribed triangle, and the circle is known as the circumcircle or circumscribed circle. By the extended law of sines, $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2r$ where $r$ is the radius of the circumcircle.

### Proof

Let there be two inscribed triangles on a circle of radius $r$. Let $\vartriangle ABD$ be a triangle whose hypotenuse $\overline{AD}$ passes through the center of the circle. Let $\vartriangle ABC$ be an oblique triangle that shares $\overline{AB}$ with $\vartriangle ABD$. For $\vartriangle ABD$, $\sin D = \frac{\overline{AB}}{ \ \overline{AD} \ }$ Angle $C$ is equal to angle $D$ because they are both inscribed angles that cut the same arc. According to properties of inscribed angles, two inscribed angles that cut the same arc in circles of the same radius are equal. Since $\angle{C}$ and $\angle{D}$ are the same, so are $\sin C$ and $\sin D$. Substituting $\sin C$ for $\sin D$ gives us $\sin C = \frac{\overline{AB}}{ \ \overline{AD} \ }$ Solving for $\overline{AD}$ gives us $\overline{AD} = \frac{\overline{AB}}{\sin C}$ Since $\overline{AD}$ is the diameter, $\overline{AD} = 2r$ $2r = \frac{\overline{AB}}{\sin C}$ or equivalently, $2r = \frac{c}{\sin C}$ since $c$ is the length of the side opposite $\angle{C}$.
By the law of sines, we know that $\frac{c}{\sin C}= \frac{a}{\sin A} = \frac{b}{\sin B}$ and therefore, by the transitive property, $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2r$

## Example Problem

Solve the triangle: find all of the parts of $\vartriangle ABC$, given $b = 10$, $A = 60^\circ$, $B = 30^\circ$.

### Solution

Since all of the angle measures in a triangle add up to $180^\circ, C = 90^\circ$ $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$ $\frac{a}{\sin 60^\circ} = \frac{10}{\sin 30^\circ}$ Cross-multiplying gives us $a \sin 30^\circ = (10) \sin 60^\circ$ Since $\sin 30^\circ= \frac{1}{2}$ and $\sin 60^\circ= \frac{\sqrt{3}}{2}$, $a \left( \frac{1}{2} \right) = (10) \left( \frac{\sqrt{3}}{2} \right)$ $a = 10 \sqrt{3}$ $\frac{c}{\sin 90^\circ} = \frac{10}{\sin 30^\circ}$ Cross-multiplying gives us $c \sin 30^\circ = (10) \sin 90^\circ$ Since $\sin 30^\circ= \frac{1}{2}$ and $\sin 90^\circ= 1$, $c \left( \frac{1}{2} \right) = (10) (1)$ $c = 20$ If the above problem asked to find the radius of the circumcircle of $\vartriangle ABC$, the law of sines could help to find the diameter. $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2r$ $\frac{b}{\sin B} = 2r$ Substituting the values for $b, B$, $\frac{10}{\sin 30^\circ} = 2r$ Since $\sin 30^\circ= \frac{1}{2}$, $\cfrac{10}{\left( \frac{1}{2} \right)} = 2r$ $20 = 2r$ $10 = r$

# References

All images were made by the page's author using Adobe Photoshop and Cinderella2.
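The worked example above can be checked numerically. A minimal sketch in Python (not part of the original page; variable names are my own), solving the same triangle and recovering the circumradius from the extended law of sines:

```python
import math

# Given: b = 10, A = 60 deg, B = 30 deg; the angles of a triangle sum to 180 deg
b = 10.0
A, B = math.radians(60), math.radians(30)
C = math.pi - A - B  # C = 90 degrees

# Law of sines: a/sin A = b/sin B = c/sin C = 2r (the common ratio)
ratio = b / math.sin(B)
a = ratio * math.sin(A)
c = ratio * math.sin(C)
r = ratio / 2  # circumradius, from the extended law of sines

print(a, c, r)  # a = 10*sqrt(3) ≈ 17.3205, c = 20.0, r = 10.0
```

The same three lines solve any angle-angle-side triangle: compute the third angle, form the common ratio from the known side/angle pair, then scale.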
http://math.stackexchange.com/questions/144976/solving-differential-equation
# Solving a differential equation

I have the equation below: $$(t^2 + 1)\,dx=(x+4)\,dt$$ where $x(0) = 3$. I am trying to use separation of variables, and I end up here: $$\ln(x+4)=\arctan(t)+C$$ Trying to simplify it more: $$x=-4+\ln(\arctan(t)+C)$$ Is this correct? I think I should use $x(0) = 3$ to find the value of the constant; how can I do that? Thanks

- From line 2 to line 3 something went wrong. I guess you wanted to exponentiate both sides? –  Fabian May 14 '12 at 12:10 @Sean87 I like the writing on the cup (Feel the same way about changing the world) –  Kirthi Raman May 14 '12 at 14:01 Good :P but the source is closed ;) –  Sean87 May 14 '12 at 15:16

You are right when you got $\ln(x+4)=\arctan(t)+C$. Starting from here, since $e^{\ln x}=x$, we have $$x+4=e^{\ln(x+4)}=e^{\arctan(t)+C}=e^C\cdot e^{\arctan(t)}=C_1e^{\arctan(t)}$$ where $C_1=e^C$. By the initial condition $x(0)=3$, we have $$7=C_1e^{\arctan(0)}=C_1.$$ Therefore, $$x=7e^{\arctan(t)}-4$$ is the solution of the initial value problem.

- It should be $7 e^{\arctan(t)}$ because $x(0)=3$, not $4$. You still have +1 –  Kirthi Raman May 14 '12 at 12:36 @Artin: Thanks. I edited it. –  Paul May 14 '12 at 12:45

$$\frac{\mathrm{d}x}{x+4} = \frac{\mathrm{d}t}{t^2+1}$$ Integrating both sides, $$\ln (x+4) = \arctan(t) + C \tag{1}$$ $x(0) = 3$ implies $$\ln(7) = C$$ Rewriting $(1)$, \begin{align*} \ln (x+4) &= \arctan(t) + \ln(7) \\ \ln (x+4) - \ln (7) &= \arctan(t)\\ \ln \frac{x+4}{7} &= \arctan(t)\\ \frac{x+4}{7} &= e^{\arctan(t)}\\ \Rightarrow x &= -4 + 7 e^{\arctan(t)} \end{align*}

- For full credit you should probably say why the other possibility $$\ln(-x-4) = \arctan(t)+C$$ is not the one to use.
–  GEdgar May 14 '12 at 13:28 Yes, of course (Thanks) –  Kirthi Raman May 14 '12 at 14:00

$x(0)=3$ means that when $t = 0$, $x(t) = x = 3$; here $x$ is a function of $t$. You have to find the value of $C$: $\log(x + 4) = \arctan(t) + C$ $\log(3 + 4) = \arctan(0) + C$; $C = \log(7)$ Therefore, the solution is $\log(x + 4) = \arctan(t) + \log(7)$

- Looks better in TeX, doesn't it? Have a look at how I did it, and then you will be able to do it yourself. –  Gerry Myerson May 14 '12 at 12:32 To see the code behind the TeX, click on the edit link just below the answer. –  Tomarinator May 14 '12 at 12:36 @Prasad, also mention that $\log()$ here is to the base $e$; otherwise one can use $\ln()$ –  Kirthi Raman May 14 '12 at 12:39 Another way to see the $\TeX$ code is to highlight the expression, right-click (or Mac equivalent), select "Show Math As" and then "TeX Commands". You might copy what you see straight to your edit box and then modify it to suit your needs (after enclosing in \$'s, since the display omits those). –  David Lewis May 14 '12 at 12:47
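The accepted solution can also be sanity-checked numerically. A small sketch (not from the thread) that verifies the initial condition and checks the ODE $(t^2+1)\,x'(t) = x(t)+4$ at several points via central differences:

```python
import math

def x(t):
    # Candidate solution of (t^2 + 1) dx = (x + 4) dt with x(0) = 3
    return 7 * math.exp(math.atan(t)) - 4

# Initial condition: x(0) = 7*e^0 - 4 = 3
assert abs(x(0) - 3) < 1e-12

# Check (t^2 + 1) x'(t) = x(t) + 4 using a central-difference derivative
h = 1e-6
for t in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    dxdt = (x(t + h) - x(t - h)) / (2 * h)
    assert abs((t**2 + 1) * dxdt - (x(t) + 4)) < 1e-4
```

If any assertion fails, either the closed-form solution or the separation-of-variables step was wrong; here all pass.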
https://zbmath.org/?q=ci%3A0886.35043
# zbMATH — the first resource for mathematics

On a nonlinear coupled system with internal damping. (English) Zbl 0962.35002

This paper studies an initial-boundary value problem for the following coupled hyperbolic-parabolic system: $u_{tt}-\mu\Delta u+\sum_{i=1}^n{\partial\theta\over \partial x_i} +\gamma |u|^\rho u =0, \quad \theta_t -\Delta\theta +\sum^n_{i=1} {\partial^2 u\over\partial t\partial x_i} =0,\quad \text{in }\Omega\times (0,T),$ together with Dirichlet boundary conditions for both $$u$$ and $$\theta$$, and prescribed initial data, where $$\Omega\subset \mathbb{R}^n$$ is a smooth bounded domain, $$\mu$$ is a positive function of $$t$$, and $$\gamma$$ and $$\rho$$ are positive constants. The case $$\gamma =0$$ has been investigated by H. R. Clark, L. P. San Gil Jutuca and M. Milla Miranda [Electron. J. Differ. Equ. 1998, Paper 4 (1998; Zbl 0886.35043)]. Under the assumptions that $$\mu\in W^{1,1}(0,\infty)$$ and $$\mu'\leq 0$$, and that $$\rho\leq {n\over n-1}$$ for $$n\geq 3$$ while $$\rho$$ is arbitrary but fixed for $$n\leq 2$$, the authors prove the existence and uniqueness of global strong and weak solutions; moreover, the exponential stability of the total energy associated to the strong and weak solutions is obtained. The main ingredients in the proof are the use of the Galerkin method, energy estimates, the Lions-Aubin compactness theorem, and the construction of a suitable Lyapunov functional.

##### MSC:

35A05 General existence and uniqueness theorems (PDE) (MSC2000)
35L70 Second-order nonlinear hyperbolic equations
35B40 Asymptotic behavior of solutions to PDEs
https://www.investopedia.com/terms/c/chi-square-statistic.asp
## What Is a Chi-Square Statistic?

A chi-square (χ2) statistic is a test that measures how a model compares to actual observed data. The data used in calculating a chi-square statistic must be random, raw, mutually exclusive, drawn from independent variables, and drawn from a large enough sample. For example, the results of tossing a fair coin meet these criteria. Chi-square tests are often used in hypothesis testing. The chi-square statistic compares the size of any discrepancies between the expected results and the actual results, given the size of the sample and the number of variables in the relationship. For these tests, degrees of freedom are utilized to determine if a certain null hypothesis can be rejected based on the total number of variables and samples within the experiment. As with any statistic, the larger the sample size, the more reliable the results.

### Key Takeaways

• A chi-square (χ2) statistic is a measure of the difference between the observed and expected frequencies of the outcomes of a set of events or variables.
• χ2 depends on the size of the difference between observed and expected values, the degrees of freedom, and the sample size.
• χ2 can be used to test whether two variables are related or independent from one another or to test the goodness-of-fit between an observed distribution and a theoretical distribution of frequencies.

## The Formula for Chi-Square Is

\begin{aligned}&\chi^2_c = \sum \frac{(O_i - E_i)^2}{E_i} \\&\textbf{where:}\\&c=\text{Degrees of freedom}\\&O=\text{Observed value(s)}\\&E=\text{Expected value(s)}\end{aligned}

## What Does a Chi-Square Statistic Tell You?

There are two main kinds of chi-square tests: the test of independence, which asks a question of relationship, such as, "Is there a relationship between student sex and course choice?"; and the goodness-of-fit test, which asks something like "How well does the coin in my hand match a theoretically fair coin?"
### Independence

When considering student sex and course choice, a χ2 test for independence could be used. To do this test, the researcher would collect data on the two chosen variables (sex and courses picked) and then compare the frequencies at which male and female students select among the offered classes using the formula given above and a χ2 statistical table. If there is no relationship between sex and course selection (that is, if they are independent), then the actual frequencies at which male and female students select each offered course should be expected to be approximately equal, or conversely, the proportion of male and female students in any selected course should be approximately equal to the proportion of male and female students in the sample. A χ2 test for independence can tell us how likely it is that random chance can explain any observed difference between the actual frequencies in the data and these theoretical expectations.

### Goodness-of-Fit

χ2 provides a way to test how well a sample of data matches the (known or assumed) characteristics of the larger population that the sample is intended to represent. If the sample data do not fit the expected properties of the population that we are interested in, then we would not want to use this sample to draw conclusions about the larger population. For example, consider an imaginary coin with exactly a 50/50 chance of landing heads or tails and a real coin that you toss 100 times. If this real coin is fair, then it will also have an equal probability of landing on either side, and the expected result of tossing the coin 100 times is that heads will come up 50 times and tails will come up 50 times. In this case, χ2 can tell us how well the actual results of 100 coin flips compare to the theoretical model that a fair coin will give 50/50 results. The actual toss could come up 50/50, or 60/40, or even 90/10.
The farther the actual results of the 100 tosses are from 50/50, the worse this set of tosses fits the theoretical expectation of 50/50, and the more likely we are to conclude that this coin is not actually a fair coin.
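The coin example can be made concrete with the formula above. A short sketch in Python (the 60/40 split is illustrative, not from the article):

```python
def chi_square(observed, expected):
    # chi^2 = sum over outcomes of (O_i - E_i)^2 / E_i
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 100 tosses: observed 60 heads and 40 tails; a fair coin expects 50/50
stat = chi_square([60, 40], [50, 50])
print(stat)  # (10**2)/50 + (10**2)/50 = 4.0
```

A perfectly fair-looking 50/50 outcome would give a statistic of 0; the larger the statistic, the worse the fit to the fair-coin model.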
http://fr.mathworks.com/help/matlab/ref/gamma.html?s_tid=gn_loc_drop&requestedDomain=fr.mathworks.com&nocookie=true
# Documentation

# gamma

Gamma function

## Syntax

`Y = gamma(X)`

## Description

`Y = gamma(X)` returns the `gamma` function at the elements of `X`. `X` must be real.

### Gamma Function

The `gamma` function is defined by the integral:

`$\Gamma \left(x\right)={\int }_{0}^{\infty }{e}^{-t}{t}^{x-1}dt$`

The `gamma` function interpolates the `factorial` function. For integer `n`:

`gamma(n+1) = n! = prod(1:n)`

### Tall Array Support

This function fully supports tall arrays. For more information, see Tall Arrays.

### Algorithms

The computation of `gamma` is based on algorithms outlined in [1]. Several different minimax rational approximations are used depending upon the value of `X`.

## References

[1] Cody, J., An Overview of Software Development for Special Functions, Lecture Notes in Mathematics, 506, Numerical Analysis Dundee, G. A. Watson (ed.), Springer Verlag, Berlin, 1976.

[2] Abramowitz, M. and I.A. Stegun, Handbook of Mathematical Functions, National Bureau of Standards, Applied Math. Series #55, Dover Publications, 1965, sec. 6.5.
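The interpolation property `gamma(n+1) = n!` is easy to confirm outside MATLAB as well. A quick check using Python's standard-library `math.gamma` (the same special function; this snippet is mine, not from the MATLAB documentation):

```python
import math

# Gamma interpolates the factorial: gamma(n + 1) = n! for positive integers n
for n in range(1, 10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n), rel_tol=1e-12)

# Gamma is defined between the integers too, e.g. gamma(1/2) = sqrt(pi)
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi), rel_tol=1e-12)
```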
http://silveiraneto.net/2012/08/30/latex-test/
# Latex test

This: i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right> Produces this: $i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>$ If you are seeing a complicated math formula in an image, then it worked.
https://economics.stackexchange.com/questions/27723/how-should-i-rebase-my-gdp
# How should I rebase my GDP?

I have GDP data from 1988 to 2009 in constant 1985 prices and GDP data from 2009 to 2017 in constant 2000 prices. My question is: how should I rebase my GDP? Upon searching the web, I am not sure whether what I understand is right. Should I just divide the GDP in constant 2000 prices by the GDP in 2000, and then multiply the result by the entire series from 1988 to 2009?
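One standard approach is to splice the two series at the overlap year (2009): scale the whole 1985-price series by the ratio of the two measures in 2009, so that everything ends up in constant 2000 prices. A sketch of the mechanics in Python (the GDP figures below are made up purely for illustration):

```python
# Hypothetical GDP levels; only the splicing mechanics matter here
old = {2007: 900.0, 2008: 950.0, 2009: 1000.0}    # constant 1985 prices
new = {2009: 2500.0, 2010: 2600.0, 2011: 2750.0}  # constant 2000 prices

# Ratio of the two measures in the common year 2009
factor = new[2009] / old[2009]

# Re-express the old series in constant 2000 prices
rebased = {year: value * factor for year, value in old.items()}

# One continuous series, all in constant 2000 prices
combined = {**rebased, **new}
```

This preserves the growth rates of the old series (every year is scaled by the same factor) and makes the two segments agree exactly in the overlap year.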
https://www.quantumstudy.com/practice-zone/mcq-probability/
# MCQ | Probability Practice Test-I

1. Three persons A1, A2 and A3 are to speak at a function along with 5 other persons. If the persons speak in random order, the probability that A1 speaks before A2 and A2 speaks before A3 is (A) 1/6 (B) 3/5 (C) 3/8 (D) none of these Ans: (A)

2. Two persons A and B have respectively n + 1 and n coins, which they toss simultaneously. Then the probability P that A will have more heads than B is (A) P > 1/2 (B) P = 1/2 (C) 1/4 < P < 1/2 (D) 0 < P < 1/4 Ans: (B)

3. On a toss of two dice, A throws a total of 5. The probability that he will throw another 5 before he throws 7 is (A) 1/9 (B) 1/6 (C) 2/5 (D) 5/36 Ans: (C)

4. One of two events must occur. If the chance of one is of the other, then the odds in favor of the other are (A) 1 : 3 (B) 3 : 1 (C) 2 : 3 (D) none of these Ans: (D)

5. In a convex polygon of 6 sides two diagonals are selected at random. The probability that they intersect at an interior point of the polygon is (A) 2/5 (B) 5/12 (C) 7/12 (D) 3/5 Ans: (B)

6. A and B are two events such that P(A) = 0.2 and P(A∪B) = 0.7. If A and B are independent events then P(B') equals (A) 2/7 (B) 7/9 (C) 3/8 (D) none of these Ans: (C)

7. A fair coin is tossed 99 times. Let X be the number of times heads occurs. Then P(X = r) is maximum when r is (A) 49 (B) 52 (C) 51 (D) none of these Ans: (A)

8. The numbers 1, 2, 3, …, n are arranged in random order. The probability that the digits 1, 2, 3, …, k (k < n) appear as neighbours in that order is (A) 1/n! (B) k!/n! (C) (n-k)!/n! (D) none of these Ans: (D)

9. Entries of a 2 × 2 determinant are chosen from the set {-1, 1}. The probability that the determinant has zero value is (A) 1/4 (B) 1/3 (C) 1/2 (D) none of these Ans: (C)

10. A bag contains 14 balls of two colours, the number of balls of each colour being equal. Seven balls are drawn at random one by one, the ball in hand being returned to the bag before each new draw.
The probability that at least 3 balls of each colour are drawn is (A) 1/2 (B) > 1/2 (C) < 1/2 (D) none of these Ans: (A)

11. A businessman is expecting two telephone calls. Mr Walia may call any time between 2 p.m. and 4 p.m., while Mr Sharma is equally likely to call any time between 2.30 p.m. and 3.15 p.m. The probability that Mr Walia calls before Mr Sharma is (A) 1/18 (B) 1/6 (C) 1/6 (D) none of these Ans: (C)

12. Let A, B, C be three events such that A and B are independent and P(C) = 0. Then the events A, B, C are (A) independent (B) pairwise independent but not totally independent (C) P(A) = P(B) = P(C) (D) none of these Ans: (A)

13. In a bag there are 15 red and 5 white balls. Two balls are chosen at random and one is found to be red. The probability that the second one is also red is (A) 12/19 (B) 13/19 (C) 14/19 (D) 15/19 Ans: (C)

14. A die is thrown a fixed number of times. If the probability of getting an even number 3 times is the same as the probability of getting an even number 4 times, then the probability of getting an even number exactly once is (A) 1/4 (B) 3/128 (C) 5/64 (D) 7/128 Ans: (D)

15. A man is known to speak the truth 3 out of 4 times. He throws a die and reports that it is a six. The probability that it is actually a six is (A) 3/8 (B) 1/5 (C) 3/4 (D) none of these Ans: (A)

16. A student appears for tests I, II and III. The student is successful if he passes either in tests I and II or in tests I and III. The probabilities of the student passing in tests I, II and III are respectively p, q and 1/2. If the probability of the student being successful is 1/2, then (A) p = q = 1 (B) p = q = 1/2 (C) p = 1, q = 0 (D) p = 1, q = 1/2 Ans: (C)

17. Three of the six vertices of a regular hexagon are chosen at random. The probability that the triangle with these three vertices is equilateral equals (A) 1/2 (B) 1/5 (C) 1/10 (D) 1/20 Ans: (C)

18. A fair coin is tossed repeatedly.
If tails appear on the first four tosses, then the probability of a head appearing on the fifth toss equals (A) 1/2 (B) 1/32 (C) 31/32 (D) 1/5 Ans: (A)

19. A number is chosen at random from the numbers 10 to 99. By seeing the number, a man will laugh if the product of the digits is 12. If he chooses three numbers with replacement, then the probability that he will laugh at least once is (A) 1 − (3/5)^3 (B) (43/45)^3 (C) 1 − (4/25)^3 (D) 1 − (43/45)^3 Ans: (D)

20. If two events A and B are such that P(A) > 0 and P(B) ≠ 1, then P is equal to (A) 1 − P(A/B) (B) 1 − P(A'/B) (C) 1 − P[(A∪B)/B'] (D) P(A/B')
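Several of these answers are easy to verify by brute force. For instance, question 1 (the probability that A1 speaks before A2, who speaks before A3, among 8 speakers) can be checked by enumerating every speaking order; a sketch, not from the original test:

```python
from fractions import Fraction
from itertools import permutations

# 8 speakers; A1, A2, A3 are labelled 0, 1, 2 below
favourable = 0
total = 0
for order in permutations(range(8)):
    total += 1
    if order.index(0) < order.index(1) < order.index(2):
        favourable += 1

print(Fraction(favourable, total))  # 1/6
```

The result matches the answer key: among the 3! = 6 equally likely relative orders of the three named speakers, exactly one has A1 before A2 before A3.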
https://www.physicsforums.com/threads/need-sum-help-information-theory.227586/
# Need some help (information theory)

1. Apr 8, 2008

Hello. I'm working on an information-theory problem that involves doing a nasty sum. The problem is this: in a widget factory there is a conveyor belt with N widgets on it, and an unknown fraction $\xi = a/N$ of them are defective. You examine a sample of n widgets and find that a fraction $\eta = b/n$ of them are defective. What is the mutual information $I(\eta : \xi)$ between the random variables $\eta$ and $\xi$? The idea, I think, is to see how large a sample n you need so that the sample defect rate gives you information about the actual defect rate.

Let $A_a$ be the event that there are a defective parts in the whole lot and $B_b$ be the event that there are b defective parts in the sample. Then the formula for mutual information is

$$I(\eta : \xi) = \sum_{a=1}^{N} \sum_{b=1}^{n} P(A_a)\, P(B_b \mid A_a) \log_{2} \frac{P(B_b \mid A_a)}{P(B_b)},$$

which is always nonnegative. Here's what I've got so far: $P(A_a) = 1/N$ by the principle of insufficient reason (a could be anything from 1 to N with equal probability), and

$$P(B_b \mid A_a) = \frac{\binom{a}{b}\binom{N-a}{n-b}}{\binom{N}{n}} = \frac{\binom{n}{b}\binom{N-n}{a-b}}{\binom{N}{a}},$$

$$P(B_b) = \sum_{a=1}^{N} P(A_a)\, P(B_b \mid A_a) = \sum_{a=1}^{N} \frac{1}{N}\, \frac{\binom{n}{b}\binom{N-n}{a-b}}{\binom{N}{a}} \approx \int_{0}^{1} \binom{n}{b} x^{b} (1-x)^{n-b}\, dx = \frac{\binom{n}{b}}{\binom{n}{b}(n+1)} = \frac{1}{n+1},$$

if you pretend it is a Riemann sum and assume that $N \gg n$ and $a \gg b$, which I'm not sure is OK to do. I'm guessing the idea is to get some asymptotic formula for the mutual information as N becomes large, but how do you retain the dependence on N in the sum?

For instance, if I apply the large-N approximation for $P(B_b \mid A_a)$, which is $\binom{n}{b}\left(\frac{a}{N}\right)^{b}\left(1 - \frac{a}{N}\right)^{n-b}$, and do the Riemann sum, I get an expression that has no dependence on N and apparently diverges to negative infinity (which is weird, because mutual information is nonnegative). This is not a homework problem, just a "something to think about" problem I came across in an informal book on information theory. Thanks!
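For small N the double sum can be evaluated exactly, which gives a useful sanity check on any large-N approximation. The sketch below is my own code (not from the book mentioned in the post); it uses the hypergeometric form of $P(B_b \mid A_a)$ and sums b from 0 to n so that no outcome is dropped:

```python
from math import comb, log2

def mutual_information(N, n):
    """Exact I(eta : xi) in bits, with the uniform prior P(A_a) = 1/N
    over a = 1..N used in the post above."""
    def p_b_given_a(b, a):
        # Hypergeometric: b defectives in a sample of n, given a among N.
        if b > a or n - b > N - a:
            return 0.0
        return comb(a, b) * comb(N - a, n - b) / comb(N, n)

    # Marginal P(B_b) = sum_a P(A_a) P(B_b | A_a)
    p_b = [sum(p_b_given_a(b, a) for a in range(1, N + 1)) / N
           for b in range(n + 1)]

    I = 0.0
    for a in range(1, N + 1):
        for b in range(n + 1):
            p_cond = p_b_given_a(b, a)
            if p_cond > 0.0:
                I += (1.0 / N) * p_cond * log2(p_cond / p_b[b])
    return I

I_small = mutual_information(50, 5)   # a modest case that runs instantly
```

The exact value is nonnegative, as it must be, which confirms that the divergence to minus infinity comes from the approximation rather than from the formula itself.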
https://zbmath.org/?q=an%3A0673.15006
# zbMATH — the first resource for mathematics

Uncoupling the Perron eigenvector problem. (English) Zbl 0673.15006

A method is given to find the unique normalized Perron vector $$\pi$$ satisfying $$A\pi = \rho\pi$$, where A is a nonnegative irreducible $$m \times m$$ matrix with spectral radius $$\rho$$, $$\pi = (\pi_1, \ldots, \pi_m)^T$$ and $$\pi_1 + \ldots + \pi_m = 1$$. The matrix is uncoupled into two or more smaller matrices $$P_1, P_2, \ldots, P_k$$ such that this sequence has the following properties: (1) Each $$P_i$$ is irreducible and nonnegative and has a unique Perron vector $$\pi^{(i)}$$. (2) Each $$P_i$$ has the spectral radius $$\rho$$. (3) The Perron vectors $$\pi^{(i)}$$ for $$P_i$$ can be determined independently. (4) The smaller Perron vectors $$\pi^{(i)}$$ can easily be coupled back together to form the Perron vector $$\pi$$ for A.

Reviewer: B. Ruffer-Beedgen

##### MSC:
15B48 Positive matrices and their generalizations; cones of matrices
15A18 Eigenvalues, singular values, and eigenvectors
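The uncoupling method itself is what the paper contributes; as a baseline illustration of the object being computed, here is a plain power-iteration sketch (my own code, not Meyer's algorithm) that produces the normalized Perron vector of a small primitive matrix directly:

```python
def perron_vector(A, tol=1e-12, max_iter=10000):
    """Power iteration for a nonnegative irreducible (primitive) matrix A,
    returning (rho, pi) with A pi = rho pi, pi > 0 and sum(pi) = 1."""
    m = len(A)
    pi = [1.0 / m] * m                       # positive starting vector
    for _ in range(max_iter):
        w = [sum(A[i][j] * pi[j] for j in range(m)) for i in range(m)]
        s = sum(w)                           # converges to rho, since sum(pi) = 1
        w = [wi / s for wi in w]
        if max(abs(wi - pii) for wi, pii in zip(w, pi)) < tol:
            pi = w
            break
        pi = w
    # With sum(pi) = 1 and A pi = rho pi, the entries of A pi sum to rho.
    rho = sum(sum(A[i][j] * pi[j] for j in range(m)) for i in range(m))
    return rho, pi

A = [[2.0, 1.0],
     [1.0, 3.0]]   # nonnegative, irreducible, primitive
rho, pi = perron_vector(A)
```

For this matrix the spectral radius is $(5+\sqrt{5})/2 \approx 3.618$, and the returned vector is entrywise positive with entries summing to 1, as the Perron-Frobenius theorem guarantees.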
http://mathhelpforum.com/advanced-statistics/101698-failure-devices-prob.html
1. ## Failure devices prob.

There are 30 devices; it is known that 20 of them are faulty and 10 are OK. Four of them are chosen. What is the probability that the chosen devices are not all of them faulty/OK? Result: p = 0.611.

So I tried finding the probability that all 4 of them are faulty. I think it is $p^4=\left(\frac{2}{3}\right)^4=0.197531$, so the required probability must be $p=1-p^4=0.802469$. Is it wrong? Obviously the given result is different.

2. I like your p value. You should consider the binomial theorem here; it is

$P(X=k) = \binom{n}{k}p^k(1-p)^{n-k}$

In this case $n = 30$, $p = \frac{2}{3}$ and $k = 4$; substituting in these values we get

$P(X=4) = \binom{30}{4} \left(\frac{2}{3}\right)^4\left(1- \frac{2}{3}\right)^{30-4}$

Can you finish it from here? This will be the probability that 4 devices fail, which I think is what the question is asking.

3. Originally Posted by pickslides
I like your p value. You should consider the binomial theorem here [...]
pickslides, the binomial theorem is not applicable here. These are not independent trials. The answer for "not all failures" is $1-\frac{\binom{20}{4}}{\binom{30}{4}}$.

4. Originally Posted by Plato
The binomial theorem is not applicable here. These are not independent trials. [...]
I tried using the binomial but with n = 4, and that led me to $p^4$. OK, Plato, some explanation would be appreciated, and why doesn't it fit the result in my book?

5. Originally Posted by pickslides
This will be the probability that 4 devices fail, which I think is what the question is asking. [...]
No, the question is to find the probability that not all of them (all 4) fail/are OK.

6. Originally Posted by javax
OK, Plato, some explanation would be appreciated [...]
It is not at all clear what you are asking. If the question is, "Select four items from thirty of which exactly twenty are defective. What is the probability that not all four are defective nor all four are non-defective?", then:

The probability that all are defective is $\frac{\binom{20}{4}}{\binom{30}{4}}$. The probability that all are non-defective is $\frac{\binom{10}{4}}{\binom{30}{4}}$. Note that those two are disjoint events, so the probability that all are defective OR all are non-defective is $\frac{\binom{20}{4}}{\binom{30}{4}}+\frac{\binom{10}{4}}{\binom{30}{4}}$, and the probability that not all are defective and not all are non-defective is

$1-\left(\frac{\binom{20}{4}}{\binom{30}{4}}+\frac{\binom{10}{4}}{\binom{30}{4}}\right)$

7. Originally Posted by Plato
It is not at all clear what you are asking. [...]
Mate, the question is exactly how you said. Your answer still doesn't fit the given result, but I trust you. Thanks.

8. Plato, one more question. As I mentioned, I tried to find it using $1-p_1^4-p_2^4$, where $p_1=\frac{2}{3}$ and $p_2=\frac{1}{3}$, which is quite close to your result. Is it OK to find it like this?

9. Originally Posted by javax
Mate, the question is exactly how you said. Your answer still doesn't fit the given result [...]
It may be a matter of translation; we may not be doing the same question. On the other hand, your textbook may simply be wrong.

10. "Select four items from thirty of which exactly twenty are defective. What is the probability that not all four are defective nor all four are non-defective" - you fully understood my question. It's OK, I believe the given result is wrong. (Y)

11. Originally Posted by javax
As I mentioned, I tried to find it using $1-p_1^4-p_2^4$ [...] Is it OK to find it like this?
Unless the trials (the events) are independent, the binomial formula is not applicable, and selecting four objects from thirty is in no way independent. But say we have a batch of twenty black balls and ten white balls. We pick a random ball from that collection, record its color, and return it to the batch. We do that four times. Those outcomes are independent.
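Plato's disjoint-events calculation is easy to check numerically. This small script (mine, not from the thread) evaluates the complement of the two "all the same" events:

```python
from math import comb

def p_not_all_same(N=30, bad=20, good=10, k=4):
    """P(a draw of k without replacement is neither all faulty nor all OK)."""
    total = comb(N, k)
    p_all_bad = comb(bad, k) / total      # C(20,4)/C(30,4)
    p_all_good = comb(good, k) / total    # C(10,4)/C(30,4)
    return 1 - (p_all_bad + p_all_good)

p = p_not_all_same()
```

This gives p ≈ 0.8155, which agrees with the disjoint-events formula and not with the textbook's 0.611, consistent with the thread's conclusion that the given result is wrong.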
https://www.coursehero.com/file/51430413/Lecture-26pdf/
# Lecture_26.pdf - Spline Interpolation

Spline Interpolation. Given: (n + 1) observations or data pairs [(x_0, f_0), (x_1, f_1), (x_2, f_2), ..., (x_n, f_n)]. This gives a mesh of nodes $x_0 < x_1 < \cdots < x_n$ on the independent variable and the corresponding function values $f_0, f_1, \ldots, f_n$.

Goal: fit an independent polynomial in each interval (between two points) with certain continuity requirements at the nodes.

Linear spline: continuity in function values, C0 continuity.
Quadratic spline: continuity in function values and 1st derivatives, C1 continuity.
Cubic spline: continuity in function values, 1st and 2nd derivatives, C2 continuity.

Denote, for node i (at $x_i$): functional value $f_i$, first derivative $u_i$, second derivative $v_i$.

Spline Interpolation: Cubic

A cubic polynomial $q_i(x)$ in each interval: (n + 1) points, n cubic polynomials, 4n unknowns. Available conditions: (n + 1) function values, (n - 1) function-continuity, (n - 1) 1st-derivative continuity and (n - 1) 2nd-derivative continuity conditions, for 4n - 2 conditions in total. Two free conditions are to be chosen by the user!

The second derivative of a cubic spline is a set of linear splines. Writing $h_i = x_{i+1} - x_i$ and letting $v_i$ denote the 2nd derivative at the i-th node, the cubic piece on $[x_i, x_{i+1}]$ may be written

$$q_i(x) = \frac{v_i\,(x_{i+1}-x)^3 + v_{i+1}\,(x-x_i)^3}{6 h_i} + \left(\frac{f_i}{h_i} - \frac{v_i h_i}{6}\right)(x_{i+1}-x) + \left(\frac{f_{i+1}}{h_i} - \frac{v_{i+1} h_i}{6}\right)(x-x_i).$$

We need to estimate the (n + 1) unknowns $v_i$; we have (n - 1) conditions from the continuity of the first derivative.
The first-derivative continuity conditions give the tridiagonal system

$$h_{i-1} v_{i-1} + 2\,(h_{i-1}+h_i)\, v_i + h_i\, v_{i+1} = 6\left(\frac{f_{i+1}-f_i}{h_i} - \frac{f_i-f_{i-1}}{h_{i-1}}\right), \qquad i = 1, 2, 3, \ldots, n-1.$$

So there are (n - 1) equations in (n + 1) unknowns; two conditions have to be provided by the user, and they decide the type of cubic spline:

Natural spline: $v_0 = v_n = 0$.
Parabolic runout: $v_0 = v_1$ and $v_n = v_{n-1}$.
Not-a-knot: continuity of the third derivative at $x_1$ and $x_{n-1}$, i.e. $h_1 v_0 - (h_0+h_1)\,v_1 + h_0 v_2 = 0$ and $h_{n-1} v_{n-2} - (h_{n-2}+h_{n-1})\,v_{n-1} + h_{n-2} v_n = 0$.
Clamped spline: $q'(x_0) = \alpha$ and $q'(x_n) = \beta$ prescribed, giving the two extra equations
$$2 h_0 v_0 + h_0 v_1 = 6\left(\frac{f_1-f_0}{h_0} - \alpha\right), \qquad h_{n-1} v_{n-1} + 2 h_{n-1} v_n = 6\left(\beta - \frac{f_n-f_{n-1}}{h_{n-1}}\right).$$
Periodic: $f_0 = f_n$, $u_0 = u_n$ and $v_0 = v_n$. The first comes from the data (if it is not satisfied, a periodic spline is not appropriate); the next two give the other two equations.

Example Problem (Q4 of Tutorial 9): Consider the function exp(x) sampled at the points x = 0, 0.5, 1.0, 1.5 and 2. Estimate the function value at x = 1.80 by interpolating the function using (a) a natural cubic spline and (b) a not-a-knot cubic spline. Calculate the true percentage error for both splines. Which is the better spline for this problem, and why? (Solution: posted online along with the Tutorial 9 solutions.)

Example Problem (Heat Transfer in a Lake): Lakes in the temperate zone can become thermally stratified during the summer. Warm, buoyant water near the surface overlies colder, denser bottom water; such stratification effectively divides the lake into two layers, the epilimnion and the hypolimnion, separated by a plane called the thermocline.

z (m):  0,    2.3,  4.9,  9.1,  13.7, 18.3, 22.9, 27.2
T (°C): 22.8, 22.8, 22.8, 22.6, 13.9, 11.7, 11.1, 11.1

The location of the thermocline can be defined as the inflection point of the T-z curve, i.e. where $d^2T/dz^2 = 0$. It is also the point at which the absolute value of the first derivative (gradient) is maximum. Use cubic splines to determine the thermocline depth of this lake. Also use the splines to determine the value of the gradient at the thermocline.

Spline Interpolation: Using Local Coordinates

Map $x \in [x_i, x_{i+1}]$ to a local coordinate $s \in [0, 1]$. At each node i we denote: location $x_i$; functional value $f_i$; intervals $h_{i-1}$ and $h_i$; first derivative $u_i$; second derivative $v_i$. The transformation $s = (x - x_i)/h_i$ gives $\frac{d}{dx} = \frac{1}{h_i}\frac{d}{ds}$.

Continuity at node i, between the piece $q_{i-1}$ (at s = 1) and the piece $q_i$ (at s = 0):
C0: $q_{i-1}(1) = q_i(0) = f_i$;
C1: $\frac{1}{h_{i-1}}\, q'_{i-1}(1) = \frac{1}{h_i}\, q'_i(0) = u_i$;
C2: $\frac{1}{h_{i-1}^2}\, q''_{i-1}(1) = \frac{1}{h_i^2}\, q''_i(0) = v_i$.

Linear spline (C0-continuous): $q_i(s) = (1-s)\, f_i + s\, f_{i+1}$.

Quadratic spline (C1-continuous): using the definition of $u_i$ and the C1 continuity condition, the nodal slopes satisfy the recursion $u_i + u_{i+1} = \frac{2\,(f_{i+1} - f_i)}{h_i}$.

Cubic spline (C2-continuous): using C0 continuity, $q_i(0) = f_i$ and $q_i(1) = f_{i+1}$. Now we have two options. Option 1: use the first derivatives $u_i$ as unknowns and C2 continuity to estimate them. Option 2: use the second derivatives $v_i$ as unknowns and C1 continuity to estimate them.

Option 1 leads to the tridiagonal system

$$\frac{u_{i-1}}{h_{i-1}} + 2\left(\frac{1}{h_{i-1}} + \frac{1}{h_i}\right) u_i + \frac{u_{i+1}}{h_i} = 3\left(\frac{f_i - f_{i-1}}{h_{i-1}^2} + \frac{f_{i+1} - f_i}{h_i^2}\right), \qquad i = 1, 2, 3, \ldots, n-1.$$

Using the two other conditions, one may obtain similar splines of the different types: natural ($v_0 = v_n = 0$), clamped ($u_0 = \alpha$, $u_n = \beta$) and parabolic runout ($v_0 = v_1$, $v_n = v_{n-1}$), each expressed in terms of the local-coordinate unknowns. Not-a-knot and periodic conditions carry over in the same way; the formulation of these two is left as homework!
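Tutorial 9, Q4 above (exp(x) sampled at 0, 0.5, ..., 2, evaluated at x = 1.8) can be worked directly with the second-derivative unknowns $v_i$. The following is my own pure-Python sketch of a natural cubic spline (Thomas algorithm on the tridiagonal system), applied to part (a) of that problem; the posted tutorial solution is not reproduced here:

```python
import math

def natural_cubic_spline(x, f):
    """Natural cubic spline through (x_i, f_i): solves the tridiagonal
    system for the nodal second derivatives v_i (with v_0 = v_n = 0)
    and returns a callable evaluator."""
    n = len(x) - 1
    h = [x[i + 1] - x[i] for i in range(n)]

    # Tridiagonal system, rows k = 0..n-2 for interior nodes i = 1..n-1
    a = [h[i - 1] for i in range(1, n)]                # sub-diagonal (a[0] unused)
    b = [2 * (h[i - 1] + h[i]) for i in range(1, n)]   # diagonal
    c = [h[i] for i in range(1, n)]                    # super-diagonal
    d = [6 * ((f[i + 1] - f[i]) / h[i] - (f[i] - f[i - 1]) / h[i - 1])
         for i in range(1, n)]

    # Thomas algorithm: forward elimination, then back substitution
    for k in range(1, n - 1):
        w = a[k] / b[k - 1]
        b[k] -= w * c[k - 1]
        d[k] -= w * d[k - 1]
    v = [0.0] * (n + 1)                    # natural end conditions v_0 = v_n = 0
    v[n - 1] = d[-1] / b[-1]
    for i in range(n - 2, 0, -1):
        v[i] = (d[i - 1] - c[i - 1] * v[i + 1]) / b[i - 1]

    def q(t):
        # Locate the interval [x_i, x_{i+1}] containing t
        i = max(0, min(n - 1, next((j for j in range(n) if t <= x[j + 1]), n - 1)))
        A, B = x[i + 1] - t, t - x[i]
        return ((v[i] * A ** 3 + v[i + 1] * B ** 3) / (6 * h[i])
                + (f[i] / h[i] - v[i] * h[i] / 6) * A
                + (f[i + 1] / h[i] - v[i + 1] * h[i] / 6) * B)
    return q

xs = [0.0, 0.5, 1.0, 1.5, 2.0]                     # Tutorial 9, Q4 sample points
spline = natural_cubic_spline(xs, [math.exp(t) for t in xs])
approx = spline(1.8)
pct_err = abs(approx - math.exp(1.8)) / math.exp(1.8) * 100   # true % error
```

The natural end condition forces $v_n = 0$ even though $(e^x)'' = e^x$ is large at x = 2, so the error near the right end (roughly 1.4% at x = 1.8) is dominated by the boundary condition rather than the mesh size.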
Option 2 leads to exactly the system in the $v_i$ written earlier, the same equation that was obtained using Lagrange polynomials! The boundary conditions are also the same.

ESO 208A: Computational Methods in Engineering, Numerical Differentiation
Abhas Singh, Department of Civil Engineering, IIT Kanpur
Acknowledgements: Profs. Saumyen Guha and Shivam Tripathi (CE)

Numerical Differentiation: Finite Difference. Let us compute df/dx at node i by approximating the function locally and differentiating the approximation.

Forward difference (using $x_i$ and $x_{i+1}$): $f'(x_i) \approx \frac{f_{i+1} - f_i}{x_{i+1} - x_i}$.
Backward difference (using $x_{i-1}$ and $x_i$): $f'(x_i) \approx \frac{f_i - f_{i-1}}{x_i - x_{i-1}}$.
Central difference (three points on a regular or uniform grid of mesh size h): $f'(x_i) \approx \frac{f_{i+1} - f_{i-1}}{2h}$ and $f''(x_i) \approx \frac{f_{i+1} - 2 f_i + f_{i-1}}{h^2}$.

Similarly, one can approximate the function between three points and obtain forward and backward three-point difference expressions for the first and second derivatives at $x = x_i$; this is left for homework practice!

Accuracy: how accurate is the numerical differentiation scheme with respect to the TRUE differentiation?
Truncation error analysis; modified wave number, amplitude error and phase error analysis for periodic functions. Recall: True Value (a) = Approximate Value + Error (ε). Consistency: a numerical differentiation scheme is consistent if it converges to the TRUE differentiation as h → 0.
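The finite-difference formulas are one-liners in code, and the consistency statement is easy to check experimentally: for an O(h²) scheme, halving h should cut the error by about four. A short sketch (my own, not from the lecture notes), using f = sin:

```python
import math

def central_first(f, x, h):
    """Central difference for f'(x): (f(x+h) - f(x-h)) / (2h), O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def central_second(f, x, h):
    """Central difference for f''(x): (f(x+h) - 2 f(x) + f(x-h)) / h^2, O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# Error of the first-derivative approximation at x = 1 for two step sizes
e1 = abs(central_first(math.sin, 1.0, 0.10) - math.cos(1.0))
e2 = abs(central_first(math.sin, 1.0, 0.05) - math.cos(1.0))
ratio = e1 / e2   # should be close to 4 for an O(h^2) scheme
```

The observed ratio of roughly 4 is exactly the truncation-error behaviour the analysis above predicts, since the leading error term is proportional to h².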
http://www.codecogs.com/library/maths/calculus/differential/index.php
# Differential

## Introduction

Differential equations are a powerful mathematical tool that helps us understand nature and finance, allowing us to make accurate calculations, including:

• movement of an object
• the collision of two cars
• trajectories of planets

As an example, imagine a particle that is projected horizontally (gravity is neglected):

• Velocity is given by $v = dx/dt$
• Acceleration is $dv/dt$
• It is assumed that the drag is proportional to the velocity

Applying Newton's second law, "Force = Mass × Acceleration", gives the equation of motion in terms of time; if we are interested in the distance x instead, the derivative can be converted using $\frac{dv}{dt} = v\frac{dv}{dx}$.

## Definitions

Differential equations which involve only one independent variable are called Ordinary; in these equations x is the independent variable and y is the dependent variable. Equations which involve two or more independent variables and partial differential coefficients with respect to them are called Partial, for example the Laplace equation $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$.

### Order

The order of an equation is the order of the highest differential coefficient it contains: an equation that involves a second differential coefficient but none of higher orders is said to be of Second Order, and similarly for first order, third order, and so on.

### Degree

The degree of an equation is the power of the highest differential coefficient once the equation has been made rational and integral as far as the differential coefficients are concerned (first degree, second degree). Note that this definition of degree does not require x or y to occur rationally or integrally.

## The Geometrical Meaning Of A Differential Equation

This section presents geometric characteristics of the solution of a differential equation.

### Linear Solution

A linear function is a function such as $y = Ax + B$, for which $\frac{dy}{dx} = A$; a quadratic function is a function such as $y = Ax^2 + Bx + C$, for which $\frac{dy}{dx} = 2Ax + B$. Now consider the equation $\frac{dy}{dx} = A$:

• If A = 0 the solution curves are horizontal lines.
• If A = 1 the solution curves are lines of slope 1; if A = -2, lines of slope -2.

### Exponential Solution

An exponential function is a function such as $y = Ae^{kx}$. Now consider the equation $\frac{dy}{dx} = ky$. This can be rearranged as $\frac{dy}{y} = k\,dx$; the variables have now been separated, and integrating gives $\ln y = kx + C$, from which the explicit form is $y = Ae^{kx}$.

## The Formation Of Differential Equations By Elimination

If from an equation containing one arbitrary constant we eliminate that constant by differentiating, we get a first-order differential equation. Extending this concept, if we started with n arbitrary constants, we could eliminate them by n differentiations; the result would be a differential equation of the n-th order. Conversely, if we are given a differential equation of the n-th order we can, in general, obtain an equivalent relationship containing no derivatives but n arbitrary constants. This relationship is called "The General Solution".

For example, $\frac{d^4y}{dx^4} = w$, where w is a constant. Integrating with respect to x gives $\frac{d^3y}{dx^3} = wx + A$, and so on until $y = \frac{wx^4}{24} + \frac{Ax^3}{6} + \frac{Bx^2}{2} + Cx + E$, where A, B, C and E are all arbitrary constants.

## The Complete Primitive; Particular Integral; And Singular Solution

The solution of a differential equation containing the full number of arbitrary constants is called "The Complete Primitive". Any solution derived from the Complete Primitive by giving particular values to these constants is called "A Particular Integral". For example, a particular solution of the equation above is given by $y = \frac{wx^4}{24}$ (obtained by putting A, B, C, E = 0).

Example:

##### Example - The use of Differential Equations to Solve Problems in Dynamics

Problem: A cricket ball is thrown vertically upwards with an initial velocity of u ft/sec. The retardation is $g + kv$. Find the maximum height reached (Y) and the time of flight to the vertex (T), and express the initial velocity u in terms of T.

Workings: The acceleration is $\frac{dv}{dt} = -kv - g$. To find the time of flight T, separate the variables: $\frac{dv}{kv+g} = -dt$, so $\frac{1}{k}\ln(kv+g) = -t + C$. When t = 0, v = u, thus $C = \frac{1}{k}\ln(ku+g)$. At the vertex t = T and v = 0, so

$$T = \frac{1}{k}\ln\!\left(1 + \frac{ku}{g}\right), \qquad \text{i.e.}\qquad u = \frac{g}{k}\left(e^{kT} - 1\right).$$
For the height Y, write $v\frac{dv}{dy} = -kv - g$. When y = 0, v = u, and at the vertex v = 0; integrating between these limits gives

$$Y = \frac{u}{k} - \frac{g}{k^2}\ln\!\left(1 + \frac{ku}{g}\right).$$

Solution: the flight time is $T = \frac{1}{k}\ln\left(1 + \frac{ku}{g}\right)$ and the maximum height is $Y = \frac{u}{k} - \frac{g}{k^2}\ln\left(1 + \frac{ku}{g}\right)$.

## Differential Equations Which Include Trigonometrical Functions

The right-hand side in the following worked examples is usually rewritten in the form $R\sin(\theta + \alpha)$. For those unused to this type of trigonometrical manipulation, the following notes should help.

Example:

##### Example - Basic examples

Problem: basic trigonometrical examples.

Workings: the reference page on Trigonometrical Formulae includes the identity $a\sin\theta + b\cos\theta = R\sin(\theta + \alpha)$, with $R = \sqrt{a^2+b^2}$. If during the solution of a differential equation we arrive at an expression of the same form in which a and b have been replaced by "3" and "4", the coefficients clearly cannot themselves be a sine and cosine, since those cannot exceed unity. But drawing a right-angled triangle with sides 3 and 4 and hypotenuse 5 gives $\cos\alpha = 3/5$ and $\sin\alpha = 4/5$, so the expression can be rearranged as $5\sin(\theta + \alpha)$, which satisfies the requirements of the identity.

## Mixed Examples

The next examples present some mixed differential equations (containing exponentials, polynomials, sines and cosines).

Example:

##### Example - Using the lambda example

Problem: Solve. Workings: using the D operator. Solution: therefore the general solution is obtained.

## Section Pages

#### Taylor

Computes the first and second derivatives of a function using the Taylor formula.

double taylor1 (double (*f)(double), double x, double h, double gamma = 1.0) [inline]
double taylor2 (double (*f)(double), double x, double h, double gamma = 1.0) [inline]

#### Taylor Table

Computes the first and second derivatives of a function at multiple points.
std::vector<double> taylor1_table (double (*f)(double), std::vector<double> &points, double h, double gamma = 1.0)
std::vector<double> taylor2_table (double (*f)(double), std::vector<double> &points, double h, double gamma = 1.0)

#### First Order

First Order Differential Equations with worked examples

#### Linear with Constant Coefficient

A guide to linear equations of second and higher degrees

#### Separable

This section contains worked examples of the type of differential equation which can be solved by integration

#### The D operator

Solving Differential Equations using the D operator

#### Homogeneous

The solution of homogeneous differential equations including the use of the D operator

#### Linear Simultaneous Equations

Linear simultaneous differential equations

#### Partial

An Introduction to Partial Differential Equations
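The cricket-ball dynamics above reduce to $\frac{dv}{dt} = -kv - g$, whose standard closed-form flight time is $T = \frac{1}{k}\ln(1 + \frac{ku}{g})$. That result can be sanity-checked by numerical integration; the values of u, k and g below are arbitrary test values I chose (the original problem leaves them symbolic):

```python
import math

def flight_time_numeric(u, k, g, dt=1e-5):
    """Euler-integrate dv/dt = -k*v - g from v = u down to v = 0
    and return the elapsed time (the time of flight to the vertex)."""
    v, t = u, 0.0
    while v > 0.0:
        v += (-k * v - g) * dt
        t += dt
    return t

u, k, g = 30.0, 0.1, 32.0                    # arbitrary values: ft/s, 1/s, ft/s^2
T_exact = math.log(1.0 + k * u / g) / k      # closed form for dv/dt = -kv - g
T_num = flight_time_numeric(u, k, g)
```

The numerical time of flight agrees with the closed form to well below a millisecond, and both are smaller than the drag-free ascent time u/g, as expected since the drag term decelerates the ball faster.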
http://mathhelpforum.com/calculus/84893-caculate-dervitiv-using-exponential-series.html
# Thread: Calculate Derivative Using Exponential Series?

1. ## Calculate Derivative Using Exponential Series?

Let $f(x) = e^{x^3}$. Calculate the 9th derivative $f^{(9)}(0)$ using the exponential series and Taylor's formula.

OK, I am not sure if I did this correctly, but the series for $e^x$ is the infinite series $\sum_{n=0}^{\infty} x^n/n!$. Do I just plug in the equation and solve? I know it's zero, but how?

2. Originally Posted by zangestu888
I know it's zero, but how? [...]
Zero, huh?

$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + ...$

$e^{x^3} = 1 + x^3 + \frac{x^6}{2!} + \frac{x^9}{3!} + ...$

After 9 derivatives, the $\frac{x^9}{3!}$ term will be a constant equal to $\frac{9!}{3!} = 60480$. Every lower term vanishes, and every subsequent term retains factors of x (which become 0 when evaluating the 9th derivative at x = 0).

3. Can you please explain what you did? Is there a shortcut to finding it using the exponential series and Taylor's formula?
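The responder's argument can be verified mechanically from Taylor's formula $f^{(k)}(0) = k!\,c_k$, where $c_k$ is the coefficient of $x^k$ in the series; no symbolic algebra is needed (this is my own check, not from the thread):

```python
from fractions import Fraction
from math import factorial

# Taylor coefficients of exp(x^3): substituting x^3 into the series for
# e^x shows that only powers divisible by 3 appear, with c_{3m} = 1/m!.
deg = 12
c = [Fraction(0)] * (deg + 1)
for m in range(deg // 3 + 1):
    c[3 * m] = Fraction(1, factorial(m))

# Taylor's formula: f^(k)(0) = k! * c_k.  For k = 9, c_9 = 1/3!.
ninth = factorial(9) * c[9]   # 9!/3! = 60480
```

Exact rational arithmetic (via `Fraction`) avoids any floating-point rounding in the factorial ratio, and the same scheme gives, for instance, $f^{(8)}(0) = 0$ since $c_8 = 0$.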
http://www.ck12.org/geometry/Slope-in-the-Coordinate-Plane/lesson/Slope-in-the-Coordinate-Plane/
# Slope in the Coordinate Plane

## Steepness of a line between two given points.

What if you were given the coordinates of two points? How would you determine the steepness of the line they form? After completing this Concept, you'll be able to find the slope of a line through two points.

### Guidance

Recall from Algebra I that slope is the measure of the steepness of a line. The line through two points $(x_1, y_1)$ and $(x_2, y_2)$ has slope $m = \frac{y_2-y_1}{x_2-x_1}$. You might have also learned slope as $\frac{\text{rise}}{\text{run}}$. This is a great way to remember the formula. Also remember that if an equation is written in slope-intercept form, $y=mx+b$, then $m$ is always the slope of the line. Slopes can be positive, negative, zero, or undefined.

#### Example A

What is the slope of the line through (2, 2) and (4, 6)?

Use (2, 2) as $(x_1, y_1)$ and (4, 6) as $(x_2, y_2)$.

$m = \frac{6-2}{4-2} = \frac{4}{2} = 2$

#### Example B

Find the slope between (-8, 3) and (2, -2).

Use (-8, 3) as $(x_1, y_1)$ and (2, -2) as $(x_2, y_2)$.

$m = \frac{-2-3}{2-(-8)} = \frac{-5}{10} = -\frac{1}{2}$

#### Example C

The picture shown is the California Incline, a short road that connects Highway 1 with Santa Monica. The length of the road is 1532 feet and it has an elevation of 177 feet. You may assume that the base of this incline is zero feet. Can you find the slope of the California Incline?

In order to find the slope, we need to first find the horizontal distance in the triangle shown. This triangle represents the incline and the elevation.
To find the horizontal distance, we need to use the Pythagorean Theorem (a concept you will be introduced to formally in a future lesson), $a^2+b^2 = c^2$, where $c$ is the hypotenuse. Here the road is the hypotenuse, $c = 1532$, and the elevation is one leg, $a = 177$, so the horizontal distance is $b = \sqrt{1532^2 - 177^2} \approx 1521.75$ feet. The slope is then $\frac{177}{1521.75}$, which is roughly $\frac{3}{25}$.

### Guided Practice

1. Find the slope between (-5, -1) and (3, -1).
2. What is the slope of the line through (3, 2) and (3, 6)?
3. Find the slope between (-5, 2) and (3, 4).

Answers:

1. Use (-5, -1) as $(x_1, y_1)$ and (3, -1) as $(x_2, y_2)$: $m = \frac{-1-(-1)}{3-(-5)} = \frac{0}{8} = 0$. The slope of this line is 0, so it is a horizontal line. Horizontal lines always pass through the $y$-axis. The $y$-coordinate for both points is -1. So, the equation of this line is $y = -1$.

2. Use (3, 2) as $(x_1, y_1)$ and (3, 6) as $(x_2, y_2)$: the denominator $x_2 - x_1$ is 0, so the slope of this line is undefined, which means that it is a vertical line. Vertical lines always pass through the $x$-axis. The $x$-coordinate for both points is 3. So, the equation of this line is $x = 3$.

3. Use (-5, 2) as $(x_1, y_1)$ and (3, 4) as $(x_2, y_2)$: $m = \frac{4-2}{3-(-5)} = \frac{2}{8} = \frac{1}{4}$.

### Explore More

Find the slope between the two given points.

1. (4, -1) and (-2, -3)
2. (-9, 5) and (-6, 2)
3. (7, 2) and (-7, -2)
4. (-6, 0) and (-1, -10)
5. (1, -2) and (3, 6)
6. (-4, 5) and (-4, -3)
7. (-2, 3) and (-2, -3)
8. (4, 1) and (7, 1)

For 9-10, determine if the statement is true or false.

9. If you know the slope of a line you will know whether it is pointing up or down from left to right.
10. Vertical lines have a slope of zero.

### Vocabulary

Slope: a measure of the steepness of a line.
A line can have positive, negative, zero (horizontal), or undefined (vertical) slope. The slope of a line can be found by calculating "rise over run," or "the change in $y$ over the change in $x$." The symbol for slope is $m$.
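The slope formula from the Guidance section, including the zero and undefined cases from the Guided Practice, can be sketched in Python (the function name is illustrative, not from the lesson):

```python
def slope(p1, p2):
    """Slope m = (y2 - y1) / (x2 - x1) of the line through p1 and p2.
    Returns None to signal an undefined (vertical-line) slope."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        return None                  # vertical line: slope undefined
    return (y2 - y1) / (x2 - x1)

print(slope((2, 2), (4, 6)))     # 2.0   (Example A)
print(slope((-8, 3), (2, -2)))   # -0.5  (Example B)
print(slope((-5, -1), (3, -1)))  # 0.0   (horizontal line)
print(slope((3, 2), (3, 6)))     # None  (vertical line)
```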
http://math.stackexchange.com/questions/30428/a-r-algebra-rightarrow-m-na-cong-m-nr-otimes-a
# $A$ an $R$-algebra $\Rightarrow M_n(A)\cong M_n(R)\otimes A$

I would like to prove the following statement: if $A$ is an $R$-algebra (for a commutative ring $R$ with $1$), then the $R$-algebras $M_n(A)$ and $M_n(R)\otimes A$ are isomorphic. By a proposition (Grillet, Abstract Algebra, p. 529), we have the situation where $\varphi((r_{i,j})_{i,j=1}^n):=(r_{i,j}\cdot 1_A)_{i,j=1}^n$ and $\psi(a):= (\text{matrix with }a\text{ in the } 1,1\text{-entry and }0\text{ elsewhere})$ and $\iota,\kappa$ are canonical. By the proposition, we have a unique algebra homomorphism $\chi$ such that $\chi\circ\iota=\varphi$ (1), $\chi\circ\kappa=\psi$ (2) and $\forall (r_{i,j})\!\otimes\!a: \chi((r_{i,j})\!\otimes\!a)=\varphi((r_{i,j}))\cdot\psi(a)$ (3). Let us prove that $\chi$ is bijective (and therefore an isomorphism).

Surjective: by (1) and (2) we know that $im(\varphi)\subseteq im(\chi)$ and $im(\psi)\subseteq im(\chi)$. Let $E_{i,j}$ denote the matrix with $1_R$ in the $i,j$-entry and $0$ elsewhere. Since $E_{i,j}\in im(\varphi)$, $aE_{1,1}\in im(\psi)$ and $E_{i,j}E_{k,l}=\delta_{j,k}E_{i,l}$, it follows that $aE_{i,j}\in im(\chi)$, hence all matrices with entries in $A$ are in $im(\chi)$.

How can I prove that $\chi$ is injective? - ugh, help, how can I write diagrams? – Leon Apr 2 '11 at 1:31

## 2 Answers

Instead of showing directly that $\chi$ is injective, you could attempt to find $\hat\chi$ such that $\hat\chi\circ\chi$ is the identity map; then $\chi(u)=\chi(v)$ will imply that $u = \hat\chi\circ \chi(u) = \hat\chi\circ\chi (v) = v$, in other words $\chi$ is injective. For instance (actually not just "for instance," since it should be the only possibility), consider $$\hat\chi \colon M_n(A) \to M_n(R)\otimes A \colon (c_{ij}) \mapsto \sum_{ij} E_{ij} \otimes c_{ij}.$$ Edit: Note that you should take $\psi(a)$ equal to $a\cdot 1$ (which is the diagonal matrix with all entries $a$) for this to work. 
The problem is that if you do it the way you did, I'm not sure that $im(\psi)$ and $im(\varphi)$ will commute, which they should if you want to apply the universal property. If you take $\psi(a)=a\cdot 1$ they will, because $1$ is central. - Excellent, a very good idea. Just one tiny question: why is $\chi\circ\hat{\chi}=id$ already enough, and not $\chi\circ\hat{\chi}=id=\hat{\chi}\circ\chi$? – Leon Apr 2 '11 at 15:43 @Leon Lampreet: Actually $\chi\circ\hat\chi = id$ is not enough, but $\hat\chi\circ\chi = id$ is. This is because if you know that $\hat\chi\circ \chi = id$ you can prove injectivity of $\chi$ like this: $\chi(u)=\chi(v)\implies\hat\chi(\chi(u))=\hat\chi(\chi(v))\implies u=v$. Now we have that $\chi$ is surjective and injective indeed, therefore $\chi$ is a bijective morphism, as requested. – Myself Apr 2 '11 at 15:50 Thank you, it was quite educational for me :) – Leon Apr 2 '11 at 17:17

You can argue more generally that $\hom_R(P,Q)\otimes_RA\cong\hom_A(P\otimes_RA,Q\otimes_RB)$ when $P$ is finitely presented over $R$ and that the isomorphism is compatible with composition of maps, and then take $P=Q=R^n$. - (Where $A = B$?) Is this general case easier then? – Myself Apr 2 '11 at 2:07 @Myself: there are fewer things that you can do wrong :) – Mariano Suárez-Alvarez Apr 2 '11 at 2:15
https://www.physicsforums.com/threads/solubility-and-equilibrium-question.592520/
# Homework Help: Solubility and Equilibrium question

1. Apr 1, 2012

### TeenieBopper

1. The problem statement, all variables and given/known data

Calculate the molar solubility of Cu(OH)2, Ksp = 2.2 × 10^–20, in 0.87 M NH3. Don't forget to use the complexation reaction Cu^2+ + 4 NH3 ⇌ Cu(NH3)4^2+, K = 5.0 × 10^13.

2. Relevant equations

Ksp = [A]^m [B]^n
Keq = [products]/[reactants]

3. The attempt at a solution

I wasn't sure where to begin. I figured that because Cu^2+ reacts with NH3, more of the Cu(OH)2 would "disappear," thus giving a higher solubility. So I did an ICE table for Cu^2+ + 4 NH3 ⇌ Cu(NH3)4^2+ and set it up as x/(0.87-x)^5 = 5 × 10^13, but the answers were non-real. I'm at a loss for where to begin now. I know I need to do something with the Cu + NH3 reaction too (otherwise, why would we even be given that information?), but I don't really know what.

2. Apr 1, 2012

### Staff: Mentor

Try to assume the concentration of ammonia doesn't change. Note that the OH- concentration is a function of the ammonia concentration as well.
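One way to act on the mentor's hints, sketched in Python: add the two given equilibria (multiplying their constants), treat [NH3] as approximately constant at 0.87 M, and take [OH-] ≈ 2s from the dissolution alone. Ignoring the extra OH- from NH3 acting as a base is a simplification, which is what the mentor's second remark is warning about:

```python
# Combined reaction: Cu(OH)2(s) + 4 NH3 ⇌ Cu(NH3)4^2+ + 2 OH-
# Adding reactions multiplies their equilibrium constants: K = Ksp * Kf.
Ksp, Kf, c_nh3 = 2.2e-20, 5.0e13, 0.87
K = Ksp * Kf                              # 1.1e-6

# With molar solubility s: [Cu(NH3)4^2+] = s, [OH-] ≈ 2s, [NH3] ≈ 0.87 M,
# so K = s * (2s)**2 / c_nh3**4, i.e. 4*s**3 = K * c_nh3**4.
s = (K * c_nh3**4 / 4) ** (1 / 3)
print(f"molar solubility ≈ {s:.1e} M")
```

Under these assumptions the solubility comes out on the order of 5 × 10^-3 M, far above what Ksp alone would give, confirming the intuition that complexation pulls more Cu(OH)2 into solution.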
https://fm.mizar.org/1993-4/fm4-1.html
Formalized Mathematics (ISSN 0777-4028), Volume 4, Number 1 (1993).

1. Katarzyna Zawadzka. The Product and the Determinant of Matrices with Entries in a Field, Formalized Mathematics 4(1), pages 1-8, 1993. MML Identifier: MATRIX_3. Summary: Concerned with a generalization of concepts introduced in \cite{MATRIX_1.ABS}: the sum and the product of matrices of any dimension, with entries in an arbitrary field, are introduced.

2. Yuji Sakai, Jaroslaw Kotowicz. Introduction to Theory of Rearrangement, Formalized Mathematics 4(1), pages 9-13, 1993. MML Identifier: REARRAN1. Summary: An introduction to the rearrangement theory for finite functions (i.e., with finite domain and codomain). The notion of generators and cogenerators of finite sets (equivalent to the order in the language of finite sequences) has been defined. The notion of rearrangement for a function into a finite set is presented. Some basic properties of these notions have been proved.

3. Andrzej Trybulec. Many-sorted Sets, Formalized Mathematics 4(1), pages 15-22, 1993. MML Identifier: PBOOLE. Summary: The article deals with parameterized families of sets. When treated in a similar way as sets (due to systematically overloaded notation used for sets) they are called many-sorted sets. For instance, if $x$ and $X$ are two many-sorted sets (with the same set of indices $I$) then the relation $x \in X$ is defined as $\forall_{i \in I}\, x_i \in X_i$.\par I was prompted by a remark in a paper by Tarlecki and Wirsing: ``Throughout the paper we deal with many-sorted sets, functions, relations etc. ... We feel free to use any standard set-theoretic notation without explicit use of indices'' \cite[p.~97]{Tar-Wir1}. The aim of this work was to check the feasibility of such an approach in Mizar. It works.\par Let us observe some peculiarities: \begin{itemize} \item[-] the empty set (i.e. the many-sorted set with an empty set of indices) belongs to itself (theorem 133), \item[-] we get two different inclusions: $X \subseteq Y$ iff $\forall_{i \in I} X_i \subseteq Y_i$, and $X \sqsubseteq Y$ iff $\forall_x\, x \in X \Rightarrow x \in Y$, equivalent only for sets that yield non-empty values. \end{itemize} Therefore care is advised.

4. Ewa Burakowska. Subalgebras of the Universal Algebra. Lattices of Subalgebras, Formalized Mathematics 4(1), pages 23-27, 1993. MML Identifier: UNIALG_2. Summary: Introduces a definition of a subalgebra of a universal algebra. The notion of similar algebras and basic operations on subalgebras, such as the subalgebra generated by a set and the intersection and sum of two subalgebras, are introduced. Some basic facts concerning the above notions have been proved. The article also contains the definition of the lattice of subalgebras of a universal algebra.

5. Bogdan Nowak, Andrzej Trybulec. Hahn-Banach Theorem, Formalized Mathematics 4(1), pages 29-34, 1993. MML Identifier: HAHNBAN. Summary: We prove a version of the Hahn-Banach Theorem.

6. Jolanta Kamienska, Jaroslaw Stanislaw Walijewski. Homomorphisms of Lattices, Finite Join and Finite Meet, Formalized Mathematics 4(1), pages 35-40, 1993. MML Identifier: LATTICE4.

7. Jolanta Kamienska. Representation Theorem for Heyting Lattices, Formalized Mathematics 4(1), pages 41-45, 1993. MML Identifier: OPENLATT.

8. Jaroslaw Stanislaw Walijewski. Representation Theorem for Boolean Algebras, Formalized Mathematics 4(1), pages 45-50, 1993. MML Identifier: LOPCLSET.

9. Andrzej Trybulec, Yatsuka Nakamura. Some Remarks on the Simple Concrete Model of Computer, Formalized Mathematics 4(1), pages 51-56, 1993. MML Identifier: AMI_3. Summary: We prove some results on {\bf SCM} needed for the proof of the correctness of Euclid's algorithm. We introduce the following concepts: \begin{itemize} \item[-] the starting finite partial state (Start-At$(l)$), which assigns an instruction location to the instruction counter (and consists only of this assignment), \item[-] a programmed finite partial state, which consists of instructions (to be more precise, a finite partial state whose domain consists of instruction locations). \end{itemize} We define, for a total state $s$, what it means that $s$ starts at $l$ (the value of the instruction counter in the state $s$ is $l$) and that $s$ halts at $l$ (the halt instruction is assigned to $l$ in the state $s$). Similar notions are defined for finite partial states.

10. Andrzej Trybulec, Yatsuka Nakamura. Euclid's Algorithm, Formalized Mathematics 4(1), pages 57-60, 1993. MML Identifier: AMI_4. Summary: The main goal of the paper is to prove the correctness of Euclid's algorithm for {\bf SCM}. We define Euclid's algorithm and describe its natural semantics. Eventually we prove that Euclid's algorithm computes Euclid's function. Let us observe that Euclid's function is defined as a function mapping finite partial states of {\bf SCM} to finite partial states, rather than pairs of integers to integers.

11. Grzegorz Bancerek, Piotr Rudnicki. Development of Terminology for {\bf SCM}, Formalized Mathematics 4(1), pages 61-67, 1993. MML Identifier: SCM_1. Summary: We develop a higher-level terminology for the {\bf SCM} machine defined by Nakamura and Trybulec in \cite{AMI_1.ABS}. Among numerous technical definitions and lemmas we define a complexity measure of a halting state of {\bf SCM} and a loader for {\bf SCM} for an arbitrary finite sequence of instructions. In order to test the introduced terminology we discuss properties of eight shortest halting programs, one for each instruction.

12. Grzegorz Bancerek, Piotr Rudnicki. Two Programs for {\bf SCM}. Part I -- Preliminaries, Formalized Mathematics 4(1), pages 69-72, 1993. MML Identifier: PRE_FF. Summary: In two articles (this one and \cite{FIB_FUSC.ABS}) we discuss the correctness of two short programs for the {\bf SCM} machine: one computes Fibonacci numbers and the other computes the {\em fusc} function of Dijkstra \cite{DIJKSTRA}. The limitations of the current Mizar implementation rendered it impossible to present the correctness proofs for the programs in one article. This part is purely technical and contains a number of very specific lemmas about integer division, floor, exponentiation and logarithms. The formal definitions of the Fibonacci sequence and the {\em fusc} function may be of general interest.

13. Grzegorz Bancerek, Piotr Rudnicki. Two Programs for {\bf SCM}. Part II -- Programs, Formalized Mathematics 4(1), pages 73-75, 1993. MML Identifier: FIB_FUSC. Summary: We prove the correctness of two short programs for the {\bf SCM} machine: one computes Fibonacci numbers and the other computes the {\em fusc} function of Dijkstra \cite{DIJKSTRA}. The formal definitions of these functions can be found in \cite{PRE_FF.ABS}. We prove the total correctness of the programs in two ways: by conducting inductions on computations and inductions on input data. In addition we characterize the concrete complexity of the programs as defined in \cite{SCM_1.ABS}.

14. Grzegorz Bancerek. Joining of Decorated Trees, Formalized Mathematics 4(1), pages 77-82, 1993. MML Identifier: TREES_4. Summary: This is the continuation of the sequence of articles on trees (see \cite{TREES_1.ABS}, \cite{TREES_2.ABS}, \cite{TREES_3.ABS}). The main goal is to introduce joining operations on decorated trees corresponding to the operations introduced in \cite{TREES_3.ABS}. We also introduce the operation of substitution. In the last section we deal with trees decorated by a Cartesian product, i.e. we show some lemmas on joining operations applied to such trees.

15. Takaya Nishiyama, Yasuho Mizuhara. Binary Arithmetics, Formalized Mathematics 4(1), pages 83-86, 1993. MML Identifier: BINARITH. Summary: Formalizes the basic concepts of binary arithmetic and its related operations. We present the definitions of the following logical operators: 'or' and 'xor' (exclusive or), and include in this article some theorems concerning these operators. We also introduce the concept of an $n$-bit register. Such registers are used in the definition of binary unsigned arithmetic presented in this article. Theorems on the relationships of such concepts to the operations on natural numbers are also given.

16. Pauline N. Kawamoto, Yasushi Fuwa, Yatsuka Nakamura. Basic Concepts for Petri Nets with Boolean Markings, Formalized Mathematics 4(1), pages 87-90, 1993. MML Identifier: BOOLMARK. Summary: Contains basic concepts for Petri nets with Boolean markings and the firability$\slash$firing of single transitions as well as sequences of transitions \cite{Nakamura:5}. The concept of a Boolean marking is introduced as a mapping of a Boolean TRUE$\slash$FALSE to each of the places in a place$\slash$transition net. This simplifies the conventional definitions of the firability and firing of a transition. One note of caution in this article: the definition of firing a transition does not require that the transition be firable. Therefore, it is advisable to check that transitions ARE firable before firing them.

17. Grzegorz Bancerek, Piotr Rudnicki. On Defining Functions on Trees, Formalized Mathematics 4(1), pages 91-101, 1993. MML Identifier: DTCONSTR. Summary: The continuation of the sequence of articles on trees (see \cite{TREES_1.ABS}, \cite{TREES_2.ABS}, \cite{TREES_3.ABS}, \cite{TREES_4.ABS}) and on context-free grammars (\cite{LANG1.ABS}). We define the set of complete parse trees for a given context-free grammar. Next we define the scheme of induction for this set and the scheme of defining functions by induction on the set. For each symbol of a context-free grammar we define the terminal, pretraversal, and posttraversal languages. The introduced terminology is tested on the example of Peano naturals.

18. Beata Madras. Product of Family of Universal Algebras, Formalized Mathematics 4(1), pages 103-108, 1993. MML Identifier: PRALG_1. Summary: The product of two algebras, the trivial algebra determined by an empty set, and the product of a family of algebras are defined. Some basic properties are shown.

19. Malgorzata Korolkiewicz. Homomorphisms of Algebras. Quotient Universal Algebra, Formalized Mathematics 4(1), pages 109-113, 1993. MML Identifier: ALG_1. Summary: The first part introduces homomorphisms of universal algebras and their basic properties. The second is concerned with the construction of a quotient universal algebra. The first isomorphism theorem is proved.

20. Beata Perkowska. Free Universal Algebra Construction, Formalized Mathematics 4(1), pages 115-120, 1993. MML Identifier: FREEALG. Summary: A construction of the free universal algebra with a fixed signature and a given set of generators.

21. Agnieszka Banachowicz, Anna Winnicka. Complex Sequences, Formalized Mathematics 4(1), pages 121-124, 1993. MML Identifier: COMSEQ_1. Summary: Definitions of complex sequences and of operations on sequences (multiplication of sequences, multiplication by a complex number, addition, subtraction, division and the absolute value of a sequence) are given. We followed \cite{SEQ_1.ABS}.

22. Zbigniew Karno. Maximal Discrete Subspaces of Almost Discrete Topological Spaces, Formalized Mathematics 4(1), pages 125-135, 1993. MML Identifier: TEX_2. Summary: Let $X$ be a topological space and let $D$ be a subset of $X$. $D$ is said to be {\em discrete}\/ provided for every subset $A$ of $X$ such that $A \subseteq D$ there is an open subset $G$ of $X$ such that $A = D \cap G$\/ (comp. e.g., \cite{KURAT:2}). A discrete subset $M$ of $X$ is said to be {\em maximal discrete}\/ provided for every discrete subset $D$ of $X$, if $M \subseteq D$ then $M = D$. A subspace of $X$ is {\em discrete}\/ ({\em maximal discrete}) iff its carrier is discrete (maximal discrete) in $X$.\par Our purpose is to list a number of properties of discrete and maximal discrete sets in Mizar formalism. In particular, we show here that {\em if $D$ is dense and discrete then $D$ is maximal discrete}; moreover, {\em if $D$ is open and maximal discrete then $D$ is dense}. We also discuss the problem of the existence of maximal discrete subsets in a topological space.\par To present the main results we first recall a definition of a class of topological spaces considered herein. A topological space $X$ is called {\em almost discrete}\/ if every open subset of $X$ is closed; equivalently, if every closed subset of $X$ is open. Such spaces were investigated in Mizar formalism in \cite{TDLAT_3.ABS} and \cite{TEX_1.ABS}. We show here that {\em every almost discrete space contains a maximal discrete subspace and every such subspace is a retract of the enveloping space}. Moreover, {\em if $X_{0}$ is a maximal discrete subspace of an almost discrete space $X$ and $r : X \rightarrow X_{0}$ is a continuous retraction, then $r^{-1}(x) = \overline{\{x\}}$ for every point $x$ of $X$ belonging to $X_{0}$}. This fact is a specialization, in the case of almost discrete spaces, of the theorem of M.H. Stone that every topological space can be made into a $T_{0}$-space by suitable identification of points (see \cite{STONE:3}).

23. Zbigniew Karno. On Nowhere and Everywhere Dense Subspaces of Topological Spaces, Formalized Mathematics 4(1), pages 137-146, 1993. MML Identifier: TEX_3. Summary: Let $X$ be a topological space and let $X_{0}$ be a subspace of $X$ with the carrier $A$. $X_{0}$ is called {\em boundary}\/ ({\em dense}) in $X$ if $A$ is boundary (dense), i.e., ${\rm Int}\,A = \emptyset$ ($\overline{A} =$ the carrier of $X$); $X_{0}$ is called {\em nowhere dense}\/ ({\em everywhere dense}) in $X$ if $A$ is nowhere dense (everywhere dense), i.e., ${\rm Int}\,\overline{A} = \emptyset$ ($\overline{{\rm Int}\,A} =$ the carrier of $X$) (see \cite{TOPS_3.ABS} and comp. \cite{KURAT:2}).\par Our purpose is to list, using Mizar formalism, a number of properties of such subspaces, mostly in non-discrete (non-almost-discrete) spaces (comp. \cite{TOPS_3.ABS}). Recall that $X$ is called {\em discrete}\/ if every subset of $X$ is open (closed); $X$ is called {\em almost discrete}\/ if every open subset of $X$ is closed; equivalently, if every closed subset of $X$ is open (see \cite{TDLAT_3.ABS}, \cite{TEX_1.ABS} and comp. \cite{KURAT:2}, \cite{KURAT:3}). We have the following characterization of non-discrete spaces: {\em $X$ is non-discrete iff there exists a boundary subspace in $X$}. Hence, {\em $X$ is non-discrete iff there exists a dense proper subspace in $X$}. We have the following analogous characterization of non-almost-discrete spaces: {\em $X$ is non-almost-discrete iff there exists a nowhere dense subspace in $X$}. Hence, {\em $X$ is non-almost-discrete iff there exists an everywhere dense proper subspace in $X$}.\par Note that some interdependencies between boundary, dense, nowhere dense and everywhere dense subspaces are also indicated. These have the form of observations in the text and correspond to the existential and conditional clusters in the Mizar System. These clusters guarantee the existence and ensure the extension of types supported automatically by the Mizar System.
https://mathyug.com/example-12-find-the-conjugate/
# Example 12 Find the conjugate

## Chapter 5 Complex Numbers Class 11 Maths

NCERT Chapter 5 Complex Numbers, Example 12: Find the conjugate of $\frac{(3-2i)(2+3i)}{(1+2i)(2-i)}$. 03:02

## Question 14 Find the real numbers x and y if (x−iy)(3+5i) is the conjugate of −6−24i

NCERT Miscellaneous Exercise, Question 14: Find the real numbers $x$ and $y$ if $(x-iy)(3+5i)$ is the conjugate of $-6-24i$. 04:43

## Find the modulus and the arguments of each of the complex numbers in NCERT Exercise 5.2

Find the modulus and the arguments of each of the complex numbers in Exercises 1 to 2. Question 1: $z=-1-i\sqrt{3}$.
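The worked solution of Example 12 is not reproduced in this listing, but the answer can be checked symbolically with sympy: simplifying the fraction gives $\frac{63-16i}{25}$, so its conjugate is $\frac{63+16i}{25}$.

```python
from sympy import I, Rational, conjugate, simplify

# Example 12: conjugate of (3-2i)(2+3i) / ((1+2i)(2-i))
z = (3 - 2*I) * (2 + 3*I) / ((1 + 2*I) * (2 - I))
zbar = simplify(conjugate(z))
print(zbar)  # the conjugate, 63/25 + 16i/25
```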
https://cs.stackexchange.com/questions/97179/how-to-define-the-natural-numbers-as-a-w-type
# How to define the natural numbers as a W-type?

I'm having trouble understanding the rules for W-types in type theory as defined here: https://ncatlab.org/nlab/show/W-type#wtypes_in_type_theory Can someone give an example of how these rules could be used to define a simple W-type such as the natural numbers? What would $A$ and $B$ be in the formation rule, for instance?

The formation and introduction rules for W-types, as given on the nLab, are: $$\frac{A:Type\quad x:A \vdash B:Type}{(W x:A)B(x):Type}-\text{Formation}$$ $$\frac{a:A\quad t:B(a)\rightarrow W}{sup(a,t):W}-\text{Introduction}$$ You can define the natural numbers by setting $A=Bool$ and $B(a) = \text{if}\; a\; \text{then}\; \bot\; \text{else}\; Unit$, where $\bot$ is the empty type and $Unit$ is the unit type. Zero is then $sup(a,t)$ where $a=true$ and $t: \bot \rightarrow Nat = \lambda n.\, abort(n)$; i.e., we use the zero constructor (by choosing $a=true$) and fill in the only possible value of a function from $\bot$ to $Nat$. The successor of $p$ can be defined with $a=false$, $t:Unit \rightarrow Nat = \lambda x.\, p$.
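The dependency of $B$ on $a$ can't be expressed directly in most mainstream languages, but the sup-based construction above can be transliterated into an illustrative Python sketch (all names here are mine, not from the answer):

```python
# Nat as a W-type with A = Bool: shape True has no branches (B(true) = ⊥),
# shape False has exactly one branch (B(false) = Unit, modeled as ()).

def absurd(_):
    # The unique "function" out of the empty type: there is no value it
    # can be applied to, so actually reaching this call is a bug.
    raise AssertionError("the empty type has no inhabitants")

def sup(a, t):
    """sup(a, t): a node with shape a and branching function t."""
    return (a, t)

zero = sup(True, absurd)                 # a = true: the zero constructor

def succ(n):
    # a = false: one subtree, the predecessor, indexed by the unit value ().
    return sup(False, lambda _unit: n)

def to_int(w):
    """Recursor: fold a W-encoded natural back into a Python int."""
    a, t = w
    return 0 if a else 1 + to_int(t(()))

print(to_int(succ(succ(zero))))          # 2
```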
http://link.springer.com/article/10.1007%2FBF01801202
, Volume 12, Issue 2, pp 322-339

# Quantum gravity and the structure of scientific revolutions

## Summary

In a case study, Kuhn's morphology of scientific revolutions is put to the test by confronting it with contemporary developments in physics. It is shown in detail that Kuhn's scheme is not compatible with the situation in physics today.
https://forum.allaboutcircuits.com/threads/need-help-with-magnetic-circuit-mean-path-length.152881/
# Need help with magnetic circuit (mean path length)

#### 김찬우 1
Joined Oct 8, 2018
11

ri=3.4cm, ro=40.cm and the question is calculating the mean core length, and the answer is

but I can't understand how this calculation came out. I think it should be 2*pi*(ro+ri)/2 = pi*(ro+ri). I need an explanation for this problem. Thanks.
Last edited:

#### wayneh
Joined Sep 9, 2010
16,400

Deriving the answer requires a good application of some calculus skills. Otherwise you have to accept the solution, that the mean circumference depends on the mean radius, which is at least easy to remember.

Oops, I missed that the given answer is crap.
Last edited:

#### 김찬우 1
Joined Oct 8, 2018
11

you mean

#### ebp
Joined Feb 8, 2018
2,332

EDIT - made a mess of this the first time; should be OK now

Your equation is correct. If g is zero, le = π(Ro+Ri)

Here's a datasheet for a powdered iron core I've used many times: micrometalsarnoldpowdercores.com/pdf/T106-52-DataSheet.pdf

Note that the specifications for dimensions are the inner and outer diameters in millimetres, while the path length is specified in centimetres. Your equation gives the correct path length.
Last edited:

#### 김찬우 1
Joined Oct 8, 2018
11

Thanks for the reply, it helped me a lot.

edit - can I ask one more? The question says that the iron is of infinite permeability and to neglect the effects of magnetic leakage and fringing. Does this assumption affect the calculation of the mean core length?

#### wayneh
Joined Sep 9, 2010
16,400

> Thanks for the reply, it helped me a lot. edit - can I ask one more? The question says that the iron is of infinite permeability and to neglect the effects of magnetic leakage and fringing. Does this assumption affect the calculation of the mean core length?

Nope. If the permeability was not uniform with radius, that could have an effect.
#### MrAl
Joined Jun 17, 2014
7,849

> View attachment 161191 ri=3.4cm, ro=40.cm and the question is calculating the mean core length, and the answer is View attachment 161193 but i cant understand how this calculation came out. i think it should be 2*pi*(r0+ri)/2=pi(r0+ri),, i need explanations for this problem. thanks

Hello,

Hard to verify exactly if you don't give the value of 'g', but they may be implying that g=0.2 perhaps. We also have to assume that 40 is really 4.0 in that question.

With g=0, yes, the mean mag path length is: pi*(Ro+Ri)

To determine whether the mean mag path length is pi*(Ro+Ri)-g when the two core face surfaces are perfectly parallel, we'd have to dig a little deeper, but for very small 'g' I think it should be a good enough estimate.

The fact that the core permeability is infinite may change things too, though, because then the gap doesn't have any effect. That is, if this is a little bit of a tricky question where it really is infinite and not just "large".

#### wayneh
Joined Sep 9, 2010
16,400

> The fact that the core permeability is infinite may change things too, though, because then the gap doesn't have any effect.

I don't think you mean that. The gap is still a critical piece and its size matters.

#### wayneh
Joined Sep 9, 2010
16,400

> you mean

It might, by coincidence, produce the right result in this example. But the formula is incorrect. If anyone challenges you on that, tell them to imagine a toroid with Ro of 100 and Ri of 99. Do they really believe the mean core length is just 2π?

#### MrAl
Joined Jun 17, 2014
7,849

With reference to the gap size being insignificant if the permeability is infinite...

> I don't think you mean that. The gap is still a critical piece and its size matters.

Hi,

Well, go through the calculation of inductance and see what you end up with. I may have inverted the logic here, where instead the air gap now has total control over the average permeability.
From the point of view of reluctance, it looks like the air gap takes control of the total reluctance completely. So instead of zero control it has total control. I'll look at this in more detail later.
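The arithmetic in this thread can be checked with a short script. I'm following MrAl's reading of the problem (ri = 3.4 cm, ro = 4.0 cm, g = 0.2 cm — the gap value is a guess from his post), and the cross-sectional area A is an invented number, used only to show why an infinitely permeable core makes the gap dominate the total reluctance:

```python
import math

ri, ro = 3.4e-2, 4.0e-2   # metres (assuming "40." in the post means 4.0 cm)
g = 0.2e-2                # metres (guessed gap length, per MrAl's post)

# Mean core length: circumference at the mean radius, minus the gap.
r_mean = (ro + ri) / 2
lc = 2 * math.pi * r_mean - g            # = pi*(Ro + Ri) - g

# With infinite core permeability, the core's reluctance l/(mu*A) -> 0,
# so the air gap supplies essentially all of the total reluctance.
mu0 = 4e-7 * math.pi
A = 1.0e-4                               # invented cross-section, m^2
R_gap = g / (mu0 * A)                    # ampere-turns per weber

print(f"mean core length = {lc * 100:.2f} cm")
```

This makes wayneh's point concrete: the mean core length is set by the mean radius, while the gap term g only subtracts a small correction from the path but completely dominates the magnetic circuit's reluctance once the iron's permeability is taken as infinite.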
https://investigadores.uandes.cl/es/publications/stochastic-model-calculation-for-the-carbon-monoxide-oxidation-on-3
# Stochastic model calculation for the carbon monoxide oxidation on iridium(111) surfaces

Jaime Cisternas*, Daniel Escaff, Orazio Descalzi, Stefan Wehner

*Corresponding author

Research output: Contribution to journal › Article › peer review

9 Citations (Scopus)

## Abstract

We study the effect of external noise on the catalytic oxidation of CO on an Iridium(111) single crystal under ultrahigh vacuum conditions. This reaction can be considered as a model of catalysis used in industry. In the absence of noise the reaction exhibits one or two stable stationary states, depending on control parameters such as temperature and partial pressures. When noise is added, for instance, by randomly varying the quality of the influx mixture, the system exhibits stochastic reaction rate and switching. In this work, we present two approaches: one for the monostable regime, and another for the bistable situation that relies on a white noise approximation. Both approaches rest on the assumption that spatial patterns of coverage on the Iridium plate can be neglected on a first approximation. Using mathematical models, it is possible to reconstruct stationary probability distribution functions that match experimental observations and provide support for the existence of a thermodynamic potential.

Original language: English
Pages: 3461-3472
Number of pages: 12
Journal: International Journal of Bifurcation and Chaos in Applied Sciences and Engineering
Volume: 19
Issue: 10
https://doi.org/10.1142/S0218127409024906
Published - Oct 2009

## Keywords

• Bistability
• Noise
• Stochastic differential equations
• Surface reactions
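The bistable white-noise picture described in the abstract can be illustrated generically. The following is a toy double-well SDE, not the authors' surface-reaction model; the potential, noise strength, and step count are all invented for illustration.

```python
import math
import random

# Toy illustration of noise-induced switching in a bistable system
# (NOT the paper's kinetic model). Overdamped particle in the double-well
# potential V(x) = x**4/4 - x**2/2, driven by additive white noise:
#     dx = -V'(x) dt + sigma dW      (Euler-Maruyama discretisation)
random.seed(1)

def drift(x):
    return x - x**3          # -V'(x); stable states at x = +1 and x = -1

dt, sigma, steps = 1e-3, 0.7, 200_000
x = 1.0                      # start in the right-hand well
visits_left = visits_right = 0
for _ in range(steps):
    x += drift(x) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    if x < 0:
        visits_left += 1
    else:
        visits_right += 1

# With this noise level the trajectory typically visits both wells; the
# stationary density is bimodal, proportional to exp(-2 V(x) / sigma**2).
print(visits_left, visits_right)
```

A histogram of the visited x values would reconstruct a stationary probability distribution, analogous to the distributions the authors match to experiment, and its Boltzmann-like form is what "support for the existence of a thermodynamic potential" refers to.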
http://math.stackexchange.com/questions/4521/how-can-i-prove-this-trigonometric-equality
# How can I prove this trigonometric equality?

$$\arccos\left(\frac{1-x^2}{1+x^2}\right) = 2\arctan{x}$$ for $x \geq 0$.

I'm not even sure what kind of math to try? :(

---

Put $t=\arctan x$. Then $x=\tan t$. You have to prove that $2t=\arccos\dfrac{1-x^2}{1+x^2}$. This is more-or-less the same as $(1-x^2)/(1+x^2)=\cos 2t$. If you put $x=\tan t$ into $(1-x^2)/(1+x^2)$, what do you get?

---

Setting $x=\tan a$, your equality $$\arccos \dfrac{1-x^{2}}{1+x^{2}}=2\arctan x\qquad (1)$$ becomes $$\cos 2a=\dfrac{1-\tan ^{2}a}{1+\tan ^{2}a}.\qquad (2)$$ To prove (2), which is listed in Wikipedia, we can take the duplication formula $$\cos 2a=\cos ^{2}a-\sin ^{2}a.$$ Dividing by $\cos ^{2}a+\sin ^{2}a=1$, we establish $$\cos 2a=\dfrac{\cos ^{2}a-\sin ^{2}a}{\cos ^{2}a+\sin ^{2}a}=\dfrac{1-\tan ^{2}a}{1+\tan ^{2}a}.$$

Edit: One must note that $\arctan x$ is an odd function, while $\arccos \frac{1-x^{2}}{1+x^{2}}$ is an even one. Only for $x\geq 0$ are both sides of $(1)$ equal. Identity $(2)$ is valid for both positive and negative values of $a$.

Blue: $y=\arccos((1-x^2)/(1+x^2))$ for $x\ge 0$; Red: $y=2\arctan x$

> That's what Robin answered? – anonymous Sep 13 '10 at 12:51
> @Chandru1: similar, yes. I based my deduction on a general method I learned: all direct trigonometric functions of the angle $2a$ may be expressed as rational functions of $\tan a$. – Américo Tavares Sep 13 '10 at 12:59
> @Chandru: The book is "Compêndio de Trigonometria" by J. Jorge Calado, Lisbon, Empresa Literária Fluminence, 1967. – Américo Tavares Sep 13 '10 at 13:03

---

Differentiate the difference $$\arccos\left(\frac{1-x^2}{1+x^2}\right) - 2\arctan{x},$$ simplify the resulting expression and find $0$ when $x>0$, concluding that the difference is constant. Now evaluate at $1$ to see that the difference is in fact constantly zero.
N.B.: This is of course quite unenlightening, but you always know that if two things are equal, their difference is constant, so you never lose much by trying to show that it is constant and zero. Especially if you have a computer algebra system at hand that can do the derivatives and simplifications for you!
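The identity (and the parity caveat from the edit above) can be sanity-checked numerically with a quick script; this is my own addition, not part of the original answers:

```python
import math

def lhs(x):
    return math.acos((1 - x * x) / (1 + x * x))

def rhs(x):
    return 2 * math.atan(x)

# The identity holds for x >= 0; for x < 0 the two sides differ in sign,
# since the left side is even in x while the right side is odd.
for x in [0.0, 0.5, 1.0, 3.7, 100.0]:
    assert abs(lhs(x) - rhs(x)) < 1e-9

assert abs(lhs(-1.0) + rhs(-1.0)) < 1e-9  # lhs(-1) = -rhs(-1) = pi/2
print("identity verified at sample points")
```

Of course, a finite set of sample points proves nothing by itself; it merely complements the calculus argument by catching sign or domain mistakes.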
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=0102&L=LATEX-L&D=0&P=2932194
## [email protected]

Joseph Wright wrote:
> Lars Hellström wrote:
>> Finally, there is the issue that a processor has to put the argument in
>> a toks register. I understand this is for generality (only sane way to
>> pass along # tokens), but my experience with this type of API is that
>> one should make it easy to use commands that happen to already exist. In
>> this case, it would mean to also support a processor that would store
>> its result in a macro rather than a toks register, since I'm quite sure
>> this is what people tend to do unless they definitely need a toks register.
>
> Maybe I'm missing something here, but if we need the processed value
> returned in a named variable it should not matter whether it is a toks
> or a tl (or indeed anything else).

The point is that when specifying a processor, it's kind of a drag having to introduce a helper function just for the purpose of gluing xparse's syntax to that of an existing command really implementing the operation; it would be much better if the syntaxes fitted from the start. For operations that produce a sequence of tokens as result, I believe the most common syntax would be the same as for \MakeHarmless, i.e., \MakeHarmless<tl-to-set-to-result>{<input>} (other examples more or less matching that syntax are \def and \edef). Token registers, in my experience, are something you avoid using as variables unless you have a specific need for them.

> All that needs to happen is
>
> \toks_set:NV \l_xparse_arg_toks <variable-used-in-processor>
>
> which is the same action if <variable-used-in-processor> is a tl or a
> toks.
Yes, but it is awkward to put that piece of code in the >{...} modifier, since it must be executed *after* the actual processing step has been carried out -- it pretty much requires a helper function to rearrange things. Hence my suggestion that /xparse/ should supply that piece of code, rather than rely on the user to do it.

Actually, a thing that worries me about the >{} syntax discussed so far is that it presumes the programmer doing \DeclareDocumentCommand has \ExplSyntaxOn, which I don't think is a safe bet; we're talking about something which is somewhat like \newcommand. A better syntax might be

  >{<variable>}{<processor>}

with the semantics that xparse will execute <processor>{<argument>} and expects to afterwards find the result in the <variable>,[*] which it can then transfer to \l_xparse_arg_toks or perhaps pass directly to the next processor. This adds the ability to use processors that leave their result in a fixed place, but more importantly it avoids tying the argspec syntax to implementation details of xparse.

[*] Okay, here that automagic "V" expansion type actually turns out to be useful. :-o

Lars Hellström
http://web.colby.edu/thegeometricviewpoint/2014/11/23/the-long-line/
# The Geometric Viewpoint

geometric and topological excursions by and for undergraduates

## The Long Line

Topology can best be described as the study of certain “spaces” and the properties they have. Now it is important to figure out which spaces are essentially the “same” and which are different. We define two spaces to be the “same” if we can transform one into the other continuously, and the transformation we performed can be undone continuously as well. Now you may be asking, “what exactly do you mean by a ‘space’?” A topological space is defined as a set $X$ with an associated set $\mathscr{T}$ consisting of subsets of $X$ which satisfies certain properties. The elements of $\mathscr{T}$ are declared to be the open sets of our topological space $(X,\mathscr{T})$. Now a major aspect of topology is to define properties that different spaces could have and what these properties should “say” about a space. For example, one may want to determine when a space is connected together or when it can be broken up into different pieces. Or one may wish to determine when any two points in a space can be connected by a path through the space. Now at first glance these two definitions seem to be describing the same property, but in fact they aren't. The first definition describes when a space is “connected” and the other when a space is “path-connected.” It turns out that “path-connected” is a stronger claim about a space than just “connected.” In other words, there are spaces which are connected but not path-connected, for example the Topologist's Sine Curve. However, every path-connected space is also connected. One may begin to wonder what other properties in topology share this type of connection. In other words, which properties imply other properties and which do not. Now to show that one property implies another, one must start from the most general assumptions and come up with a mathematical proof.
However, to show that some property does not imply another, one must simply come up with a counterexample. It is the latter strategy that this blog is concerned with. I will be discussing a particular topological space, “the long line,” that can be used as a counterexample for certain properties of a space, namely different levels of “compactness.”

Before we begin discussing the long line it will be useful to have an overview of an order topology. Now on the real line $\mathbb{R}$, the order topology ends up coinciding with the usual definition of open set, namely the union of open intervals. The key property of the real line that allows us to have this definition of open intervals is that it is totally ordered. This simply means that given any two points $a,b \in \mathbb{R}$, either $a\leq b$ or $b\leq a$. It turns out that we can define an order topology on any set which is totally ordered by some relation $\leq$. Given a totally ordered set $(X,\leq)$ we can define an order topology on $X$ by first letting $\mathscr{T}''$ be the set consisting of all sets of the form $\{y : y < a\}$ and $\{y : b < y\}$ for any $a,b \in X$. We then create another set $\mathscr{T}'$ of all the finite intersections of elements of $\mathscr{T}''$ and then let our topology $\mathscr{T}$ be all the sets which can be expressed as arbitrary unions and finite intersections of elements of $\mathscr{T}'$. The reader should think about this construction in terms of the real line and note that we end up producing all the open intervals and unions of open intervals of the real line. Refer to the image below to see how the construction comes together.

Example of the creation of an open interval with elements of $\mathscr{T}''$.

We are almost ready to construct the long line, but first we must make one more detour, into the world of set theory. The principal object we will use in our construction of the long line is the ordinal. Ordinals are just a very special type of well ordered set.
Now a well ordered set is very similar to a totally ordered set, with the additional property that every non-empty subset has a minimum element. An amazing fact from set theory is that every set can be well ordered. It turns out that the ordinals can be thought of as the standard well ordered sets. In fact, every well ordered set can be put into a bijective correspondence which preserves order with a unique ordinal. Now you may be wondering what's so special about these ordinals. An ordinal $X$ can be defined as a well ordered set with the property that each element $a \in X$ is exactly the set of all elements in $X$ which precede $a$. In other words, $X$ is an ordinal if for every element $a \in X$, $a = \{x \in X : x < a\}$. You may then be wondering whether such a thing even exists. Refer to the images for a glimpse of the finite and the first few infinite ordinals. You may be interested in a more thorough description of ordinals, which can be found here.

The first few finite ordinals.

The first few infinite ordinals.

Now it turns out that inclusion will always be the well order on an ordinal. It also turns out that there are a lot of ordinals. In fact, the collection of all ordinals is not even a set! Naively, we can think that there are just too many ordinals for them to be a set. Now as seen above there are ordinals with an infinite number of elements. In fact, the first infinite ordinal can be used to make sense of the natural numbers. We often denote the first infinite ordinal as $w$. Amazingly there are different sized infinities, and $w$ can be thought of as the “smallest” infinity. Sets which can be put in a bijective correspondence with it are deemed “countable.” The next size of infinity is known as “uncountable,” and we will let $\Omega$ stand for the first uncountable ordinal. After the long build-up we are finally ready to define the long line.
The long line can be thought of as taking uncountably many copies of the interval $\lbrack 0,1)$ and “stacking” them end to end. For comparison's sake, we can think of the positive real line as countably many copies of the same interval “stacked” end to end. The long line must be very long indeed! While this definition may provide a good image, it leaves little to work with as far as properties go. Here is a more precise definition: the long line $L$ is the cartesian product $\Omega \times \lbrack 0,1)$ where the elements are ordered lexicographically. In other words, given two elements $(\alpha, a)$ and $(\beta,b)$, we have $(\alpha, a)\leq (\beta,b)$ if either $\alpha < \beta$ or, in the case $\alpha = \beta$, $a \leq b$. It is easy to see that this is a total order, and so we can construct the order topology on $L$. While this definition may seem strange, it is actually very easy to visualize. Take the set of all non-negative real numbers as an example. We can think of this set as the cartesian product $\mathbb{N} \times \lbrack 0,1)$ (keeping in mind that the natural numbers $\mathbb{N}$ can be fully described by the ordinal $w$). First note that we can think of elements $(n,d)$ of our set as telling us first the integer part of the number and then the decimal part. Now I ask the reader to consider how they would compare the size of two different positive real numbers. First you would compare the integer parts, and if they were equal you would then move on to the decimal parts. That's comparing lexicographically! So the long line $L$ is just a much “longer” version of that example. The image below provides a description of our analogy to the long line; unfortunately it is very difficult to create a visual for an uncountable well ordered set.

The real line as a cartesian product.

Now you may wonder, “how much longer is the long line?” Perhaps the best way to compare the “length” of these lines is by looking at sequences.
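The integer-part-then-decimal-part comparison described above can be sketched in a few lines of code (illustrative only; the function name is my own):

```python
# Non-negative reals modelled as N x [0, 1) ordered lexicographically:
# compare the integer parts first, then the decimal parts on a tie.
def lex_le(p, q):
    (a, d1), (b, d2) = p, q
    return a < b or (a == b and d1 <= d2)

# (3, 0.25) represents 3.25, (2, 0.9) represents 2.9, etc.
assert lex_le((3, 0.25), (3, 0.7))       # 3.25 <= 3.7
assert lex_le((2, 0.9), (3, 0.1))        # 2.9  <= 3.1
assert not lex_le((4, 0.0), (3, 0.99))   # 4.0  >  3.99

# The lexicographic order agrees with the usual order on the reals:
assert lex_le((2, 0.9), (3, 0.1)) == (2.9 <= 3.1)
```

The long line uses exactly this comparison, with the countable ordinal $w$ in the first coordinate replaced by the uncountable ordinal $\Omega$.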
It is common knowledge that an unbounded strictly increasing sequence of real numbers does not converge. This is easily seen and accepted, and it may be tempting to conclude the same about the long line as well; however, that would be a mistake.

Lemma 1: Every increasing sequence converges in the Long Line.

Proof: Suppose $(x_n)$ is an increasing sequence in the long line. Now consider the first element of each term of our sequence (remember elements of the long line are pairs, the first being the ordinal and the second being the decimal part). Let $(\alpha_n)$ be the sequence of first elements. Then $(\alpha_n)$ is a non-decreasing sequence of ordinal numbers. Now I will present it as fact that every increasing sequence of ordinal numbers has a limit point. Also, $\Omega$ can never be the limit point of a sequence of countable ordinals. Therefore $(\alpha_n)$ must converge to a countable ordinal, and therefore an ordinal that is represented in the long line. Now, if $(\alpha_n)$ never reaches a point where it remains constant, then we never have to consider the decimal part of the sequence, since the limit point of the sequence of ordinals together with 0 will be the limit point of our sequence. So suppose that eventually $(\alpha_n)$ becomes a constant sequence. Let $\alpha'$ be the eventual constant term. Now we consider the sequence of decimal parts of all terms after the sequence becomes constant in the ordinals. Let $(d_m)$ be this sequence. Since $\lbrack 0,1)$ is bounded and $(d_m)$ must be non-decreasing, it is easy to conclude that it converges, possibly to 1, in which case we take the point $(\alpha'+1,0)$ as our limit point for the original sequence. Therefore we can conclude that every increasing sequence converges in the long line. $\square$

Now, amazingly, with this one fact we glean even more information about the long line. For instance, the long line is sequentially compact.
First, a quick definition: a topological space is sequentially compact if every sequence in the space has a convergent sub-sequence. Before I prove this I will prove a quick lemma about sequences in a totally ordered set.

Lemma 2: Every sequence in a totally ordered set has a monotone sub-sequence.

Proof: Suppose $(X,\leq)$ is a totally ordered set. Now let $(x_n)$ be a sequence in $X$. Call $x_p$ a peak of the sequence if for all $n\geq p : x_n \leq x_p$. Now clearly there are two cases to consider: either $(x_n)$ has infinitely many such peaks or it has finitely many. First suppose that $(x_n)$ has infinitely many peaks, $\{x_{n_1}, x_{n_2},\ldots \}$. Then we can take $(x_{n_p})$ as our sub-sequence, and by definition this sequence must be non-increasing. Now suppose our sequence has only finitely many peaks. Then there is a last peak $x_{n_0-1}$. Now consider the term $x_{n_0}$. Since this term is not a peak, there must exist another term $x_{n_1}$ which is greater than $x_{n_0}$. We can then find a term $x_{n_2} \geq x_{n_1}$, since $x_{n_1}$ was not a peak. We can continue this process, creating a sub-sequence $(x_{n_m})$ which is increasing. $\square$

Now we are ready to prove that the Long Line $L$ is sequentially compact.

Proof: Suppose $(x_n)$ is a sequence in $L$. By the second lemma we know we can find a monotone sub-sequence $(x_{n_m})$. First suppose that $(x_{n_m})$ is increasing. Then by the first lemma $(x_{n_m})$ converges. Now suppose that $(x_{n_m})$ is decreasing. Since the long line is bounded below, we know that $(x_{n_m})$ must converge. So $(x_n)$ has a convergent sub-sequence. $\square$

So the long line is sequentially compact. Of course the next question to ask is whether the long line is compact. Interestingly, it is not. Now the topological definition is a little different than the one given in most calculus classes. A topological space $X$ is compact if every open cover $\mathscr{U}$ of $X$ has a finite sub-cover.
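For finite sequences, the peak argument of Lemma 2 can be turned into code. This is my own sketch with invented names; the finite case needs a small tweak, since the last term of a finite sequence is always vacuously a peak, so the "finitely many peaks" branch falls back to a greedy scan. The input is assumed non-empty.

```python
def monotone_subsequence(xs):
    """Return a monotone subsequence of a non-empty finite sequence,
    mimicking the peak argument from Lemma 2."""
    n = len(xs)
    # xs[p] is a peak if every later term is <= xs[p].
    peaks = [p for p in range(n) if all(xs[m] <= xs[p] for m in range(p + 1, n))]
    if len(peaks) >= 2:
        # The peak terms themselves form a non-increasing subsequence:
        # for peaks p < q, the peak property of p forces xs[q] <= xs[p].
        return [xs[p] for p in peaks]
    # Otherwise every non-peak term has a strictly larger later term,
    # so a greedy scan builds a non-decreasing subsequence.
    out = [xs[0]]
    for m in range(1, n):
        if xs[m] >= out[-1]:
            out.append(xs[m])
    return out

assert monotone_subsequence([1, 3, 2, 4]) == [1, 3, 4]
assert monotone_subsequence([5, 4, 3, 2, 1]) == [5, 4, 3, 2, 1]
```

Of course the lemma itself is about infinite sequences, where the two cases are genuinely exhaustive; the code only illustrates the case split, not the full proof.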
Now a cover is just a collection of open sets where the union of the collection is equal to the entire space.

Lemma 3: The Long Line $L$ is not compact.

Proof: Consider the collection of open sets $\mathscr{U'} = \{ (\alpha, \alpha+1) : \alpha \in \Omega \}$. We can then add to this collection sets of the form $((\alpha,\frac{2}{3}),(\alpha+1, \frac{1}{3}))$, giving us a cover $\mathscr{U}$. To see that this is a cover, just note that the only points $\mathscr{U'}$ misses are the points with no decimal part, and the added collection of sets catches all the ordinals with no decimal part. Finally, note that if any set of $\mathscr{U}$ is removed, we will no longer have a cover of $L$, and since $\mathscr{U}$ is clearly infinite we can conclude that $L$ is not compact. $\square$

This then leads us to one more interesting aspect of the long line: the long line is not metrizable. Now before I can explain what a metrizable space is, I will give a brief description of a metric space. A metric space is a set together with a “distance” function that determines how far away two points are. Open sets are then unions of open balls, where an open ball is just the set of all points strictly less than some radius from a given point. For instance, the usual distance function, $d$, on $\mathbb{R}$ is just $d(a,b) = |a-b|$. Open balls then correspond to sets of the form $\{x\in \mathbb{R} : |a-x| < \epsilon\}$. Now a metrizable space is a topological space where one can create a distance function on the set that then creates precisely the same open sets that were in the original topology. Now metric spaces, and therefore metrizable spaces, usually have “nicer” properties than general topological spaces, especially when it comes to equivalent features. The property that interests us here is that in a metric space, and so in a metrizable space, the concepts of sequentially compact and compact coincide.
In other words, if a metric space is compact, then it is sequentially compact, and likewise in reverse. So immediately we can see that the long line cannot be metrizable, since it is sequentially compact but not compact. So it would be impossible to create a sensible “distance” function on the long line that leads to the construction of all the open sets we have. Now you may be wondering what the point of creating the long line is. You may ask yourself why anyone should care. Aside from just being able to work with a weird topological space, the creation and examination of the long line has multiple benefits. First off, it gives us a concrete counterexample for properties that seem so similar. If all you ever work with are metrizable spaces, you won't ever be able to really see how sequentially compact and compact are different. The long line is just one example of the importance of searching for counterexamples. One may think that it is possible for spaces to have some properties and not have others, but until a concrete counterexample is created, it's all just conjecture. For more information on the long line refer to Counterexamples in Topology, Steen and Seebach.

Editor's note: The author of this post Josh Hews was a student in the Fall 2014 Topology course at Colby College taught by Scott Taylor. Submissions to the blog of essays by and for undergraduates on subjects pertaining to geometry and topology are welcome. For more information see the “Submit an Essay” tab above.
http://physics.stackexchange.com/questions/4364/does-the-positive-mass-conjecture-indicate-a-necessity-of-interactions-in-our-un?answertab=votes
# Does the positive mass conjecture indicate a necessity of interactions in our universe? The positive mass conjecture was proved by Schoen and Yau and later reproved by Witten. Total mass in a gravitating system must be positive except in the case of flat Minkowski space, where energy is zero. Since QG is intended to be a theory of interaction with force particles called gravitons, one may begin to wonder if the interactions are in fact the important defining features of the space in question. So does a theory with interactions also require that space be curved? - Dear Humble, because non-gravitational yet interacting field theories such as QCD or the Standard Model exist and they don't predict a curved space, the answer to your question is clearly No, interactions don't imply that the spacetime has to be curved. However, the curved spacetime follows from many other assumptions - or combinations of assumptions - for example from the requirement that the gravitational force (respecting the equivalence principle) simultaneously exists with the relativistic Lorentz invariance. The theorems you mentioned clearly assumed that the spacetime is allowed to become curved. So far, I assumed that you agree that the existence of interactions is a property of a theory, not a property of a configuration. But what about the possibility that you meant the "existence of interactions" to be a property of a state, or a configuration? Because the positive-energy theorem implies that the energy is strictly positive with the single exception of an empty Minkowski space, it follows that if you also agree that the Minkowski space "has no interactions in it", then every state that has interactions "in it" has nonzero energy and consequently has to lead to a curved spacetime. - Lubos, I have always assumed that a QG theory replaced any curvature (and hence any curvature in geodesics) with gravitons interactions between matter fields. Is this assumption a wrong one? 
If that is so, then the role of the graviton in such a theory would be VERY different from its role in other QFTs. In fact, why would we need gravitons in the first place? –  lurscher Feb 1 '11 at 15:28
https://nonlinearalgebra.wordpress.com/2016/06/28/wnt-signaling-pathway/
# Wnt Signaling Pathway

In a previous post we discussed the significance of chemical reaction networks, the equations that arise from such networks, and our goals in solving them.  Since then, the group has been working on creating a Macaulay2 package that takes a chemical reaction network as input and, through various commands, gives output reflecting the steady-state equations, a basis for the stoichiometric subspace, etc.  We have created building blocks within the package which correspond to the motifs described in this paper.  The idea is that these motifs can be used to create new reaction networks without the need to input every single reaction. As an additional example in the package we added the shuttle model of the Wnt signaling pathway described in this paper, whose chemical reactions are listed there.  Trying to input these reactions (which are not based on any of the motifs we have already created), we ran into a problem: how to deal with the empty set present in four of the reactions. From an algebraic point of view, the two reactions of the form $x_{**} \xrightarrow{k} x_{**}+\emptyset$ can be viewed without the empty set.  Its significance in the reaction network is that there is degradation of the protein $\beta$-catenin; however, deleting the $+\,\emptyset$ part will not change anything in the equations or the properties of the variety. For reactions of the form $X \xrightarrow{k} \emptyset~~~(1)$    and    $\emptyset \xrightarrow{k} Y~~(2)$ we have to be more careful.  We think of each complex in the reaction network as a monomial represented by an exponent vector.  In the model above we have 19 species, so each exponent vector will be of dimension $19\times 1$.  For example, the complex $x_2+x_4$ will have ones in the second and fourth positions and all other entries zero.  But how do we deal with the empty set?  We can think of it as the monomial 1, with the exponent vector of all zeros.
This is significant for reactions of the form $(1)$, since the rate constant $k$ will appear in the steady-state equation for $\dot{X}$.  Aside from this, the empty set does not participate in the stoichiometric or steady-state equations. From a biology standpoint, a reaction of type $(1)$ represents the degradation of the protein $\beta$-catenin, and a reaction of type $(2)$ represents its production. From a technical point of view, we need to revise our package so that it accepts an empty-set symbol and associates the all-zeros exponent vector with it, without adding it to the species list.
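As a language-neutral sketch of this encoding (our package is written in Macaulay2; the Python below and its function names are only illustrative), mapping complexes to exponent vectors with the empty set sent to the zero vector looks like this:

```python
def exponent_vector(complex_terms, species):
    # Map a complex such as x2 + x4 to its exponent vector over the species
    # list; the empty complex (the empty set) maps to the all-zeros vector.
    v = [0] * len(species)
    for s in complex_terms:
        v[species.index(s)] += 1
    return v

species = ["x%d" % i for i in range(1, 20)]  # 19 species, as in the shuttle model

v = exponent_vector(["x2", "x4"], species)   # ones in the 2nd and 4th positions
empty = exponent_vector([], species)         # the empty set: all zeros
```

The empty-set vector then contributes the constant monomial 1, exactly as described above, without ever entering the species list.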
http://mathhelpforum.com/algebra/29167-algebra.html
1. ## Algebra I am doing proof by induction for my discrete math class, but I'm horrible at algebra and I can't figure this last part out. Thanks in advance for the advice. I need to make: (10^k) - 1 + (9)(10^k) look like this: 10^(k+1) - 1 I also need help with this one: I need to make: 1 - 1/(k+1) + 1/[(k+1)(k+2)] (1 is the numerator; the rest is the denominator) look like this: 1 - 1/(k+2) 2. Originally Posted by jzellt ... I need to make: (10^k) - 1 + (9)(10^k) look like this: 10^(k+1) - 1 ... $\displaystyle 10^k - 1 +9 \cdot 10^k = 10^k - 1 +(10 -1) \cdot 10^k = 10^k - 1 +10 \cdot 10^k - 10^k= -1 + 10 \cdot 10^k = 10^{k+1}-1$ 3. Thank you! Any advice for the second problem? 4. Originally Posted by jzellt ... I also need help with this one: I need to make: 1 - 1/(k+1) + 1/[(k+1)(k+2)] (1 is the numerator; the rest is the denominator) look like this: 1 - 1/(k+2) $\displaystyle 1-\frac1{k+1}+\frac1{(k+1)(k+2)}= 1-\frac{k+2}{(k+1)(k+2)} + \frac{1}{(k+1)(k+2)} = 1- \frac{k+2-1}{(k+1)(k+2)}$ Now you can cancel $(k+1)$. You should add to the final result that $\displaystyle k \ne -1$ 5. I got up until the final step you have. After I find the common denominator of (k+1)(k+2), I add up the numerators and get (k+2) + 1. How did you get (k+2) - 1? 6. Originally Posted by jzellt I got up until the final step you have. After I find the common denominator of (k+1)(k+2), I add up the numerators and get (k+2) + 1. How did you get (k+2) - 1? the minus sign in front of the $\displaystyle \frac {k + 2}{(k + 1)(k + 2)}$ is what yielded the -1 note: $\displaystyle - \frac {k + 2}{(k + 1)(k + 2)} + \frac 1{(k + 1)(k + 2)} = \frac {1 - (k + 2)}{(k + 1)(k + 2)} = - \frac {(k + 2) - 1}{(k + 1)(k + 2)}$ got it? can you continue?
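Both identities in this thread can be sanity-checked numerically (this only verifies instances for specific values of k, not the induction step itself):

```python
from fractions import Fraction

# First identity: 10^k - 1 + 9*10^k == 10^(k+1) - 1.
for k in range(1, 8):
    assert 10**k - 1 + 9 * 10**k == 10**(k + 1) - 1

# Second identity, in exact rational arithmetic:
# 1 - 1/(k+1) + 1/((k+1)(k+2)) == 1 - 1/(k+2).
for k in range(1, 8):
    lhs = 1 - Fraction(1, k + 1) + Fraction(1, (k + 1) * (k + 2))
    assert lhs == 1 - Fraction(1, k + 2)
```

Using `Fraction` avoids floating-point rounding, so equality here is exact.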
https://dspace.lboro.ac.uk/dspace-jspui/handle/2134/1228
# Loughborough University Institutional Repository

Please use this identifier to cite or link to this item: https://dspace.lboro.ac.uk/2134/1228

Title: Time discretization of functional integrals
Authors: Samson, J.H.
Keywords: quantum physics; statistical mechanics
Issue Date: 2000

Abstract: Numerical evaluation of functional integrals usually involves a finite (L-slice) discretization of the imaginary-time axis. In the auxiliary-field method, the L-slice approximant to the density matrix can be evaluated as a function of inverse temperature at any finite L as $\rho_L(\beta)=[\rho_1(\beta/L)]^L$, if the density matrix $\rho_1(\beta)$ in the static approximation is known. We investigate the convergence of the partition function $Z_L(\beta)=Tr\rho_L(\beta)$, the internal energy and the density of states $g_L(E)$ (the inverse Laplace transform of $Z_L$), as $L\to\infty$. For the simple harmonic oscillator, $g_L(E)$ is a normalized truncated Fourier series for the exact density of states. When the auxiliary-field approach is applied to spin systems, approximants to the density of states and heat capacity can be negative. Approximants to the density matrix for a spin-1/2 dimer are found in closed form for all L by appending a self-interaction to the divergent Gaussian integral and analytically continuing to zero self-interaction. Because of this continuation, the coefficient of the singlet projector in the approximate density matrix can be negative. For a spin dimer, $Z_L$ is an even function of the coupling constant for L<3: ferromagnetic and antiferromagnetic coupling can be distinguished only for $L\ge 3$, where a Berry phase appears in the functional integral. At any non-zero temperature, the exact partition function is recovered as $L\to\infty$.

Description: This is a pre-print. It is also available at: http://arxiv.org/abs/quant-ph/0003109. The definitive version: SAMSON, 2000.
Time discretization of functional integrals. Journal of Physics A: Mathematical and General, 33, 3111-3120, is available at: http://www.iop.org/EJ/journal/JPhysA.
URI: https://dspace.lboro.ac.uk/2134/1228
Appears in Collections: Pre-Prints (Physics)
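The L-slice convergence described in the abstract can be illustrated generically (a toy two-level Hamiltonian of my own choosing, with a crude first-order short-time approximation standing in for the paper's auxiliary-field static approximation — not the actual construction studied there):

```python
import numpy as np

# Toy two-level Hamiltonian (hypothetical, chosen only for illustration).
H = np.array([[0.0, 0.5],
              [0.5, 1.0]])
beta = 2.0

# Exact density matrix exp(-beta*H) by diagonalizing the symmetric H.
w, V = np.linalg.eigh(H)
Z_exact = np.trace(V @ np.diag(np.exp(-beta * w)) @ V.T)

def Z_L(L):
    # L-slice approximant rho_L(beta) = [rho_1(beta/L)]^L, with the crude
    # short-time approximation rho_1(tau) = I - tau*H as a stand-in.
    rho_1 = np.eye(2) - (beta / L) * H
    return np.trace(np.linalg.matrix_power(rho_1, L))

errors = [abs(Z_L(L) - Z_exact) for L in (4, 16, 64, 256)]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))  # Z_L -> Z as L grows
```

The error in the trace shrinks roughly like 1/L here, mirroring the abstract's statement that the exact partition function is recovered as L goes to infinity.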
http://www.mathmaa.com/apset-syllabus3-4.html
# APSET Syllabus3&4

Introduction: APSET Syllabus3&4 units, continuation.

UNIT – 3

Ordinary Differential Equations (ODEs): Existence and uniqueness of solutions of initial value problems for first order ordinary differential equations, singular solutions of first order ODEs, systems of first order ODEs. General theory of homogeneous and non-homogeneous linear ODEs, variation of parameters, Sturm-Liouville boundary value problem, Green's function.

Partial Differential Equations (PDEs): Lagrange and Charpit methods for solving first order PDEs, Cauchy problem for first order PDEs. Classification of second order PDEs, general solution of higher order PDEs with constant coefficients, method of separation of variables for Laplace, heat and wave equations.

Numerical Analysis: Numerical solutions of algebraic equations, method of iteration and Newton-Raphson method, rate of convergence, solution of systems of linear algebraic equations using Gauss elimination and Gauss-Seidel methods, finite differences, Lagrange, Hermite and spline interpolation, numerical differentiation and integration, numerical solutions of ODEs using Picard, Euler, modified Euler and Runge-Kutta methods.

Calculus of Variations: Variation of a functional, Euler-Lagrange equation, necessary and sufficient conditions for extrema. Variational methods for boundary value problems in ordinary and partial differential equations.

Linear Integral Equations: Linear integral equations of the first and second kind of Fredholm and Volterra type, solutions with separable kernels. Characteristic numbers and eigenfunctions, resolvent kernel.

Classical Mechanics: Generalized coordinates, Lagrange's equations, Hamilton's canonical equations, Hamilton's principle and principle of least action, two-dimensional motion of rigid bodies, Euler's dynamical equations for the motion of a rigid body about an axis, theory of small oscillations.
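As a small worked example of one Unit-3 topic (purely illustrative; the syllabus itself prescribes no code), the Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n):

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    # Iterate x <- x - f(x)/f'(x) until the step falls below tol.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve f(x) = x^2 - 2 = 0, i.e. approximate sqrt(2).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Near a simple root the iteration converges quadratically, which is the "rate of convergence" point the syllabus lists.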
APSET Syllabus3&4 UNIT – 4

Descriptive statistics, exploratory data analysis. Sample space, discrete probability, independent events, Bayes' theorem. Random variables and distribution functions (univariate and multivariate); expectation and moments. Independent random variables, marginal and conditional distributions. Characteristic functions. Probability inequalities (Chebyshev, Markov, Jensen). Modes of convergence, weak and strong laws of large numbers, central limit theorems (i.i.d. case). Markov chains with finite and countable state space, classification of states, limiting behavior of n-step transition probabilities, stationary distribution, Poisson and birth-and-death processes.

Standard discrete and continuous univariate distributions. Sampling distributions, standard errors and asymptotic distributions, distribution of order statistics and range. Methods of estimation, properties of estimators, confidence intervals. Tests of hypotheses: most powerful and uniformly most powerful tests, likelihood ratio tests. Analysis of discrete data and chi-square test of goodness of fit. Large sample tests. Simple nonparametric tests for one and two sample problems, rank correlation and test for independence. Elementary Bayesian inference.

Gauss-Markov models, estimability of parameters, best linear unbiased estimators, confidence intervals, tests for linear hypotheses. Analysis of variance and covariance. Fixed, random and mixed effects models. Simple and multiple linear regression. Elementary regression diagnostics. Logistic regression. Multivariate normal distribution, Wishart distribution and their properties. Distribution of quadratic forms. Inference for parameters, partial and multiple correlation coefficients and related tests. Data reduction techniques: principal component analysis, discriminant analysis, cluster analysis, canonical correlation. Simple random sampling, stratified sampling and systematic sampling. Probability proportional to size sampling.
Ratio and regression methods. Completely randomized designs, randomized block designs and Latin-square designs. Connectedness and orthogonality of block designs, BIBD. 2^k factorial experiments: confounding and construction. Hazard function and failure rates, censoring and life testing, series and parallel systems. Linear programming problem, simplex methods, duality. Elementary queuing and inventory models. Steady-state solutions of Markovian queuing models: M/M/1, M/M/1 with limited waiting space, M/M/C, M/M/C with limited waiting space, M/G/1. This is the APSET Syllabus3&4 for mathematical science.
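For one Unit-4 topic, the standard steady-state measures of the M/M/1 queue can be computed directly (again illustrative only; these are the textbook formulas, valid when the arrival rate is below the service rate):

```python
def mm1_measures(lam, mu):
    # Standard steady-state M/M/1 results, assuming lam < mu.
    rho = lam / mu           # traffic intensity (server utilisation)
    L = rho / (1 - rho)      # mean number of customers in the system
    W = 1 / (mu - lam)       # mean time in the system (Little's law: L = lam*W)
    return rho, L, W

rho, L, W = mm1_measures(lam=2.0, mu=3.0)  # rho = 2/3, L = 2, W = 1
```

Little's law ties the two together: L = lam * W holds exactly for these formulas.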
http://xps.apmonitor.com/wiki/index.php/Apps/GibbsFreeEnergy
Apps Gibbs Free Energy

In thermodynamics, the Gibbs free energy is a thermodynamic potential that measures the "useful" or process-initiating work obtainable from an isothermal, isobaric thermodynamic system. Gibbs energy is also the chemical potential that is minimized when a system reaches equilibrium at constant pressure and temperature. As such, it is a convenient criterion of spontaneity for processes at constant pressure and temperature. Every system seeks to achieve a minimum of free energy. A quantitative measure of how near or far a potential reaction is from this minimum is the sign of the change in Gibbs free energy: when the calculated energetics of the process give a negative change, the reaction is favoured and releases energy. The energy released equals the maximum amount of work that can be performed as a result of the chemical reaction. In contrast, if conditions indicate a positive change in Gibbs free energy, then energy, in the form of work, would have to be added to the reacting system to make the reaction go. The defining relation, ΔG = ΔH − TΔS at constant temperature, can also be seen from the perspective of both the system and its surroundings (the universe). For the purposes of calculation, we assume the reaction is the only reaction going on in the universe. Thus the entropy released or absorbed by the system is actually the entropy that the environment must absorb or release, respectively. Thus the reaction is only allowed if the total entropy change of the universe is zero (an equilibrium process) or positive. The input of heat into an "endothermic" chemical reaction (e.g.
the elimination of cyclohexanol to cyclohexene) can be seen as coupling an inherently unfavourable reaction (elimination) to a favourable one (burning of coal or another heat source) such that the total entropy change of the universe is greater than or equal to zero, making the Gibbs free energy of the coupled reaction negative. This optimization problem determines the mole fractions of a mixture of common combustion species at equilibrium by minimizing the Gibbs energy.

Name         Lower       Value       Upper
gibbs.t      ---         1.0000E+03  ---
gibbs.p      ---         2.0000E+00  ---
gibbs.x[1]   1.0000E-04  3.9421E-01  ---
gibbs.x[2]   1.0000E-04  1.0316E+00  ---
gibbs.x[3]   1.0000E-04  1.2432E+00  ---
gibbs.x[4]   1.0000E-04  3.6261E-01  ---
gibbs.x[5]   1.0000E-04  5.1800E+00  ---
gibbs.z      1.0000E-01  9.9953E-01  ---
gibbs.y[1]   ---         4.8007E-02  ---
gibbs.y[2]   ---         1.2563E-01  ---
gibbs.y[3]   ---         1.5139E-01  ---
gibbs.y[4]   ---         4.4158E-02  ---
gibbs.y[5]   ---         6.3081E-01  ---

References: Lwin, Y., Chemical Equilibrium by Gibbs Energy Minimization on Spreadsheets, Int. J. Engng Ed., Vol. 16, No. 4, pp. 335-339, 2000.
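The same minimization principle can be sketched on a toy system (hypothetical numbers of my own choosing, not the combustion model tabulated above): an isomerization A ⇌ B at fixed temperature and pressure, with dimensionless chemical potentials expressed in units of RT:

```python
import math

# Hypothetical dimensionless standard chemical potentials (units of RT).
g_A, g_B = 0.0, -1.0

def G(x):
    # Mixture Gibbs energy per mole, x = mole fraction of B (ideal mixing).
    return (1 - x) * (g_A + math.log(1 - x)) + x * (g_B + math.log(x))

# Minimize the strictly convex G on (0, 1) by ternary search.
lo, hi = 1e-9, 1 - 1e-9
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if G(m1) < G(m2):
        hi = m2
    else:
        lo = m1
x_eq = (lo + hi) / 2

# Analytically, dG/dx = 0 gives x/(1-x) = exp(g_A - g_B) = e.
assert abs(x_eq - math.e / (1 + math.e)) < 1e-6
```

The minimizer reproduces the familiar equilibrium condition: the mole-fraction ratio equals the exponential of the chemical-potential difference.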
https://brilliant.org/problems/geometric-progressions-2/
# Geometric progressions

Algebra Level 3

Consider a geometric progression which:

• Starts with $$10!$$.
• Has a common ratio of 10.

Which term of this geometric progression is $$9! \times{10}^{10}$$?

Notation: $$!$$ denotes the factorial. For example, $$8! = 1\times2\times3\times\cdots\times8$$.
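A quick brute-force check of the answer (spoiler ahead: since 10! = 9! · 10, the n-th term is 9! · 10^n, which matches the target exactly at n = 10):

```python
import math

# The n-th term is a_n = 10! * 10**(n-1); since 10! = 9! * 10,
# a_n = 9! * 10**n, which equals 9! * 10**10 exactly when n = 10.
target = math.factorial(9) * 10**10
n, term = 1, math.factorial(10)
while term != target:
    term *= 10
    n += 1
```

Integer arithmetic makes the equality test exact, so the loop terminates at precisely the right index.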
https://www.cuemath.com/ncert-solutions/q-4-exercise-13-3-surface-areas-and-volumes-class-9-maths/
# Ex.13.3 Q4 Surface Areas and Volumes Solution - NCERT Maths Class 9

## Question

A conical tent is $$10\,\rm m$$ high and the radius of its base is $$24\,\rm m.$$ Find

i. Slant height of the tent.

ii. Cost of the canvas required to make the tent, if the cost of $$1 \, \rm {m^2}$$ canvas is $$\rm Rs. \,70.$$

## Text Solution

Reasoning: The curved surface area of a cone of base radius $$r$$ and slant height $$l$$ is \begin{align}\pi rl \end{align}, where \begin{align}l = \sqrt {{r^2} + {h^2}} \end{align} by the Pythagoras theorem. The cost of the canvas required is the product of the area and the cost per square metre of canvas.

What is known? Height of the cone and its base radius.

What is unknown? The slant height of the tent and the cost of the canvas.

Steps:

i. Slant height $$l = \sqrt {{r^2} + {h^2}}$$, with radius $$(r) = 24\rm\, m$$ and height $$(h) = 10\rm \, m$$:

\begin{align}l &= \sqrt {{r^2} + {h^2}} \\ l &= \sqrt {{{(24)}^2} + {{(10)}^2}} \\ & = \sqrt {576 + 100} \\ &= \sqrt {676} \end{align}

Slant height of the conical tent $$= 26\, \rm m.$$

ii. Cost of canvas required if $$1 \rm{m^2}$$ canvas costs $$\rm Rs\, 70.$$ The canvas required to make the tent equals the curved surface area of the cone, $$\pi rl$$, with radius $$(r) = 24\rm\, m$$ and slant height \begin{align}(l) = 26\,\, \rm m: \end{align}

\begin{align}CSA = \frac{{22}}{7} \times 24 \times 26\,\, \rm {m^2} \end{align}

\begin{align}\therefore \end{align} Cost of \begin{align}\frac{{22}}{7} \times 24 \times 26\,\rm {m^2} \end{align} canvas \begin{align} = \frac{{22}}{7} \times 24 \times 26 \times 70 = \rm Rs\,\,137280 \end{align}

(i) Slant height of the tent is $$26\,\rm m.$$

(ii) The cost of the canvas is $$\rm Rs. \,137280.$$
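The arithmetic above is easy to verify programmatically (using π ≈ 22/7 exactly as the solution does):

```python
import math
from fractions import Fraction

r, h = 24, 10
l = math.isqrt(r * r + h * h)      # 576 + 100 = 676 is a perfect square
assert l * l == r * r + h * h and l == 26

csa = Fraction(22, 7) * r * l      # curved surface area with pi ~ 22/7
cost = csa * 70                    # Rs 70 per square metre
assert cost == 137280
```

The factor of 7 in the denominator cancels against the cost of 70, which is why the final rupee amount comes out as a whole number.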
http://scitation.aip.org/content/aip/journal/jmp/47/4/10.1063/1.2188209
A Morse-theoretical analysis of gravitational lensing by a Kerr-Newman black hole

Affiliations: 1 TU Berlin, Sekr. PN 7-1, 10623 Berlin, Germany and Wilhelm Foerster Observatory, Munsterdamm 90, 12169 Berlin, Germany; 2 TU Berlin, Sekr. PN 7-1, 10623 Berlin, Germany

J. Math. Phys. 47, 042503 (2006). DOI: 10.1063/1.2188209

## Figures

FIG. 1. The surfaces (top) and (bottom) are drawn here for the case and . The picture shows the (half-)plane , with on the horizontal and on the vertical axis. The spheres of radius and are indicated by dashed lines; they meet the equatorial plane in the photon circles. The boundary of the ergosphere coincides with the surface and is indicated in the bottom figure by a thick line; it meets the equatorial plane at .

FIG. 2. The regions , , and defined in Proposition 1 are shown here for the case and . Again, as in Fig. 1, we plot on the horizontal and on the vertical axis. Some of the spherical lightlike geodesics that fill the photon region are indicated. meets the equatorial plane in the photon circles at and and the axis at radius given by . This picture can also be found as Fig. 21 in the online article (Ref. 29).
https://marcofrasca.wordpress.com/2013/07/13/waiting-for-eps-hep-2013-some-thoughts/
## Waiting for EPS HEP 2013: Some thoughts

On 18th July the first summer HEP conference will start in Stockholm. We do not expect great announcements from CMS and ATLAS, as most of the main results from the 2011-2012 data have already been unraveled. The conclusion is that the particle announced on 4th July last year is a Higgs boson. It decays in all the modes foreseen by the Standard Model, and important hints favor spin 0. No other resonance is seen at higher energies behaving this way, so for now it stands alone. There are a lot of reasons to be happy: we have likely seen the culprit behind the breaking of the symmetry in the Standard Model and, for the first time ever, we have a fundamental particle behaving like a scalar. Both of these were sought for a long time, and now that search has finally ended. On the bad side, no hint of new physics is seen anywhere, and we will probably have to wait for the restart of the LHC in 2015. The long-sought SUSY is still at large. Notwithstanding this hopeless situation for theoretical physics, my personal view is that there is something that gives important clues to great novelties, which may possibly turn into something concrete at the restart. It is important to note that there seem to be some differences between CMS and ATLAS, and this small disagreement can hide interesting news for the future. I cannot say whether, due to the different conception of these two detectors, something different should be seen, but the discrepancy is there. Anyway, they should agree in the end, and possibly this will happen in the near future. The first essential point, often overlooked because of the overall figure, is the decay of the Higgs particle into a pair of W or Z bosons. The WW decay has a significantly large number of events, and what CMS claims here is indeed worth a closer look. The measured signal strength in this channel is significantly below one.
There is a strange situation here: CMS gives $0.76\pm 0.21$, while in the overall picture it just writes $0.68\pm 0.20$, so I cannot say which is the right one. But they are consistent with each other, so no real problem here. Similarly, the ZZ decay yields $0.91^{+0.30}_{-0.24}$. ATLAS, on the other hand, yields $0.99^{+0.31}_{-0.28}$ for the WW decay and $1.43^{+0.40}_{-0.35}$ for the ZZ decay. Error bars are still large, and fluctuations can change these values. The interesting point here (though it has only the value of a clue, as these data agree with the Standard Model at $2\sigma$) is that the lower values for the WW decay can be an indication that this Higgs particle could be a conformal one. This would mean room for new physics. For the ZZ decay, ATLAS apparently sees a somewhat higher number of events, as this figure is larger, and so is the error bar. Anyway, a steady decrease has been seen in the WW channel as larger datasets were considered. This decrease, if confirmed at the restart, would be a major finding after the discovery of the Higgs particle. It should be said that ATLAS has already published updated results with the full dataset (see here). I would like to emphasize that a conformal Standard Model can imply SUSY. The second point is a bump found by CMS in the $\gamma\gamma$ channel (see here). This is what they see, but ATLAS sees nothing there, so this is possibly a fluke. Anyway, it is about $3\sigma$, and CMS reported it in a publication. Finally, it is also possible that heavier Higgs particles have depressed production rates and so are very rare. This too would be consistent with a conformal Standard Model. My personal view is that all hopes to see new physics at the LHC are essentially untouched, and maybe this delay in unveiling it is just due to the unlucky start of the LHC in 2008. Meanwhile, we have to use the main virtue of a theoretical physicist: keeping calm and being patient. Update: Here is the press release from CERN.
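As a back-of-the-envelope illustration of how close these WW numbers sit to each other, here is a naive inverse-variance average of the two quoted signal strengths. This is a sketch, not an official combination: correlations between the experiments are ignored and the asymmetric ATLAS errors are crudely symmetrized, both assumptions of mine.

```python
# Naive inverse-variance average of the quoted H->WW signal strengths.
# NOT an official combination: correlations are ignored and the
# asymmetric ATLAS errors are symmetrized, both crude assumptions.
measurements = [
    (0.76, 0.21),    # CMS   mu(WW)
    (0.99, 0.295),   # ATLAS mu(WW), errors (0.31 + 0.28)/2 symmetrized
]

weights = [1.0 / sigma**2 for _, sigma in measurements]
mu_ww = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
sigma_ww = (1.0 / sum(weights)) ** 0.5
```

The naive average lands a bit below one, but still within roughly one sigma of the Standard Model expectation of unity, which is exactly why this can only be called a clue for now.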
ATLAS Collaboration (2013). Measurements of Higgs boson production and couplings in diboson final states with the ATLAS detector at the LHC. arXiv:1307.1427.
http://math.stackexchange.com/questions/244947/standing-wave-problem-in-deep-water-limit-h-to-infty-show-that-omega2-gk
# Standing wave problem: in the deep water limit $h\to\infty$, show that $\omega^2=gk$ The equations I have are $\phi=(-ag/\omega)\cos(kx)\sin(\omega t)e^{kz}$ and $\eta=a\cos(kx)\cos(\omega t)$. I know that $\partial\phi/\partial z=\partial\eta/\partial t$, but when I partially differentiate and rearrange I get $gk e^{kz}= \omega^2$ and I don't know how to get rid of the exponential function. - How does $h$ enter your equations? Do the parameters depend on $h$? –  Johan Nov 26 '12 at 13:23 The original equation is $\phi=(-ag/\omega)\cos(kx)\sin(\omega t)\,\cosh(k[z+h])/\cosh(kh)$, but as $h$ tends to infinity $\phi$ simplifies to the equation in the question. –  Adam Nov 26 '12 at 13:37 Do you know why the original equation simplifies as $h$ tends to infinity? How do I show that the original equation simplifies as $h$ tends to infinity? –  user52290 Dec 8 '12 at 23:41
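A numeric sketch of the standard resolution, assuming (as is usual for linearized water waves) that the kinematic condition is applied at the mean free surface $z=0$, where $e^{kz}=1$, so the exponential drops out:

```python
import math

# Apply the kinematic surface condition d(phi)/dz = d(eta)/dt at the
# mean free surface z = 0, where exp(k*z) = 1; the condition then
# reduces to g*k = omega**2 with no leftover exponential.
a, g, k = 0.1, 9.81, 2.0
omega = math.sqrt(g * k)          # deep-water dispersion relation

def dphi_dz(x, z, t):
    # phi = (-a*g/omega) cos(kx) sin(omega t) e^{kz}
    return (-a * g / omega) * k * math.exp(k * z) * math.cos(k * x) * math.sin(omega * t)

def deta_dt(x, t):
    # eta = a cos(kx) cos(omega t)
    return -a * omega * math.cos(k * x) * math.sin(omega * t)

# The two sides agree at z = 0 precisely because omega**2 = g*k.
assert abs(dphi_dz(0.3, 0.0, 0.7) - deta_dt(0.3, 0.7)) < 1e-12
```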
https://math.libretexts.org/Courses/Long_Beach_City_College/Book%3A_Intermediate_Algebra/Text/08%3A_Rational_Expressions_and_Equations/8.5%3A_Simplify_Complex_Rational_Expressions
# 8.5: Simplify Complex Rational Expressions ##### Learning Objectives By the end of this section, you will be able to: • Simplify a complex rational expression by writing it as division • Simplify a complex rational expression by using the LCD ##### Note Before you get started, take this readiness quiz. If you miss a problem, go back to the section listed and review the material. 1. Simplify: $$\frac{\frac{3}{5}}{\frac{9}{10}}$$. If you missed this problem, review Exercise 1.6.25. 2. Simplify: $$\frac{1−\frac{1}{3}}{4^2+4·5}$$. If you missed this problem, review Exercise 1.6.31.
Complex fractions are fractions in which the numerator or denominator contains a fraction. In Chapter 1 we simplified complex fractions like these: $\begin{array}{cc} {\frac{\frac{3}{4}}{\frac{5}{8}}}&{\frac{\frac{x}{2}}{\frac{xy}{6}}}\\ \nonumber \end{array}$ In this section we will simplify complex rational expressions, which are rational expressions with rational expressions in the numerator or denominator. ##### Definition: COMPLEX RATIONAL EXPRESSION A complex rational expression is a rational expression in which the numerator or denominator contains a rational expression. Here are a few complex rational expressions: $$\frac{\frac{4}{y−3}}{\frac{8}{y^2−9}}$$ $$\frac{\frac{1}{x}+\frac{1}{y}}{\frac{x}{y}−\frac{y}{x}}$$ $$\frac{\frac{2}{x+6}}{\frac{4}{x−6}−\frac{4}{x^2−36}}$$ Remember, we always exclude values that would make any denominator zero. We will use two methods to simplify complex rational expressions. ## Simplify a Complex Rational Expression by Writing it as Division We have already seen this complex rational expression earlier in this chapter. $$\frac{\frac{6x^2−7x+2}{4x−8}}{\frac{2x^2−8x+3}{x^2−5x+6}}$$ We noted that fraction bars tell us to divide, so rewrote it as the division problem $$(\frac{6x^2−7x+2}{4x−8})÷(\frac{2x^2−8x+3}{x^2−5x+6})$$ Then we multiplied the first rational expression by the reciprocal of the second, just like we do when we divide two fractions. This is one method to simplify rational expressions. We write it as if we were dividing two fractions. ##### Example $$\PageIndex{1}$$ $$\frac{\frac{4}{y−3}}{\frac{8}{y^2−9}}$$. $$\frac{\frac{4}{y−3}}{\frac{8}{y^2−9}}$$ Rewrite the complex fraction as division. $$\frac{4}{y−3}÷\frac{8}{y^2−9}$$ Rewrite as the product of first times the reciprocal of the second. $$\frac{4}{y−3}·\frac{y^2−9}{8}$$ Multiply. $$\frac{4(y^2−9)}{8(y−3)}$$ Factor to look for common factors. $$\frac{4(y−3)(y+3)}{8(y−3)}$$ Simplify. $$\frac{y+3}{2}$$ Are there any value(s) of y that should not be allowed? 
The simplified rational expression has just a constant in the denominator. But the original complex rational expression had denominators of y−3 and $$y^2−9$$. This expression would be undefined if y=3 or y=−3 ##### Example $$\PageIndex{2}$$ $$\frac{\frac{2}{x^2−1}}{\frac{3}{x+1}}$$. $$\frac{2}{3(x−1)}$$ ##### Example $$\PageIndex{3}$$ $$\frac{\frac{1}{x^2−7x+12}}{\frac{2}{x−4}}$$. $$\frac{1}{2(x−3)}$$ ##### Example $$\PageIndex{4}$$ $$\frac{\frac{1}{3}+\frac{1}{6}}{\frac{1}{2}−\frac{1}{3}}$$. Simplify the numerator and denominator. Find the LCD and add the fractions in the numerator. Find the LCD and add the fractions in the denominator. Simplify the numerator and denominator. Simplify the numerator and denominator, again. Rewrite the complex rational expression as a division problem. Multiply the first times by the reciprocal of the second. Simplify. ##### Example $$\PageIndex{5}$$ $$\frac{\frac{1}{2}+\frac{2}{3}}{\frac{5}{6}+\frac{1}{12}}$$. $$\frac{14}{11}$$ ##### Example $$\PageIndex{6}$$ $$\frac{\frac{3}{4}−\frac{1}{3}}{\frac{1}{8}+\frac{5}{6}}$$. $$\frac{10}{23}$$ How to Simplify a Complex Rational Expression by Writing it as Division ##### Example $$\PageIndex{7}$$ $$\frac{\frac{1}{x}+\frac{1}{y}}{\frac{x}{y}−\frac{y}{x}}$$. ##### Example $$\PageIndex{8}$$ $$\frac{\frac{1}{x}+\frac{1}{y}}{\frac{1}{x}−\frac{1}{y}}$$. $$\frac{y+x}{y−x}$$ ##### Example $$\PageIndex{9}$$ $$\frac{\frac{1}{a}+\frac{1}{b}}{\frac{1}{a^2}−\frac{1}{b^2}}$$. $$\frac{ab}{b−a}$$ ##### Definition: SIMPLIFY A COMPLEX RATIONAL EXPRESSION BY WRITING IT AS DIVISION. 1. Simplify the numerator and denominator. 2. Rewrite the complex rational expression as a division problem. 3. Divide the expressions. ##### Example $$\PageIndex{10}$$ $$\frac{n−\frac{4n}{n+5}}{\frac{1}{n+5}+\frac{1}{n−5}}$$ Simplify the numerator and denominator. Find the LCD and add the fractions in the numerator. Find the LCD and add the fractions in the denominator. Simplify the numerators. 
Subtract the rational expressions in the numerator and add in the denominator. Rewrite as fraction division. Multiply the first times the reciprocal of the second. Factor any expressions if possible. Remove common factors. Simplify. ##### Example $$\PageIndex{11}$$ $$\frac{b−\frac{3b}{b+5}}{\frac{2}{b+5}+\frac{1}{b−5}}$$. $$\frac{b(b+2)(b−5)}{3b−5}$$ ##### Example $$\PageIndex{12}$$ $$\frac{1−\frac{3}{c+4}}{\frac{1}{c+4}+\frac{c}{3}}$$. $$\frac{3}{c+3}$$ ## Simplify a Complex Rational Expression by Using the LCD We “cleared” the fractions by multiplying by the LCD when we solved equations with fractions. We can use that strategy here to simplify complex rational expressions. We will multiply the numerator and denominator by the LCD of all the rational expressions. Let’s look at the complex rational expression we simplified one way in Example. We will simplify it here by multiplying the numerator and denominator by the LCD. When we multiply by $$\frac{LCD}{LCD}$$ we are multiplying by 1, so the value stays the same. ##### Example $$\PageIndex{13}$$ Simplify: $$\frac{\frac{1}{3}+\frac{1}{6}}{\frac{1}{2}−\frac{1}{3}}$$. The LCD of all the fractions in the whole expression is 6. Clear the fractions by multiplying the numerator and denominator by that LCD. Distribute. Simplify. ##### Example $$\PageIndex{14}$$ Simplify: $$\frac{\frac{1}{2}+\frac{1}{5}}{\frac{1}{10}+\frac{1}{5}}$$. $$\frac{7}{3}$$ ##### Example $$\PageIndex{15}$$ Simplify: $$\frac{\frac{1}{4}+\frac{3}{8}}{\frac{1}{2}−\frac{5}{16}}$$. $$\frac{10}{3}$$ How to Simplify a Complex Rational Expression by Using the LCD ##### Example $$\PageIndex{16}$$ Simplify: $$\frac{\frac{1}{x}+\frac{1}{y}}{\frac{x}{y}−\frac{y}{x}}$$. ##### Example $$\PageIndex{17}$$ Simplify: $$\frac{\frac{1}{a}+\frac{1}{b}}{\frac{a}{b}−\frac{b}{a}}$$. $$\frac{1}{a−b}$$ ##### Example $$\PageIndex{18}$$ Simplify: $$\frac{\frac{1}{x^2}−\frac{1}{y^2}}{\frac{1}{x}−\frac{1}{y}}$$. $$\frac{y+x}{xy}$$ ##### Definition: SIMPLIFY A COMPLEX RATIONAL EXPRESSION BY USING THE LCD. 1.
Find the LCD of all fractions in the complex rational expression. 2. Multiply the numerator and denominator by the LCD. 3. Simplify the expression. Be sure to start by factoring all the denominators so you can find the LCD. ##### Example $$\PageIndex{19}$$ Simplify: $$\frac{\frac{2}{x+6}}{\frac{4}{x−6}−\frac{4}{x^2−36}}$$. Find the LCD of all fractions in the complex rational expression. The LCD is (x+6)(x−6) Multiply the numerator and denominator by the LCD. Simplify the expression. Distribute in the denominator. Simplify. Simplify. To simplify the denominator, distribute and combine like terms. Remove common factors. Simplify. Notice that there are no more factors common to the numerator and denominator. ##### Example $$\PageIndex{20}$$ Simplify: $$\frac{\frac{3}{x+2}}{\frac{5}{x−2}−\frac{3}{x^2−4}}$$. $$\frac{3x−6}{5x+7}$$ ##### Example $$\PageIndex{21}$$ Simplify: $$\frac{\frac{2}{x−7}−\frac{1}{x+7}}{\frac{6}{x+7}−\frac{1}{x^2−49}}$$. $$\frac{x+21}{6x−43}$$ ##### Example $$\PageIndex{22}$$ Simplify: $$\frac{\frac{4}{m^2−7m+12}}{\frac{3}{m−3}−\frac{2}{m−4}}$$. Find the LCD of all fractions in the complex rational expression. The LCD is (m−3)(m−4) Multiply the numerator and denominator by the LCD. Simplify. Simplify. Distribute. Combine like terms. ##### Example $$\PageIndex{23}$$ Simplify: $$\frac{\frac{3}{x^2+7x+10}}{\frac{4}{x+2}+\frac{1}{x+5}}$$. $$\frac{3}{5x+22}$$ ##### Example $$\PageIndex{24}$$ Simplify: $$\frac{\frac{4}{y+5}+\frac{2}{y+6}}{\frac{3y}{y^2+11y+30}}$$. $$\frac{6y+34}{3y}$$ ##### Example $$\PageIndex{25}$$ Simplify: $$\frac{\frac{y}{y+1}}{1+\frac{1}{y−1}}$$. Find the LCD of all fractions in the complex rational expression. The LCD is (y+1)(y−1) Multiply the numerator and denominator by the LCD. Distribute in the denominator and simplify. Simplify. Simplify the denominator, and leave the numerator factored. Factor the denominator, and remove factors common with the numerator. Simplify.
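The simplifications in this section can be spot-checked with exact rational arithmetic. The sketch below verifies Example 1 and Example 19 at a few sample points that avoid the excluded values; the check is numeric evidence, not an algebraic proof.

```python
from fractions import Fraction

# Spot-check two simplifications from this section with exact rational
# arithmetic, at sample points that avoid the excluded values.

def example_1_holds(y):
    # (4/(y-3)) / (8/(y^2-9))  simplifies to  (y+3)/2
    return Fraction(4, y - 3) / Fraction(8, y * y - 9) == Fraction(y + 3, 2)

def example_19_holds(x):
    # (2/(x+6)) / (4/(x-6) - 4/(x^2-36))  simplifies to  (x-6)/(2(x+5))
    lhs = Fraction(2, x + 6) / (Fraction(4, x - 6) - Fraction(4, x * x - 36))
    return lhs == Fraction(x - 6, 2 * (x + 5))

assert all(example_1_holds(y) for y in (2, 5, 10, -7))   # avoid y = 3, y = -3
assert all(example_19_holds(x) for x in (0, 1, 2, 10))   # avoid x = 6, -6, -5
```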
##### Example $$\PageIndex{26}$$ Simplify: $$\frac{\frac{x}{x+3}}{1+\frac{1}{x+3}}$$. $$\frac{x}{x+4}$$ ##### Example $$\PageIndex{27}$$ Simplify: $$\frac{1+\frac{1}{x−1}}{\frac{3}{x+1}}$$. $$\frac{x(x+1)}{3(x−1)}$$ ## Key Concepts • To Simplify a Complex Rational Expression by Writing it as Division 1. Simplify the numerator and denominator. 2. Rewrite the complex rational expression as a division problem. 3. Divide the expressions. • To Simplify a Complex Rational Expression by Using the LCD 1. Find the LCD of all fractions in the complex rational expression. 2. Multiply the numerator and denominator by the LCD. 3. Simplify the expression. ## Glossary complex rational expression A complex rational expression is a rational expression in which the numerator or denominator contains a rational expression. 8.5: Simplify Complex Rational Expressions is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by OpenStax.
https://www.askmattrab.com/notes/389-lens-maker-formula
# Lens Maker Formula The lens maker formula is an expression relating the focal length of a lens to its refractive index and the radii of curvature of its surfaces: 1/f = (n − 1)(1/R1 + 1/R2) where f = focal length of the lens, n = refractive index of the lens, R1, R2 = radii of curvature of the lens. Assumptions: 1. The lens is thin. 2. The deviation produced by the thin lens is similar to that of a small-angle prism. 3. The angles made by the incident and refracted rays with the principal axis are small. From the figure: in ΔXOF, tan δ = OX/OF, i.e. tan δ = h/f. For small angles tan δ ≈ δ, so δ = h/f .......... (1) We know δ = A(n − 1) .......... (2) From equations (1) and (2): h/f = A(n − 1) .......... (3) From the geometry, angle MXG = A, and A = a + a' = h/R1 + h/R2 = h(1/R1 + 1/R2) .......... (4) Substituting (4) into (3): h/f = h(n − 1)(1/R1 + 1/R2), so 1/f = (n − 1)(1/R1 + 1/R2)
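The result above can be sketched in code. This is a hedged illustration using the sign convention of this derivation (both radii taken positive for a thin biconvex lens); the example numbers are made up, not from the note.

```python
# Lens maker formula as derived above:
#   1/f = (n - 1) * (1/R1 + 1/R2)
# Sign convention matches the derivation: both radii positive for a
# thin biconvex lens. Units are arbitrary but consistent (here cm).
def focal_length(n, R1, R2):
    """Focal length of a thin lens from the lens maker formula."""
    return 1.0 / ((n - 1.0) * (1.0 / R1 + 1.0 / R2))

# Symmetric biconvex lens: n = 1.5, R1 = R2 = 10 cm gives f = 10 cm.
f = focal_length(1.5, 10.0, 10.0)
```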
https://tex.stackexchange.com/questions/392113/two-equations-with-same-label/392116
# Two equations with the same label I have to translate a dissertation for a researcher. There is a problem I couldn't solve. Namely, there is an introduction chapter where the author has labelled an equation (A). Then in chapter one there is again an equation labelled (A). The equations are not the same, in the sense that the first one has the parameter $z$ while the other has $e^z$. So how can I give two different equations the same label? For example $e^{ix}+1=0$ (A) and $e^{iy}=-1$ (A) • Welcome to TeX SX! Virtually they're the same. You might add the chapter number in front of the second equation, for instance. – Bernard Sep 19 '17 at 9:16 If you are using the amsmath package, you can specify the displayed label using the \tag macro: \documentclass{article} \usepackage{amsmath} \begin{document} $$\tag{A} e^{ix}+1=0$$ $$\tag{A} e^{iy}+1=0$$ \end{document} Let's look at a possible scenario. The text has an unnumbered first chapter, where equations are identified by letters, while the body of the book has equations numbered like (chapter.equation). \documentclass[oneside]{book} \usepackage{amsmath} % this and 'oneside' is just for making small pictures \usepackage[a6paper]{geometry} \numberwithin{equation}{chapter} \begin{document} \frontmatter \renewcommand\theequation{\Alph{equation}} \chapter{Introduction} Some text $$\label{eq:Euler} e^{ix}+1=0$$ some text \mainmatter \renewcommand\theequation{\thechapter.\arabic{equation}} \chapter{Title} Some text followed by an equation $$\label{eq:easy} 1+1=2$$ and here we use an equivalent formulation of an equation in the introduction $$\tag{\ref{eq:Euler}} e^{iy}=-1$$ Some other text \end{document} Using \ref in the recalled equation allows to make this independent of the actual number used in the introduction.
The author might not have used \renewcommand\theequation and instead have assigned “A” manually with \tag; but the result would be the same % in the introduction $$\label{eq:Euler}\tag{A} e^{ix}+1=0$$ % in the body $$\tag{\ref{eq:Euler}} e^{iy}=-1$$ If you want to reset the equation counter you could use: \setcounter{equation}{0} right before the second equation (A). If you want to use the equation number A just for that equation and continue with normal numbering you could use: $$\tag{A} e^{iy}=-1$$
http://math.stackexchange.com/questions/335995/how-can-i-prove-big-oh-relation-between-log-2-log-2-n-and-sqrt-log-2-n
# How can I prove a big-O relation between $\log_2(\log_2 n)$ and $\sqrt{\log_2 n}$ How can I prove a big-O relation between $f=\log_2(\log_2 n)$ and $g=\sqrt{\log_2 n}\,$? I want to find constants $c, N$ such that $g(x) \leq cf(x)$ for all $x>N$. - First you can find when $\log_2\log_2 N=\sqrt{\log_2 N}$. What happens if you take $x>N$ after that? –  Ian Coley Mar 20 '13 at 16:36 A useful result, if $\lim_{n\to \infty} \frac{g(n)}{f(n)}=a$, then $g=O(f)$. –  Mhenni Benghorbal Mar 20 '13 at 16:49 As an additional comment, you need only check the relation between $\log_2x$ and $\sqrt x$. You may solve for $c,N'$ in this case let $N=2^{N'}$. –  Ian Coley Mar 20 '13 at 16:53 You can't. Did you mean $f(x) \le c g(x)$? –  Aryabhata Apr 3 '13 at 9:06 @FrankMcGovern I fail to see the point of solving $\log_2x=\sqrt{x}$. –  Did Apr 3 '13 at 9:26 The derivative of the usual logarithm function is less than $1$ on $(1,+\infty)$ hence $\ln x\leqslant x-1$ on $x\geqslant1$. This implies $\log_2x\leqslant2x$ on $x\geqslant1$. Since $\log_2x=2\log_2\sqrt{x}$, $\log_2x\leqslant4\sqrt{x}$ on $x\geqslant1$. Applying this to $x=\log_2n$, one sees that $f(n)\leqslant4g(n)$ for every $n\geqslant2$.
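A quick numeric sanity check of the bound derived in the answer, namely $f(n)\leqslant 4g(n)$ for $n\geqslant 2$; this only probes sample points, it is not a substitute for the proof.

```python
import math

# Numeric sanity check of the bound derived in the answer:
# f(n) = log2(log2 n) satisfies f(n) <= 4 * g(n), g(n) = sqrt(log2 n),
# for every n >= 2 (checked here at sample points only).
def f(n):
    return math.log2(math.log2(n))

def g(n):
    return math.sqrt(math.log2(n))

assert all(f(n) <= 4 * g(n) for n in (2, 3, 10, 100, 10**6, 10**30, 10**300))
```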
https://undergroundmathematics.org/hyperbolic-functions/square-wheels/solution
Under construction Resources under construction may not yet have been fully reviewed. Food for thought ## Solution (This resource is still in draft form.) This applet shows a square with side length $2$ rolling over an upside-down catenary with equation $y=-\cosh x$. When the square is horizontal, the centre of its base touches the vertex of the catenary. Move the slider to roll the square. Brief solutions (require more detail): • What is the locus of the centre of the square? The locus of the centre of the square is a straight line along the $x$-axis. • How far can the square roll with the same side still touching the catenary? The square can rotate until the vertex of the square touches the catenary. At this point, the arc length of the catenary from the vertex of the catenary to this point equals half of the square’s side length, which is $1$. The arc length from $(0,-1)$ to $(x, -\cosh x)$ is $\sinh x$, so this occurs when $\sinh x=1$, or $x=\arsinh 1$. Using the formula for $\arsinh$ or solving $\sinh x=1$ directly gives an alternative expression for this: $x=\ln(1+\sqrt{2})$. Furthermore, at this point, the gradient of the catenary is $y'=-\sinh x=-1$. This means that the square has rotated by $45^\circ$, and thus the centre of the square is also at $x=\arsinh 1$.
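The facts used in the solution above can be checked numerically; this is a sketch confirming the identities, not part of the geometric argument.

```python
import math

# Numeric check of the facts used above:
#   arsinh 1 = ln(1 + sqrt(2));
#   at x = arsinh 1 the catenary y = -cosh x has gradient -sinh x = -1;
#   the arc length from the vertex, sinh x, equals 1 (half the side).
x = math.asinh(1.0)
assert abs(x - math.log(1.0 + math.sqrt(2.0))) < 1e-12
assert abs(math.sinh(x) - 1.0) < 1e-12       # arc length = 1
assert abs(-math.sinh(x) - (-1.0)) < 1e-12   # gradient = -1
```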
https://csgillespie.wordpress.com/tag/inverse-cdf/
# Why? ## November 28, 2010 ### Random variable generation (Pt 1 of 3) Filed under: AMCMC, R — Tags: , , , , — csgillespie @ 7:35 pm As I mentioned in a recent post, I’ve just received a copy of Advanced Markov Chain Monte Carlo Methods. Chapter 1.4 in the book (very quickly) covers random variable generation. ## Inverse CDF Method A standard algorithm for generating random numbers is the inverse cdf method. The continuous version of the algorithm is as follows: 1. Generate a uniform random variable $U$ 2. Compute and return $X = F^{-1}(U)$ where $F^{-1}(\cdot)$ is the inverse of the CDF. Well known examples of this method are the exponential distribution and the Box-Muller transform. ## Example: Logistic distribution I teach this algorithm in one of my classes and I’m always on the look-out for new examples. Something that escaped my notice is that it is easy to generate RN’s using this technique from the Logistic distribution. This distribution has CDF $\displaystyle F(x; \mu, s) = \frac{1}{1 + \exp(-(x-\mu)/s)}$ and so we can generate a random number from the logistic distribution using the following formula: $\displaystyle X = \mu + s \log\left(\frac{U}{1-U}\right)$ Which is easily converted to R code: myRLogistic = function(mu, s) {   u = runif(1)   return(mu + s * log(u / (1 - u))) }
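For readers outside R, here is a Python sketch mirroring the function above, together with a deterministic round-trip check that the quantile formula really inverts the logistic CDF (function names are my own):

```python
import math
import random

# Inverse-CDF sampling from the Logistic(mu, s) distribution,
# mirroring the R function myRLogistic above.
def logistic_cdf(x, mu, s):
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def logistic_quantile(u, mu, s):
    # F^{-1}(u) = mu + s * log(u / (1 - u))
    return mu + s * math.log(u / (1.0 - u))

def my_logistic(mu, s):
    return logistic_quantile(random.random(), mu, s)

# Round trip: F(F^{-1}(u)) = u for any u in (0, 1).
assert abs(logistic_cdf(logistic_quantile(0.3, 2.0, 1.5), 2.0, 1.5) - 0.3) < 1e-12
```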
http://membran.at/Projekte_2014?q=node/125
# Nanofiltration as key technology for the separation of LA and AA • Posted on: 12 June 2018 • By: mmiltner Title Nanofiltration as key technology for the separation of LA and AA Publication Type Journal Article Year of Publication 2012 Authors Ecker J, Raab T., Harasek M Journal Journal of Membrane Science Volume 389 Pagination 389-398 Keywords Amino acids, Green Biorefinery, Lactic acid, Nanofiltration Abstract Nanofiltration as a state-of-the-art technology was used for the separation of lactic acid (LA) and amino acids (AA) in a ‘Green Biorefinery’ pilot plant. For this process, the performance of six different nanofiltration membranes was compared in lab-scale experiments. In this work the focus was on the separation of the two products, LA and AA. Pronounced differences in the retentions were required to produce two purified process streams: an LA-enriched permeate and an AA-enriched retentate. In the reference experiment, performed with original solution from the ‘Green Biorefinery’ pilot plant, the retention values were about 60% for LA and about 88% for AA, which hindered a good separation of the main components. Process optimization with pH-value variations and different diafiltration modes was investigated; one experiment was done with original solution, two tests dealt with varying pH values, and two with different diafiltration rates. A pH variation from 3.9 (reference solution) down to 2.5 changed the chemical structure of LA, which reduced the retention of LA significantly, from 67% to 42%, for the membrane DL (Osmonics). Besides the separation, further attention was given to the flux behaviour. All screening scenarios were compared with a reference experiment done with original solution and standard process parameters as used in the plant itself, to evaluate the efficiency trends shown in the tests.
It was shown that a nanofiltration unit allowed a separation between AA and LA of sufficient degree for further treatment technologies; a membrane screening for the optimization of this process ensured best performance in practice. URL https://www.sciencedirect.com/science/article/pii/S0376738811008118 DOI 10.1016/j.memsci.2011.11.004
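The retention percentages quoted in the abstract follow the usual membrane definition, R = 1 − C_permeate/C_feed. A small illustrative sketch (the concentration values are made-up examples, not data from the paper):

```python
# Membrane retention as commonly defined: R = 1 - C_permeate / C_feed,
# reported here in percent. Example concentrations are hypothetical.
def retention_percent(c_feed, c_permeate):
    return 100.0 * (1.0 - c_permeate / c_feed)

# A permeate at 40 % of the feed concentration corresponds to the
# roughly 60 % lactic-acid retention quoted in the abstract.
r_la = retention_percent(10.0, 4.0)
```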
https://forum.dopdf.com/troubleshooting-f4/spacing-problem-with-tamil-unicode-character-%E0%AE%B5-t3097.html
## Spacing problem with Tamil UNICODE character வீ

Post here if you have problems installing or using doPDF.

umapathy
Posts: 3
Joined: Thu Mar 01, 2012 7:52 pm

I have come across a peculiar problem with spacing w.r.t. the Tamil UNICODE character வீ (pronounced like "whee") when used in words like வீதி (pronounced like "wheethi", meaning road). When I use Office 2010 and its built-in feature I have not come across the problem, but if I use doPDF it causes inconvenience. I created a document in Tamil UNICODE and wanted to convert it to PDF, where I came across this issue. I shall be thankful if this can be sorted out. {Since doPDF is the easy way to convert a 2-A4-sheet document to 1 single A4 page, which at least as of now cannot be done in Microsoft Office products.}

Claudiu (Softland)
Posts: 1506
Joined: Thu May 23, 2013 7:19 am

Hello,
Please send us the Tamil font you are using to our support team at [email protected] so we can install it and better troubleshoot the issue. We have managed to reproduce it locally using the word you mentioned, but we need the exact font installation to know what to include in the application as embedded.
Thank you.

umapathy
Posts: 3
Joined: Thu Mar 01, 2012 7:52 pm

Dear Softland,
I have sent you the files. I also got a response back from you. In any case, if you need any other information, let me know. I shall be thankful if this could be fixed.

Claudiu (Softland)
Posts: 1506
Joined: Thu May 23, 2013 7:19 am

Hello,
This has been fixed in our latest doPDF build. You can download the application from the homepage.
Thank you.
https://testbook.com/blog/verbal-reasoning-quiz-1-for-ssc-and-railways-exams/
# Verbal Reasoning Quiz 1 for SSC & Railways Exams

If you are preparing for Government Recruitment or Entrance exams, you will likely need to solve a section on Reasoning. Verbal Reasoning Quiz 1 for SSC and Railways Exams will help you learn concepts on important topics in Logical Reasoning – Verbal Reasoning. This Verbal Reasoning Quiz 1 is important for exams such as SSC CGL, CHSL, Stenographer, and Railways RRB NTPC.

## Verbal Reasoning Quiz 1 for SSC and Railways Exams – Que. 1

In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: This video is intended to guide the layman to learn C programming in the absence of a teacher.

Assumptions:
I. A teacher of C-programming may not be available to everyone.
II. C-programming can be learnt with the help of videos.

1. Only assumption I is implicit.
2. Only assumption II is implicit.
3. Neither assumption I nor II is implicit.
4. Both assumptions I and II are implicit.

Que. 2

In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: More than the quality, good advertisements boost the sale of a product.

Assumptions:

1. Only assumption I is implicit.
2. Only assumption II is implicit.
3. Neither assumption I nor II is implicit.
4. Both assumptions I and II are implicit.

Que. 3

Directions: In the following question, one statement is given followed by two assumptions, I and II. You have to consider the statements to be true, even if they seem to be at variance from commonly known facts.
You are to decide which of the given assumptions can definitely be drawn from the given statements. Indicate your answer.

Statement: Regular reading of newspaper enhances one's general knowledge.

Assumptions:
I. Newspaper contains a lot of general knowledge.
II. Enhancement of general knowledge enables success in life.

1. Only I is implicit
2. Only II is implicit
3. Both I and II are implicit
4. Neither I nor II is implicit

Que. 4

In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: The odd-even traffic system to fight increased air pollution has received mixed response from people.

Assumptions:
I. Air pollution has decreased due to odd-even system.
II. Every citizen has welcomed the odd-even system.

1. Only assumption I is implicit.
2. Only assumption II is implicit.
3. Neither assumption I nor II is implicit.
4. Both assumptions I and II are implicit.

Que. 5

In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: Clean India campaign has evoked good response from all parts of the country.

Assumptions:
I. People are interested in the campaign.
II. India is a very clean country.

1. Only assumption I is implicit.
2. Only assumption II is implicit.
3. Neither assumption I nor II is implicit.
4. Both assumptions I and II are implicit.

Que. 6

In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted.
You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: The older generation of people prefer basic phones instead of touch screen phones.

Assumptions:
I. Basic phones are easy to operate.
II. Touch-screen phones are widely available these days.

1. Only assumption I is implicit.
2. Only assumption II is implicit.
3. Neither assumption I nor II is implicit.
4. Both assumptions I and II are implicit.

Que. 7

In the question given below is given a statement followed by two assumptions numbered I and II. An assumption is something taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: Many global companies started to invest in India after the economic liberalization undertaken by the Congress government in 1994.

Assumptions:
I. The economic liberalization allowed the investment of foreign companies, which was restricted earlier.
II. The Indian markets were extremely beneficial for these companies.

1. Only assumption I is implicit
2. Only assumption II is implicit
3. Neither assumption I nor II is implicit
4. Both assumptions I and II are implicit

Que. 8

In the question given below is given a statement followed by two assumptions numbered I and II. An assumption is something taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: The applicant was told that he was appointed as a programmer with a probation period of one year and that his performance would be reviewed at the end of the period for confirmation.

Assumptions:
I. The performance of an individual was generally not known at the time of appointment offer.
II. Generally an individual tries to prove his worth during the probation period.

1. Only assumption I is implicit
2. Only assumption II is implicit
3. Neither assumption I nor II is implicit
4. Both assumptions I and II are implicit

Que. 9

In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: More commuters now travel by this route, but there is no public demand for more buses.

Assumptions:
I. The number of buses depends upon the number of passengers.
II. Usually people do not tolerate inconvenience.

1. Only assumption I is implicit.
2. Only assumption II is implicit.
3. Neither assumption I nor II is implicit.
4. Both assumptions I and II are implicit.

Que. 10

In the question below is given a statement followed by two assumptions numbered I and II. An assumption is something supposed or taken for granted. You have to consider the statement and the following assumptions and decide which of the assumptions is implicit in the statement.

Statement: Detergents should be used to clean clothes.

Assumptions:
I. Detergents form more lather.
II. Detergents help to dislodge grease and dirt.

1. Only assumption I is implicit.
2. Only assumption II is implicit.
3. Neither assumption I nor II is implicit.
4. Both assumptions I and II are implicit.

Did you like this Verbal Reasoning Quiz 1 for SSC and Railways Exams? Let us know!
http://mathhelpforum.com/pre-calculus/147590-find-equation-parallel-perpendicular-lines.html
# Math Help - Find Equation of Parallel and Perpendicular Lines

1. ## Find Equation of Parallel and Perpendicular Lines

Find an equation of the following parallel and perpendicular lines.

A.) The line parallel to $x+3=0$ and passing through $(-6, -7)$
B.) The line perpendicular to $y-4=0$ passing through $(-1, 6)$

Is there a specific formula I can use to solve these?

2. Originally Posted by larry21

Find an equation of the following parallel and perpendicular lines.

A.) The line parallel to $x+3=0$ and passing through $(-6, -7)$
B.) The line perpendicular to $y-4=0$ passing through $(-1, 6)$

Is there a specific formula I can use to solve these?

The equation of a line passing through a point $(x_1, y_1)$ is given by: $y-y_1 = m(x-x_1)$ where m is the slope.

Now, two lines are parallel if their slopes are equal, and two lines are perpendicular if the product of their slopes is -1.

3. Originally Posted by harish21

$y-y_1 = m(x-x_1)$

For reference, this equation is known as the point-slope equation.

4. 1) The required line is parallel to x+3=0 and passes through (-6,-7). The equation of such a line will be x=c. Since it passes through (-6,-7), c=-6, so x=-6, i.e. x+6=0. [NOTE: the line x+k=0 is perpendicular to the x axis, so any line parallel to it must also be perpendicular to the x axis and of the form x+k=0.]

2) A line perpendicular to y-4=0 will be of the form x=c. Since it passes through (-1,6), we have c=-1, so x=-1, i.e. x+1=0.
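Both parts of this problem involve vertical lines, which the point-slope form cannot represent (the slope is undefined); working with the general form ax + by + c = 0 avoids the issue. A sketch of that approach (not from the thread; the helper names are made up):

```python
def parallel_through(a, b, c, p):
    """Line parallel to ax + by + c = 0 through point p, as coefficients (a, b, c')."""
    x0, y0 = p
    # Parallel lines share the normal vector (a, b); solve for the new constant.
    return (a, b, -(a * x0 + b * y0))

def perpendicular_through(a, b, c, p):
    """Line perpendicular to ax + by + c = 0 through point p, as (a', b', c')."""
    x0, y0 = p
    # A perpendicular line has normal (b, -a), rotated 90 degrees from (a, b).
    return (b, -a, -(b * x0 - a * y0))

# Part A: parallel to x + 3 = 0 through (-6, -7)  ->  x + 6 = 0
assert parallel_through(1, 0, 3, (-6, -7)) == (1, 0, 6)
# Part B: perpendicular to y - 4 = 0 through (-1, 6)  ->  x + 1 = 0
assert perpendicular_through(0, 1, -4, (-1, 6)) == (1, 0, 1)
```

Both checks agree with the answers worked out in post 4 above.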
https://www.physicsoverflow.org/35942/simplify-k-matrix
# Simplify K-matrix

+ 3 like - 0 dislike
337 views

2+1D Abelian topologically ordered states are believed to be described by multicomponent $U(1)$ Chern-Simons theories, with Lagrangian
$$\mathcal{L}=\frac{K_{IJ}}{4\pi}\epsilon^{\mu\nu\lambda}a_\mu^I\partial_\nu a_\lambda^J-\frac{1}{2\pi}t_I\epsilon^{\mu\nu\lambda}A_\mu\partial_\nu a_\lambda^I$$
where $K$ is an invertible symmetric matrix with integer entries, and $t$ is a vector with integer entries. Different $K$-matrices differing by a matrix in $GL(N,\mathbb{Z})$ are equivalent; namely, if $K_1$ and $K_2$ satisfy
$$K_1=W^TK_2W$$
where $W$ is a matrix with integer entries and unit determinant, then $K_1$ and $K_2$ describe the same physics.

Sometimes it is useful to simplify a $K$-matrix by using an appropriate $W$ matrix, i.e. $K\rightarrow W^TKW$, so that the resulting new $K$ is block diagonal. I do not know any general procedure for doing this, and I would appreciate it if anyone can help. If a general procedure is too hard, it is also helpful just to show how to find the appropriate $W$ that simplifies the following specific $K$-matrix:
$$K= \left( \begin{array}{cccc} 2&-1&0&0\\ -1&0&2&1\\ 0&2&0&0\\ 0&1&0&1 \end{array} \right)$$

edited Apr 26, 2016

Every matrix can be transformed by an equivalence transformation of the kind you state to bring it into diagonal form, e.g., by a change of basis to an orthogonal eigensystem of $K$. A corresponding change of the annihilation operators then simplifies the action without changing the physics.

So is there any general procedure of doing it?

Always, if you don't insist that the new $K$ is integral, too. (I hadn't noticed the integrality constraint when I wrote my first comment.) Integer congruence transformations to diagonal or block diagonal form do not always exist, only if the lattice defined by $K$ splits into a direct sum of smaller lattices. Could you please give a reference where the action appears, so that I can understand the origin of the integrality condition.
Is $K$ known to be positive definite, or known to be indefinite?

The origin of the integrality is charge quantization, or more formally, the compactness of the gauge field. Being compact, we require the partition function of the Chern-Simons theory to be invariant under $a\rightarrow a+2\pi$, and it turns out that this will be satisfied only if the matrix $K$ is integral. There is no positivity condition on $K$, though. For references, Xiao-Gang Wen's book Quantum Field Theory of Many-Body Systems may be good.

Thank you. In the positive definite case, the problem of simplifying $K$ is the problem of finding a normal form for the Gram matrix of an integral lattice. This is a well-studied (though in high dimensions very difficult) problem in number theory. I suggest that you look into the book Sphere Packings, Lattices and Groups by Conway and Sloane. I'll write a proper answer after having looked more at the context of your question - this may take a while.

Many thanks!

+ 4 like - 0 dislike

Let $K$ be a symmetric $n\times n$ matrix with integer coefficients. The additive abelian group of integer vectors of size $n$ gets a lattice structure by defining the (not necessarily definite) integral inner product $(x,y):=x^TKy$. It is conventional to call the integer $(x,x)$ the norm of $x$ (rather than its square root, as in Hilbert spaces). A standard reference for lattices is the book Sphere Packings, Lattices and Groups by Conway and Sloane. (It covers the definite case only; for the indefinite case see, e.g., the book An Introduction to the Theory of Numbers by Cassels.)

If $K$ is positive semidefinite, one has a Euclidean lattice in which all vectors have nonnegative integral norm $(x,x)$. The vectors of zero norm are just the integral null vectors of $K$; they form a subgroup that can be factored out, leaving a definite lattice of smaller dimension where all nonzero points have positive norm.
In a definite lattice, there are only finitely many lattice points of a given norm, coming in antipodal pairs that can be found by a complete enumeration procedure (typically an application of the LLL algorithm followed by Schnorr-Euchner search, or more advanced variations). The collection of all vectors of small norm defines a graph whose edges are labelled by the nonzero inner products. Lattice isomorphism (corresponding to equivalence of the $K$ under $GL(n,\mathbb{Z})$) can be tested efficiently by testing these labelled graphs for isomorphism, e.g., using the graph isomorphism package nauty. (Of course one first checks whether the determinant of $K$, which is an invariant, is the same.) This makes checking for decomposability a finite procedure (recursive in the dimension). It is not very practical in higher dimensions unless the lattice decomposes into a large number of lattices generated by vectors of norm 1 and 2. However, if some of these graphs are disconnected they suggest decompositions that can be used in a heuristic fashion.

In an indefinite lattice (i.e., when $K$ is indefinite) there are always vectors of norm zero that are not null vectors of $K$. The norm zero vectors no longer form a subgroup. Classification and isomorphism testing is instead done by working modulo various primes, giving genera. Again one has a finite procedure.

To solve the decomposition problem posed for a given $K$, one shouldn't need to do a full classification of all lattices of dimension $n$. But I don't know a simpler systematic procedure that is guaranteed to work in general. In higher dimensions most lattices are indecomposable, though there are no simple criteria for indecomposability.

The key to decomposition is to transform the basis in such a way that a subset of the basis vectors becomes orthogonal to the remaining ones, and to repeat this recursively as long as feasible. This gives rise to the following heuristics, which works well for the specific matrix given.
One transforms the basis so that it contains points $x$ of absolutely small nonzero norm (reflected by corresponding diagonal entries) and subtracts integral multiples of $x$ from the other basis vectors in order to make the absolute values of the inner products as small as possible. [Finding these short vectors is often trivial by inspection, but if the original diagonal entries are large, lattice reduction methods (of which LLL is the simplest) must be used to find them or to show their nonexistence.] This is repeated as long as the sum of absolute values of the off-diagonal entries decreases. If a diagonal entry is $\pm1$ one can in this way make all off-diagonal entries in its row and column zero, and one obtains a 1-dimensional sublattice that decomposes the given lattice. (For absolutely larger diagonal entries there is no such guarantee, but the case of norm $\pm2$ is usually tractable, too, since one can use the structure theory of root lattices to handle the sublattice generated by norm 2 vectors.)

In the specific (indefinite) case given, the fourth unit vector $e^4$ has norm 1, and transforming the off-diagonals in the 4th column to zero produces the reduced matrix [2,-1,0;-1,-1,2;0,2,0]. Now the second unit vector has norm -1, and doing the same with column 2 gives the reduced matrix [3,-2;-2,4]. This matrix is definite, and one can enumerate all vectors of norm $\le 4$ to check that it is indecomposable. One can still improve the basis a little bit, replacing [3,-2;-2,4] by [3,1;1,3]. Collecting the transformations done, one finds a unimodular matrix that transforms $K$ into the direct sum of [3,-2;-2,4], [-1], and [1]. Or [3,1;1,3], [-1], and [1].

answered Apr 26, 2016 by (12,790 points)
edited Apr 27, 2016

Thank you for your detailed answer. But it seems the simplified matrix you gave has a different determinant compared to the original one... Can you write down the $W$-matrix explicitly?

@Mr.Gentleman: Sorry, silly mistake. I corrected my calculation.
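The column-clearing step described above is mechanical enough to script. One convenient form of the Gram-matrix update when the pivot basis vector has norm $G_{pp}=\pm1$ is $G'_{ij}=G_{ij}-G_{pp}\,G_{ip}\,G_{pj}$ (this is one reader's derivation, not code from the thread); applied twice to the matrix in question it reproduces the two reduction steps:

```python
def clear_pivot(G, p):
    """Given a symmetric integer Gram matrix G with |G[p][p]| == 1,
    change basis so the pivot vector is orthogonal to all others,
    and return the Gram matrix of the remaining sublattice."""
    s = G[p][p]
    assert abs(s) == 1, "pivot must have norm +1 or -1"
    idx = [i for i in range(len(G)) if i != p]
    # b_i' = b_i - (G_ip / G_pp) b_p, and 1/G_pp == G_pp for s = +-1
    return [[G[i][j] - s * G[i][p] * G[p][j] for j in idx] for i in idx]

K = [[2, -1, 0, 0],
     [-1, 0, 2, 1],
     [0, 2, 0, 0],
     [0, 1, 0, 1]]

step1 = clear_pivot(K, 3)      # e^4 has norm +1; splits off a [1] summand
step2 = clear_pivot(step1, 1)  # the second basis vector now has norm -1; splits off [-1]
print(step1)  # [[2, -1, 0], [-1, -1, 2], [0, 2, 0]]
print(step2)  # [[3, -2], [-2, 4]]
```

This matches the direct sum [3,-2;-2,4], [-1], [1] described in the answer.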
The determinant is now always $-8$. An explicit $W$ is given in the answer by Meng if you apply a column permutation interchanging coordinates 2 and 3.

+ 3 like - 0 dislike

There is no way to do this in general. Here is some background: http://www.maths.ed.ac.uk/~aar/papers/conslo.pdf Interestingly, there is a classification of *indefinite* bilinear forms over Z.

answered Apr 23, 2016 by (1,875 points)

''No way'' is too much said. Every single dimension is decidable (with a finite number of equivalence classes). It just gets harder with the dimension.

Is that a theorem? Maybe eventually it gets undecidable.

Yes, it is a theorem. The absolute value of the determinant is an invariant. Using LLL reduction one can always find (for any fixed dimension and determinant) an explicit basis of bounded length. Then the number of possible Gram matrices in such a reduced basis is finite. Since one can decide lattice isomorphism, it follows that one can figure out the precise number of equivalence classes for each dimension and determinant. Thanks!

Thank you for the comments. Because of my lack of background, I think I will have to learn these. However, at this moment I have a particular problem with $K=(2,-1,0,0;-1,0,2,1;0,2,0,0;0,1,0,1)$ (the comma separates elements in the same row and the semicolon separates different rows); can anyone help me block diagonalize it?

@Arnold Neumaier: updated, thank you!

+ 3 like - 0 dislike

Let
$$W=\left( \begin{array}{cccc} 1 & 0 & 1 & 0 \\ -1 & 0 & 1 & 1 \\ 1 & -1 & 0 & -1 \\ -1 & 1 & 1 & 1 \\ \end{array} \right),$$
then
$$W^T K W=\left(\begin{array}{cccc} 3 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 3 & 0 \\ 0 & 0 & 0 & -1 \\ \end{array}\right).$$
W is found by trial and error.

answered Apr 27, 2016 by (550 points)

How to try and err? There are infinitely many possibilities for $W$ and only a few work. So what was your heuristics to find $W$?
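The $W$ given in the answer by Meng can be checked mechanically. The sketch below (plain integer arithmetic with made-up helper names, no external libraries) multiplies out $W^TKW$ and confirms that $W$ is unimodular, i.e. lies in $GL(4,\mathbb{Z})$:

```python
def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def det(A):
    """Determinant by Laplace expansion along the first row (fine for 4x4)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

K = [[2, -1, 0, 0], [-1, 0, 2, 1], [0, 2, 0, 0], [0, 1, 0, 1]]
W = [[1, 0, 1, 0], [-1, 0, 1, 1], [1, -1, 0, -1], [-1, 1, 1, 1]]

result = matmul(transpose(W), matmul(K, W))
assert result == [[3, 0, 1, 0], [0, 1, 0, 0], [1, 0, 3, 0], [0, 0, 0, -1]]
assert abs(det(W)) == 1  # W is in GL(4, Z)
```

Note the block structure: rows/columns 1 and 3 carry the [3,1;1,3] block, and the diagonal entries 1 and −1 are the split-off one-dimensional summands.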
https://proofwiki.org/wiki/Definition:Subspace_Topology
# Definition:Topological Subspace ## Definition Let $T = \struct {S, \tau}$ be a topological space. Let $H \subseteq S$ be a non-empty subset of $S$. Define: $\tau_H := \set {U \cap H: U \in \tau} \subseteq \powerset H$ where $\powerset H$ denotes the power set of $H$. Then the topological space $T_H = \struct {H, \tau_H}$ is called a (topological) subspace of $T$. The set $\tau_H$ is referred to as the subspace topology on $H$ (induced by $\tau$). ## Also known as The subspace topology $\tau_H$ induced by $\tau$ can be referred to as just the induced topology (on $H$) if there is no ambiguity. The term relative topology can also be found. ## Also see • Results about topological subspaces can be found here.
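On a finite example the definition can be checked mechanically. The sketch below is illustrative only (the sets $S$, $\tau$, and $H$ are chosen arbitrarily, and are not part of the ProofWiki entry):

```python
# A topology tau on S = {1, 2, 3}, and a non-empty subset H of S.
S = frozenset({1, 2, 3})
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), S}
H = frozenset({2, 3})

# The subspace topology: intersect every open set U in tau with H.
tau_H = {U & H for U in tau}

# {1} & H and the empty set both collapse to the empty set, so tau_H
# ends up with three members rather than four.
assert tau_H == {frozenset(), frozenset({2}), frozenset({2, 3})}
```

Note that distinct open sets of $T$ can induce the same open set of the subspace, so $\tau_H$ may have fewer members than $\tau$.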
https://www.physicsforums.com/threads/physics-homework-problem-stuck.65957/
# Physics Homework Problem-Stuck?

1. Mar 4, 2005

### shawonna23

Physics Homework Problem--Stuck?

A 75 kg water skier is being pulled by a horizontal force of 495 N and has an acceleration of 2.0 m/s². Assuming that the total resistive force exerted on the skier by the water and the wind is constant, what force is needed to pull the skier at a constant velocity?

I tried doing this to solve the problem:
F = ma
F = 75 kg × 2.0 m/s² = 150 N
Then I added 150 and 495 to get 645 N, but this is not the right answer. Can someone please tell me what I did wrong?

2. Mar 4, 2005

### dextercioby

What is the total resistive force...?

Daniel.

3. Mar 4, 2005

### Jameson

$$F_{net} = ma$$

This stands for the net force, not only one force. Draw a force diagram and see where each one is going.
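Following the hints above: 495 N is the applied pull, not the net force. A sketch of the bookkeeping (not posted in the thread, so treat it as one reader's take on the hints):

```python
m = 75.0        # kg
a = 2.0         # m/s^2
F_pull = 495.0  # N, applied horizontal force while accelerating

# Newton's second law on the skier: F_pull - F_resist = m * a
F_net = m * a                  # net force, 150 N
F_resist = F_pull - F_net      # resistive force, assumed constant

# At constant velocity the net force is zero, so the pull must
# exactly balance the resistive force:
F_constant_v = F_resist
print(F_constant_v)  # 345.0
```

The skier's mistake was adding the 150 N to 495 N instead of subtracting it.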
https://openreview.net/forum?id=hGdAzemIK1X
## Quantum Speedups of Optimizing Approximately Convex Functions with Applications to Logarithmic Regret Stochastic Convex Bandits Abstract: We initiate the study of quantum algorithms for optimizing approximately convex functions. Given a convex set $\mathcal{K}\subseteq\mathbb{R}^{n}$ and a function $F\colon\mathbb{R}^{n}\to\mathbb{R}$ such that there exists a convex function $f\colon\mathcal{K}\to\mathbb{R}$ satisfying $\sup_{x\in\mathcal{K}}|F(x)-f(x)|\leq \epsilon/n$, our quantum algorithm finds an $x^{*}\in\mathcal{K}$ such that $F(x^{*})-\min_{x\in\mathcal{K}} F(x)\leq\epsilon$ using $\tilde{O}(n^{3})$ quantum evaluation queries to $F$. This achieves a polynomial quantum speedup compared to the best-known classical algorithms. As an application, we give a quantum algorithm for zeroth-order stochastic convex bandits with $\tilde{O}(n^{5}\log^{2} T)$ regret, an exponential speedup in $T$ compared to the classical $\Omega(\sqrt{T})$ lower bound. Technically, we achieve quantum speedup in $n$ by exploiting a quantum framework of simulated annealing and adopting a quantum version of the hit-and-run walk. Our speedup in $T$ for zeroth-order stochastic convex bandits is due to a quadratic quantum speedup in multiplicative error of mean estimation.
https://socialsci.libretexts.org/Courses/Butte_College/Exploring_Intercultural_Communication_(Grothe)/10%3A_Intercultural_Communication_Competence
# 10: Intercultural Communication Competence

This page titled 10: Intercultural Communication Competence is shared under a CC BY license and was authored, remixed, and/or curated by Tom Grothe.
https://arxiv.org/abs/1405.6719
astro-ph.SR

# Title: Stellar Abundances in the Solar Neighborhood: The Hypatia Catalog

Abstract: We compile spectroscopic abundance data from 84 literature sources for 50 elements across 3058 stars in the solar neighborhood, within 150 pc of the Sun, to produce the Hypatia Catalog. We evaluate the variability of the spread in abundance measurements reported for the same star by different surveys. We also explore the likely association of the star within the Galactic disk, the corresponding observation and abundance determination methods for all catalogs in Hypatia, the influence of specific catalogs on the overall abundance trends, and the effect of normalizing all abundances to the same solar scale. The resulting stellar abundance determinations in the Hypatia Catalog are analyzed only for thin-disk stars with observations that are consistent between literature sources. As a result of our large dataset, we find that the stars in the solar neighborhood may reveal an asymmetric abundance distribution, such that a [Fe/H]-rich group near the mid-plane is deficient in Mg, Si, S, Ca, Sc II, Cr II, and Ni as compared to stars further from the plane. The Hypatia Catalog has a wide range of applications, including exoplanet hosts, thick- and thin-disk stars, and stars with different kinematic properties.

Comments: 66 pages, 32 figures, 6 tables; accepted for publication in the Astronomical Journal
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
DOI: 10.1088/0004-6256/148/3/54
Cite as: arXiv:1405.6719 [astro-ph.SR] (or arXiv:1405.6719v1 [astro-ph.SR] for this version)

## Submission history

From: Natalie Hinkel [view email]
[v1] Mon, 26 May 2014 20:00:16 GMT (4942kb,D)
http://math.stackexchange.com/questions/425970/system-of-three-equations-in-three-variables
# System of three equations in three variables?

Fibonacci apparently found some solutions to this problem: find rational solutions of $$x+y+z+x^2=u^2$$ $$x+y+z+x^2+y^2=v^2$$ $$x+y+z+x^2+y^2+z^2=w^2$$

How would you find solutions to this using the mathematics available in Fibonacci's time? (By this I mostly mean without calculus, series, and modern methods; please also avoid modular arithmetic notation if possible.)

I was able to find little bits of information by adding and subtracting equations, such as $z^2=w^2-v^2$, $y^2=v^2-u^2$, and $y^2+z^2=w^2-u^2$, but I really do not know what to do. Thanks.

Comments:

- Is your goal to find all solutions or some solutions? – Ewan Delanoy Jun 21 '13 at 6:07
- @EwanDelanoy I don't know if there are a finite number of solutions, but if there were an infinite number, a proof of that would be nice. – Ovi Jun 21 '13 at 6:13
- A preliminary analysis: you are searching for rational solutions $(x,y,z)$ s.t. $y^2+z^2=w^2-u^2$. If $w^2-u^2<0$ there are none; if $w^2-u^2=0$ one has the solutions $(x,0,0)$ with rational $x$ s.t. $x+x^2=u^2=w^2$. Existence of rational solutions of the 2nd-degree polynomial in $x$ depends on $w$. One has 2 rational solutions if $w=\frac{q^2-1}{4}$ for some real $q$, otherwise there are none. It remains to study the case $w^2-u^2>0$. – Avitus Jun 21 '13 at 6:46
- @Avitus From OP's last derived equation (essentially a Pythagorean quadruple), $w^2-u^2$ will be greater than $0$ unless $y$ and $z$ are trivially $0$. – alex.jordan Jun 21 '13 at 7:01
- @alex.jordan I am not sure about this because I know nothing about $w$ and $u$, which I presume just to be fixed. If $y=z=0$, then there is still space for nontrivial rational solutions $(x,0,0)$. – Avitus Jun 21 '13 at 7:04

## Answer

This is not a full answer in that not all solutions are described. But the discussion yields two infinite parametrized families of solutions.
And the methods could possibly be studied further to find more families, and possibly to parametrize all solutions. As proof that this works before you invest in studying it, check that the solution it predicts at the end is valid.

There is a known trick for parametrizing rational points on quadratic surfaces, which I think extends to hypersurfaces. Take the first equation. $(x,y,z,u)=(0,0,0,0)$ is a rational solution. Suppose $(X,Y,Z,U)$ is a different rational solution. Then the line connecting these two points in $4$-space is parametrized by $(x,y,z,u)=t(X,Y,Z,U)$. This line intersects the surface $x+y+z+x^2=u^2$ in precisely two places, since the intersection is found by solving for $t$ in $tX+tY+tZ+t^2X^2=t^2U^2$. One solution is clearly given by $t=0$, and the other is given by $t=\frac{X+Y+Z}{U^2-X^2}$.

Now since the line is parametrized by rational numbers, the intersection of this line with the plane $u=1$ has all rational coordinates: $(a,b,c,1)$. We can solve for $t$ to bring the fourth coordinate to $1$, and have $t=1/U$. So \begin{align}a&=X/U\\b&=Y/U\\c&=Z/U\end{align}

This establishes a map from rational points on $x+y+z+x^2=u^2$ to rational points on $u=1$. But this map is reversible. Take any rational triple $(a,b,c,1)$ and consider the line connecting this point to $(0,0,0,0)$. This line is parametrized by $(x,y,z,u)=s(a,b,c,1)$, and intersects $x+y+z+x^2=u^2$ in two places. To find both, we substitute: $as+bs+cs+a^2s^2=s^2$, and along with $s=0$, the other solution is $s=\frac{a+b+c}{1-a^2}$. So rational solutions to your first equation are given by \begin{align}x&=a\frac{a+b+c}{1-a^2}\\y&=b\frac{a+b+c}{1-a^2}\\z&=c\frac{a+b+c}{1-a^2}\\u&=\frac{a+b+c}{1-a^2}\end{align} where $a,b,c$ are any triple of rationals excluding $a=\pm1$.

One infinite family of solutions to the system arises out of this if we take $b=c=0$: $(x,y,z,u,v,w)=\left(\frac{a^2}{1-a^2},0,0,\frac{a}{1-a^2},\pm\frac{a}{1-a^2},\pm\frac{a}{1-a^2}\right)$.
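The reverse map above can be checked with exact rational arithmetic. Here is a short sketch (not part of the original answer; the helper name `first_family` is mine) using Python's `fractions` module to confirm that the parametrization lands on the surface $x+y+z+x^2=u^2$ for sample rational triples.

```python
from fractions import Fraction as F

def first_family(a, b, c):
    """Map a rational triple (a, b, c), with a != ±1, to a rational
    point (x, y, z, u) on the surface x + y + z + x^2 = u^2."""
    s = (a + b + c) / (1 - a * a)  # the nonzero intersection parameter
    return a * s, b * s, c * s, s

# Verify the first equation for a few sample rational triples.
samples = [(F(3, 5), F(1, 2), F(-2, 7)), (F(2, 3), F(0), F(0)), (F(-5, 9), F(4), F(1, 6))]
for a, b, c in samples:
    x, y, z, u = first_family(a, b, c)
    assert x + y + z + x * x == u * u
print("all samples lie on the first surface")
```

Because the arithmetic is exact, this confirms the identity $s(a+b+c)+a^2s^2=s^2(1-a^2)+a^2s^2=s^2$ rather than an approximation of it.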
We can see what happens if we throw these into the next equation. $$\frac{(a+b+c)^2}{1-a^2}+(a^2+b^2)\left(\frac{a+b+c}{1-a^2}\right)^2=v^2$$ Unfortunately this equation is degree 6: $$(1+b^2)(a+b+c)^2=v^2(1-a^2)^2$$ So trying to proceed as before, but this time in $(a,b,c,v)$-space, won't work: lines are no longer guaranteed to intersect the surface at two points, which was a crucial element of what we did above.

If we are merely hunting families of solutions, and give up (for now) on finding all solutions, then it helps to have $1+b^2$ be a square. That is, to have $1+b^2=d^2$. We can do this by finding any primitive Pythagorean triple $(m^2-n^2)^2+(2mn)^2=(m^2+n^2)^2$ and dividing by one of the left terms. Say we choose the second term, so that for integers $m$ and $n$, we have \begin{align}b&=\frac{m^2-n^2}{2mn}\\d&=\frac{m^2+n^2}{2mn}\end{align} Now the earlier equation reduces to $$d(a+b+c)=v(1-a^2)$$

If we take $c=0$ (implying $z=0$) then we have another family of solutions to the system that arises out of this. Taking $m,n$ to be free nonzero integers and $a$ a free rational not equal to $\pm1$, we have $$(x,y,z,u,v,w)=\left(a\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2},\frac{m^2-n^2}{2mn}\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2},0,\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2},\frac{m^2+n^2}{2mn}\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2},\pm\frac{m^2+n^2}{2mn}\frac{a+\frac{m^2-n^2}{2mn}}{1-a^2}\right)$$

For example, $m=1$, $n=2$, $a=3/5$ yields $(-9/64, 45/256, 0, -15/64, -75/256, 75/256)$. It seems reasonable that some other family could be worked out this way that does not demand $z=0$.
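The two-parameter family with $c=0$ can likewise be verified end to end. This sketch is mine, not from the thread (the helper name `second_family` is an assumption); it reproduces the worked example $m=1$, $n=2$, $a=3/5$ and checks all three equations exactly.

```python
from fractions import Fraction as F

def second_family(m, n, a):
    """Two-parameter family with z = 0: choosing b = (m^2 - n^2)/(2mn)
    makes 1 + b^2 the perfect square d^2, with d = (m^2 + n^2)/(2mn),
    which collapses the degree-6 condition to d(a + b) = v(1 - a^2)."""
    b = F(m * m - n * n, 2 * m * n)
    d = F(m * m + n * n, 2 * m * n)
    s = (a + b) / (1 - a * a)
    # x, y, z, u, v, w (w = v up to sign since z = 0)
    return a * s, b * s, F(0), s, d * s, d * s

x, y, z, u, v, w = second_family(1, 2, F(3, 5))
assert (x, y, z) == (F(-9, 64), F(45, 256), F(0))  # matches the example
assert x + y + z + x**2 == u**2
assert x + y + z + x**2 + y**2 == v**2
assert x + y + z + x**2 + y**2 + z**2 == w**2
```

Since $z=0$, the second and third equations coincide, which is why $w=\pm v$ in this family.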