https://ilopezgp.github.io/journal/3d-rad-clouds.html
# Lateral light scattering by clouds reduces Earth's albedo

Clouds cover on average about two thirds of the globe and figure prominently in Earth's radiative energy budget. They cool the planet by reflecting about $50~\mathrm{W}~\mathrm{m}^{-2}$ of shortwave radiation back to space, or about 15% of incoming solar radiation. They also contribute to the greenhouse effect by absorbing a sizable amount of infrared radiation emitted from Earth's surface. Their net effect on the current climate is cooling, on the order of $20~\mathrm{W}~\mathrm{m}^{-2}$ (IPCC AR5, 2013).

Climate models aim to forecast changes in the Earth system due primarily to anthropogenic greenhouse gas (GHG) emissions. These emissions result in an effective warming of $2.5-3.1~\mathrm{W}~\mathrm{m}^{-2}$ (Myhre et al. 2013). Given the large magnitude of cloud radiative effects, about 10 times stronger than the GHG forcing, biases in the representation of cloud-radiation interactions lead to large uncertainty in climate projections.

The main source of this uncertainty is the representation of clouds themselves. Clouds cannot be resolved in climate models due to resolution constraints; instead, parameterizations of unresolved processes must be used to predict the presence of clouds within each climate model grid box. Another source of uncertainty is the interaction of clouds with aerosols, which can foster cloud formation and change their reflective properties. But subgrid-scale cloud processes and aerosol-cloud interactions are not the only sources of bias in cloud radiative effects. Climate models employ simplified radiative transfer solvers to compute the amount of radiation that clouds reflect back to space. An important simplification that radiative transfer solvers make is to compute radiative fluxes exclusively in the vertical, a simplification known as the independent column approximation (ICA).
Under the ICA, light scattered upwards can only be reflected back to Earth by clouds directly overhead. In reality, light scattered at an angle may encounter nearby clouds before it reaches the top of the atmosphere and be sent back toward the surface. How important is this lateral scattering effect in practice? Although three-dimensional radiative transfer is intractable at a global scale, it is computationally feasible to answer this question on a smaller domain. An advantage of using limited-size domains is that we can also afford to resolve all dynamical cloud processes. By comparing three-dimensional radiative transfer calculations to ICA calculations through the same cloud field, we can estimate the magnitude of the bias caused by the approximation.

*Figure: High-resolution simulations of shallow cumulus (a, b), stratocumulus (c) and deep cumulonimbus clouds. These simulations can be used to estimate the effect of lateral light scattering on Earth's albedo near the tropics (Singer et al. 2021).*

In an article published in the Journal of the Atmospheric Sciences, we used an ensemble of cloud-resolving simulations to estimate the radiative effect of lateral light scattering (Singer et al. 2021). We found that, in the tropics, lateral light scattering reduces Earth's albedo and results in a net warming effect of $3.1 \pm 1.6~\mathrm{W}~\mathrm{m}^{-2}$. Neglecting this effect in cloud-resolving models therefore results in a cooling bias of the same magnitude.

*Figure: Annual mean radiative flux bias in cloud-resolving models that make the independent column approximation. The results are most robust equatorward of $\pm30^\circ$ (Singer et al. 2021).*

Although this bias is smaller than those induced by unresolved cloud processes in current climate models, it is comparable to the GHG warming signal of $2.5-3.1~\mathrm{W}~\mathrm{m}^{-2}$. Moreover, it is a bias that will need to be addressed for decades to come, even in global cloud-resolving models.
http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.bwnjournal-article-doi-10_4064-sm165-2-2
Studia Mathematica, 2004, Vol. 165, No. 2, pp. 111-124

Title: Denseness and Borel complexity of some sets of vector measures

Abstract (EN): Let ν be a positive measure on a σ-algebra Σ of subsets of some set and let X be a Banach space. Denote by ca(Σ,X) the Banach space of X-valued measures on Σ, equipped with the uniform norm, and by ca(Σ,ν,X) its closed subspace consisting of those measures which vanish at every ν-null set. We are concerned with the subsets $𝓔_{ν}(X)$ and $𝒜_{ν}(X)$ of ca(Σ,X) defined by the conditions |φ| = ν and |φ| ≥ ν, respectively, where |φ| stands for the variation of φ ∈ ca(Σ,X). We establish necessary and sufficient conditions that $𝓔_{ν}(X)$ [resp., $𝒜_{ν}(X)$] be dense in ca(Σ,ν,X) [resp., ca(Σ,X)]. We also show that $𝓔_{ν}(X)$ and $𝒜_{ν}(X)$ are always $G_{δ}$-sets and establish necessary and sufficient conditions that they be $F_{σ}$-sets in the respective spaces.

Published 2004. Author affiliation: Institute of Mathematics, Polish Academy of Sciences, Wrocław Branch, Kopernika 18, 51-617 Wrocław, Poland.
http://mathhelpforum.com/calculus/105821-maximizing-function-two-variables-print.html
# Maximizing a function of two variables

• October 3rd 2009, 06:15 AM garymarkhov

Maximize $f(x,y)=(4-x^2)y-2y^2-1$ on the set $\{ x\geq 1, y\geq 1 \}$. Can someone tell me if this is right?

$\frac{\partial f}{\partial x} = -2xy$, $\frac{\partial f}{\partial y} = 4-x^2-4y$

Setting the partials equal to zero, we can find that $y = 1 - \frac{x^2}{4}$ and $-2x(1 - \frac{x^2}{4}) = 0$. So $x(2x^2-8)=0$, and we can see that x is either 0, 2, or -2. So the critical points are (0,1), (2,0), (-2,0), and $f(0,1)=1$ is the greatest value we can get. Checking the boundary conditions, we see that things only get worse if we make x or y larger.

Forming the Hessian at this point, $\left(\begin{array}{cc} -2y & -2x \\ -2x & -4 \end{array}\right) = \left(\begin{array}{cc} -2 & 0 \\ 0 & -4 \end{array}\right)$

This is a symmetric matrix, so the diagonal values are the eigenvalues. Both eigenvalues being negative means that the matrix is negative definite and (0,1) is a max of this function. Put another way, the leading principal minors alternate in sign, so the matrix must be negative definite. Can someone confirm that my calculations are right and make a note if I could have done something more efficiently? Thanks.

• October 5th 2009, 05:01 AM CaptainBlack

Quote: Originally Posted by garymarkhov: "Maximize $f(x,y)=(4-x^2)y-2y^2-1$ on the set $\{ x\geq 1, y\geq 1 \}$. Can someone tell me if this is right? [...]"

None of your critical points are feasible; that is, they are not within the region where the minimum is required. In which case any extrema should lie on the boundary of the feasible region. CB

• October 5th 2009, 06:09 AM garymarkhov

Quote: Originally Posted by CaptainBlack: "None of your critical points are feasible [...] In which case the minimum should lie on the boundary of the feasible region."

Good point, but do you mean the "maximum should lie on the boundary of the feasible region"? I'm trying to maximize the function, not minimize it.

• October 5th 2009, 06:38 AM HallsofIvy

Yes. Both maximum and minimum must either lie at a point where the gradient is 0 or on the boundary of the set. Since the gradient is never 0 in this set, both maximum and minimum must be on the boundary. That means you must look at $f(x,1)= 1- x^2$, $x\ge 1$, and $f(1, y)= 3y- 2y^2- 1$, $y\ge 1$. And, of course, you must check the point (1,1) itself. Since this region is not bounded, there do not necessarily exist points where the function is maximum or minimum. (In this case, the maximum does exist but there is no minimum.)

• October 5th 2009, 07:15 AM garymarkhov

Quote: Originally Posted by HallsofIvy: "Yes. Both maximum and minimum must either lie at a point where the gradient is 0 or on the boundary of the set. [...]"

Why do you say "both maximum and minimum must be on the boundary" and then go on to say "In this case, the maximum does exist but there is no minimum"? In any case, I find that if $x=1$, the maximizing $y$ would be $\frac{3}{4}$. If $y=1$, the maximizing $x$ would be $0$. Since those combinations aren't within the boundary, $(1,1)$ is the winner. Correct?

• October 5th 2009, 01:15 PM CaptainBlack

Quote: Originally Posted by garymarkhov: "Why do you say [...] $(1,1)$ is the winner. Correct?"

Put x=1, then maximise the objective under that constraint; then instead try y=1 and maximise under that constraint. If one or more of these maxima are feasible, then the larger of the two maxima is the global maximum that you seek. If neither is feasible, then either (1,1) gives the maximum or there is no maximum. In this case the brute force approach of plotting the function over a reasonably sized part of the feasible quarter space shows that (1,1) is probably the point giving the maximum. CB
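CaptainBlack's closing suggestion, evaluating the function over a reasonably sized part of the feasible quarter-plane, can be sketched in a few lines (a hypothetical check, not part of the original thread; since $\partial f/\partial y = 4-x^2-4y \le -1$ on the feasible set, a bounded window is enough):

```python
import numpy as np

# f(x, y) = (4 - x^2) y - 2 y^2 - 1, to be maximized on x >= 1, y >= 1.
def f(x, y):
    return (4 - x**2) * y - 2 * y**2 - 1

# f decreases as x or y grows on the feasible set, so a bounded window
# of the quarter-plane suffices for a brute-force check.
x = np.linspace(1, 10, 901)          # step 0.01, includes x = 1 exactly
y = np.linspace(1, 10, 901)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)

i, j = np.unravel_index(np.argmax(Z), Z.shape)
print(X[i, j], Y[i, j], Z[i, j])     # grid maximum at the corner (1, 1), f = 0
```

Note that $f(1,1) = 3 - 2 - 1 = 0$, so the constrained maximum is 0 at (1,1), while the infeasible interior critical point (0,1) gives the larger value 1.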
https://www.physicsforums.com/threads/lie-groups.151126/
# Lie Groups

1. Jan 12, 2007 ### cristo Staff Emeritus

$\mathbb{R}^3$ has an associative multiplication $\mu:\mathbb{R}^3\times \mathbb{R}^3 \rightarrow \mathbb{R}^3$ given by $$\mu((x,y,z),(x',y',z'))=(x+x', y+y', z+z'+xy'-yx')$$ Determine an identity and inverse so that this forms a Lie group.

Well, clearly e=(0,0,0) and the inverse element is (-x,-y,-z).

Pick a basis for the Lie algebra of this Lie group and calculate their commutators, obtaining the structure constants of the Lie algebra.

This is the part I'm having trouble with. I'm not really sure where to start! Any help would be much appreciated!

2. Jan 12, 2007 ### matt grime

Well, the Lie algebra is R^3, as a vector space. What is the definition of the commutator?

3. Jan 12, 2007 ### cristo Staff Emeritus

The commutator of two vector fields u,v in R^3 is [u,v]=u(v)-v(u). If the Lie algebra is R^3 as a vector space, then can we not pick the basis $$\frac{\partial}{\partial x^i},$$ i=1,2,3? I guess not, since then the commutators will all be zero.

4. Jan 12, 2007 ### matt grime

Why are the commutators zero?

5. Jan 12, 2007 ### cristo Staff Emeritus

Well, $$\left[\frac{\partial}{\partial x},\frac{\partial}{\partial y}\right]=\frac{\partial}{\partial x}\frac{\partial}{\partial y}-\frac{\partial}{\partial y}\frac{\partial}{\partial x}=0$$ Is that not right?

6. Jan 13, 2007 ### matt grime

No. If that were right, then surely the commutator of any two elements in any Lie algebra would be zero. I said the algebra was R^3 as a vector space. I didn't say it was isomorphic to R^3 with the trivial relations as an algebra.

7. Jan 13, 2007 ### cristo Staff Emeritus

Ok, so the algebra is R^3 as a vector space, with basis $$\frac{\partial}{\partial x^i},$$ i=1,2,3. I'm really not sure how to calculate these commutators, though. I don't see how the commutator above isn't zero. Sorry, I'm probably missing something basic here. Thanks for your patience!

8. Jan 13, 2007 ### matt grime

I didn't say it was or wasn't zero.
I just think you should look in your notes to see how to work out the bracket. Every (real) Lie algebra is isomorphic to R^n for some n as a vector space. So, picking a basis, we can always choose d/dx^i. But this can't just mean that the commutator is zero. The commutator depends on the Lie structure. Find an example in your notes where they work out the commutators.

9. Jan 13, 2007 ### cristo Staff Emeritus

Ok, well if we had two general vector fields, say $$v=v^i\frac{\partial}{\partial x^i}, \quad w=w^j\frac{\partial}{\partial x^j},$$ then the commutator would be $$[v,w]=v^i\frac{\partial}{\partial x^i}\left(w^j\frac{\partial}{\partial x^j}\right)-w^j\frac{\partial}{\partial x^j}\left(v^i\frac{\partial}{\partial x^i}\right)=\left(v^i\frac{\partial w^j}{\partial x^i}-w^i\frac{\partial v^j}{\partial x^i}\right) \frac{\partial}{\partial x^j},$$ since I was taught that d/dx^i differentiating d/dx^j is zero. Does this have something to do with the mapping $\mu$? There's only one example in my notes, and it's different from the one here. He calculates an effective action first, then calculates "velocity vector fields of one-parameter families of transformations." From there he manages to find a basis for the Lie algebra.

10. Jan 13, 2007 ### matt grime

Try to adapt the proof to this case.

11. Jan 13, 2007 ### HallsofIvy Staff Emeritus

You were taught what? $$\frac{\partial}{\partial x^i}\left(\frac{\partial}{\partial x^j}\right)= \frac{\partial^2}{\partial x^i \partial x^j}$$ Of course, that's not a first derivative, it's a second derivative, which is why we use the commutator as the product for a Lie algebra. If $\alpha= w(x,y)\frac{\partial}{\partial x}+ v(x,y)\frac{\partial}{\partial y}$ and $\beta= p(x,y)\frac{\partial}{\partial x}+ q(x,y)\frac{\partial}{\partial y}$, then $\alpha\beta$ will involve second derivatives, not just first derivatives. $\beta\alpha$ will too, but since mixed second derivatives are equal, $[\alpha,\beta]= \alpha\beta- \beta\alpha$ will involve only first derivatives.

12. Jan 13, 2007 ### matt grime

The Lie algebra is the space of "left invariant vector fields". The left invariance is intimately related to the product $\mu$. http://planetmath.org/encyclopedia/LieGroup.html [Broken]

13. Jan 13, 2007 ### cristo Staff Emeritus

Sorry, what I said wasn't true. I remember my lecturer saying that [d/dx, d/dy]=0; not that d/dx operating on d/dy was zero. He didn't really explain why this is true, but of course one can still think of the d/dx as partial derivative operators (as well as basis vectors), and these obviously commute. Thanks for pointing that out!

Well, here's the problem in the lecture notes. G=R x (R\{0}); $\mu$((x,y),(x',y'))=(x+yx',yy'). He finds the following effective action: $\lambda$((x,y),p)=x+yp. However, I can't seem to find an action for my mapping $\mu$ above; it's not obvious like in his example!
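For what it's worth, the exercise can be finished mechanically. Pushing the identity basis forward with the left translation $L_g(h)=\mu(g,h)$ gives the left-invariant fields $X_1 = \partial_x - y\,\partial_z$, $X_2 = \partial_y + x\,\partial_z$, $X_3 = \partial_z$ (my own derivation, not stated in the thread), and sympy can grind out the commutators:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# Candidate left-invariant vector fields for mu((x,y,z),(x',y',z')) =
# (x+x', y+y', z+z'+xy'-yx'), obtained by differentiating L_g at the
# identity (assumed derivation):
def X1(F): return sp.diff(F, x) - y * sp.diff(F, z)
def X2(F): return sp.diff(F, y) + x * sp.diff(F, z)
def X3(F): return sp.diff(F, z)

def bracket(A, B, F):
    """Commutator [A, B] applied to the test function F."""
    return sp.simplify(A(B(F)) - B(A(F)))

print(bracket(X1, X2, f))   # equals 2*df/dz, i.e. [X1, X2] = 2 X3
print(bracket(X1, X3, f))   # 0
print(bracket(X2, X3, f))   # 0
```

So the only nonzero structure constant (up to antisymmetry) is $c^{3}_{12} = 2$: a Heisenberg-type Lie algebra.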
http://observations.rene-grothmann.de/a-geometry-problem-with-three-circles-and-a-point/
## A Geometry Problem with Three Circles and a Point

A friend gave me a geometric problem which turns out to boil down to the figure above. We have three arbitrary circles C1, C2, C3. Now we construct the three green lines through the two intersection points of each pair of these circles. We get lines g12, g13, g23. These lines intersect in one point. Why?

As you know, the argument for the perpendicular bisectors of the sides of a triangle goes like this: each perpendicular bisector is the set of points which have the same distance to two of the corners. So if we intersect two of them in P, then d(P,A)=d(P,B) and d(P,B)=d(P,C), which implies d(P,A)=d(P,C). As usual, d(P,A) denotes the distance from P to A. Thus P is also on the third perpendicular bisector. Note that we need that P is on the perpendicular bisector of AB if and only if d(P,A)=d(P,B). A similar argument is possible for the angle bisectors: these rays are the set of points with equal distance to two sides. For the altitudes, such an argument is not available; the standard proof goes by constructing a bigger triangle where the altitudes are perpendicular bisectors. By the way, this proof stops working in non-Euclidean hyperbolic geometry, where the fact still holds.

Can we make up a proof similar to these for our problem? It turns out that this is indeed possible. The correct value to consider is $$f(P,C) = d(P,M_C)^2-r_C^2,$$ where $r_C$ is the radius of the circle C, and $M_C$ is its center. To complete the proof, we only need to show that the line through the intersection of two circles C1 and C2 is the set of all points P such that f(P,C1)=f(P,C2). Then the proof is as easy as the proofs above. There are several ways to see this. We could use a result that I call the chord-secant-tangent theorem, which deals with products of distances of a point on a secant or chord to the circle. But it is possible to get a proof using the Pythagorean theorem only.
In the image above we have $$d(P,Q)^2+d(M,Q)^2 = d(P,M)^2, \quad d(S,Q)^2 + d(M,Q)^2 = r^2.$$ Thus $$f(P,C) = d(P,M)^2 - r^2 = d(P,Q)^2-d(S,Q)^2,$$ where C is the circle. Now, if we have two intersecting circles, the right-hand side of the equation is the same for both circles, and thus so is the left-hand side. We have seen that f(P,C1)=f(P,C2) for all points on the green line. But we have to prove the converse too. For this, we observe that f(P,C1)=f(P,C2) implies that P is on a circle around M(C1) and on another circle around M(C2). These two circles meet only in points on the green line.

There is also another way to see that f(P,C1)=f(P,C2) defines the green line. If you work out this equation analytically, you see that it is equivalent to the equation of a line. I leave that to you to check.

Note that there is a second situation where the result holds too. In this case, we need f(P,C) for P inside C. It will be negative, but f(P,C1)=f(P,C2) still holds for all points on the line through the intersection, and even if P is on the circles. There is the following special situation. It can be seen as a limit case of the previous situation. But it can also be proved by observing that all the tangents have the same length between the intersection point and the tangent point.

Here is another situation. The green lines are the sets of points such that f(P,C1)=f(P,C2) for two of the circles. It is quite interesting to construct these lines. I leave that to the reader.

25 January 2018 by mga010
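The analytic remark above, that f(P,C1)=f(P,C2) is the equation of a line, also gives a quick computational check: the common point of the three green lines (the radical center) solves a small linear system. A numpy sketch with made-up circle data:

```python
import numpy as np

# Power of the point P with respect to the circle with center M and radius r:
# f(P, C) = d(P, M)^2 - r^2
def power(P, M, r):
    return float(np.dot(P - M, P - M) - r**2)

# Example circles (made up for illustration): centers M and radii r.
M = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
r = np.array([2.0, 2.5, 1.5])

# f(P,C1) = f(P,C2) expands to the linear equation
#   2 (M2 - M1) . P = (|M2|^2 - r2^2) - (|M1|^2 - r1^2);
# two such equations pin down the common point.
A = 2.0 * (M[1:] - M[0])
b = np.array([
    (M[1] @ M[1] - r[1]**2) - (M[0] @ M[0] - r[0]**2),
    (M[2] @ M[2] - r[2]**2) - (M[0] @ M[0] - r[0]**2),
])
P = np.linalg.solve(A, b)

powers = [power(P, M[i], r[i]) for i in range(3)]
print(P, powers)   # all three powers agree at the common point
```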
http://crypto.stackexchange.com/questions/8792/help-with-example-rsa-problem
# Help with example RSA problem

While learning about RSA, I found this example problem. The answer is supposed to be "a 4-digit number that is a pattern of digits." I have computed it to be 16657 twice.

OK, now to see if you understand the RSA decryption algorithm, suppose you are person A, and you have chosen as your two primes $p = 97$ and $q = 173$, and you have chosen $e = 5$. Thus you told B that $N = 16781$ (which is just $pq$) and you told him that $e = 5$. He encodes a message (a number) for you and tells you that the encoding is 5347. Can you figure out the original message? Hint--well, not really a hint, but a check of your final answer: it is a four-digit number that is a pattern of digits.

I have failed to decode this to a 4-digit number. What am I doing wrong?

p = 97
q = 173
N = 16781
phi = 16512 = (p-1)(q-1)
e = 5
d = ?
C = 5347
M = ?

I computed d = 6605 since that seems to be the smallest value of d possible for:

ed ≡ 1 (mod (p-1)(q-1))
5d ≡ 1 (mod 96 * 172)
5d ≡ 1 (mod 16512)
# I need a multiple of 16512 that, with 1 added, yields a number that
# ends in a 5 or a 0, so it divides evenly by e = 5
# 1 + 2 * 16512 yields a number that ends in a 5
d = 6605

Now I need to compute M = C^d (mod N):

M = C^d (mod 16781)
5347^6605 (mod 16781)
6605 = 4096 + 2048 + 256 + 128 + 64 + 8 + 4 + 1
5347^1    (mod 16781) = 5347
5347^2    (mod 16781) = 12366 (not used in the final product)
5347^4    (mod 16781) = 9484
5347^8    (mod 16781) = 96
5347^64   (mod 16781) = 389
5347^128  (mod 16781) = 292
5347^256  (mod 16781) = 1359
5347^1024 (mod 16781) = 3105 (not used in the final product)
5347^2048 (mod 16781) = 8731
5347^4096 (mod 16781) = 11059
5347^6605 (mod 16781) = 16657

I've also computed the same result in Excel by reproducing the same table above one power at a time for 6605 rows.

Update: It turns out to have been a mistake. The original author corrected the problem, but not before the error was copied across the internet.

I suspect the problem is with d.
–  Harvey Jun 21 '13 at 3:50

Your $d$ is OK, even if you did not derive it using a standard method like the extended Euclidean algorithm. I concur with poncho's answer. I tried with 5347 as an octal number; that does not work either, and is not the intent here. The problem statement is just wrong. –  fgrieu Jun 21 '13 at 3:59

@fgrieu, poncho: [Sorry to put this here] Would either of you be available for some short-term consulting? I need a simplistic crypto design reviewed. I tried e-mailing François at two googled e-mail addresses, but I'm not sure if I got them right. –  Harvey Jun 21 '13 at 4:41

@poncho: same question. I can't figure out if Stack Exchange supports private messages. –  Harvey Jun 21 '13 at 4:42

Ah, no private messages on Stack Exchange. –  Harvey Jun 21 '13 at 4:50

$16657^5 \bmod 16781$
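For reference, the whole computation is a few lines of Python: the built-in three-argument pow does the square-and-multiply internally, and pow(e, -1, m) (Python 3.8+) gives the modular inverse, so no hand table is needed.

```python
# RSA parameters from the question.
p, q, e = 97, 173, 5
N = p * q                   # 16781
phi = (p - 1) * (q - 1)     # 16512

d = pow(e, -1, phi)         # modular inverse of e: the 6605 found by hand
C = 5347
M = pow(C, d, N)            # decryption by square-and-multiply

print(d, M)                 # 6605 16657, matching the hand computation
assert pow(M, e, N) == C    # re-encrypting the result recovers the ciphertext
```

The final assert mirrors the one-line answer above: raising the recovered message to the 5th power mod 16781 must return the ciphertext 5347.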
http://mathhelpforum.com/calculus/196539-related-rates.html
# Math Help - Related Rates

1. ## Related Rates

I have trouble with related-rates word problems; half of the time I don't know if the formula I'm using is the right one. For example, in one of the homework problems:

A right circular cone has a height h = 8 ft, and the base radius r is increasing. Find the rate of change of its surface area S with respect to r when r = 6 ft.

The textbook provides the formula for the surface area: $S = \pi r \sqrt{r^2 + h^2}$. I know that I have to find dS/dr, but they give me two values, which confuses me. [I'm still trying to learn how to write out problems and formulas using LaTeX, I apologize.]

2. ## Re: Related Rates

Find h in terms of r and substitute back into SA.

3. ## Re: Related Rates

Originally Posted by pickslides: "Find h in terms of r and substitute back into SA."
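Since h is fixed at 8 ft here, this particular problem is a straight derivative in r rather than a related-rates chain. A sympy check of the textbook's formula (my own worked example, not from the thread):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
h = 8                                  # fixed height, in feet

# Textbook surface-area formula for the cone:
S = sp.pi * r * sp.sqrt(r**2 + h**2)

dS_dr = sp.diff(S, r)
rate = sp.simplify(dS_dr.subs(r, 6))
print(rate)                            # 68*pi/5, i.e. dS/dr = 13.6*pi at r = 6
```

By hand: $dS/dr = \pi(2r^2+h^2)/\sqrt{r^2+h^2}$, and at $r=6$, $h=8$ the square root is 10, giving $\pi \cdot 136/10 = 68\pi/5$.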
http://math.stackexchange.com/questions/56007/how-to-read-permutation-symbols-like-123
# How to read permutation symbols like $(123)$?

I'd be grateful for some help reading permutation symbols such as $(123)$. Does it mean, when applied to a target sequence such as $(x y z w)$, "replace the element in the first slot of the target with the element in the second slot of the target, the element in the second slot of the target with the element in the third slot of the target, and the element in the third slot of the target with the element in the first slot of the target," resulting in $(y z x w)$? If so, applied twice, $(123)$ would produce $(z x y w)$, which would mean that $(123).(123)=(321)$? If I'm getting it right, then I should imagine little leftward arrows inside the permutation symbol, as in $(\leftarrow 1 \leftarrow 2 \leftarrow 3)$, with the first arrow implicitly wrapping around to the last position of the permutation symbol.

## 4 Answers

If $(123)$ is in "cycle notation", then this means that $1$ maps to $2$, $2$ maps to $3$, and $3$ (the last term in the cycle) maps to $1$. That is, those "little arrows" should point the other way. But with composition, you have to be careful! It depends on whether you are composing right-to-left (like functions), or left-to-right. For instance, if you write $(12)(13)$, then composing left-to-right (apply $(12)$ first, then $(13)$) you get $(123)$, but if you compose right-to-left, you get $(132)$. Which composition convention is being used depends on the author. But cycles are never, in my experience, read "right to left" themselves; that is, $(123)$ never represents $3\mapsto 2\mapsto 1\mapsto 3$.

If $(123)$ is in "one-line notation", then this would mean that the permutation is applied to a 3-element set, with $1$ mapping to $1$, $2$ mapping to $2$, and $3$ mapping to $3$ (i.e., $(abc)$ means $1\mapsto a$, $2\mapsto b$, $3\mapsto c$). However, one-line notation is not common.

Thanks much. Very clear.
I induced the "right-to-left" convention from reading the top of page 5 of Gilmore's "Lie Groups, Physics, and Geometry," where he writes "The symbol $(123)$ means that the first root, $z_1$, is replaced by $z_2$, $z_2$ is replaced by $z_3$, and $z_3$ is replaced by $z_1$." This made me very queasy, and it's why I asked the question here. I guess the larger point is that if there is more than one way to define or interpret some notation, someone will use the other way! – Reb.Cabin Aug 6 '11 at 18:23 ok, so now I understand Qiaochu's comment better. I think what's going on is that Gilmore intends the usual left-to-right cycle notation, just applied to the indices of the roots. Then $z_1$ would be replaced by $z_2$ because $1\rightarrow 2$, etc. – Reb.Cabin Aug 6 '11 at 18:48 @Reb.Cabin: Yes. If your cycle is $(z_1z_2z_3\ldots z_n)$, then $z_1$ "goes to" (is replaced by) $z_2$; then $z_2$ is mapped to ("is replaced by") $z_3$, etc. – Arturo Magidin Aug 6 '11 at 21:21 The convention I'm familiar with is that $(123)$ means $1 \to 2 \to 3 \to 1$. Composition is just the usual composition of functions, and isn't described well by this example, so here's another example: $(12) \cdot (123) = (32)$. Applying permutations to sequences is tricky. Are you permuting the entries of the sequence, or the indices of the entries? One is a left action, and the other is a right action, of the symmetric group, and in the latter case there are inverses you need to add in to get a left action. - yes, I noticed the ambiguity of applying the permutation to the entries versus applying the permutation to the indices. This made me even more queasy than the left-right ambiguity, though I didn't mention it in my first question. It can be painful to have to decode the intention of an author who just slings out symbols without clearly resolving these ambiguities. Thanks for your clarity, here. – Reb.Cabin Aug 6 '11 at 18:27 Perhaps I can understand this best via matrices. 
If $(123)$ means "permute the entries," then a matrix rep could be $M(123)=[[0 0 1],[1 0 0],[0 1 0]]$. Multiplying on the left of the sequence-as-column-vector, $M(123)(z_1,z_2,z_3)^T=(z_3,z_1,z_2)^T$. The inverse or transpose, namely $[[0 1 0],[0 0 1],[1 0 0]]$, produces the permutation of indices: $M(123)^T(z_1,z_2,z_3)^T=(z_2,z_3,z_1)^T$. Perhaps, in general, the "permute-the-entries" matrix rep of any perm symbol $(\xi\eta\zeta)$ could be the matrix in which col $\xi$ has 1 in row $\eta$, col $\eta$ has 1 in col $\zeta$, etc.? – Reb.Cabin Aug 6 '11 at 23:04 oops, last line should be col $\eta$ has 1 in ROW $\zeta$, and completing the thought, col $\zeta$ has 1 in ROW $\xi$, and zeros everywhere else. – Reb.Cabin Aug 6 '11 at 23:12 Assuming $(1\ 2\ 3)$ refers to cycle notation then no, that's not the standard interpretation. It means $1 \to 2 \to 3 \to 1$. Initially you shouldn't be thinking about the action of a permutation on some other set of symbols - think, more simply, about its action on the very symbols it comprises (the numbers 1, 2, 3 in this case). A more general permutation could look like $(1\ 4\ 5)(2\ 6)$, which means 1 goes to 4, which goes to 5, which goes back to 1, while 2 and 6 swap places and (implied) 3 stays put. - Generally $(1 \ 2 \ 3)$ means the elements map as $1 \to 2$, $2 \to 3$, and $3 \to 1$. That is, the element $1$ is sent to $2$, and $2$ is sent to $3$, and so on. Suppose you operate $(1 \ 2 \ 3)$ with $(1 \ 3 \ 2)$; then we do it this way: since $1$ is sent to $2$ in $(1 \ 2 \ 3)$, we see where $2$ is sent in $(1 \ 3 \ 2)$. Now $2$ is sent to $1$ in $(1 \ 3 \ 2)$; therefore $1$ is sent to $1$ in their product. -
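The conventions in these answers are easy to experiment with in code. Below is a small Python sketch (the function names are my own, for illustration) that applies a cycle to a point and composes two cycles left-to-right, reproducing the $(12)(13)=(123)$ example from the accepted answer:

```python
def apply_cycle(cycle, x):
    """Apply a cycle, given as a tuple like (1, 2, 3), to a single point x.

    Standard reading: each entry maps to the next one, and the last
    entry wraps around to the first. Points not in the cycle are fixed.
    """
    if x in cycle:
        i = cycle.index(x)
        return cycle[(i + 1) % len(cycle)]
    return x

def compose_left_to_right(c1, c2):
    """Compose two cycles, applying c1 first and then c2.

    Returns the composite as a dict mapping each point to its image.
    """
    points = sorted(set(c1) | set(c2))
    return {x: apply_cycle(c2, apply_cycle(c1, x)) for x in points}

# (12)(13) read left-to-right: 1 -> 2, 2 -> 3, 3 -> 1, i.e. the cycle (123),
# exactly as described in the accepted answer above.
print(compose_left_to_right((1, 2), (1, 3)))  # {1: 2, 2: 3, 3: 1}
```

Swapping the argument order gives the right-to-left result $(132)$, which is one way to see that the two conventions really do disagree.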
https://math.stackexchange.com/questions/395142/cubing-a-simple-thing/395155
# Cubing a simple thing I am trying to expand $\quad (x + 2)^3$ I am actually not too sure what to do from here, the rules are confusing. To square something is simple, you just FOIL it. It is easy to memorize and execute. Here though I am not sure if I need to do it like multiplication where I take one $(x + 2)$ term and multiply by another or if I need to multiply all $(x + 2)$ terms by it. I want to treat it like how I would square it so I just square it and then I am left with the result and the $(x + 2)$ term. This is wrong and I do not know why. I get this $$(x^2 + 4x + 4)(x + 2)$$ This is wrong and I am not sure why. So now I try the other way, multiplying everything by everything. This leaves me with $$(x^2 + 4x + 4)(x^2 + 4x + 4)$$ Which is again wrong. I have exhausted all my options and nothing results in a correct answer and I am not sure why. The following is correct, though not fully expanded: $$(x^2 + 4x + 4)(x + 2) = (x+2)^2(x+2) = (x+2)^3$$ Consider writing this as $(x + 2)(x^2 + 4x + 4)$, and then distribute (multiply) each term in the first factor, with each term of the second factor. \begin{align} (\color{blue}{\bf x + 2})(x^2 + 4x + 4) & = \color{blue}{\bf x}(x^2 + 4x + 4) + \color{blue}{\bf 2}(x^2 + 4x + 4) \\ \\ & = (x^3 + {\bf 4x^2} + \color{blue}{\bf 4x}) + ({\bf 2x^2 }+ \color{blue}{\bf 8x} + 8) \\ \\ & = x^3 + {\bf 6x^2} + \color{blue}{\bf 12x} + 8\\ \\ \end{align} • I still get it wrong, I end up with $2x^3 + 8x^2 + 8x$ – user138246 May 18 '13 at 2:35 • Can you see you have $x^3 + (4 + 2)x^2 + (4+8)x + 8$? – amWhy May 18 '13 at 2:38 • Is this making more sense now? – amWhy May 18 '13 at 2:45 • I see what I did wrong now, I multiplied by 2 the part I already multiplied by x when I should have multiplied separately and added. – user138246 May 18 '13 at 2:48 • Just practice a lot: it will become automatic in no time! ;-) – amWhy May 18 '13 at 2:55 Why is $(x^{2} + 4x + 4)(x+2)$ wrong? 
$(x^{2} + 4x + 4)(x+2) = x^{3} + 4x^{2} + 4x + 2x^{2} + 8x + 8 = x^{3} + 6x^{2} + 12x + 8$ And it is the right answer. If you have problems remembering the power formulas for binomials, just use Pascal's triangle to calculate the coefficients: \begin{align*} (a + b)^{0}\to &1\\ (a + b)^{1}\to &1 &1\\ (a + b)^{2}\to &1 &2 &&1\\ (a + b)^{3}\to &1 &3 &&3 &&1\\ (a + b)^{4}\to &1 &4 &&6 &&4 &&1\\ \end{align*} (and so on) The pattern here is that every number is the sum of the two adjacent numbers from the previous line. These numbers are the coefficients of the $n$th power. Combining the coefficients with the pattern of the exponents gives the binomial theorem: $$(a + b)^{n} = \sum_{i=0}^{n} \binom{n}{i} a^{n-i} b^{i}$$ Therefore: $$(x + 2)^{3} = (1)(x^{3})(2^{0}) + (3)(x^{2})(2^{1}) + (3)(x^{1})(2^{2}) + (1)(x^{0})(2^{3}) = x^{3} + 6x^{2} + 12x + 8$$ By the way: $(x^{2}+4x+4)(x^{2}+4x+4) = (x + 2)^{2} (x + 2)^{2} = (x+2)^{4} \neq (x+2)^{3}$ • You know, you're the first person that explained the binomial theorem in a way I understand it... though I've only looked at textbooks... +1! – Stephen J May 18 '13 at 7:21 • I'm glad you liked it =) – Felipe Gavilan May 18 '13 at 14:41 Given: $(x+2)^3$ we can rewrite $(x+2)^3$ as: $(x+2)(x+2)(x+2)$ When expanding 3 terms we must first calculate $(x+2)(x+2)$ before we can multiply by the third $(x+2)$. = $(x+2)(x+2)\implies x^2+2x+2x+4\implies x^2+4x+4$ = $(x+2)(x^2+4x+4)\implies x(x^2+4x+4)$ + $2(x^2+4x+4)$ = $x^3+4x^2+4x+2x^2+8x+8$ = $x^3+6x^2+12x+8$ • nitrous2, why did you edit it back? Do you see that you forgot a plus when you say $(x+2)(x^2+4x+4)\implies x(x^2+4x+4)$ $2(x^2 + 4x + 4)$? $(x+2)(x^2+4x+4) = x(x^2+4x+4)\color{red}{+}2(x^2 + 4x + 4)$ – Stahl May 18 '13 at 2:48 • Fixed, thanks for noticing. – nitrous2 May 18 '13 at 3:00 You had a correct step, when you had $(x^2 + 4x+4)(x+2)$. Now just distribute. $$(x^2 + 4x + 4)x + (x^2 + 4x + 4)2 = x^3 + 4x^2 + 4x + 2x^2 + 8x + 8 = x^3 + 6x^2 + 12x + 8$$ $$(x^2 + 4x + 4)(x + 2)$$ was correct, because $(x + 2)^2 = (x^2 + 4x + 4)$. 
So it follows that $(x + 2)^3 = (x^2 + 4x + 4)(x + 2)$. Let's try just multiplying it out: To multiply $(x^2 + 4x + 4)$ by $(x+2)$, we'll multiply every term in $(x^2 + 4x + 4)$ by every term in $(x+2)$. That's easy to do, and easy to visualise, because $(x^2 + 4x + 4)\cdot (x + 2)$ is the same as $(x \cdot(x^2 + 4x + 4)) + (2 \cdot (x^2 + 4x + 4)).$ $$(x \cdot(x^2 + 4x + 4)) = x^3 + 4x^2 + 4x$$ $$(2 \cdot (x^2 + 4x + 4)) = 2x^2 + 8x + 8$$ $$x^3 + \color{red}{4x^2} + \color{blue}{4x} + \color{red}{2x^2} + \color{blue}{8x} + 8 = x^3 + 6x^2 + 12x + 8$$ There is a more intuitive way to do this that works like the FOIL method. Write $(x+2)^3$ as $(x+2)(x+2)(x+2)$. When we multiply these we look at all the ways we can pick one element from each of the $(x+2)$s, multiply each possibility and add them all up. What do these look like? Well, from each $(x+2)$ you can pick either an $x$ or a $2$. Since we pick one from each, we will get three things multiplied together. If we pick an $x$ from each we will get an $x\cdot x\cdot x$ which equals $x^3$. We can only do this one way so we get $x^3$. What if we pick an $x$ from only two of the $(x+2)$s? Then we have to pick a $2$ from the third. How many ways are there to do this? Well, we can pick an $x$ from the first two and a number from the third; we can pick an $x$ from the first and the third and a number from the second or we can pick an $x$ from the second and the third and a number from the first. This gives us $3(2x^2)=6x^2$. So far we have $x^3+6x^2$. What if we pick only one $x$ and a $2$ from the other two? What do we get and how many ways are there to do this? We get $2\cdot 2\cdot x=4x$ and there are three ways to do this because we can pick the $x$ from any of the three. So we get $3(4x)=12x$ which gives us $x^3+6x^2+12x$ all together. Are there any other possibilities? 
Yes, we can pick a number from each and there is only one way to do that so we get $2\cdot 2\cdot 2=8$ and finally we arrive at $(x+2)^3=x^3+6x^2+12x+8$. This approach is easier than it seems if you practice a few and it works for $(x+a)^n$ for any positive integer $n$. • I have never seen that, seems like it would get very complicated with powers over 3 though. – user138246 May 18 '13 at 2:59 • @Jordan It is the intuition behind the binomial theorem. If you practice a few you will like it. – John Douma May 18 '13 at 3:00
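The Pascal's-triangle answer above translates directly into code. This Python sketch (the function name is my own, for illustration) computes the coefficients of $(x+a)^n$ via the binomial theorem and reproduces the expansion of $(x+2)^3$:

```python
from math import comb  # comb(n, k) is the binomial coefficient C(n, k)

def expand_binomial(a, n):
    """Coefficients of (x + a)^n, highest power of x first.

    By the binomial theorem, the coefficient of x^(n-k) is C(n, k) * a^k.
    """
    return [comb(n, k) * a**k for k in range(n + 1)]

# (x + 2)^3 -> x^3 + 6x^2 + 12x + 8, matching the answers above
print(expand_binomial(2, 3))  # [1, 6, 12, 8]
```

With `a = 1` the function simply returns a row of Pascal's triangle, which is a handy way to check it.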
https://math.libretexts.org/Courses/Truckee_Meadows_Community_College/TMCC%3A_Precalculus_I_and_II/Under_Construction%2F%2Ftest2%2F%2F01%3A_Functions
# 1: Functions • 1.0: Prelude to Functions In this chapter, we will explore functions that are a kind of relationship between parameters and their properties. • 1.1: Functions and Function Notation A jetliner changes altitude as its distance from the starting point of a flight increases. The weight of a growing child increases with time. In each case, one quantity depends on another. There is a relationship between the two quantities that we can describe, analyze, and use to make predictions. In this section, we will analyze such relationships. • 1.2: Domain and Range In creating various functions using the data, we can identify different independent and dependent variables, and we can analyze the data and the functions to determine the domain and range. In this section, we will investigate methods for determining the domain and range of functions. • 1.3: Rates of Change and Behavior of Graphs In this section, we will investigate changes in functions. For example, a rate of change relates a change in an output quantity to a change in an input quantity. The average rate of change is determined using only the beginning and ending data. Identifying points that mark the interval on a graph can be used to find the average rate of change. Comparing pairs of input and output values in a table can also be used to find the average rate of change. 
• 1.4: Composition of Functions Combining two relationships into one function, we have performed function composition, which is the focus of this section. Function composition is only one way to combine existing functions. Another way is to carry out the usual algebraic operations on functions, such as addition, subtraction, multiplication and division. We do this by performing the operations with the function outputs, defining the result as the output of our new function. • 1.5: Transformation of Functions Often when given a problem, we try to model the scenario using mathematics in the form of words, tables, graphs, and equations. One method we can employ is to adapt the basic graphs of the toolkit functions to build new models for a given scenario. There are systematic ways to alter functions to construct appropriate models for the problems we are trying to solve. • 1.6: Absolute Value Functions Distances in the universe can be measured in all directions. As such, it is useful to consider distance as an absolute value function. In this section, we will investigate absolute value functions. The absolute value function is commonly thought of as providing the distance the number is from zero on a number line. Algebraically, for whatever the input value is, the output is the value without regard to sign. • 1.7: Inverse Functions If some physical machines can run in two directions, we might ask whether some of the function “machines” we have been studying can also run backwards. In this section, we will consider the reverse nature of functions. • 1.E: Functions (Exercises)
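As a small illustration of the "average rate of change" idea described in section 1.3 above — determined using only the beginning and ending data — here is a minimal Python sketch (the function name is my own):

```python
def average_rate_of_change(f, a, b):
    """Average rate of change of f over [a, b], using only the
    beginning and ending data: (f(b) - f(a)) / (b - a)."""
    return (f(b) - f(a)) / (b - a)

# For f(x) = x^2 on the interval [1, 3]: (9 - 1) / (3 - 1) = 4
print(average_rate_of_change(lambda x: x**2, 1, 3))  # 4.0
```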
https://arxiv.org/abs/1703.02979
Title: Entanglement Entropy of Eigenstates of Quadratic Fermionic Hamiltonians Abstract: In a seminal paper [D. N. Page, Phys. Rev. Lett. 71, 1291 (1993)], Page proved that the average entanglement entropy of subsystems of random pure states is $S_{\rm ave}\simeq\ln{\cal D}_{\rm A} - (1/2) {\cal D}_{\rm A}^2/{\cal D}$ for $1\ll{\cal D}_{\rm A}\leq\sqrt{\cal D}$, where ${\cal D}_{\rm A}$ and ${\cal D}$ are the Hilbert space dimensions of the subsystem and the system, respectively. Hence, typical pure states are (nearly) maximally entangled. We develop tools to compute the average entanglement entropy $\langle S\rangle$ of all eigenstates of quadratic fermionic Hamiltonians. In particular, we derive exact bounds for the most general translationally invariant models $\ln{\cal D}_{\rm A} - (\ln{\cal D}_{\rm A})^2/\ln{\cal D} \leq \langle S \rangle \leq \ln{\cal D}_{\rm A} - [1/(2\ln2)] (\ln{\cal D}_{\rm A})^2/\ln{\cal D}$. Consequently we prove that: (i) if the subsystem size is a finite fraction of the system size then $\langle S\rangle<\ln{\cal D}_{\rm A}$ in the thermodynamic limit, i.e., the average over eigenstates of the Hamiltonian departs from the result for typical pure states, and (ii) in the limit in which the subsystem size is a vanishing fraction of the system size, the average entanglement entropy is maximal, i.e., typical eigenstates of such Hamiltonians exhibit eigenstate thermalization. Comments: 4+6 pages, 3+2 figures, as published Subjects: Statistical Mechanics (cond-mat.stat-mech); Quantum Gases (cond-mat.quant-gas); High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph) Journal reference: Phys. Rev. Lett. 
119, 020601 (2017) DOI: 10.1103/PhysRevLett.119.020601 Cite as: arXiv:1703.02979 [cond-mat.stat-mech] (or arXiv:1703.02979v3 [cond-mat.stat-mech] for this version) Submission history From: Lev Vidmar [view email] [v1] Wed, 8 Mar 2017 19:00:02 GMT (74kb,D) [v2] Fri, 12 May 2017 19:39:01 GMT (76kb,D) [v3] Wed, 12 Jul 2017 14:17:04 GMT (76kb,D)
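The scalings quoted in the abstract can be sanity-checked numerically. The sketch below is my own illustration, assuming a fermionic lattice of $V$ sites with ${\cal D}=2^V$ and a subsystem of $V_{\rm A}$ sites with ${\cal D}_{\rm A}=2^{V_{\rm A}}$ (the function names are mine):

```python
import math

def page_average(V_A, V):
    """Page's average entanglement entropy of random pure states,
    S_ave ~ ln(D_A) - D_A^2 / (2 D), valid for 1 << D_A <= sqrt(D)."""
    return V_A * math.log(2) - 0.5 * 2.0 ** (2 * V_A - V)

def quadratic_bounds(V_A, V):
    """The exact bounds quoted in the abstract for <S> averaged over
    eigenstates of translationally invariant quadratic Hamiltonians."""
    lnDA, lnD = V_A * math.log(2), V * math.log(2)
    lower = lnDA - lnDA**2 / lnD
    upper = lnDA - lnDA**2 / (2 * math.log(2) * lnD)
    return lower, upper

# Subsystem = half of a 100-site system: the random-state average is
# nearly maximal, while quadratic-Hamiltonian eigenstates sit a finite
# fraction below ln(D_A), as claim (i) of the abstract states.
V, V_A = 100, 50
print(page_average(V_A, V))      # ~34.16, with ln(D_A) ~ 34.66
print(quadratic_bounds(V_A, V))  # ~(17.33, 22.16)
```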
http://mathhelpforum.com/pre-calculus/56347-piecewise-defined-function.html
1. ## Piecewise-defined Function For the piecewise-defined function below, find: f(-2), f(0) and f(2) .........{x^2..........if x < 0 f(x) = {2..............if x = 0 .........{2x + 1.......if x > 0 2. Originally Posted by magentarita For the piecewise-defined function below, find: f(-2), f(0) and f(2) .........{x^2..........if x < 0 f(x) = {2..............if x = 0 .........{2x + 1.......if x > 0 Which of these piecewise functions is defined for negative values of x (i.e. x<0)? Substitute -2 into that function to get the value of f(-2). There is only one function that is defined at x=0. So what do you think f(0) would be? Which of these piecewise functions is defined for positive values of x (i.e. x>0)? Substitute 2 into that function to get the value of f(2). Does this make sense? --Chris 3. ## ok... Originally Posted by Chris L T521 Which of these piecewise functions is defined for negative values of x (i.e. x<0)? Substitute -2 into that function to get the value of f(-2). There is only one function that is defined at x=0. So what do you think f(0) would be? Which of these piecewise functions is defined for positive values of x (i.e. x>0)? Substitute 2 into that function to get the value of f(2). Does this make sense? --Chris It does make slight sense but can you do at least one of them for me? 4. Originally Posted by magentarita It does make slight sense but can you do at least one of them for me? Sure. $f(x)=\left\{\begin{array}{rl}x^2&x<0\\2&x=0\\2x+1&x>0\end{array}\right.$ I'll evaluate $f(-2)$ Since $-2<0$, you want to focus on the function that is defined for $x<0$. Looking at the piecewise function, we see something: $f(x)=\left\{\begin{array}{rl}{\color{red}x^2}&{\color{red}x<0}\\2&x=0\\2x+1&x>0\end{array}\right.$ We see that the function defined for $x<0$ is $x^2$. So we can now say that $f(-2)=(-2)^2=\color{red}\boxed{4}$ Does this clarify things? Can you try the others on your own? --Chris 5. ## ok..... Originally Posted by Chris L T521 Sure. 
Thank you so much.
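For readers who like to check such exercises programmatically, the thread's piecewise function maps one-to-one onto an if/elif/else chain in Python:

```python
def f(x):
    """The thread's piecewise-defined function:
    x^2 for x < 0, 2 at x = 0, and 2x + 1 for x > 0."""
    if x < 0:
        return x**2
    elif x == 0:
        return 2
    else:
        return 2 * x + 1

print(f(-2), f(0), f(2))  # 4 2 5
```

Each branch of the `if` corresponds to exactly one row of the brace, so $f(-2)=4$, $f(0)=2$, and $f(2)=5$.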
https://www.physicsforums.com/threads/differentials-and-rates-of-change-related-rates.718419/
# Differentials and Rates of Change; Related Rates 1. Oct 23, 2013 ### Qube 1. The problem statement, all variables and given/known data 2. Relevant equations Product rule; implicit differentiation. Volume of cylinder, V = pi(r^2)(h) 3. The attempt at a solution dV/dt = 0 = pi[2r(dr/dt)(h) + (dh/dt)(r^2)] Solve the equation after plugging in r = 5; h = 8, and dh/dt = -2/5. Solve for dr/dt. 0 = 80pi(dr/dt) - 10pi 1 = 8(dr/dt) dr/dt = 1/8 Last edited by a moderator: May 6, 2017 2. Oct 23, 2013 ### elvishatcher You are not trying to find $\frac{dr}{dt}$. Each part of the question wants you to find $\frac{dV}{dt}$ for a given rate at which the radius is changing (think about what represents the rate of change of the radius). 3. Oct 23, 2013 ### iRaid It says to find the rate of change of the radius in the first part of the question. I think you just have to have it like this: $\frac{dV}{dt}=2\pi r\frac{dr}{dt}\frac{dh}{dt}$, then just plug in values. Edit: Oh forgot to add, the V is constant, so what does that mean the value of dV/dt is? Last edited: Oct 23, 2013 4. Oct 23, 2013 ### haruspex It's multiple choice. V is given as constant, dh/dt is a given value, so the obvious approach is to calculate dr/dt and see which choice matches. Of course, you could run it the other way: for each choice compute dV/dt and see which one gives 0, but that probably takes longer on average. You mean V is constant, so what value is dV/dt, right? 5. Oct 23, 2013 ### iRaid Yes, my mistake. 6. Oct 27, 2013 ### Qube Is 1/8 the correct answer? I've redone the work again below: 7. Oct 27, 2013 ### Qube 8. Oct 27, 2013 ### haruspex Yes.
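The calculation in the first post can be packaged as a small Python check. Differentiating $V=\pi r^2 h$ with $V$ constant gives $0 = 2rh\,\frac{dr}{dt} + r^2\frac{dh}{dt}$, so $\frac{dr}{dt} = -\frac{r}{2h}\frac{dh}{dt}$ (the function name below is my own):

```python
def dr_dt(r, h, dh_dt):
    """Radius rate of change for a cylinder held at constant volume.

    From V = pi * r^2 * h constant:
        0 = 2 * r * h * (dr/dt) + r^2 * (dh/dt)
    so  dr/dt = -r * (dh/dt) / (2 * h).
    """
    return -r * dh_dt / (2 * h)

# Values from the thread: r = 5, h = 8, dh/dt = -2/5
print(dr_dt(5, 8, -2/5))  # 0.125, i.e. 1/8, agreeing with the answer in the thread
```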
http://physics.aps.org/articles/v6/3
# Viewpoint: Mixed Feelings About a Rare Event Physikalisches Institut, University of Bonn, Nußallee 12, 53115 Bonn, Germany Published January 7, 2013  |  Physics 6, 3 (2013)  |  DOI: 10.1103/Physics.6.3 #### First Evidence for the Decay ${B}_{s}^{0}\to \mu^{+}\mu^{-}$ R. Aaij et al. (LHCb Collaboration) Published January 7, 2013 | PDF (free) One of the most important missions of the Large Hadron Collider at CERN is to search for phenomena that cannot be explained by the standard model of particle physics. In this context, the latest result from the LHCb experiment, now reported in Physical Review Letters, is a bittersweet victory [1]. The LHCb collaboration has, for the first time, observed evidence for the very rare decay of a neutral meson into a pair of muons. Only about one in every $300$ million of the meson’s decays happens this way, and it is no small feat that LHCb has been able to detect the few that do. The rate at which the decay occurs also agrees with the value calculated using the standard model, a theoretical success considering the intricacies involved in the calculations. But many particle physicists were hopeful that the agreement between theory and experiment wouldn’t be quite so good, since a deviation would have been a sign that new physics was at play. So far, no such signs are there, but in the future, the precision of the measurement will improve considerably, potentially allowing smaller deviations from the standard model predictions to be detected. For all of its successes, the standard model of elementary particle physics has left us with a number of mysteries. In the model, there are six quarks. Three—the up, charm, and top quarks—have electric charge $2/3$; and three—the down, strange, and bottom quarks—have electric charge $-1/3$. 
To this day we have no idea why we have six quarks and not, say, just the two (the up and down quarks) we need to make protons and neutrons, the building blocks of ordinary matter. Another puzzle is the so-called flavor problem: we do not understand why the masses of the six quarks are what they are, or why they vary over several orders of magnitude. The gaps in our knowledge are a bit like having the periodic table without knowing that atoms consist of electrons and nuclei. But this analogy only goes so far: to the best of our knowledge, the quarks are elementary down to a scale of $10^{-20}$ meters. One approach to answering our questions is to study particle interactions that involve the outlier quarks, the top and the bottom, since their masses are far greater than the masses of the other quarks. (The bottom quark is, for example, roughly $1000$ times more massive than the down quark.) In the past $15$ years, particle physicists have thus investigated the properties of the bottom quark by measuring the decays of $B$-mesons, which are bound states between an antibottom quark and another quark. Prior to LHCb, these experiments were led by two $B$-meson “factories,” one at KEK in Japan, the other at SLAC in California, in addition to the Tevatron at Fermilab. When looking for new physics, the best place to start is where existing theory says an event is not likely to happen: any deviations will be large compared to what we expect. This is why LHCb and the experiments before it have focused on looking for the extremely rare decay of ${B}_{s}^{0}$ mesons—bound states of a strange quark and an antibottom quark—into a pair of muons (Fig. 1). ${B}_{s}^{0}$ mesons are electrically neutral and more than $90\%$ of them decay to a $D$-meson (a meson containing a charm quark) and other particles. But the standard model predicts that a tiny fraction of ${B}_{s}^{0}$ mesons—$3.23\pm 0.27\times 10^{-9}$—decay to a pair of muons [2]. Why is this decay so rare? 
For one, the ${B}_{s}^{0}$ has zero spin, but because of the standard model interactions that lead to the decay, the muons end up with a total spin of $1$. To conserve spin angular momentum, one muon must therefore flip its spin, a process suppressed by the size of the muon mass relative to the ${B}_{s}^{0}$ mass, squared—a factor of about $3\times 10^{-4}$. The second suppressing factor exists because in the ${B}_{s}^{0}$ decay, a bottom quark and a strange quark annihilate into muons. But a special feature of the standard model is that transitions involving quarks with the same charge—as is the case here—are highly suppressed. (Makoto Kobayashi and Toshihide Maskawa received the 2008 Nobel Prize for their contribution to our understanding of this important point.) Experimentally it is a tremendous challenge to observe such a rare decay. Although the $B$-factories had a very high luminosity, unlike the LHC, they largely ran below the energy threshold for the production of the ${B}_{s}^{0}$ meson. The CDF experiment at the Tevatron [3] set the best limits on the meson decay rate before the LHC but could also not reach the new particle accelerator’s sensitivity. Now, LHCb’s measurement shows that the fractional rate of the rare decay [1] is $3.2^{+1.5}_{-1.2}\times 10^{-9}$. It is one of the rarest decays ever observed and it agrees with the standard model within experimental errors, though these errors, at $40\%$, are still quite large. The series of suppression factors given above is peculiar to the structure of the standard model. Since the decay is so rare, even a modest contribution from new physics could easily outshine standard model physics. Leading up to the latest LHCb experiment, hopes were high that experimentalists might observe an anomaly in the decay rate, which could provide hints to a solution to the flavor problem. 
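The helicity-suppression factor quoted above is simple arithmetic to verify. The sketch below uses PDG mass values, which are my own inputs for this check:

```python
# Masses in MeV/c^2 (PDG values, my own inputs for this illustration)
m_mu = 105.66   # muon
m_Bs = 5366.9   # B_s^0 meson

# Spin-flip (helicity) suppression: (muon mass / B_s mass) squared
helicity_suppression = (m_mu / m_Bs) ** 2
print(f"{helicity_suppression:.2e}")  # prints about 3.9e-04, consistent
                                      # with the quoted factor of ~3e-4
```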
Now we know that any candidate theory for new physics must predict further contributions to this special decay to be even smaller than what the standard model predicts. As a result, a BBC News article suggested that the LHCb measurement dealt a "significant blow" to the most widely studied extension of the standard model, supersymmetry. In my view, this is not entirely true, but some background is needed to explain why. Supersymmetry conjectures that every known particle has a "superpartner" that differs by half a unit of spin; all other quantum numbers are the same. So, for example, the electron, a fermion, has a superpartner called the scalar electron, or selectron for short. The selectron is the same as the electron in every way except that it has spin zero, and is thus a boson. Furthermore, its mass must be larger. There are many versions of supersymmetry. It turns out that in the "minimal" version of the theory, that is, the one with the fewest couplings between particles, the $B$-meson decay is also suppressed, but because of a different combination of factors than the ones that appear in the standard model calculation. For one, although no spin flip is required, the supersymmetric interaction involved is suppressed by exactly the same factor of muon mass over $B_s^0$ mass, squared. And two, we know from experimental searches for the superpartner particles that, if they exist, they must be much heavier than their standard model counterparts, typically with masses above $1000~\mathrm{GeV}/c^2$ [4]. In the supersymmetric computation, these particles enter as (heavy) virtual particles, and thus suppress the meson decay. (In Fig. 1, the up, charm, and top quarks and the $W$ bosons are such virtual particles.)
The combination of these factors leads to a prediction of the decay rate of the $B$-mesons into a muon pair that is compatible with the range observed in the experiments [5, 6, 7, 8]. What the LHCb measurement has excluded are some other, very well motivated, nonminimal versions of supersymmetry. These versions of the theory do not feature the special cancellations and would have led to a much higher decay rate than LHCb observed. The LHC will soon shut down for a two-year upgrade. Afterwards it will ramp up from the $8$ tera-electron-volts (TeV) at which it currently runs to $14$ TeV, and the luminosity will also be increased. The additional luminosity will lead to increased production of $B_s^0$ mesons and thus to a reduction of the experimental error in LHCb's result by about a factor of $3$ by 2018 [9]. At the still higher luminosities that should be possible at the LHC, LHCb believes it can reduce the error to about $5\%$, which is below the current level of the error in theoretical calculations. At that point it will start putting the standard model to a more stringent test.

### References

1. R. Aaij *et al.* (LHCb Collaboration), "First Evidence for the Decay $B_s^0 \to \mu^+\mu^-$," Phys. Rev. Lett. 110, 021801 (2013).
2. A. J. Buras, J. Girrbach, D. Guadagnoli, and G. Isidori, "On the Standard Model Prediction for $\mathcal{B}(B_{s,d} \to \mu^+\mu^-)$," Eur. Phys. J. C 72, 2172 (2012).
3. A. Abulencia *et al.* (CDF Collaboration), "Search for $B_s^0 \to \mu^+\mu^-$ and $B_d^0 \to \mu^+\mu^-$ Decays in $p\bar{p}$ Collisions with CDF II," Phys. Rev. Lett. 95, 221805 (2005); Publisher's Note, Phys. Rev. Lett. 95, 249905 (2005); T. Aaltonen *et al.* (CDF Collaboration), "Search for $B_s^0 \to \mu^+\mu^-$ and $B^0 \to \mu^+\mu^-$ Decays with CDF II," Phys. Rev. Lett. 107, 191801 (2011).
4. S. Heinemeyer, "Still Waiting for Supersymmetry," Physics 4, 98 (2011).
5. K. S. Babu and C. Kolda, "Higgs-Mediated $B^0 \to \mu^+\mu^-$ in Minimal Supersymmetry," Phys. Rev. Lett. 84, 228 (2000).
6. C. Bobeth, T. Ewerth, F. Krüger, and J. Urban, "Analysis of Neutral Higgs-Boson Contributions to the Decays $\bar{B}_s \to \ell^+\ell^-$ and $\bar{B} \to K\ell^+\ell^-$," Phys. Rev. D 64, 074014 (2001).
7. P. Bechtle *et al.*, "Constrained Supersymmetry After Two Years of LHC Data: A Global View with Fittino," J. High Energy Phys. 06, 098 (2012).
8. O. Buchmueller *et al.*, "The CMSSM and NUHM1 in Light of 7 TeV LHC, $B_s \to \mu^+\mu^-$ and XENON100 Data," Eur. Phys. J. C 72 (2012).
9. R. Aaij *et al.* (LHCb Collaboration), "Implications of LHCb Measurements and Future Prospects," arXiv:1208.3355.

### About the Author: Herbert Dreiner

Herbert Dreiner obtained his Ph.D. at the University of Wisconsin, Madison, in 1989. This was followed by postdoctoral work at DESY Hamburg, Oxford University, and ETH Zurich. From 1995 to 2000 he was a senior scientific officer at the Rutherford Laboratories, UK. Since 2000 he has been a professor at Bonn University. His research has mainly focused on searches for physics beyond the standard model at colliders. Since 2002 Herbert has been organizing an extensive outreach effort at Bonn University, the Bonn Physikshow. He was awarded the HEP Outreach Prize of the European Physical Society in 2009.
http://eprints.pascal-network.org/archive/00006138/
Bit-Interleaved Coded Modulation Revisited: A Mismatched Decoding Perspective

Alfonso Martinez, A. Guillén i Fàbregas, G. Caire and F. M. J. Willems

IEEE Transactions on Information Theory, Volume 55, Number 6, pp. 2756-2765, 2009.

## Abstract

We revisit the information-theoretic analysis of bit-interleaved coded modulation (BICM) by modeling the BICM decoder as a mismatched decoder. The mismatched decoding model is well-defined for finite, yet arbitrary, block lengths, and naturally captures the channel memory among the bits belonging to the same symbol. We give two independent proofs of the achievability of the BICM capacity calculated by Caire *et al.*, where BICM was modeled as a set of independent parallel binary-input channels whose output is the bitwise log-likelihood ratio. Our first achievability proof uses typical sequences, and shows that due to the random coding construction, the interleaver is not required. The second proof is based on the random coding error exponents with mismatched decoding, where the largest achievable rate is the generalized mutual information. Moreover, the generalized mutual information of the mismatched decoder coincides with the infinite-interleaver BICM capacity. We show that the error exponent (and hence the cutoff rate) of the BICM mismatched decoder is upper bounded by that of coded modulation and may thus be lower than in the infinite-interleaver model; for binary reflected Gray mapping in Gaussian channels the loss in error exponent is small. We also consider the mutual information appearing in the analysis of iterative decoding of BICM with EXIT charts: if the symbol metric has knowledge of the transmitted symbol, the EXIT mutual information admits a representation as a pseudo-generalized mutual information, which is in general not achievable.
A different symbol decoding metric, for which the extrinsic side information refers to the hypothesized symbol, induces a generalized mutual information lower than the coded modulation capacity. In this case, perfect extrinsic side information turns the mismatched-decoder error exponent into that of coded modulation.
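The "independent parallel binary-input channels" picture in the abstract can be made concrete with a small Monte Carlo estimate of the mutual information of a binary-input AWGN channel, the per-bit quantity that enters BICM capacity for BPSK. This is an illustrative sketch with an assumed SNR value, not the paper's analysis:

```python
import numpy as np

# Monte Carlo estimate of I(X;Y) for equiprobable BPSK X in {-1,+1}
# over AWGN, Y = X + N with N ~ Normal(0, sigma^2).  Using
#   I(X;Y) = 1 - E[ log2(1 + exp(-2 X Y / sigma^2)) ]   (bits/use),
# which follows from p(y|-x)/p(y|x) = exp(-2 x y / sigma^2).
rng = np.random.default_rng(0)
n = 200_000
snr_db = 0.0                      # Es/N0 in dB (assumed example value)
sigma2 = 10 ** (-snr_db / 10)     # noise variance for unit-energy BPSK

x = rng.choice([-1.0, 1.0], size=n)
y = x + rng.normal(scale=np.sqrt(sigma2), size=n)

mi = 1.0 - np.mean(np.log2(1.0 + np.exp(-2.0 * x * y / sigma2)))
print(f"I(X;Y) = {mi:.3f} bits/channel use at {snr_db} dB")
```

At 0 dB this lands near 0.5 bits per channel use, visibly below the 1 bit an error-free binary channel would carry, which is the kind of gap the BICM analysis quantifies.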
https://byjus.com/question-answer/sound-waves-are-a-sequence-of-and-rarefactions-compressionreflectionsrefractionscollisions/
Question: Sound waves are a sequence of ___ and rarefactions.
(a) compressions (b) reflections (c) refractions (d) collisions

Solution: The correct option is (a) compressions. Sound waves are a sequence of compressions and rarefactions. A compression is a region in a longitudinal wave where the particles are closest together. A rarefaction is a region in a longitudinal wave where the particles are farthest apart.
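A toy numerical illustration of the answer (assumed parameters; not part of the original solution): displace equally spaced "air particles" with a sinusoidal longitudinal wave and look at where they bunch up (compressions) and where they spread out (rarefactions).

```python
import math

# Toy longitudinal wave: each particle at rest position x0 is displaced
# by s(x0) = A*sin(k*x0).  Local spacing ~ 1 + A*k*cos(k*x0), so the
# tightest spacing marks a compression, the widest a rarefaction.
N, A = 200, 0.4                     # number of particles, amplitude
k = 2 * math.pi / 50                # wavenumber (wavelength 50 units)
x = [i + A * math.sin(k * i) for i in range(N)]

gaps = [x[i + 1] - x[i] for i in range(N - 1)]
i_min = min(range(N - 1), key=gaps.__getitem__)   # compression
i_max = max(range(N - 1), key=gaps.__getitem__)   # rarefaction
print(f"compression near x = {x[i_min]:.0f}, rarefaction near x = {x[i_max]:.0f}")
```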
http://math.stackexchange.com/users/43823/alanh?tab=activity&sort=all&page=3
AlanH (reputation 1,249) — recent activity

- Jun 19: asked "Understanding equivalent definitions of left cosets"
- Jun 18: accepted "Open sets in $\Bbb{R}^2$ and $\Bbb{R}^3$"
- Jun 17: comment on "Open sets in $\Bbb{R}^2$ and $\Bbb{R}^3$": So the unit interval on the x-axis wouldn't be open in $\Bbb{R}^2$, right?
- Jun 17: comment on "Open sets in $\Bbb{R}^2$ and $\Bbb{R}^3$": @Berci Thanks for that
- Jun 17: comment on "Open sets in $\Bbb{R}^2$ and $\Bbb{R}^3$": Yes, but it is still a union of open balls. So in some sense, are open balls the basis of all open sets in $\Bbb{R}^2$?
- Jun 17: asked "Open sets in $\Bbb{R}^2$ and $\Bbb{R}^3$"
- Jun 17: comment on "Topologist's sine curve is connected": Doesn't disconnected mean that there are two disjoint open sets $A$ and $B$ such that the entire set $S$ is equal to $A\cup B$? But when you take your definitions of $A$ and $B$, the union of them is more than what is needed, no?
- Jun 17: accepted "Clarification on quotient groups"
- Jun 17: asked "Clarification on quotient groups"
- Jun 17: accepted "Congruence relation possible typo?"
- Jun 17: asked "Congruence relation possible typo?"
- Jun 10: accepted "Clarification needed on finding last two digits of $9^{9^9}$"
- Jun 10: asked "Clarification needed on finding last two digits of $9^{9^9}$"
- Jun 10: accepted "Showing a compact metric space has a countable dense subset"
- Jun 10: accepted "Can a group of order $55$ have exactly $20$ elements of order $11$? (Clarification)"
- Jun 9: asked "How to determine the parity of a permutation by its cycle decomposition"
- Jun 9: revised "Can a group of order $55$ have exactly $20$ elements of order $11$? (Clarification)" (edited body)
- Jun 9: asked "Can a group of order $55$ have exactly $20$ elements of order $11$? (Clarification)"
- Jun 9: comment on "Can a group of order $55$ have exactly $20$ elements of order $11$?": How do you get that there would be 34 elements remaining with order 5?
- Jun 9: comment on "Showing a compact metric space has a countable dense subset": I'm struggling with why $d(x,y)$ has to be less than $1/n$. If the definition of a dense set just requires that another point of $D$ lie in some open neighbourhood of $x \in X$, couldn't it be possible that $y$ is still in $B(x,\epsilon)$ but that the distance between the two points is not less than $1/n$?
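One of the questions in the activity log above, the last two digits of $9^{9^9}$, can be settled directly with modular exponentiation (a quick sketch, not part of the original thread):

```python
# Last two digits of 9^(9^9): compute 9^(9^9) mod 100 with Python's
# built-in three-argument pow, which does fast modular exponentiation.
exponent = 9 ** 9            # 387420489
last_two = pow(9, exponent, 100)
print(last_two)              # 89
```

This agrees with the hand computation: the powers of $9$ modulo $100$ repeat with period $10$, and $9^9 \equiv 9 \pmod{10}$, so the answer is $9^9 \bmod 100 = 89$.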
http://physics.stackexchange.com/questions/19581/does-electron-go-through-a-forbidden-state-when-annihilate-with-positron
# Does an electron go through a forbidden state when it annihilates with a positron?

Let's consider an electron-positron pair with total spin equal to zero. When it annihilates, it cannot emit only one photon, because that photon would have zero momentum and nonzero energy. The pair emits two photons with opposite momenta, but on the momentum-energy plane it looks like the particle goes through a forbidden state (red path on the picture below). The first question is: how is this possible? I suppose this is because of the energy-time uncertainty. The annihilation process is instant (at least it looks that way on a Feynman diagram) and the energy of the intermediate state is not determined. Is that correct? If we can go through any forbidden state, why doesn't the annihilation follow the blue path? This is the second question. And the third question: why do electron-hole pairs in semiconductors always emit photons with energy equal to the band gap? Is it just because the interaction with one photon has a higher probability, or is there a fundamental difference?

- @zephyr: Thanks! This answers my third question and maybe two others. Do I see it right that two-photon (red) annihilation is not the only possible one but just the most probable? Is four-photon (blue) annihilation possible but of negligible probability because of the larger number of particles participating in the interaction? – Maksim Zholudev Jan 16 '12 at 14:57
- No, an electron doesn't go through any forbidden state, surely not along a specific path of the kind you drew in momentum space. The annihilation event is instantaneous, the momentum and energy are always perfectly conserved, and all external particles are always exactly on-shell. – Luboš Motl Jan 16 '12 at 15:51
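The kinematic statement in the question, that single-photon annihilation is forbidden, can be made explicit with a small numerical check (an illustrative sketch in natural units with $c = 1$; not from the original thread):

```python
# Why e+ e- (pair at rest) -> single photon is kinematically forbidden.
# In the centre-of-mass frame the pair has total momentum 0 and total
# energy 2*m_e, but a photon always satisfies E = |p| (with c = 1).
m_e = 0.511            # electron mass in MeV (assumed PDG value)

E_total = 2 * m_e      # energy of the pair at rest, MeV
p_total = 0.0          # total momentum, MeV

# A single photon carrying away E_total would need |p| = E_total,
# which cannot equal the conserved total momentum of zero.
p_photon = E_total
print(p_photon, p_total)   # nonzero vs zero: momentum not conserved
```

Two back-to-back photons, each with energy $m_e$, conserve both quantities, which is why the two-photon channel dominates.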
https://rojefferson.blog/2017/10/09/general-relativity-or-gravitons/
## General relativity or gravitons?

Question: How is the existence of a graviton consistent with the GR paradigm of gravity as a purely geometrical effect?

Answer: Ontologically, it's not! Gravitons are predicated on a quantum field-theoretic formulation of gravity, while spacetime curvature is the corresponding classical description. By analogy, the electromagnetic force may be alternatively described in terms of the exchange of virtual bosons (in QFT), or in terms of electromagnetic waves (in classical electromagnetism); these are fundamentally different paradigms, but are epistemically consistent in the sense that the former (quantum electrodynamics) reduces to the latter (classical electrodynamics) in the appropriate limit. That said, it is possible to show that the classical electromagnetic and gravitational forces must correspond — in the language of Lorentz-invariant quantum particle (as opposed to field) theory — to the transmission of a massless virtual particle with helicity $\pm 1$ and $\pm 2$, respectively. In particular, as shown in a beautiful paper by Boulware and Deser in 1975 [1], "a quantum particle description of local (non-cosmological) gravitational phenomena necessarily leads to a classical limit which is just a metric theory of gravity. [If] only helicity $\pm 2$ gravitons are included, the theory is precisely Einstein's general relativity…" This implies that Einstein's theory enjoys a sort of quantum uniqueness (at least at tree level: it is entirely possible that the high-frequency behaviour of gravitons differs substantially from the (experimentally probed) low-energy regime of effective field theory). The remarkable aspect of this correspondence is that one sees the emergence of a metric theory from a non-geometrical, flat-space formulation. Perhaps this will shed light on the notion of "emergent spacetime" from other non-geometrical precepts (namely, entanglement)?
To begin, consider the description of the world entirely in terms of S-matrix elements (or rather, the generalizations thereof necessitated by zero-mass particles, i.e., soft theorems). Observation of a force then implies the existence of a mediating particle whose exchange produces it. Since the effective potential for the exchange of a massive particle is $V\sim e^{-mr}/r$, the experimental $1/r$ gravitational potential implies that the graviton must be massless, at least to within experimental accuracy. Establishing the spin is more subtle. It must be an integer, since the Pauli exclusion principle prevents any virtual particle obeying Fermi-Dirac statistics from conspiring in sufficient numbers to produce a classical force. (The keyword here is "virtual". Spin-$1/2$ electrons, for example, are not force carriers in this paradigm; that role belongs to the integer-spin bosons). We can also rule out spin $\pm 1$, since a vector exchange would result in repulsion between like charges—masses, in this case. (As an aside, the term "vector boson" for particles of spin $\pm 1$ arises from the fact that in quantum field theory, the component of a (massive) particle's spin along any axis can take one of three values: $0$, $\pm\hbar$. Thus the dimension of the space of spin states is the same as that of a vector in three-dimensional space, and in fact can be shown to form a representation of SU(2), the corresponding group of rotations). Ruling out a scalar particle — that is, spin 0 — can be done by considering the bending of light by a gravitational field. In particular, as shown in Boulware and Deser's paper [1], the scattering angle for a photon interacting with a massive object (such as the sun) via a scalar graviton depends on both the momentum of the photon and its polarization. But experiments reveal no such dependence. Finally, the possibility of spin greater than 2 was quashed by Weinberg's 1964 paper [2], though we shall not repeat the arguments here.
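The masslessness argument above can be illustrated numerically: a Yukawa potential $e^{-mr}/r$ only reproduces the observed $1/r$ falloff when the mediator mass vanishes. A minimal sketch in natural units ($\hbar = c = 1$) with an assumed toy mass:

```python
import math

# Toy comparison: Yukawa potential from a mediator of mass m versus
# the Coulomb-like 1/r of a massless mediator (natural units).
def yukawa(r: float, m: float) -> float:
    """Magnitude of the potential e^{-m r} / r."""
    return math.exp(-m * r) / r

r = 10.0
massless = yukawa(r, 0.0)    # exactly 1/r = 0.1
massive = yukawa(r, 1.0)     # suppressed by e^{-10}
print(massless, massive)
```

At ranges $r \gg 1/m$ the massive case is exponentially suppressed, so a long-range $1/r$ force requires $m \to 0$, which is the graviton-masslessness argument in miniature.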
The graviton must therefore be a spin-2 particle. Furthermore, one can show that a finite-range exchange is untenable on both experimental and theoretical grounds [3,4]. We therefore conclude that the gravitational force corresponds, in the special relativistic scattering paradigm, to the exchange of massless spin-2 virtual bosons with infinite range. The above is essentially the argument that classical (Einstein) gravity implies the existence of a massless spin-2 virtual particle. What about the other direction, namely that this graviton uniquely leads to Einstein gravity in the classical limit? Unfortunately this direction is substantially more technical, so we will only summarize the argument here. The basic idea is that the virtual exchange of helicity $\pm 2$ gravitons is governed by second-rank tensor vertices, which correspond to matrix elements of the stress-energy tensor of a local field theory. In particular, Boulware and Deser show [1] that the graviton can only couple (to other particles as well as itself) via a conserved stress-energy tensor; otherwise, the graviton is free (i.e., it couples to nothing). While Boulware and Deser's analysis is only at tree level, it suffices to show that in the low-frequency limit, Einstein gravity follows uniquely from special relativistic scattering theory combined with a few observational constraints. Philosophically, this implies that one can view Einstein's theory — and by extension, the intuitively-pleasing conception of gravity as the curvature of spacetime — as a phenomenological theory for macroscopic interactions. It is by no means necessary that this same geometrical interpretation continue to hold at small distances and times—such as at the Compton wavelength of a given interaction, where quantum effects are expected to be relevant.
And while we have yet to understand the UV nature of gravity, the above picture of low-energy, geometrical gravity as a purely phenomenological descriptor, however much it may baffle one's intuition, is tantalizingly in line with the emergent spacetime paradigm.

References:

[1] D. G. Boulware and S. Deser, "Classical General Relativity Derived from Quantum Gravity," Annals Phys. 89 (1975) 193.
[2] S. Weinberg, "Photons and Gravitons in S-Matrix Theory: Derivation of Charge Conservation and Equality of Gravitational and Inertial Mass," Phys. Rev. 135 (1964) B1049–B1056.
[3] H. van Dam and M. J. G. Veltman, "Massive and massless Yang-Mills and gravitational fields," Nucl. Phys. B22 (1970) 397–411.
[4] D. G. Boulware and S. Deser, "Can gravitation have a finite range?," Phys. Rev. D6 (1972) 3368–3382.
https://searxiv.org/search?author=Antonia%20Rowlinson
### Results for "Antonia Rowlinson"

287 results.

- **Calculating transient rates from surveys** (Nov 18 2016): We have developed a method to determine the transient surface density and transient rate for any given survey, using Monte-Carlo simulations. This method allows us to determine the transient rate as a function of both the flux and the duration of the …
- **Evidence for a TDE origin of the radio transient Cygnus A-2** (Apr 12 2019): In 2015, a radio transient named Cygnus A-2 was discovered in Cygnus A with the Very Large Array. Because of its radio brightness ($\nu F_{\nu} \approx 6 \times 10^{39}$ erg s$^{-1}$), this transient likely represents a secondary black hole in orbit around …
- **Studying the multi-wavelength signals from short GRBs** (Aug 07 2013): Since the first host galaxies and afterglows of short GRBs were identified, they have remained very difficult to study: their multiwavelength afterglows are notoriously faint and host galaxy identification often relies upon minimising a chance alignment …
- **Can magnetar spin-down power extended emission in some short GRBs?** (Feb 14 2013): Extended emission gamma-ray bursts are a subset of the "short" class of burst which exhibit an early-time rebrightening of gamma emission in their light curves. This extended emission arises just after the initial emission spike, and can persist for up …
- **On the Application of Link Analysis Algorithms for Ranking Bipartite Graphs** (Jul 18 2015): Recently bipartite graphs have been widely used to represent the relationship between two sets of items for information retrieval applications. The Web offers a wide range of data which can be represented by bipartite graphs, such as movies and reviewers in recommender …
- **AARTFAAC Flux Density Calibration and Northern Hemisphere Catalogue at 60 MHz** (Oct 15 2018): We present a method for calibrating the flux density scale for images generated by the Amsterdam ASTRON Radio Transient Facility And Analysis Centre (AARTFAAC). AARTFAAC produces a stream of all-sky images at a rate of one per second in order to survey the …
- **Identifying transient and variable sources in radio images** (Aug 23 2018, revised Mar 18 2019): With the arrival of a number of wide-field snapshot image-plane radio transient surveys, there will be a huge influx of images in the coming years, making it impossible to manually analyse the datasets. Automated pipelines to process the information stored …
- **The Lorentzian proper vertex amplitude: Classical analysis and quantum derivation** (Feb 16 2015, revised May 31 2016): Spin foam models, an approach to defining the dynamics of loop quantum gravity, make use of the Plebanski formulation of gravity, in which gravity is recovered from a topological field theory via certain constraints called simplicity constraints. However, …
- **Permutation properties of Dickson and Chebyshev polynomials and connections to number theory** (Jul 21 2017, revised Jan 31 2018): The $k$th Dickson polynomial of the first kind, $D_k(x) \in {\mathbb Z}[x]$, is determined by the formula $D_k(u+1/u) = u^k + 1/u^k$, where $k \ge 0$ and $u$ is an indeterminate. These polynomials are closely related to Chebyshev polynomials and have …
- **A Swan-like Theorem** (Jun 28 2004, revised Aug 17 2004): Richard G. Swan proved in 1962 that trinomials $x^{8k} + x^m + 1$ with $8k > m$ have an even number of irreducible factors, and so cannot be irreducible. In fact, he found the parity of the number of irreducible factors for any square-free trinomial in $\mathbb{F}_2[x]$. …
- **Linking covariant and canonical LQG II: Spin foam projector** (Jul 22 2013, revised Nov 26 2013): In a seminal paper, Kaminski, Kisielowski and Lewandowski for the first time extended the definition of spin foam models to arbitrary boundary graphs. This is a prerequisite in order to make contact with the canonical formulation of Loop Quantum Gravity …
- **Quantum Maupertuis Principle** (Feb 12 2011): According to the Maupertuis principle, the movement of a classical particle in an external potential $V(x)$ can be understood as the movement in a curved space with the metric $g_{\mu\nu}(x)=2M[V(x)-E]\delta_{\mu\nu}$. We show that the principle can be …
- **On the relation between reduced quantisation and quantum reduction for spherical symmetry in loop quantum gravity** (Dec 01 2015, revised Jul 28 2016): Building on a recent proposal for a quantum reduction to spherical symmetry from full loop quantum gravity, we investigate the relation between a quantisation of spherically symmetric general relativity and a reduction at the quantum level. To this end, …
- **Elementary background in elliptic curves** (Aug 22 1997): This paper gives additional background in algebraic geometry as an accompaniment to the article "Formal Groups, Elliptic Curves, and some Theorems of Couveignes" [arXiv:math.NT/9708215]. Section 1 discusses the addition law on elliptic curves, and …
- **Bounds on List Decoding Gabidulin Codes** (May 02 2012): An open question about Gabidulin codes is whether polynomial-time list decoding beyond half the minimum distance is possible or not. In this contribution, we give a lower and an upper bound on the list size, i.e., the number of codewords in a ball around …
- **Formal groups, elliptic curves, and some theorems of Couveignes** (Aug 22 1997): The formal group law of an elliptic curve has seen recent applications to computational algebraic geometry in the work of Couveignes to compute the order of an elliptic curve over finite fields of small characteristic. The purpose of this paper is to …
- **Random walk on comb-type subsets of $\mathbb{Z}^2$** (Oct 28 2018): We study the path behavior of the simple symmetric walk on some comb-type subsets of $\mathbb{Z}^2$ which are obtained from $\mathbb{Z}^2$ by removing all horizontal edges belonging to certain sets of values on the y-axis. We obtain some strong approximation results and discuss …
- **QSL Squasher: A Fast Quasi-Separatrix Layer Map Calculator** (Sep 02 2016): Quasi-Separatrix Layers (QSLs) are a useful proxy for the locations where current sheets can develop in the solar corona, and give valuable information about the connectivity in complicated magnetic field configurations. However, calculating QSL maps …
- **Stable coherent states** (Jun 29 2015): We analyze the stability under time evolution of complexifier coherent states (CCS) in one-dimensional mechanical systems. A system of coherent states is called stable if it evolves into another coherent state. It turns out that a system can only possess …
- **Bounds on List Decoding of Rank-Metric Codes** (Jan 20 2013, revised Jul 18 2013): So far, there is no polynomial-time list decoding algorithm (beyond half the minimum distance) for Gabidulin codes. These codes can be seen as the rank-metric equivalent of Reed–Solomon codes. In this paper, we provide bounds on the list size of rank-metric …
- **Gray-coding through nested sets** (Feb 09 2015, revised Feb 11 2015): We consider the following combinatorial question. Let $S_0 \subset S_1 \subset S_2 \subset \dots \subset S_m$ be nested sets, where $\#(S_i) = i$. A move consists of altering one of the sets $S_i$, $1 \le i \le m-1$, in a manner so that the nested condition …
- **Phase transitions of regular Schwarzschild-Anti-deSitter black holes** (Feb 04 2015): We study a solution of Einstein's equations generated by a self-gravitating, anisotropic, static, non-singular matter fluid. The resulting Schwarzschild-like solution is regular and accounts for smearing effects of noncommutative fluctuations of the …
- **On existence of Budaghyan-Carlet APN hexanomials** (Aug 11 2012, revised Aug 16 2012): Budaghyan and Carlet constructed a family of almost perfect nonlinear (APN) hexanomials over a field with $r^2$ elements, and with terms of degrees $r+1$, $s+1$, $rs+1$, $rs+r$, $rs+s$, and $r+s$, where $r = 2^m$ and $s = 2^n$ with $\gcd(m,n)=1$. The construction requires …
- **Green Base Station Placement for Microwave Backhaul Links** (Jul 18 2017): Wireless mobile backhaul networks have been proposed as a substitute in cases in which wired alternatives are not available due to economical or geographical reasons. In this work, we study the location problem of base stations in a given region where …
- **How Tall Can Be the Excursions of a Random Walk on a Spider** (Feb 23 2014): We consider a simple symmetric random walk on a spider, that is, a collection of half lines (we call them legs) joined at the origin. Our main question is the following: if the walker makes $n$ steps, how high can he go up on all legs? This problem is discussed …
- **Advertising Competitions in Social Networks** (Aug 09 2016): In the present work, we study the advertising competition of several marketing campaigns who need to determine how many resources to allocate to potential customers to advertise their products through direct marketing while taking into account that competing …
- **A new identity of Dickson polynomials** (Oct 19 2016): We find a new polynomial identity in characteristic 2: $$\prod_{w\in F_q^\times} (D_{q+1}(wX)-Y) = X^{q^2-1} + \Big(\sum_{i=1}^{n} Y^{2^{n}-2^i}\Big) X^{q-1} + Y^{q-1},$$ where $q = 2^n$ and $D_k$ is a Dickson polynomial, defined by $D_k(u+u^{-1})=u^k + u^{-k}$. …
- **A Reduced Basis Method for Parabolic Partial Differential Equations with Parameter Functions and Application to Option Pricing** (Aug 12 2014): We consider the Heston model as an example of a parameterized parabolic partial differential equation. A space-time variational formulation is derived that allows for parameters in the coefficients (for calibration) as well as choosing the initial condition …
- **New Wilson-like theorems arising from Dickson polynomials** (Jul 21 2017, revised Nov 16 2017): Wilson's Theorem states that the product of all nonzero elements of a finite field ${\mathbb F}_q$ is $-1$. In this article, we define some natural subsets $S \subset {\mathbb F}_q^\times$ and find formulas for the product of the elements of $S$, denoted …
- **The LOFAR Transients Pipeline** (Mar 05 2015): Current and future astronomical survey facilities provide a remarkably rich opportunity for transient astronomy, combining unprecedented fields of view with high sensitivity and the ability to access previously unexplored wavelength regimes. This is particularly …
- **Codes for Partially Stuck-at Memory Cells** (May 13 2015): In this work, we study a new model of defect memory cells, called partially stuck-at memory cells, which is motivated by the behavior of multi-level cells in non-volatile memories such as flash memories and phase change memories. If a cell can store the …
- **Inverting The Generator Of A Generative Adversarial Network** (Nov 17 2016): Generative adversarial networks (GANs) learn to synthesise new samples from a high-dimensional distribution by passing samples drawn from a latent space through a generative network. When the high-dimensional distribution describes images of a particular …
- **Fast Operations on Linearized Polynomials and their Applications in Coding Theory** (Dec 21 2015): This paper considers fast algorithms for operations on linearized polynomials. Among these results are fast algorithms for division of linearized polynomials, $q$-transform, multi-point evaluation, computing minimal subspace polynomials and interpolation. …
- **Modelling supernova line profile asymmetries to determine ejecta dust masses: SN 1987A from days 714 to 3604** (Sep 02 2015, revised Nov 27 2015): The late-time optical and near-IR line profiles of many core-collapse supernovae exhibit a red-blue asymmetry as a result of greater extinction by internal dust of radiation emitted from the receding parts of the supernova ejecta. We present here a new …
More Vector Network Coding Based on Subspace Codes Outperforms Scalar Linear Network CodingDec 20 2015May 15 2016This paper considers vector network coding solutions based on rank-metric codes and subspace codes. The main result of this paper is that vector solutions can significantly reduce the required field size compared to the optimal scalar linear solution ... More Task Specific Adversarial Cost FunctionSep 27 2016The cost function used to train a generative model should fit the purpose of the model. If the model is intended for tasks such as generating perceptually correct samples, it is beneficial to maximise the likelihood of a sample drawn from the model, Q, ... More Optimal Ferrers Diagram Rank-Metric CodesMay 08 2014May 25 2014Optimal rank-metric codes in Ferrers diagrams are considered. Such codes consist of matrices having zeros at certain fixed positions and can be used to construct good codes in the projective space. Four techniques and constructions of Ferrers diagram ... More Defensive Resource Allocation in Social NetworksJul 30 2015In this work, we are interested on the analysis of competing marketing campaigns between an incumbent who dominates the market and a challenger who wants to enter the market. We are interested in (a) the simultaneous decision of how many resources to ... More Information Spreading on Almost Torus NetworksDec 27 2013Epidemic modeling has been extensively used in the last years in the field of telecommunications and computer networks. We consider the popular Susceptible-Infected-Susceptible spreading model as the metric for information spreading. In this work, we ... More List Decoding of Locally Repairable CodesJan 12 2018May 08 2018We show that locally repairable codes (LRCs) can be list decoded efficiently beyond the Johnson radius for a large range of parameters by utilizing the local error correction capabilities. The new decoding radius is derived and the asymptotic behavior ... 
More List and Unique Error-Erasure Decoding of Interleaved Gabidulin Codes with Interpolation TechniquesApr 24 2014A new interpolation-based decoding principle for interleaved Gabidulin codes is presented. The approach consists of two steps: First, a multi-variate linearized polynomial is constructed which interpolates the coefficients of the received word and second, ... More Vector Network Coding Based on Subspace Codes Outperforms Scalar Linear Network CodingApr 12 2016May 13 2016This paper considers vector network coding based on rank-metric codes and subspace codes. Our main result is that vector network coding can significantly reduce the required field size compared to scalar linear network coding in the same multicast network. ... More Efficient Interpolation-Based Decoding of Interleaved Subspace and Gabidulin CodesAug 06 2014An interpolation-based decoding scheme for interleaved subspace codes is presented. The scheme can be used as a (not necessarily polynomial-time) list decoder as well as a probabilistic unique decoder. Both interpretations allow to decode interleaved ... More Adversarial Training For Sketch RetrievalJul 10 2016Aug 23 2016Generative Adversarial Networks (GAN) are able to learn excellent representations for unlabelled data which can be applied to image generation and scene classification. Representations learned by GANs have not yet been applied to retrieval. In this paper, ... More Some Gabidulin Codes cannot be List Decoded Efficiently at any RadiusJan 18 2015Oct 14 2016Gabidulin codes can be seen as the rank-metric equivalent of Reed-Solomon codes. It was recently proven, using subspace polynomials, that Gabidulin codes cannot be list decoded beyond the so-called Johnson radius. In another result, cyclic subspace codes ... 
More Sub-Quadratic Decoding of Gabidulin CodesJan 23 2016Apr 13 2016This paper shows how to decode errors and erasures with Gabidulin codes in sub-quadratic time in the code length, improving previous algorithms which had at least quadratic complexity. The complexity reduction is achieved by accelerating operations on ... More A classification of locally homogeneous affine connections on compact surfacesApr 21 2014We classify the affine connections on compact orientable surfaces for which the pseudogroup of local isometries acts transitively. We prove that such a connection is either torsion-free and flat, the Levi-Civita connection of a Riemannian metric of constant ... More QuasiresonanceMay 19 2005The concept of quasiresonance was introduced in connection with inelastic collisions between one atom and a vibro-rotationally excited diatomic molecule. In its original form, the collisions induce {\sl quasiresonant} transfer of energy between the internal ... More Strategic Resource Allocation for Competitive Influence in Social NetworksFeb 21 2014One of the main objectives of data mining is to help companies determine to which potential customers to market and how many resources to allocate to these potential customers. Most previous works on competitive influence in social networks focus on the ... More The AARTFAAC All Sky Monitor: System Design and ImplementationSep 14 2016The Amsterdam-ASTRON Radio Transients Facility And Analysis Center (AARTFAAC) all sky monitor is a sensitive, real time transient detector based on the Low Frequency Array (LOFAR). It generates images of the low frequency radio sky with spatial resolution ... More Efficient Decoding of Partial Unit Memory Codes of Arbitrary RateFeb 08 2012Partial Unit Memory (PUM) codes are a special class of convolutional codes, which are often constructed by means of block codes. Decoding of PUM codes may take advantage of existing decoders for the block code. The Dettmar--Sorger algorithm is an efficient ... 
More Wave refraction in left-handed materialsOct 24 2008We examine the response of a plane wave incident on a flat surface of a medium characterized by simultaneously negative electric and magnetic susceptibilities by solving Maxwell's equations explicitly and without making any assumptions on the way. In ... More On the universality of thermodynamics and $η/s$ ratio for the charged Lovelock black branesFeb 17 2016Feb 24 2016We investigate general features of charged Lovelock black branes by giving a detailed description of geometrical, thermodynamic and holographic properties of charged Gauss-Bonnet (GB) black branes in five dimensions. We show that when expressed in terms ... More Decoding Cyclic Codes up to a New Bound on the Minimum DistanceMay 10 2011Mar 12 2012A new lower bound on the minimum distance of q-ary cyclic codes is proposed. This bound improves upon the Bose-Chaudhuri-Hocquenghem (BCH) bound and, for some codes, upon the Hartmann-Tzeng (HT) bound. Several Boston bounds are special cases of our bound. ... More The spectral energy distribution of protoplanetary diss around Massive Young Stellar ObjectsOct 19 2012We investigate the effect of ionising radiation from Massive Young Stellar Objects impinging on their emerging spectral energy distribution. By means of detailed radiative transfer calculations including both the gaseous and dust phase of their surrounding ... More LatentPoison - Adversarial Attacks On The Latent SpaceNov 08 2017Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, ... More Improving Sampling from Generative Autoencoders with Markov ChainsOct 28 2016Jan 12 2017We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. 
Generative autoencoders are those which are trained to softly enforce a prior on the latent distribution ... More Random Spherical TrianglesSep 27 2010Dec 21 2015Let Delta be a random spherical triangle (meaning that vertices are independent and uniform on the unit sphere). A closed-form expression for the area density of Delta has been known since 1867; a complicated integral expression for the perimeter density ... More Locality in Crisscross Error CorrectionJun 19 2018Sep 04 2018The cover metric is suitable for describing the resilience against correlated errors in arrays, in particular crisscross errors, which makes it interesting for applications such as distributed data storage (DDS). In this work, we consider codes designed ... More Decoding High-Order Interleaved Rank-Metric CodesApr 18 2019This paper presents an algorithm for decoding homogeneous interleaved codes of high interleaving order in the rank metric. The new decoder is an adaption of the Hamming-metric decoder by Metzner and Kapturowski (1990) and guarantees to correct all rank ... More Geometric model of black hole quantum $N$-portrait, extradimensions and thermodynamicsApr 12 2016May 19 2016Recently a short scale modified black hole metric, known as holographic metric, has been proposed in order to capture the self-complete character of gravity. In this paper we show that such a metric can reproduce some geometric features expected from ... More Possible persistent current in a ring made of the perfect crystalline insulatorDec 06 2007A mesoscopic conducting ring pierced by magnetic flux is known to support the persistent electron current. Here we propose possibility of the persistent current in the ring made of the perfect crystalline insulator. We consider a ring-shaped lattice of ... 
More Un-Casimir effectNov 27 2013Mar 04 2014In this paper we present the un-Casimir effect, namely the study of the Casimir energy related to the presence of an un-particle component in addition to the electromagnetic field contribution. We derive this result by considering modifications of the ... More Spin foam propagator: A new perspective to include a cosmological constantNov 30 2017May 01 2018In recent years, the calculation of the first non-vanishing order of the metric 2-point function or graviton propagator in a semiclassical limit has evolved as a standard test for the credibility of a proposed spin foam model. The existing results of ... More On Error Decoding of Locally Repairable and Partial MDS CodesApr 11 2019We consider error decoding of locally repairable codes (LRC) and partial MDS (PMDS) codes through interleaved decoding. For a specific class of LRCs we investigate the success probability of interleaved decoding. For PMDS codes we show that there is a ... More On denoising autoencoders trained to minimise binary cross-entropyAug 28 2017Oct 09 2017Denoising autoencoders (DAEs) are powerful deep learning models used for feature extraction, data generation and network pre-training. DAEs consist of an encoder and decoder which may be trained simultaneously to minimise a loss (function) between an ... More On a Rank-Metric Code-Based Cryptosystem with Small Key SizeDec 12 2018A repair of the Faure-Loidreau (FL) public-key code-based cryptosystem is proposed. The FL cryptosystem is based on the hardness of list decoding Gabidulin codes which are special rank-metric codes. We prove that the recent structural attack on the system ... More About the distance between random walkers on some graphsApr 27 2016Jul 26 2016We consider two or more simple symmetric walks on some graphs, e.g. the real line, the plane or the two dimensional comb lattice, and investigate the properties of the distance among the walkers. 
Local Commutators and Deformations in Conformal Chiral Quantum Field TheoriesMay 15 2011We study the general form of M"obius covariant local commutation relations in conformal chiral quantum field theories and show that they are intrinsically determined up to structure constants, which are subject to an infinite system of constraints. The ... More Adiabaticity and diabaticity in strong-field ionizationApr 22 2013If the photon energy is much less than the electron binding energy, ionization of an atom by a strong optical field is often described in terms of electron tunneling through the potential barrier resulting from the superposition of the atomic potential ... More The Lorentzian proper vertex amplitude: AsymptoticsMay 25 2015May 31 2016In previous work, the Lorentzian proper vertex amplitude for a spin-foam model of quantum gravity was derived. In the present work, the asymptotics of this amplitude are studied in the semi-classical limit. The starting point of the analysis is an expression ... More Convolutional Codes in Rank Metric with Application to Random Network CodingApr 29 2014Jan 19 2015Random network coding recently attracts attention as a technique to disseminate information in a network. This paper considers a non-coherent multi-shot network, where the unknown and time-variant network is used several times. In order to create dependencies ... More Finite size effects on liquid-solid phase coexistence and the estimation of crystal nucleation barriersJan 07 2015A fluid in equilibrium in a finite volume $V$ with particle number $N$ at a density $\rho = N/V$ exceeding the onset density $\rho_f$ of freezing may exhibit phase coexistence between a crystalline nucleus and surrounding fluid. Using a method suitable ... More Matrix Elements of Lorentzian Hamiltonian Constraint in LQGJun 04 2013Jul 18 2013The Hamiltonian constraint is the key element of the canonical formulation of LQG coding its dynamics. 
In Ashtekar-Barbero variables it naturally splits into the so called Euclidean and Lorentzian parts. However, due to the high complexity of this operator, ... More Linking covariant and canonical LQG: new solutions to the Euclidean Scalar ConstraintSep 06 2011It is often emphasized that spin-foam models could realize a projection on the physical Hilbert space of canonical Loop Quantum Gravity (LQG). As a first test we analyze the one-vertex expansion of a simple Euclidean spin-foam. We find that for fixed ... More Bounds on Codes Correcting Tandem and Palindromic DuplicationsJun 30 2017Jan 16 2018In this work, we derive upper bounds on the cardinality of tandem duplication and palindromic deletion correcting codes by deriving the generalized sphere packing bound for these error types. We first prove that an upper bound for tandem deletions is ... More Escape of photons from two fixed extreme Reissner-Nordström black holesJan 09 2007Nov 20 2008We study the scattering of light (null geodesics) by two fixed extreme Reissner-Nordstr\"om black holes, in which the gravitational attraction of their masses is exactly balanced with the electrostatic repulsion of their charges, allowing a static spacetime. ... More Twisted Gabidulin Codes in the GPT CryptosystemJun 26 2018Aug 14 2018In this paper, we investigate twisted Gabidulin codes in the GPT code-based public-key cryptosystem. We show that Overbeck's attack is not feasible for a subfamily of twisted Gabidulin codes. The resulting key sizes are significantly lower than in the ... More Duplication-Correcting CodesDec 24 2017In this work, we propose constructions that correct duplications of multiple consecutive symbols. These errors are known as tandem duplications, where a sequence of symbols is repeated; respectively as palindromic duplications, where a sequence is repeated ... 
More Improving Sampling from Generative Autoencoders with Markov ChainsOct 28 2016Nov 07 2016We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. Generative autoencoders are those which are trained to softly enforce a prior on the latent distribution ... More Wave-packet propagation based calculation of above-threshold ionization in the x-ray regimeFeb 19 2015We investigate the multi-photon process of above-threshold ionization for the light elements hydrogen, carbon, nitrogen and oxygen in the hard x-ray regime. Numerical challenges are discussed and by comparing Hartree-Fock-Slater calculations to configuration-interaction-singles ... More Relaxation Phenomena in a System of Two Harmonic OscillatorsMay 21 2007Mar 07 2008We study the process by which quantum correlations are created when an interaction Hamiltonian is repeatedly applied to a system of two harmonic oscillators for some characteristic time interval. We show that, for the case where the oscillator frequencies ... More Dust masses for SN 1980K, SN1993J and Cassiopeia A from red-blue emission line asymmetriesNov 15 2016We present Monte Carlo line transfer models that investigate the effects of dust on the very late time emission line spectra of the core collapse supernovae SN 1980K and SN 1993J and the young core collapse supernova remnant Cassiopeia A. Their blue-shifted ... More Emergent 4-dimensional linearized gravity from spin foam modelDec 05 2018Spin Foam Models (SFMs) are covariant formulations of Loop Quantum Gravity (LQG) in 4 dimensions. This work studies the perturbations of SFMs on a flat background. It demonstrates for the first time that smooth curved spacetime geometries satisfying Einstein ... 
More Quantum correlations and energy currents across finite harmonic chainsMar 31 2015We present a study that addresses both the stationary properties of the energy current and quantum correlations in a three-mode chain subjected to Ohmic and super-Ohmic dissipa- tions. An extensive numerical analysis shows that the mean value and the ... More Improved Decoding and Error Floor Analysis of Staircase CodesApr 06 2017Dec 03 2018Staircase codes play an important role as error-correcting codes in optical communications. In this paper, a low-complexity method for resolving stall patterns when decoding staircase codes is described. Stall patterns are the dominating contributor to ... More Improving Sampling from Generative Autoencoders with Markov ChainsOct 28 2016We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. We define generative autoencoders as autoencoders which are trained to softly enforce a prior on ... More Network MIMO with Linear Zero-Forcing Beamforming: Large System Analysis, Impact of Channel Estimation and Reduced-Complexity SchedulingDec 15 2010Sep 12 2011We consider the downlink of a multi-cell system with multi-antenna base stations and single-antenna user terminals, arbitrary base station cooperation clusters, distance-dependent propagation pathloss, and general "fairness" requirements. Base stations ... More A Basis for all Solutions of the Key Equation for Gabidulin CodesJun 09 2010We present and prove the correctness of an efficient algorithm that provides a basis for all solutions of a key equation in order to decode Gabidulin (G-) codes up to a given radius tau. This algorithm is based on a symbolic equivalent of the Euclidean ... 
More Crystal nuclei in melts: A Monte Carlo simulation of a model for attractive colloidsApr 02 2015May 06 2015As a model for a suspension of hard-sphere like colloidal particles where small nonadsorbing dissolved polymers create a depletion attraction, we introduce an effective colloid-colloid potential closely related to the Asakura-Oosawa model but that does ... More Constraints on the pMSSM from searches for squarks and gluinos by ATLASFeb 28 2012We study the impact of the jets and missing transverse momentum SUSY analyses of the ATLAS experiment on the phenomenological MSSM (pMSSM). We investigate sets of SUSY models with a flat and logarithmic prior in the SUSY mass scale and a mass range up ... More Bounds and Constructions for Multi-Symbol Duplication Error Correcting CodesJul 08 2018Sep 19 2018In this paper, we study codes correcting $t$ duplications of $\ell$ consecutive symbols. These errors are known as tandem duplication errors, where a sequence of symbols is repeated and inserted directly after its original occurrence. Using sphere packing ... More Density duct formation in the wake of a travelling ionospheric disturbance: Murchison Widefield Array observationsJan 12 2016Geomagnetically-aligned density structures with a range of sizes exist in the near-Earth plasma environment, including 10-100 km-wide VLF/HF wave-ducting structures. Their small diameters and modest density enhancements make them difficult to observe, ... More Signatures of magnetar central engines in short GRB lightcurvesJan 03 2013A significant fraction of the Long Gamma-ray Bursts (LGRBs) in the Swift sample have a plateau phase showing evidence of ongoing energy injection. We suggest that many Short Gamma-ray Bursts (SGRBs) detected by the Swift satellite also show evidence of ... 
More Hypergraph-Based Analysis of Clustered Cooperative Beamforming with Application to Edge CachingOct 21 2015The evaluation of the performance of clustered cooperative beamforming in cellular networks generally requires the solution of complex non-convex optimization problems. In this letter, a framework based on a hypergraph formalism is proposed that enables ... More Achievable Sum Rate of MIMO MMSE Recievers: A General Analytic FrameworkMar 04 2009This paper investigates the achievable sum rate of multiple-input multiple-output (MIMO) wireless systems employing linear minimum mean-squared error (MMSE) receivers. We present a new analytic framework which unveils an interesting connection between ... More Generalizing Bounds on the Minimum Distance of Cyclic Codes Using Cyclic Product CodesJan 26 2013Jun 27 2013Two generalizations of the Hartmann--Tzeng (HT) bound on the minimum distance of q-ary cyclic codes are proposed. The first one is proven by embedding the given cyclic code into a cyclic product code. Furthermore, we show that unique decoding up to this ... More Limit theorems for local and occupation times of random walks and Brownian motion on a spiderSep 27 2016A simple random walk and a Brownian motion are considered on a spider that is a collection of half lines (we call them legs) joined in the origin. We give a strong approximation of these two objects and their local times. For fixed number of legs we establish ... More Finite Size Effects for the Ising Model on Random Graphs with Varying DilutionFeb 03 2009We investigate the finite size corrections to the equilibrium magnetization of an Ising model on a random graph with $N$ nodes and $N^{\gamma}$ edges, with $1 < \gamma \leq 2$. By conveniently rescaling the coupling constant, the free energy is made extensive. ... 
More A New Sample of Cool Subdwarfs from SDSS: Properties and KinematicsSep 03 2014We present a new sample of M subdwarfs compiled from the 7th data release of the Sloan Digital Sky Survey. With 3517 new subdwarfs, this new sample significantly increases the number of spectroscopically confirmed low-mass subdwarfs. This catalog also ... More
https://www.physicsforums.com/threads/finding-instantaneous-velocity-of-objects-in-projectile-motion.222335/
# Finding instantaneous velocity of objects in projectile motion

1. Mar 16, 2008

### Suzan

1. The problem statement, all variables and given/known data

No air resistance. Initial speed 8 m/s. Find the instantaneous velocity at t = 3 s. Components of displacement at that time: dx = 24, dy = 44.

2. Relevant equations

$d = v_1 t + \frac{1}{2}at^2$ or other kinematics equations

3. The attempt at a solution

I tried to find the components of the velocity using the components of the displacement (v = d/t). But both my answer and the angle of direction were wrong. How do I do this?

2. Mar 16, 2008

### rock.freak667

So dx and dy are those values at t = 3? If so, consider vertical motion: $s = ut + \frac{1}{2}at^2$, where u is the initial vertical velocity. Substitute t = 3 in there and get u. And you know how to get the horizontal component (v = d/t, as you stated before).
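The responder's two-step recipe (recover the initial vertical velocity u from the displacement equation, then evaluate the velocity at t = 3 s) can be sketched numerically. This is a minimal sketch, not part of the thread: the value g = 9.8 m/s² and the convention that dy is measured upward (so a = −9.8 m/s²) are assumptions.

```python
import math

# Data from the thread (units: m, s); sign convention assumed: up is positive
t = 3.0
dx, dy = 24.0, 44.0
a = -9.8  # assumed vertical acceleration due to gravity

# Horizontal motion has no acceleration, so v_x is constant
vx = dx / t

# Vertical: s = u*t + (1/2)*a*t^2  =>  u = (s - (1/2)*a*t^2) / t
u = (dy - 0.5 * a * t**2) / t   # initial vertical velocity
vy = u + a * t                  # vertical velocity at t = 3 s

speed = math.hypot(vx, vy)
angle = math.degrees(math.atan2(vy, vx))
print(vx, vy, speed, angle)
```

With these numbers the vertical velocity at t = 3 s comes out close to zero, i.e. the projectile is near the top of its arc, so the speed is dominated by the horizontal component.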
http://mathtourist.blogspot.com/2010/07/paul-halmos-on-writing-mathematics.html
July 8, 2010

Paul Halmos on Writing Mathematics

As a mathematician, Paul R. Halmos (1916-2006) made fundamental contributions to probability theory, statistics, functional analysis, mathematical logic, and other areas of mathematics. He was also widely recognized as a masterly mathematical expositor, and he served as editor (1981-1985) of the American Mathematical Monthly.

Halmos described his approach to writing in an essay published in the book How to Write Mathematics (American Mathematical Society, 1973). One paragraph presents the essence of the process:

"The basic problem in writing mathematics is the same as in writing biology, writing a novel, or writing directions for assembling a harpsichord: the problem is to communicate an idea. To do so, and to do it clearly, you must have something to say, and you must have someone to say it to, you must organize what you want to say, and you must arrange it in the order that you want it said in, you must write it, rewrite it, and re-rewrite it several times, and you must be willing to think hard about and work hard on mechanical details such as diction, notation, and punctuation." Halmos adds, "That's all there is to it."

Halmos then expands on what he sees as the key elements of good mathematical writing.

1. Say something. To have something to say is by far the most important ingredient of good exposition.

2. Speak to someone. Ask yourself who it is that you want to reach.

3. Organize. Arrange the material so as to minimize the resistance and maximize the insight of the reader.

4. Use consistent notation. The letters (or symbols) that you use to denote the concepts that you'll discuss are worthy of thought and careful design.

5. Write in spirals. Write the first section, write the second section, rewrite the first section, rewrite the second section, write the third section, rewrite the first section, rewrite the second section, rewrite the third section, write the fourth section, and so on.

6. Watch your language. Good English style implies correct grammar, correct choice of words, correct punctuation, and common sense.

7. Be honest. Smooth the reader's way, anticipating difficulties and forestalling them. Aim for clarity, not pedantry; understanding, not fuss.

8. Remove the irrelevant. Irrelevant assumptions, incorrect emphasis, or even the absence of correct emphasis can wreak havoc.

9. Use words correctly. Think about and use with care the small words of common sense and intuitive logic, and the specifically mathematical words (technical terms) that can have a profound effect on mathematical meaning.

10. Resist symbols. The best notation is no notation; whenever it is possible to avoid the use of a complicated alphabetic apparatus, avoid it.

Halmos concludes: "The basic problems of all expository communication are the same. . . . Content, aim, and organization, plus the vitally important details of grammar, diction, and notation—they, not showmanship, are the essential ingredients of good lectures, as well as good books."

The 44-minute film I Want to Be a Mathematician: A Conversation with Paul Halmos is based on an interview with Paul Halmos, in which he discusses various aspects of writing, teaching, and research.
https://julia.quantecon.org/dynamic_programming/optgrowth.html
# Optimal Growth I: The Stochastic Optimal Growth Model

## Overview

In this lecture we're going to study a simple optimal growth model with one agent. The model is a version of the standard one sector infinite horizon growth model. The technique we use to solve the model is dynamic programming. Our treatment of dynamic programming follows on from earlier treatments in our lectures on shortest paths and job search. We'll discuss some of the technical details of dynamic programming as we go along.

## The Model

Consider an agent who owns an amount $y_t \in \mathbb R_+ := [0, \infty)$ of a consumption good at time $t$. This output can either be consumed or invested. When the good is invested it is transformed one-for-one into capital. The resulting capital stock, denoted here by $k_{t+1}$, will then be used for production. Production is stochastic, in that it also depends on a shock $\xi_{t+1}$ realized at the end of the current period. Next period output is

$$y_{t+1} := f(k_{t+1}) \xi_{t+1}$$

where $f \colon \mathbb{R}_+ \to \mathbb{R}_+$ is called the production function. The resource constraint is

$$k_{t+1} + c_t \leq y_t \tag{1}$$

and all variables are required to be nonnegative. In what follows,

• The sequence $\{\xi_t\}$ is assumed to be IID.
• The common distribution of each $\xi_t$ will be denoted $\phi$.
• The production function $f$ is assumed to be increasing and continuous.
• Depreciation of capital is not made explicit but can be incorporated into the production function.

While many other treatments of the stochastic growth model use $k_t$ as the state variable, we will use $y_t$. This will allow us to treat a stochastic model while maintaining only one state variable. We consider alternative states and timing specifications in some of our other lectures.
### Optimization

Taking $y_0$ as given, the agent wishes to maximize

$$\mathbb E \left[ \sum_{t = 0}^{\infty} \beta^t u(c_t) \right] \tag{2}$$

subject to

$$y_{t+1} = f(y_t - c_t) \xi_{t+1} \quad \text{and} \quad 0 \leq c_t \leq y_t \quad \text{for all } t \tag{3}$$

where

• $u$ is a bounded, continuous and strictly increasing utility function, and
• $\beta \in (0, 1)$ is a discount factor.

In (3) we are assuming that the resource constraint (1) holds with equality — which is reasonable because $u$ is strictly increasing and no output will be wasted at the optimum. In summary, the agent's aim is to select a path $c_0, c_1, c_2, \ldots$ for consumption that is

1. nonnegative,
2. feasible in the sense of (1),
3. optimal, in the sense that it maximizes (2) relative to all other feasible consumption sequences, and
4. adapted, in the sense that the action $c_t$ depends only on observable outcomes, not future outcomes such as $\xi_{t+1}$.

In the present context

• $y_t$ is called the state variable — it summarizes the "state of the world" at the start of each period.
• $c_t$ is called the control variable — a value chosen by the agent each period after observing the state.

### The Policy Function Approach

One way to think about solving this problem is to look for the best policy function. A policy function is a map from past and present observables into current action. We'll be particularly interested in Markov policies, which are maps from the current state $y_t$ into a current action $c_t$. For dynamic programming problems such as this one (in fact for any Markov decision process), the optimal policy is always a Markov policy. In other words, the current state $y_t$ provides a sufficient statistic for the history in terms of making an optimal decision today. This is quite intuitive but if you wish you can find proofs in texts such as [SLP89] (section 4.1). Hereafter we focus on finding the best Markov policy.
In our context, a Markov policy is a function $\sigma \colon \mathbb R_+ \to \mathbb R_+$, with the understanding that states are mapped to actions via

$$c_t = \sigma(y_t) \quad \text{for all } t$$

In what follows, we will call $\sigma$ a feasible consumption policy if it satisfies

$$0 \leq \sigma(y) \leq y \quad \text{for all} \quad y \in \mathbb R_+ \tag{4}$$

In other words, a feasible consumption policy is a Markov policy that respects the resource constraint. The set of all feasible consumption policies will be denoted by $\Sigma$. Each $\sigma \in \Sigma$ determines a continuous state Markov process $\{y_t\}$ for output via

$$y_{t+1} = f(y_t - \sigma(y_t)) \xi_{t+1}, \quad y_0 \text{ given} \tag{5}$$

This is the time path for output when we choose and stick with the policy $\sigma$. We insert this process into the objective function to get

$$\mathbb E \left[ \, \sum_{t = 0}^{\infty} \beta^t u(c_t) \, \right] = \mathbb E \left[ \, \sum_{t = 0}^{\infty} \beta^t u(\sigma(y_t)) \, \right] \tag{6}$$

This is the total expected present value of following policy $\sigma$ forever, given initial income $y_0$. The aim is to select a policy that makes this number as large as possible. The next section covers these ideas more formally.

### Optimality

The policy value function $v_{\sigma}$ associated with a given policy $\sigma$ is the mapping defined by

$$v_{\sigma}(y) = \mathbb E \left[ \sum_{t = 0}^{\infty} \beta^t u(\sigma(y_t)) \right] \tag{7}$$

when $\{y_t\}$ is given by (5) with $y_0 = y$. In other words, it is the lifetime value of following policy $\sigma$ starting at initial condition $y$. The value function is then defined as

$$v^*(y) := \sup_{\sigma \in \Sigma} \; v_{\sigma}(y) \tag{8}$$

The value function gives the maximal value that can be obtained from state $y$, after considering all feasible policies. A policy $\sigma \in \Sigma$ is called optimal if it attains the supremum in (8) for all $y \in \mathbb R_+$.
### The Bellman Equation

With our assumptions on utility and production function, the value function as defined in (8) also satisfies a Bellman equation. For this problem, the Bellman equation takes the form

$$w(y) = \max_{0 \leq c \leq y} \left\{ u(c) + \beta \int w(f(y - c) z) \phi(dz) \right\} \qquad (y \in \mathbb R_+) \tag{9}$$

This is a functional equation in $w$. The term $\int w(f(y - c) z) \phi(dz)$ can be understood as the expected next period value when

• $w$ is used to measure value,
• the state is $y$, and
• consumption is set to $c$.

As shown in EDTC (theorem 10.1.11) and a range of other texts, the value function $v^*$ satisfies the Bellman equation. In other words, (9) holds when $w = v^*$. The intuition is that maximal value from a given state can be obtained by optimally trading off

• current reward from a given action, vs
• expected discounted future value of the state resulting from that action.

It also suggests a way of computing the value function, which we discuss below.

### Greedy policies

The primary importance of the value function is that we can use it to compute optimal policies. The details are as follows. Given a continuous function $w$ on $\mathbb R_+$, we say that $\sigma \in \Sigma$ is $w$-greedy if $\sigma(y)$ is a solution to

$$\max_{0 \leq c \leq y} \left\{ u(c) + \beta \int w(f(y - c) z) \phi(dz) \right\} \tag{10}$$

for every $y \in \mathbb R_+$. In other words, $\sigma \in \Sigma$ is $w$-greedy if it optimally trades off current and future rewards when $w$ is taken to be the value function. In our setting, we have the following key result: a feasible consumption policy is optimal if and only if it is $v^*$-greedy. The intuition is similar to the intuition for the Bellman equation, which was provided after (9). See, for example, theorem 10.1.11 of EDTC. Hence, once we have a good approximation to $v^*$, we can compute the (approximately) optimal policy by computing the corresponding greedy policy.
The advantage is that we are now solving a much lower dimensional optimization problem.

### The Bellman Operator

How, then, should we compute the value function? One way is to use the so-called Bellman operator. (An operator is a map that sends functions into functions.) The Bellman operator is denoted by $T$ and defined by

$$Tw(y) := \max_{0 \leq c \leq y} \left\{ u(c) + \beta \int w(f(y - c) z) \phi(dz) \right\} \qquad (y \in \mathbb R_+) \tag{11}$$

In other words, $T$ sends the function $w$ into the new function $Tw$ defined by (11). By construction, the set of solutions to the Bellman equation (9) exactly coincides with the set of fixed points of $T$. For example, if $Tw = w$, then, for any $y \geq 0$,

$$w(y) = Tw(y) = \max_{0 \leq c \leq y} \left\{ u(c) + \beta \int w(f(y - c) z) \phi(dz) \right\}$$

which says precisely that $w$ is a solution to the Bellman equation. It follows that $v^*$ is a fixed point of $T$.

### Review of Theoretical Results

One can also show that $T$ is a contraction mapping on the set of continuous bounded functions on $\mathbb R_+$ under the supremum distance

$$\rho(g, h) = \sup_{y \geq 0} |g(y) - h(y)|$$

See EDTC, lemma 10.1.18. Hence it has exactly one fixed point in this set, which we know is equal to the value function. It follows that

• The value function $v^*$ is bounded and continuous.
• Starting from any bounded and continuous $w$, the sequence $w, Tw, T^2 w, \ldots$ generated by iteratively applying $T$ converges uniformly to $v^*$.

This iterative method is called value function iteration. We also know that a feasible policy is optimal if and only if it is $v^*$-greedy. It's not too hard to show that a $v^*$-greedy policy exists (see EDTC, theorem 10.1.11 if you get stuck). Hence at least one optimal policy exists. Our problem now is how to compute it.

### Unbounded Utility

The results stated above assume that the utility function is bounded. In practice economists often work with unbounded utility functions — and so will we.
In the unbounded setting, various optimality theories exist. Unfortunately, they tend to be case specific, as opposed to valid for a large range of applications. Nevertheless, their main conclusions are usually in line with those stated for the bounded case just above (as long as we drop the word "bounded"). Consult, for example, section 12.2 of EDTC, [Kam12] or [MdRV10].

## Computation

Let's now look at computing the value function and the optimal policy.

### Fitted Value Iteration

The first step is to compute the value function by value function iteration. In theory, the algorithm is as follows:

1. Begin with a function $w$ — an initial condition.
2. Solving (11), obtain the function $T w$.
3. Unless some stopping condition is satisfied, set $w = Tw$ and go to step 2.

This generates the sequence $w, Tw, T^2 w, \ldots$. However, there is a problem we must confront before we implement this procedure: the iterates can neither be calculated exactly nor stored on a computer. To see the issue, consider (11). Even if $w$ is a known function, unless $Tw$ can be shown to have some special structure, the only way to store it is to record the value $Tw(y)$ for every $y \in \mathbb R_+$. Clearly this is impossible. What we will do instead is use fitted value function iteration. The procedure is to record the value of the function $Tw$ at only finitely many "grid" points $y_1 < y_2 < \cdots < y_I$ and reconstruct it from this information when required. More precisely, the algorithm will be

1. Begin with an array of values $\{ w_1, \ldots, w_I \}$ representing the values of some initial function $w$ on the grid points $\{ y_1, \ldots, y_I \}$.
2. Build a function $\hat w$ on the state space $\mathbb R_+$ by interpolation or approximation, based on these data points.
3. Obtain and record the value $T \hat w(y_i)$ on each grid point $y_i$ by repeatedly solving (11).
4. Unless some stopping condition is satisfied, set $\{ w_1, \ldots, w_I \} = \{ T \hat w(y_1), \ldots, T \hat w(y_I) \}$ and go to step 2.

How should we go about step 2? This is a problem of function approximation, and there are many ways to approach it. What's important here is that the function approximation scheme must not only produce a good approximation to $Tw$, but also combine well with the broader iteration algorithm described above. One good choice in both respects is continuous piecewise linear interpolation (see this paper for further discussion). The next figure illustrates piecewise linear interpolation of an arbitrary function on grid points $0, 0.2, 0.4, 0.6, 0.8, 1$.

### Setup

```julia
using InstantiateFromURL
github_project("QuantEcon/quantecon-notebooks-julia", version = "0.5.0")
# github_project("QuantEcon/quantecon-notebooks-julia", version = "0.5.0", instantiate = true) # uncomment to force package installation
```

```julia
using LinearAlgebra, Statistics
using Plots, QuantEcon, Interpolations, NLsolve, Optim, Random
gr(fmt = :png);
```

```julia
f(x) = 2 .* cos.(6x) .+ sin.(14x) .+ 2.5
c_grid = 0:.2:1
f_grid = range(0, 1, length = 150)

Af = LinearInterpolation(c_grid, f(c_grid))

plt = plot(xlim = (0, 1), ylim = (0, 6))
plot!(plt, f, f_grid, color = :blue, lw = 2, alpha = 0.8, label = "true function")
plot!(plt, f_grid, Af.(f_grid), color = :green, lw = 2, alpha = 0.8,
      label = "linear approximation")
plot!(plt, f, c_grid, seriestype = :sticks, linestyle = :dash, linewidth = 2,
      alpha = 0.5, label = "")
plot!(plt, legend = :top)
```

Another advantage of piecewise linear interpolation is that it preserves useful shape properties such as monotonicity and concavity / convexity.
### The Bellman Operator

Here's a function that implements the Bellman operator using linear interpolation.

```julia
function T(w, grid, β, u, f, shocks, Tw = similar(w);
           compute_policy = false)
    w_func = LinearInterpolation(grid, w)
    # objective for each grid point
    objectives = (c -> u(c) + β * mean(w_func.(f(y - c) .* shocks)) for y in grid)
    # solver result for each grid point
    results = maximize.(objectives, 1e-10, grid)
    Tw = Optim.maximum.(results)
    if compute_policy
        σ = Optim.maximizer.(results)
        return Tw, σ
    end
    return Tw
end
```

Notice that the expectation in (11) is computed via Monte Carlo, using the approximation

$$\int w(f(y - c) z) \phi(dz) \approx \frac{1}{n} \sum_{i=1}^n w(f(y - c) \xi_i)$$

where $\{\xi_i\}_{i=1}^n$ are IID draws from $\phi$. Monte Carlo is not always the most efficient way to compute integrals numerically but it does have some theoretical advantages in the present setting. (For example, it preserves the contraction mapping property of the Bellman operator — see, e.g., [PalS13].)

### An Example

Let's test out our operator when

• $f(k) = k^{\alpha}$
• $u(c) = \ln c$
• $\phi$ is the distribution of $\exp(\mu + \sigma \zeta)$ when $\zeta$ is standard normal

As is well-known (see [LS18], section 3.1.2), for this particular problem an exact analytical solution is available, with

$$v^*(y) = \frac{\ln (1 - \alpha \beta) }{ 1 - \beta} + \frac{(\mu + \alpha \ln (\alpha \beta))}{1 - \alpha} \left[ \frac{1}{1- \beta} - \frac{1}{1 - \alpha \beta} \right] + \frac{1}{1 - \alpha \beta} \ln y \tag{12}$$

The optimal consumption policy is

$$\sigma^*(y) = (1 - \alpha \beta ) y$$

Let's code this up now so we can test against it below.

```julia
α = 0.4
β = 0.96
μ = 0
s = 0.1

c1 = log(1 - α * β) / (1 - β)
c2 = (μ + α * log(α * β)) / (1 - α)
c3 = 1 / (1 - β)
c4 = 1 / (1 - α * β)

# Utility
u(c) = log(c)
∂u∂c(c) = 1 / c

# Deterministic part of production function
f(k) = k^α
f′(k) = α * k^(α - 1)

# True optimal policy
c_star(y) = (1 - α * β) * y

# True value function
v_star(y) = c1 + c2 * (c3 - c4) + c4 * log(y)
```

### A First Test

To test our code, we want to see if we can replicate the analytical solution numerically, using fitted value function iteration. We need a grid and some shock draws for Monte Carlo integration.

```julia
Random.seed!(42)  # for reproducible results

grid_max = 4       # largest grid point
grid_size = 200    # number of grid points
shock_size = 250   # number of shock draws in Monte Carlo integral

grid_y = range(1e-5, grid_max, length = grid_size)
shocks = exp.(μ .+ s * randn(shock_size))
```

Now let's do some tests. As one preliminary test, let's see what happens when we apply our Bellman operator to the exact solution $v^*$. In theory, the resulting function should again be $v^*$. In practice we expect some small numerical error.

```julia
w = T(v_star.(grid_y), grid_y, β, log, k -> k^α, shocks)

plt = plot(ylim = (-35, -24))
plot!(plt, grid_y, w, linewidth = 2, alpha = 0.6, label = "T(v_star)")
plot!(plt, v_star, grid_y, linewidth = 2, alpha = 0.6, label = "v_star")
plot!(plt, legend = :bottomright)
```

The two functions are essentially indistinguishable, so we are off to a good start. Now let's have a look at iterating with the Bellman operator, starting off from an arbitrary initial condition. The initial condition we'll start with is $w(y) = 5 \ln (y)$.

```julia
w = 5 * log.(grid_y)  # an initial condition -- fairly arbitrary
n = 35

lb = "initial condition"
plt = plot(xlim = extrema(grid_y), ylim = (-50, 10))
plot!(plt, grid_y, w, color = :black, linewidth = 2, alpha = 0.8, label = lb)
for i in 1:n
    w = T(w, grid_y, β, log, k -> k^α, shocks)
    plot!(grid_y, w, color = RGBA(i/n, 0, 1 - i/n, 0.8), linewidth = 2,
          alpha = 0.6, label = "")
end
lb = "true value function"
plot!(plt, v_star, grid_y, color = :black, linewidth = 2, alpha = 0.8, label = lb)
plot!(plt, legend = :bottomright)
```

The figure shows

1. the first 36 functions generated by the fitted value function iteration algorithm, with hotter colors given to higher iterates, and
2. the true value function $v^*$ drawn in black.

The sequence of iterates converges towards $v^*$. We are clearly getting closer. We can write a function that computes the exact fixed point.

```julia
function solve_optgrowth(initial_w; tol = 1e-6, max_iter = 500)
    Tw = similar(grid_y)
    v_star_approx = fixedpoint(w -> T(w, grid_y, β, u, f, shocks, Tw),
                               initial_w).zero  # gets returned
end
```

We can check our result by plotting it against the true value.

```julia
initial_w = 5 * log.(grid_y)
v_star_approx = solve_optgrowth(initial_w)

plt = plot(ylim = (-35, -24))
plot!(plt, grid_y, v_star_approx, linewidth = 2, alpha = 0.6,
      label = "approximate value function")
plot!(plt, v_star, grid_y, linewidth = 2, alpha = 0.6, label = "true value function")
plot!(plt, legend = :bottomright)
```

The figure shows that we are pretty much on the money.

### The Policy Function

To compute an approximate optimal policy, we take the approximate value function we just calculated and then compute the corresponding greedy policy. The next figure compares the result to the exact solution, which, as mentioned above, is $\sigma(y) = (1 - \alpha \beta) y$.

```julia
Tw, σ = T(v_star_approx, grid_y, β, log, k -> k^α, shocks; compute_policy = true)
cstar = (1 - α * β) * grid_y

plt = plot(grid_y, σ, lw = 2, alpha = 0.6, label = "approximate policy function")
plot!(plt, grid_y, cstar, lw = 2, alpha = 0.6, label = "true policy function")
plot!(plt, legend = :bottomright)
```

The figure shows that we've done a good job in this instance of approximating the true policy.

## Exercises

### Exercise 1

Once an optimal consumption policy $\sigma$ is given, income follows (5). The next figure shows a simulation of 100 elements of this sequence for three different discount factors (and hence three different policies).
In each sequence, the initial condition is $y_0 = 0.1$. The discount factors are discount_factors = (0.8, 0.9, 0.98). We have also dialed down the shocks a bit.

```julia
s = 0.05
shocks = exp.(μ .+ s * randn(shock_size))
```

Otherwise, the parameters and primitives are the same as the log linear model discussed earlier in the lecture. Notice that more patient agents typically have higher wealth. Replicate the figure modulo randomness.

## Solutions

### Exercise 1

Here's one solution (assuming as usual that you've executed everything above).

```julia
function simulate_og(σ, y0 = 0.1, ts_length = 100)
    y = zeros(ts_length)
    ξ = randn(ts_length - 1)
    y[1] = y0
    for t in 1:(ts_length - 1)
        y[t+1] = (y[t] - σ(y[t]))^α * exp(μ + s * ξ[t])
    end
    return y
end

plt = plot()
for β in (0.8, 0.9, 0.98)
    Tw = similar(grid_y)
    initial_w = 5 * log.(grid_y)
    v_star_approx = fixedpoint(w -> T(w, grid_y, β, u, f, shocks, Tw),
                               initial_w).zero
    Tw, σ = T(v_star_approx, grid_y, β, log, k -> k^α, shocks,
              compute_policy = true)
    σ_func = LinearInterpolation(grid_y, σ)
    y = simulate_og(σ_func)
    plot!(plt, y, lw = 2, alpha = 0.6, label = "beta = $β")
end
plot!(plt, legend = :bottomright)
```
https://stats.stackexchange.com/questions/396717/qq-plot-and-shapiro-wilk-test-disagree/396751
# QQ Plot and Shapiro-Wilk Test Disagree

My QQ plot shows that the data is not normally distributed:

```python
qqplot(residual_values, fit = True, line = '45')
pylab.show()
```

It has a skewness of 0.54:

```python
residual_values.skew()  # 0.5469389365591185
```

But the p-value of the Shapiro-Wilk test is greater than 0.05, telling me that it is normally distributed:

```python
shapiro(residual_values)  # (0.9569438099861145, 0.2261517345905304)
```

What is the correct inference from this? Is it normally distributed or not?

• The QQ plot looks consistent with being normally distributed. Did you expect every point to fall exactly on the line? – The Laconic Mar 10 at 20:24
• It is approximately normally distributed if you are prepared to discount slight skewness. No procedure ever indicates more. – Nick Cox Mar 10 at 21:07
• It's approximately normal; the skewness in the sample is quite mild. This doesn't automatically mean the population is also skewed (though I expect it is). A high p-value on a test of normality doesn't mean that it is normal, only that you couldn't detect whatever population non-normality there was. (The answer to "is it normally distributed" is "no" — unless you generated it to be normal it won't actually be normal — but why would it have to be?) – Glen_b Mar 10 at 23:09

The QQ plot is an informal test of normality that can give you some insight into the nature of deviations from normality; for example, whether the distribution has some skew, or fat tails, or there are specific observations that deviate from what you would expect from a normal distribution (outliers). The QQ plot can often convince you that the distribution is definitely not normal, but this isn't such a case. Here, the points fall more or less along the line, which is broadly consistent with normality — intuitively, the sort of variation you would expect to see in a small sample. The Shapiro-Wilk test is a formal test of normality.
I'm not familiar with the shapiro function's output, so I'm not sure which number, if either, is supposed to be the p-value, but if you say it's largish, then we are led to accept the null hypothesis of normality. And this is consistent with what we see qualitatively in the QQ plot.

The q-q plot is consistent with (not "proving") approximate normality, more or less. The Shapiro-Wilk is a formal test of normality and as such, it cannot confirm the null hypothesis of normality. The data may be reasonably consistent with normality yet still be from a different nonnormal underlying distribution. Frequentist hypothesis tests, as a general rule, cannot prove a hypothesis, and failure to reject (p > alpha) does not support the null hypothesis. @The Laconic gave some decent advice to interpret the q-q plot. However, large p-values do not lead you to accept the null hypothesis (therefore, you don't conclude normality based on this test; the best you can do is say there is insufficient evidence of nonnormality at the a priori chosen alpha level).

My understanding is that, given power issues with normality tests, they are not highly recommended. As a result I don't use them any more, preferring QQ plots (which are recommended in the literature I have seen).

• I was under the impression formal tests of normality are usually too powerful and too frequently detect immaterial departures from normality. Visualization is generally preferred, as you said (and theoretical knowledge when available). – LSC Mar 11 at 0:57

The Shapiro-Wilk p-value being > 0.05 indicates lack of evidence against normality. That is consistent with the QQ plot you showed, which is not too far off the line. I don't see what the inconsistency is here. Also, you should give a CI for the skewness coefficient.
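The distinction the answers draw — a high p-value showing only a lack of evidence against normality, never normality itself — can be illustrated with scipy. The sketch below uses synthetic data, not the asker's residuals: one sample built from ideal normal quantiles and one from right-skewed exponential quantiles (the names `normal_like` and `skewed` are illustrative choices).

```python
# Compare Shapiro-Wilk p-values and sample skewness on two synthetic samples.
import numpy as np
from scipy.stats import shapiro, skew, norm, expon

probs = np.linspace(0.01, 0.99, 50)   # evenly spaced probabilities
normal_like = norm.ppf(probs)         # ideal normal quantiles: symmetric
skewed = expon.ppf(probs)             # exponential quantiles: right-skewed

w1, p1 = shapiro(normal_like)
w2, p2 = shapiro(skewed)

print(f"normal-like: W={w1:.3f}, p={p1:.3f}, skew={skew(normal_like):.2f}")
print(f"skewed:      W={w2:.3f}, p={p2:.3f}, skew={skew(skewed):.2f}")
```

For the first sample the test fails to reject; for the second, the positive skew is strong enough that it rejects decisively. A borderline case like the asker's (skew around 0.5, n small) can easily land on the non-rejection side while the QQ plot still hints at asymmetry.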
https://www.math10.com/en/university-math/matrices/matrices3/matrices-invarian-factors.html
# Invariant Factors and Elementary Divisors

### 3.01 Elementary transformations.

By an elementary transformation of a matric polynomial $a(\lambda) = \|a_{ij}\|$ is meant one of the following operations on the rows or columns.

Type I. The operation of adding to a row (column) a different row (column) multiplied by a scalar polynomial $\theta(\lambda)$.

Type II. The operation of interchanging two rows (columns).

Type III. The operation of multiplying a row (column) by a constant $k \neq 0$.

These transformations can be performed algebraically as follows.

Type I. Let $P_{ij} = 1 + \theta(\lambda)e_{ij}$ $(i \neq j)$, $\theta(\lambda)$ being a scalar polynomial; then $|P_{ij}| = 1$ and

$$P_{ij} a = \sum \limits_{p,q} a_{pq} e_{pq} + \theta \sum \limits_q a_{jq} e_{iq}$$

which is the matrix derived from $a(\lambda)$ by adding $\theta$ times the $j$th row to the $i$th. The corresponding operation on the columns is equivalent to forming the product $aP_{ji}$.

Type II. Let $Q_{ij}$ be the matrix

$$Q_{ij} = 1 - e_{ii} - e_{jj} + e_{ij} + e_{ji} \qquad (i \neq j)$$

that is, $Q_{ij}$ is the matrix derived from the identity matrix by inserting 1 in place of 0 in the coefficients of $e_{ij}$ and $e_{ji}$ and 0 in place of 1 in the coefficients of $e_{ii}$ and $e_{jj}$; then $|Q_{ij}| = -1$ and

$$Q_{ij} a = \sum \limits_{p,q} a_{pq} e_{pq} - \sum \limits_q a_{iq} e_{iq} - \sum \limits_q a_{jq} e_{jq} + \sum \limits_q a_{jq} e_{iq} + \sum \limits_q a_{iq} e_{jq}$$

that is, $Q_{ij}a$ is derived from $a$ by interchanging the $i$th and $j$th rows. Similarly $aQ_{ij}$ is obtained by interchanging the $i$th and $j$th columns. Since any permutation can be effected by a succession of transpositions, the corresponding transformation in the rows (columns) of a matrix can be produced by a succession of transformations of Type II.

Type III. This transformation is effected on the $r$th row (column) by multiplying on the left (right) by $R = 1 + (k - 1)e_{rr}$; it is used only when it is convenient to make the leading coefficient in some term equal to 1.
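The three types of elementary matrix can be written down concretely. The following sketch (sympy; the 3x3 size, the polynomial $\theta(\lambda) = \lambda + 1$, the constant $k = 5$, and the sample matrix $a(\lambda)$ are all illustrative choices, not from the text) builds $P_{13}$, $Q_{12}$, and $R$, checks that $P_{13}a$ really adds $\theta$ times the third row to the first, and verifies the determinant values $|P| = 1$, $|Q| = -1$, $|R| = k$.

```python
# Elementary transformation matrices P (type I), Q (type II), R (type III).
import sympy as sp

lam = sp.symbols('lambda')
n = 3
I = sp.eye(n)

theta = lam + 1  # any scalar polynomial theta(lambda); illustrative choice
# type I: P_{13} = 1 + theta * e_{13}
P = I + theta * sp.Matrix(n, n, lambda i, j: 1 if (i, j) == (0, 2) else 0)
# type II: Q_{12} swaps rows (or columns) 1 and 2
Q = sp.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
# type III: R = 1 + (k - 1) * e_{22} scales the 2nd row by k
k = sp.Integer(5)
R = I + (k - 1) * sp.Matrix(n, n, lambda i, j: 1 if i == j == 1 else 0)

a = sp.Matrix([[lam, 1, 0], [0, lam, 1], [1, 0, lam]])  # sample a(lambda)

# P*a adds theta times the 3rd row of a to the 1st row.
expected = a.copy()
expected[0, :] = a.row(0) + theta * a.row(2)
assert (P * a - expected).expand() == sp.zeros(n, n)

print(sp.det(P), sp.det(Q), sp.det(R))  # 1, -1, 5
```

The inverses noted in the text can be checked the same way, e.g. `(P * (I - theta * (P - I) / theta)).expand()` reduces to the identity, since $P_{ij}^{-1} = 1 - \theta e_{ij}$.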
The inverses of the matrices used in these transformations are

$$P_{ij}^{-1} = 1 - \theta e_{ij}, \qquad Q_{ij}^{-1} = Q_{ij}, \qquad R^{-1} = 1 + (k^{-1} - 1)e_{rr};$$

these inverses are elementary transformations. The transverses are also elementary since $P'_{ij} = P_{ji}$ and $Q_{ij}$ and $R$ are symmetric.

A matric polynomial $b(\lambda)$ which is derived from $a(\lambda)$ by a sequence of elementary transformations is said to be equivalent to $a(\lambda)$; every such polynomial has the form $p(\lambda)a(\lambda)q(\lambda)$ where $p$ and $q$ are products of elementary transformations. Since the inverse of an elementary transformation is elementary, $a(\lambda)$ is also equivalent to $b(\lambda)$. Further, the inverses of $p$ and $q$ are polynomials so that these are what we have already called elementary polynomials; we shall see later that every elementary polynomial can be derived from 1 by a sequence of elementary transformations.

In the following sections we require two lemmas whose proofs are almost immediate.

Lemma 1. The rank of a matrix is not altered by an elementary transformation.

For if $|P| \neq 0$, $AP$ and $PA$ have the same rank as $A$ (§1.10).

Lemma 2. The highest common factor of the coordinates of a matric polynomial is not altered by an elementary transformation.

This follows immediately from the definition of elementary transformations.

### 3.02 The normal form of a matrix.

The theorem we shall prove in this section is as follows.

Theorem 1. If $a(\lambda)$ is a matric polynomial of rank $r$, it can be reduced by elementary transformations to a diagonal matrix

$$(1) \qquad \sum \limits_{i=1}^{r} \alpha_i(\lambda) e_{ii} = \mathrm{diag}(\alpha_1(\lambda), \alpha_2(\lambda), \ldots, \alpha_r(\lambda), 0, \ldots, 0) = P(\lambda)\, a(\lambda)\, Q(\lambda),$$

where the coefficient of the highest power of $\lambda$ in each polynomial $\alpha_i(\lambda)$ is 1, $\alpha_i$ is a factor of $\alpha_{i+1}, \ldots, \alpha_r$ $(i = 1, 2, \ldots, r - 1)$, and $P(\lambda)$, $Q(\lambda)$ are elementary polynomials.

We shall first show that, if the coordinate of $a(\lambda)$ of minimum degree $m$, say $a_{pq}$, is not a factor of every other coordinate, then $a(\lambda)$ is equivalent to a matrix in which the degree of the coordinate of minimum degree is less than $m$. Suppose that $a_{pq}$ is not a factor of $a_{pi}$ for some $i$; then we may set $a_{pi} = \beta a_{pq} + a'_{pi}$ where $\beta$ is integral and $a'_{pi}$ is not 0 and is of lower degree than $m$. Subtracting $\beta$ times the $q$th column from the $i$th we have an equivalent matrix in which the coordinate $(p, i)$ is $a'_{pi}$, whose degree is less than $m$. The same reasoning applies if $a_{pq}$ is not a factor of every coordinate $a_{iq}$ in the $q$th column.

After a finite number of such steps we arrive at a matrix in which a coordinate of minimum degree, say $k_{pq}$, is a factor of all the coordinates which lie in the same row or column, but is possibly not a factor of some other coordinate $k_{ij}$. When this is so, let $k_{pj} = \beta k_{pq}$, $k_{iq} = \gamma k_{pq}$ where $\beta$ and $\gamma$ are integral. If we now add $(1 - \beta)$ times the $q$th column to the $j$th, the coordinates $(p, j)$ and $(i, j)$ become respectively

$$k'_{pj} = k_{pj} + (1 - \beta)k_{pq} = k_{pq}, \qquad k'_{ij} = k_{ij} + (1 - \beta)k_{iq} = k_{ij} + (1 - \beta)\gamma k_{pq}.$$

Here either the degree of $k'_{ij}$ is less than that of $k_{pq}$, or $k'_{pj}$ has the minimum degree and is not a factor of $k'_{ij}$ which lies in the same column, and hence the minimum degree can be lowered as above.
The process just described can be repeated so long as the coordinate of lowest degree is not a factor of every other coordinate and, since each step lowers the minimum degree, we derive in a finite number of steps a matrix $\lVert b'_{ij} \rVert$ which is equivalent to $a(\lambda)$ and in which the coordinate of minimum degree is in fact a divisor of every other coordinate; and further we may suppose that $b'_{11} = \alpha_1(\lambda)$ is a coordinate of minimum degree and set $b'_{1i} = \gamma_i b'_{11}$, $b'_{j1} = \delta_j b'_{11}$. Subtracting $\gamma_i$ times the first column from the $i$th and then $\delta_j$ times the first row from the $j$th ($i, j = 2, 3, \ldots, n$), all the coordinates in the first row and column except $b'_{11}$ become 0, and we have an equivalent matrix in the form

$(2) \qquad \begin{pmatrix} \alpha_1(\lambda) & 0 & 0 & \cdots & 0 \\ 0 & b_{22} & b_{23} & \cdots & b_{2n} \\ 0 & b_{32} & b_{33} & \cdots & b_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & b_{n2} & b_{n3} & \cdots & b_{nn} \end{pmatrix}$

in which $\alpha_1$ is a factor of every $b_{ij}$. The coefficient of the highest power of $\lambda$ in $\alpha_1$ may be made 1 by a transformation of type III. The theorem now follows readily by induction. For, assuming it true for matrices of order $n - 1$, the matrix of this order formed by the $b$'s in (2) can be reduced to the diagonal matrix

$\begin{pmatrix} \alpha_2(\lambda) & & & & & \\ & \ddots & & & & \\ & & \alpha_s(\lambda) & & & \\ & & & 0 & & \\ & & & & \ddots & \\ & & & & & 0 \end{pmatrix}$

where the $\alpha$'s satisfy the conditions of the theorem and each has $\alpha_1$ as a factor (§3.01, Lemma 2).
Moreover, the elementary transformations by which this reduction is carried out correspond to transformations affecting the last $n - 1$ rows and columns alone in (2) and, because of the zeros in the first row and column, these transformations when applied to (2) do not affect its first row and column; also, since elementary transformations do not affect the rank (§3.01, Lemma 1), $s$ equals $r$, and $a(\lambda)$ has therefore been reduced to the form required by the theorem. The theorem is clearly true for matrices of order 1 and hence is true for any order.

Corollary. A matric polynomial whose determinant is independent of $\lambda$ and is not 0, that is, an elementary polynomial, can be derived from 1 by the product of a finite number of elementary transformations.

The polynomials $\alpha_i$ are called the invariant factors of $a(\lambda)$.

### 3.03 Determinantal and invariant factors.

The determinantal factor of the $s$th order, $D_s$, of a matric polynomial $a(\lambda)$ is defined as the highest common factor of all minors of order $s$, the coefficient of the highest power of $\lambda$ being taken as 1. An elementary transformation of type I either leaves a given minor unaltered or changes it into the sum of that minor and a multiple of another of the same order; a transformation of type II simply permutes the minors of a given order among themselves; while one of type III merely multiplies a minor by a constant different from 0. Hence equivalent matrices have the same determinantal factors. Bearing this in mind, we see immediately from the form of (1) that the determinantal factors of $a(\lambda)$ are given by

$D_s = \alpha_1 \alpha_2 \cdots \alpha_s \quad (s = 1, 2, \ldots, r), \qquad D_s = 0 \quad (s > r),$

so that $\alpha_s = D_s / D_{s-1}$. The invariant factors are therefore known when the determinantal factors are given, and vice versa. The definitions of this and the preceding sections have all been made relative to the fundamental basis.
But we have seen in §1.08 that, if $a_1$ is the matrix with the same array of coordinates as $a$ but relative to another basis, then there exists a non-singular constant matrix $b$ such that $a = b^{-1} a_1 b$, so that $a$ and $a_1$ are equivalent matrices. In terms of the new basis $a_1$ has the same invariant factors as $a$ does in terms of the old and $a$, being equivalent to $a_1$, has therefore the same invariant factors in terms of the new basis as it has in the old. Hence the invariant and determinantal factors of a matric polynomial are independent of the (constant) basis in terms of which its coordinates are expressed. The results of this section may be summarized as follows.

Theorem 2. Two matric polynomials are equivalent if, and only if, they have the same invariant factors.

### 3.04 Non-singular linear polynomials.

In the case of linear polynomials Theorem 2 can be made more precise as follows.

Theorem 3. If $a\lambda + b$ and $c\lambda + d$ are non-singular linear polynomials which have the same invariant factors, and if $|c| \neq 0$, there exist non-singular constant matrices $p$ and $q$ such that $p(a\lambda + b)q = c\lambda + d$.

We have seen in Theorem 2 that there exist elementary polynomials $P(\lambda)$, $Q(\lambda)$ such that

$(3) \qquad c\lambda + d = P(\lambda)(a\lambda + b)Q(\lambda).$

Since $|c| \neq 0$, we can employ the division transformation to find matric polynomials $p_1$, $q_1$ and constant matrices $p$, $q$ such that

$P(\lambda) = (c\lambda + d)p_1 + p, \qquad Q(\lambda) = q_1(c\lambda + d) + q.$

Using this in (3) we have

$(4) \qquad c\lambda + d = p(a\lambda + b)q + (c\lambda + d)p_1(a\lambda + b)Q + P(a\lambda + b)q_1(c\lambda + d) - (c\lambda + d)p_1(a\lambda + b)q_1(c\lambda + d)$

and, since from (3)

$(a\lambda + b)Q = P^{-1}(c\lambda + d), \qquad P(a\lambda + b) = (c\lambda + d)Q^{-1},$

we may write in place of (4)

$(5) \qquad p(a\lambda + b)q = \bigl[1 - (c\lambda + d)\bigl(p_1 P^{-1} + Q^{-1}q_1 - p_1(a\lambda + b)q_1\bigr)\bigr](c\lambda + d) = \bigl[1 - (c\lambda + d)R\bigr](c\lambda + d)$

where $R = p_1 P^{-1} + Q^{-1}q_1 - p_1(a\lambda + b)q_1$, which is integral in $\lambda$ since $P$ and $Q$ are elementary.
If $R \neq 0$ then, since $|c| \neq 0$, the degree of the right side of (5) is at least 2, whereas the degree of the left side is only 1; hence $R = 0$, so that (5) gives $p(a\lambda + b)q = c\lambda + d$. Since $c\lambda + d$ is not singular, neither $p$ nor $q$ can be singular, and hence the theorem is proved.

When $|c| = 0$ (and therefore also $|a| = 0$) the remaining conditions of Theorem 3 are not sufficient to ensure that we can find constant matrices in place of $P$ and $Q$, but these conditions are readily modified so as to apply to this case also. If we replace $\lambda$ by $\lambda/\mu$ and then multiply by $\mu$, $a\lambda + b$ is replaced by the homogeneous polynomial $a\lambda + b\mu$, and the definition of invariant factors applies immediately to such polynomials. In fact, if $|a| \neq 0$, the invariant factors of $a\lambda + b\mu$ are simply the homogeneous polynomials which are equivalent to the corresponding invariant factors of $a\lambda + b$. If, however, $|a| = 0$, then $|a\lambda + b\mu|$ is divisible by a power of $\mu$, which leads to factors of the form $\mu^i$ in the invariant factors of $a\lambda + b\mu$ which have no counterpart in those of $a\lambda + b$.

If $|c| = 0$ but $|c\lambda + d| \neq 0$, there exist values $\lambda_1 \neq 0$, $\mu_1$ such that $|c\lambda_1 + d\mu_1| \neq 0$ and, if we make the transformation

$(6) \qquad \lambda = \lambda_1\alpha, \qquad \mu = \mu_1\alpha + \beta,$

$a\lambda + b\mu$ and $c\lambda + d\mu$ become $a_1\alpha + b_1\beta$, $c_1\alpha + d_1\beta$, where $a_1 = a\lambda_1 + b\mu_1$, $c_1 = c\lambda_1 + d\mu_1$, and therefore $|c_1| \neq 0$. Further, when $a\lambda + b\mu$ and $c\lambda + d\mu$ have the same invariant factors, this is also true of $a_1\alpha + b_1\beta$ and $c_1\alpha + d_1\beta$. Since $|c_1| \neq 0$, the proof of Theorem 3 is applicable, so that there are constant non-singular matrices $p$, $q$ for which $p(a_1\alpha + b_1\beta)q = c_1\alpha + d_1\beta$, and on reversing the substitution (6) we have $p(a\lambda + b\mu)q = c\lambda + d\mu$. Theorem 3 can therefore be extended as follows.

Theorem 4. If the non-singular polynomials $a\lambda + b\mu$, $c\lambda + d\mu$ have the same invariant factors, there exist non-singular constant matrices $p$, $q$ such that $p(a\lambda + b\mu)q = c\lambda + d\mu$.

An important particular case of Theorem 3 arises when the polynomials have the form $\lambda - b$, $\lambda - d$.
For if $p(\lambda - b)q = \lambda - d$, on equating coefficients we have $pq = 1$, $pbq = d$; hence $b = p^{-1}dp$, that is, $b$ and $d$ are similar. Conversely, if $b$ and $d$ are similar, then $\lambda - b$ and $\lambda - d$ are equivalent, and hence we have the following theorem.

Theorem 5. Two constant matrices $b$, $d$ are similar if, and only if, $\lambda - b$ and $\lambda - d$ have the same invariant factors.

### 3.05 Elementary divisors.

If $D = |a\lambda + b|$ is not identically zero and if $\lambda_1, \lambda_2, \ldots, \lambda_s$ are its distinct roots, say

$D = (\lambda - \lambda_1)^{\nu_1}(\lambda - \lambda_2)^{\nu_2} \cdots (\lambda - \lambda_s)^{\nu_s},$

then the invariant factors of $a\lambda + b$, being factors of $D$, have the form

$\begin{aligned} \alpha_1 &= (\lambda - \lambda_1)^{\nu_{11}}(\lambda - \lambda_2)^{\nu_{12}} \cdots (\lambda - \lambda_s)^{\nu_{1s}} \\ \alpha_2 &= (\lambda - \lambda_1)^{\nu_{21}}(\lambda - \lambda_2)^{\nu_{22}} \cdots (\lambda - \lambda_s)^{\nu_{2s}} \\ &\;\;\vdots \\ \alpha_n &= (\lambda - \lambda_1)^{\nu_{n1}}(\lambda - \lambda_2)^{\nu_{n2}} \cdots (\lambda - \lambda_s)^{\nu_{ns}}, \end{aligned}$

where $\sum_{j=1}^{n} \nu_{ji} = \nu_i$ and, since $\alpha_j$ is a factor of $\alpha_{j+1}$,

$(8) \qquad \nu_{1i} \le \nu_{2i} \le \cdots \le \nu_{ni} \qquad (i = 1, 2, \ldots, s).$

Such of the factors $(\lambda - \lambda_j)^{\nu_{ij}}$ as are not constants, that is, those for which $\nu_{ij} > 0$, are called the elementary divisors of $a\lambda + b$. The elementary divisors of $\lambda - b$ are also called the elementary divisors of $b$. When all the exponents $\nu_{ij}$ which are not 0 equal 1, $b$ is said to have simple elementary divisors. For some purposes the degrees of the elementary divisors are of more importance than the divisors themselves and, when this is the case, they are indicated by writing

$(9) \qquad [(\nu_{n1}, \nu_{n-1,1}, \ldots, \nu_{11}), (\nu_{n2}, \nu_{n-1,2}, \ldots, \nu_{12}), \ldots]$

where exponents belonging to the same linear factor are in the same parenthesis, zero exponents being omitted; (9) is sometimes called the characteristic of $a\lambda + b$. If a root, say $\lambda_1$, is zero, it is convenient to indicate this by writing $\nu_{i1}^{0}$ in place of $\nu_{i1}$.
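As an aside, the recipe of §3.03, $\alpha_s = D_s/D_{s-1}$, and the elementary divisors just defined are easy to check by machine. The following sketch is an editorial illustration, not part of the original text; it assumes sympy is available and represents a matric polynomial as a sympy `Matrix` in the symbol $\lambda$, computing $D_s$ as the monic highest common factor of all minors of order $s$.

```python
# Illustrative sketch (assumes sympy): determinantal and invariant factors
# of a small matric polynomial, following Section 3.03.
from functools import reduce
from itertools import combinations
import sympy as sp

lam = sp.symbols('lambda')

def determinantal_factors(a):
    """D_s = monic h.c.f. of all minors of order s (0 if every such minor vanishes)."""
    n = a.rows
    D = []
    for s in range(1, n + 1):
        minors = [a[list(r), list(c)].det()
                  for r in combinations(range(n), s)
                  for c in combinations(range(n), s)]
        g = reduce(sp.gcd, minors)
        D.append(sp.Integer(0) if g == 0 else sp.Poly(g, lam).monic().as_expr())
    return D

def invariant_factors(a):
    """alpha_s = D_s / D_{s-1}, with D_0 = 1 (Section 3.03)."""
    D = determinantal_factors(a)
    alphas, prev = [], sp.Integer(1)
    for d in D:
        alphas.append(sp.Integer(0) if prev == 0 else sp.cancel(d / prev))
        prev = d
    return alphas

# A 2x2 example: every order-1 minor of [[lam, -1], [0, lam]] includes -1,
# so D_1 = 1, while D_2 = lam**2; the invariant factors are 1 and lam**2.
a = sp.Matrix([[lam, -1], [0, lam]])
print(invariant_factors(a))   # -> [1, lambda**2]
```

Multiplying `a` on the left by an elementary transformation such as `Matrix([[1, 1], [0, 1]])` leaves these factors unchanged, in agreement with §3.03.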
The maximum degree of $|a\lambda + b|$ is $n$ and therefore $\sum_{i,j} \nu_{ij} \le n$, where the equality sign holds only when $|a| \neq 0$. The modifications necessary when the homogeneous polynomial $a\lambda + b\mu$ is taken in place of $a\lambda + b$ are obvious and are left to the reader.

### 3.06 Matrices with given elementary divisors.

The direct investigation of the form of a matrix with given elementary divisors is somewhat tedious. It can be carried out in a variety of ways; but, since the form once found is easily verified, we shall here state this form and give the verification, merely saying in passing that it is suggested by the results of §2.07 together with a study of a matrix whose reduced characteristic function is $(\lambda - \lambda_1)^{\nu}$.

Theorem 6. If $\lambda_1, \lambda_2, \ldots, \lambda_s$ are any constants, not necessarily all different, and $\nu_1, \nu_2, \ldots, \nu_s$ are positive integers whose sum is $n$, and if $a_i$ is the array of $\nu_i$ rows and columns given by

$(10) \qquad \begin{pmatrix} \lambda_i & 1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_i & 1 \\ 0 & 0 & 0 & \cdots & 0 & \lambda_i \end{pmatrix}$

where each coordinate on the main diagonal equals $\lambda_i$, those on the parallel on its right are 1, and the remaining ones are 0, and if $a$ is the matrix of $n$ rows and columns given by

$(11) \qquad a = \begin{Vmatrix} a_1 & & & \\ & a_2 & & \\ & & \ddots & \\ & & & a_s \end{Vmatrix}$

composed of blocks of terms defined by (10) arranged so that the main diagonal of each lies on the main diagonal of $a$, the other coordinates being 0, then $\lambda - a$ has the elementary divisors

$(12) \qquad (\lambda - \lambda_1)^{\nu_1},\ (\lambda - \lambda_2)^{\nu_2},\ \ldots,\ (\lambda - \lambda_s)^{\nu_s}.$

In addition to using $a_i$ to denote the block given in (10), we shall also use it for the matrix having this block in the position indicated in (11) and zeros elsewhere.
In the same way, if $f_i$ is a block with $\nu_i$ rows and columns with 1's in the main diagonal and zeros elsewhere, we may also use $f_i$ for the corresponding matrix of order $n$. We can then write

$\lambda - a = \sum_i (\lambda f_i - a_i), \qquad f_i a = a_i = a f_i, \qquad \sum_i f_i = 1.$

The block of terms corresponding to $\lambda f_i - a_i$ then has the form

$(13) \qquad \begin{pmatrix} \lambda - \lambda_i & -1 & & \\ & \lambda - \lambda_i & \ddots & \\ & & \ddots & -1 \\ & & & \lambda - \lambda_i \end{pmatrix} \quad (\nu_i \text{ rows and columns})$

where only the non-zero terms are indicated. The determinant of these $\nu_i$ rows and columns is $(\lambda - \lambda_i)^{\nu_i}$, and this determinant has a first minor equal to $\pm 1$; the invariant factors of $\lambda f_i - a_i$, regarded as a matrix of order $\nu_i$, are therefore $1, 1, \ldots, 1, (\lambda - \lambda_i)^{\nu_i}$, and hence it can be reduced by elementary transformations to the diagonal form

$\begin{pmatrix} (\lambda - \lambda_i)^{\nu_i} & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{pmatrix}$

If we apply the same elementary transformations to the corresponding rows and columns of $\lambda - a$, the effect is the same as regards the block of terms $\lambda f_i - a_i$ (corresponding to $a_i$ in (11)), since all the other coordinates in the rows and columns which contain elements of this block are 0; moreover these transformations do not affect the remaining blocks $\lambda f_j - a_j$ ($j \neq i$) nor any 0 coordinate. Carrying out this process for $i = 1, 2, \ldots, s$ and permuting rows and columns, if necessary, we arrive at the form

$\begin{pmatrix} (\lambda - \lambda_1)^{\nu_1} & & & & & \\ & (\lambda - \lambda_2)^{\nu_2} & & & & \\ & & \ddots & & & \\ & & & (\lambda - \lambda_s)^{\nu_s} & & \\ & & & & 1 & \\ & & & & & \ddots \end{pmatrix}$

Suppose now that the notation is so arranged that $\lambda_1 = \lambda_2 = \cdots = \lambda_p = \alpha$, $\nu_1 \ge \nu_2 \ge \cdots \ge \nu_p$, but $\lambda_i \neq \alpha$ for $i > p$. The $n$th determinantal factor $D_n$ then contains $(\lambda - \alpha)$ to the power $\sum_{1}^{p} \nu_i$ exactly. Each minor of order $n - 1$ contains at least $p - 1$ of the factors

$(14) \qquad (\lambda - \alpha)^{\nu_1},\ (\lambda - \alpha)^{\nu_2},\ \ldots,\ (\lambda - \alpha)^{\nu_p},$

and in one the highest power $(\lambda - \alpha)^{\nu_1}$ is lacking; hence $D_{n-1}$ contains $(\lambda - \alpha)$ to exactly the power $\sum_{2}^{p} \nu_i$, and hence the $n$th invariant factor $\alpha_n$ contains it to exactly the $\nu_1$th power. Similarly the minors of order $n - 2$ each contain at least $p - 2$ of the factors (14) and one lacks the two factors of highest degree; hence $(\lambda - \alpha)$ is contained in $D_{n-2}$ to exactly the power $\sum_{3}^{p} \nu_i$ and in $\alpha_{n-1}$ to the power $\nu_2$. Continuing in this way we see that (14) gives the elementary divisors of $a$ which are powers of $(\lambda - \alpha)$ and, treating the other roots in the same way, we see that the complete list of elementary divisors is given by (12), as required by the theorem.

### 3.07 If $A$ is a matrix with the same elementary divisors as $a$, it follows from Theorem 5 that there is a matrix $P$ such that $A = PaP^{-1}$ and hence, if we choose in place of the fundamental basis $(e_1, e_2, \ldots, e_n)$ the basis $(Pe_1, Pe_2, \ldots, Pe_n)$, it follows from Theorem 6 of chapter 1 that (11) gives the form of $A$ relative to the new basis. This form is called the canonical form of $A$. It follows immediately from this that

$(15) \qquad P^{-1} A^k P = \begin{Vmatrix} a_1^k & & & \\ & a_2^k & & \\ & & \ddots & \\ & & & a_s^k \end{Vmatrix}$

where $a_i^k$ is the block of terms derived by forming the $k$th power of $a_i$ regarded as a matrix of order $\nu_i$. Since $D_n$ equals $|\lambda - a|$, it is the characteristic function of $a$ (or $A$) and, since $D_{n-1}$ is the highest common factor of the first minors, it follows from Theorem 3 of chapter 2 that $\alpha_n$ is the reduced characteristic function.
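As an editorial illustration (not part of the original text; sympy is assumed), Theorem 6 and the closing remark of §3.07 are easy to verify numerically for a small canonical matrix: the blocks (10) are assembled as in (11), the characteristic function is checked against the product of the elementary divisors, and the reduced characteristic function is seen to drop a repeated factor.

```python
# Illustrative sketch (assumes sympy): the canonical matrix (11) built from
# blocks (10), and a check of its characteristic function.
import sympy as sp

lam = sp.symbols('lambda')

def block(lam_i, nu_i):
    """The nu_i x nu_i array (10): lam_i on the diagonal, 1 on the parallel to its right."""
    return sp.Matrix(nu_i, nu_i,
                     lambda r, c: lam_i if r == c else (1 if c == r + 1 else 0))

def canonical(divisors):
    """The matrix a of (11): the blocks (10) strung along the main diagonal."""
    return sp.diag(*[block(l, n) for l, n in divisors])

# Elementary divisors (lam - 2)^2, (lam - 2), (lam - 3); here n = 4.
a = canonical([(2, 2), (2, 1), (3, 1)])

# D_n = |lam - a| = (lam - 2)^3 (lam - 3) is the characteristic function.
char = sp.factor(a.charpoly(lam).as_expr())
print(char)

# The reduced characteristic function is alpha_n = (lam - 2)^2 (lam - 3):
# it annihilates a, while the same product with a lower power of (a - 2) does not.
I4 = sp.eye(4)
assert (a - 2*I4)**2 * (a - 3*I4) == sp.zeros(4, 4)
assert (a - 2*I4) * (a - 3*I4) != sp.zeros(4, 4)
```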
If we add the $f$'s together in groups, each group consisting of all the $f$'s that correspond to the same value of $\lambda_i$, we get a set of idempotent matrices, say $\varphi_1, \varphi_2, \ldots, \varphi_r$, corresponding to the distinct roots of $a$, say $\alpha_1, \alpha_2, \ldots, \alpha_r$. These are the principal idempotent elements of $a$; for (i) $a\varphi_i = \varphi_i a$, (ii) $(a - \alpha_i)\varphi_i$ is nilpotent, (iii) $\sum \varphi_i = \sum f_i = 1$ and $\varphi_i \varphi_j = 0$ ($i \neq j$), so that the conditions of §2.11 are satisfied. When the same root $\alpha_i$ occurs in several elementary divisors, the corresponding $f$'s are called partial idempotent elements of $a$; they are not unique, as is seen immediately by taking $a = 1$.

If $\alpha$ is one of the roots of $A$, the form of $A - \alpha$ is sometimes important. Suppose that $\lambda_1 = \lambda_2 = \cdots = \lambda_p = \alpha$, $\lambda_i \neq \alpha$ ($i > p$), and set $b_i = a_i - \alpha f_i$, the corresponding array in the $i$th block of $a - \alpha$ (cf. (10), (11)) being

$(16) \qquad \begin{pmatrix} \lambda_i - \alpha & 1 & & \\ & \lambda_i - \alpha & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i - \alpha \end{pmatrix}$

In the case of the first $p$ blocks $\lambda_i - \alpha = 0$ and the corresponding $b_1, b_2, \ldots, b_p$ are nilpotent, the index of $b_i$ being $\nu_i$ and, assuming $\nu_1 \ge \nu_2 \ge \cdots \ge \nu_p$ as before, $(A - \alpha)^k$ has the form

$P^{-1} (A - \alpha)^k P = \begin{Vmatrix} b_1^k & & & \\ & b_2^k & & \\ & & \ddots & \\ & & & b_s^k \end{Vmatrix}$

or, when $k \ge \nu_1$,

$(17) \qquad P^{-1} (A - \alpha)^k P = \begin{Vmatrix} 0 & & & & & \\ & \ddots & & & & \\ & & 0 & & & \\ & & & b_{p+1}^k & & \\ & & & & \ddots & \\ & & & & & b_s^k \end{Vmatrix}$

Since none of the diagonal coordinates of $b_{p+1}, \ldots, b_s$ are 0, the rank of $(A - \alpha)^k$, when $k \ge \nu_1$, is exactly $n - \sum_{1}^{p} \nu_i = \sum_{p+1}^{s} \nu_i$, and the nullspace of $(A - \alpha)^k$ is then the same as that of $(A - \alpha)^{\nu_1}$.
Hence, if there exists a vector $z$ such that $(A - \alpha)^k z = 0$ but $(A - \alpha)^{k-1} z \neq 0$, then (i) $k \le \nu_1$, (ii) $z$ lies in the nullspace of $(A - \alpha)^{\nu_1}$.

### 3.08 Invariant vectors.

If $A$ is a matrix with the elementary divisors given in the statement of Theorem 6, then $\lambda - A$ is equivalent to $\lambda - a$, and by Theorem 5 there is a non-singular matrix $P$ such that $A = PaP^{-1}$. If we denote the unit vectors corresponding to the rows and columns of $a_i$ in (10) by $e_1^i, e_2^i, \ldots, e_{\nu_i}^i$ and set

$(18) \qquad x_j^i = \begin{cases} P e_j^i & (j = 1, 2, \ldots, \nu_i;\ i = 1, 2, \ldots, s) \\ 0 & (j < 1 \text{ or } > \nu_i;\ \text{or } i < 1 \text{ or } > s), \end{cases}$

then

$a e_1^i = \lambda_i e_1^i, \quad a e_2^i = \lambda_i e_2^i + e_1^i, \quad \ldots, \quad a e_{\nu_i}^i = \lambda_i e_{\nu_i}^i + e_{\nu_i - 1}^i,$

and hence

$(19) \qquad A x_j^i = \lambda_i x_j^i + x_{j-1}^i \qquad (j = 1, 2, \ldots, \nu_i;\ i = 1, 2, \ldots, s).$

The vectors $x_j^i$ are called a set of invariant vectors of $A$. The matrix $A$ can be expressed in terms of its invariant vectors as follows. We have from (10)

$a_i = \sum_j (\lambda_i e_j^i + e_{j-1}^i) S e_j^i = \sum_j e_j^i S (\lambda_i e_j^i + e_{j+1}^i)$

and hence, if

$(20) \qquad y_j^i = (P')^{-1} e_j^i = (PP')^{-1} x_j^i,$

then

$(21) \qquad A = \sum_{i,j} (\lambda_i x_j^i + x_{j-1}^i) S y_j^i = \sum_{i,j} x_j^i S (\lambda_i y_j^i + y_{j+1}^i),$

where it should be noted that the $y$'s form a system reciprocal to the $x$'s and that each of these systems forms a basis of the vector space, since $|P| \neq 0$. If we form the transverse of $A$, we have from (21)

$(22) \qquad A' = \sum_{i,j} (\lambda_i y_j^i + y_{j+1}^i) S x_j^i,$

so that the invariant vectors of $A'$ are obtained by forming the system reciprocal to the $x$'s and inverting the order in each group of vectors corresponding to a given elementary divisor; thus

$A' y_{\nu_i}^i = \lambda_i y_{\nu_i}^i, \quad A' y_{\nu_i - 1}^i = \lambda_i y_{\nu_i - 1}^i + y_{\nu_i}^i, \quad \ldots, \quad A' y_1^i = \lambda_i y_1^i + y_2^i.$

A matrix $A$ and its transverse clearly have the same elementary divisors and are therefore similar.
The matrix which transforms $A$ into $A'$ can be given explicitly as follows. Let $q_i$ be the symmetric array

$\begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix} \quad (\nu_i \text{ rows and columns}).$

It is easily seen that $q_i a_i = a_i' q_i$ and hence, if $Q$ is the matrix

$Q = \begin{Vmatrix} q_1 & & & \\ & q_2 & & \\ & & \ddots & \\ & & & q_s \end{Vmatrix},$

we have $Qa = a'Q$, and a short calculation gives $A' = R^{-1} A R$, where $R$ is the symmetric matrix

$(23) \qquad R = P Q^{-1} P' = P Q P'.$

If the elementary divisors of $A$ are simple, then $Q = 1$ and $R = PP'$.

If the roots $\lambda_i$ of the elementary divisors (12) are all different, the nullity of $A - \lambda_i$ is 1, and hence $x_1^i$ is unique up to a scalar multiplier. But the remaining $x_j^i$ are not unique. In fact, if the $x$'s denote one choice of the invariant vectors, we may take in place of $x_j^i$

$z_j^i = k_{i1} x_j^i + k_{i2} x_{j-1}^i + \cdots + k_{ij} x_1^i \qquad (j = 1, 2, \ldots, \nu_i),$

where the $k$'s are any constant scalars subject to the condition $k_{i1} \neq 0$.

Suppose now that $\lambda_1 = \lambda_2 = \cdots = \lambda_p = \alpha$, $\lambda_i \neq \alpha$ ($i > p$), and $\nu_1 \ge \nu_2 \ge \cdots \ge \nu_p$ as in §3.07. We shall say that $z_1, z_2, \ldots, z_k$ is a chain of invariant vectors belonging to the exponent $k$ if

$(24) \qquad z_i = (A - \alpha)^{k-i} z_k \quad (i = 1, 2, \ldots, k), \qquad (A - \alpha)^k z_k = 0.$

It is also convenient to set $z_i = 0$ for $i < 0$ or $> k$. We have already seen that $k \le \nu_1$ and that $z_k$ lies in the nullspace of $(A - \alpha)^{\nu_1}$; and from (17) it is seen that the nullspace of $(A - \alpha)^{\nu_1}$ has the basis $(x_j^i;\ j = 1, 2, \ldots, \nu_i;\ i = 1, 2, \ldots, p)$.
Since $z_k$ belongs to the nullspace of $(A - \alpha)^{\nu_1}$, we may set

$(25) \qquad z_k = \sum_{i=1}^{p} \sum_{j=1}^{\nu_i} \xi_{ij} x_j^i,$

and therefore, by repeated application of (19) with $\lambda_i = \alpha$,

$(26) \qquad (A - \alpha)^r z_k = \sum_{i,j} \xi_{ij} x_{j-r}^i.$

From this it follows that, in order that $(A - \alpha)^k z_k = 0$, only values of $j$ which are less than or equal to $k$ can actually occur in (25), and in order that $(A - \alpha)^{k-1} z_k \neq 0$ at least one $\xi_{ik}$ must be different from 0; hence

$(27) \qquad \begin{aligned} z_k &= \sum_i (\xi_{ik} x_k^i + \xi_{i,k-1} x_{k-1}^i + \cdots) \\ z_{k-1} &= \sum_i (\xi_{ik} x_{k-1}^i + \xi_{i,k-1} x_{k-2}^i + \cdots) \\ &\;\;\vdots \\ z_1 &= \sum_i \xi_{ik} x_1^i. \end{aligned}$

Finally, if we impose the restriction that $z_k$ does not belong to any chain pertaining to an exponent greater than $k$, it is necessary and sufficient that $k$ be one of the numbers $\nu_1, \nu_2, \ldots, \nu_p$ and that no value of $i$ corresponding to an exponent greater than $k$ occur in (27).

### 3.09 The actual determination of the vectors $x_j^i$ can be carried out by the processes of §3.02 and §3.04, or alternatively as follows. Suppose that the first $s_1$ of the exponents $\nu_i$ equal $n_1$, the next $s_2$ equal $n_2$, and so on, and finally the last $s_q$ equal $n_q$. Let $R_1$ be the nullspace of $(A - \alpha)^{n_1}$ and $R_1'$ the nullspace of $(A - \alpha)^{n_1 - 1}$; then $R_1$ contains $R_1'$. If $M_1$ is a space complementary to $R_1'$ in $R_1$, then for any vector $x$ in $M_1$ we have $(A - \alpha)^r x = 0$ only when $r \ge n_1$.
Also, if $x_1, x_2, \ldots, x_{m_1}$ is a basis of $M_1$, the vectors

$(28) \qquad (A - \alpha)^r x_i \qquad (r = 0, 1, \ldots, n_1 - 1)$

are linearly independent; for if

$\sum_{r=s}^{n_1 - 1} \sum_i \xi_{ir} (A - \alpha)^r x_i = 0,$

some $\xi_{is}$ being different from 0, then multiplying by $(A - \alpha)^{n_1 - s - 1}$ we have

$(A - \alpha)^{n_1 - 1} \sum_i \xi_{is} x_i = 0,$

which is only possible if every $\xi_{is} = 0$, since $x_1, x_2, \ldots, x_{m_1}$ form a basis of $M_1$ and for no vector of $M_1$ other than 0 is $(A - \alpha)^{n_1 - 1} x = 0$. The space defined by (28) clearly lies in $R_1$; we shall denote it by $\wp_1$. If we set $R_1 = R_2 + \wp_1$, where $R_2$ is complementary to $\wp_1$ in $R_1$, then $R_2$ contains all vectors which are members of sets belonging to the exponents $n_2, n_3, \ldots$ but not lying in sets with the exponent $n_1$. We now set $R_2 = R_2' + M_2$, where $R_2'$ is the subspace of vectors $x$ in $R_2$ such that $(A - \alpha)^{n_2 - 1} x = 0$. As before, the elements of $M_2$ generate sets with exponent $n_2$ but are not members of sets with higher exponents; and by a repetition of this process we can determine step by step the sets of invariant vectors corresponding to each exponent $n_i$.
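As a final editorial illustration (not part of the original text; sympy is assumed), the nullspace computations of §3.07–§3.09 can be carried out by machine for a small canonical matrix with $\alpha = 0$ and exponents $\nu = (2, 1)$: the nullities of $(A - \alpha)^k$ grow until $k$ reaches $\nu_1$ and are then stationary, and a chain of invariant vectors in the sense of (24) is read off directly.

```python
# Illustrative sketch (assumes sympy): nullspaces of powers of A - alpha and a
# chain of invariant vectors, for alpha = 0 and elementary divisors lam^2, lam.
import sympy as sp

A = sp.Matrix([[0, 1, 0],
               [0, 0, 0],
               [0, 0, 0]])

# Nullities of (A - alpha)^k: 2, then 3, then stationary (Section 3.07).
nullities = [len((A**k).nullspace()) for k in (1, 2, 3)]
print(nullities)   # -> [2, 3, 3]

# A chain belonging to the exponent k = 2 (equation (24) with alpha = 0):
# z_1 = (A - alpha) z_2, with (A - alpha)^2 z_2 = 0 but (A - alpha) z_2 != 0.
z2 = sp.Matrix([0, 1, 0])
z1 = A * z2
assert A * z1 == sp.zeros(3, 1) and z1 != sp.zeros(3, 1)
```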
http://math.stackexchange.com/questions/105821/finding-all-the-numbers-that-fit-x-y-z
# Finding all the numbers that fit $x! + y! = z!$ I have the formula $x! + y! = z!$ and I'm looking for positive integers that make it true. Upon inspection it seems that x = y = 1 and z = 2 is the only solution. The problem is how to show it. From the definition of the factorial function we know $x! = x(x-1)(x-2)...(2)(1)$ So we can do something like this: $$[x(x-1)(x-2)...(2)(1)] + [y(y-1)(y-2)...(2)(1)] = [z(z-1)(z-2)...(2)(1)]$$ we can then factor all of the common terms out on the LHS. $$[...(2)(1)][x(x-1)(x-2)... + y(y-1)(y-2)...] = [z(z-1)(z-2)...(2)(1)]$$ and divide the common terms out of the right hand side $$[x(x-1)(x-2)...] + [y(y-1)(y-2)...] = [z(z-1)(z-2)...]$$ I'm stuck on how to proceed and how to make a clearer argument that there is only the one solution (if indeed there is only the one solution). If anybody can provide a hint as to how to proceed I would appreciate it. - Suppose without loss of generality that $x \le y$. Then $z!\le 2y!$. – Chris Eagle Feb 5 '12 at 0:23 Does Modular Arithmetic help? – user21436 Feb 5 '12 at 0:35 – Martin Sleziak Aug 14 '14 at 4:29 If $x, y \in \{0,1\}$, then we can always find a solution $z \in \{0, 1, 2\}$. The rest of the post will show that there are no other solutions. Let us assume $y \geq x \geq 2$ without loss of generality. Dividing both sides by $x!$ gives $$1 + y(y-1)\cdots(x+1) = z(z-1)\cdots(x+1).$$ If $y > x$, we see $x+1$ divides the right-hand side but not the left-hand side ($x+1$ divides one term in the sum but not the other), in which case there are no solutions. If $y = x$, we may reduce the problem to that of solving $2y! = z!$. Since $y \geq 2$, the left-hand side always has more factors of $2$ than the right-hand side, in which case there are no solutions. - If $x>1$ and $y \leq x$ then $$x!<x!+y! \leq 2x!<(x+1)!$$ therefore $$x!<z!<(x+1)!$$ so that the only solutions are $(x,y,z)=(0,0,2),(0,1,2),(1,0,2),(1,1,2)$. - How does the $z!$ come into play in your second equation? 
– Roland Feb 15 at 19:42 We already have $z! = x! + y!$ – jelec Feb 15 at 19:49 Very shortly, $(x,y,z)=(0,0,2),(0,1,2),(1,0,2),(1,1,2)$ are the only solutions because if WLOG $1<x\le y<z$ then dividing by $y!$ yields $$1+\frac{1}{(x+1)(x+2)\cdots y}=(y+1)(y+2)\cdots z,$$ whose RHS is an integer while its LHS is not. - The problem is how to show it One way may be using Stirling's formula, which approximates $\ln(n!)$ as follows: $\ln(n!) \approx n \ln (n) - n$ so you could write: $x \ln(x) - x + y \ln(y) - y \approx z \ln(z) - z$ $z - x - y \approx z \ln(z) - x \ln (x) - y \ln (y)$ one solution to this may be derived by: $z=z\ln(z)$ and $x=x\ln(x)$ and $y=y\ln(y)$ that is: $1=\ln(z)$ and $1 = \ln(x)$ and $1 = \ln(y)$ this leads us to the fact that $x, y, z$ are all between $0$ and $e+m$ where $m$ is a small integer greater than or equal to zero. I used the $m$ here since the Stirling formula is not accurate, hence the values may not be exact. One could then try manually integers in the range $[0,2+m]$ and construct the different combinations to find at least $1$ solution. - Unfortunately for the assignment we've been instructed to stay in the integers. So we can't use ln() and such. – AvatarOfChronos Feb 5 '12 at 0:56 A harder problem is x! y! = z!. – marty cohen Feb 5 '12 at 3:21 @Brian M. Scott - Thanks for editing! – NoChance Feb 5 '12 at 6:56 If $x!+y!=z!$ in positive integers $x,y,z$, we can assume $x\le y$, and it's clear that $y\le z$. These inequalities imply $x!\mid y!$ and $y!\mid z!$. But $$y!\mid z!\implies y!\mid(z!-y!)\implies y!\mid x!$$ and this implies $y=x$, so that $z!=2x!$. The only solution to this is $x=1$, $z=2$, so we get $(x,y,z)=(1,1,2)$ as the only solution in positive integers. -
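As a quick editorial sanity check (not part of the thread): a brute-force search over a small range agrees with the arguments above that $(1,1,2)$ is the only solution in positive integers, at least within the searched range.

```python
# Hypothetical check script, not from the thread: search x! + y! = z!
# over positive integers in a small range.
from math import factorial

solutions = [(x, y, z)
             for x in range(1, 10)
             for y in range(1, 10)
             for z in range(1, 12)
             if factorial(x) + factorial(y) == factorial(z)]
print(solutions)   # -> [(1, 1, 2)]
```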
http://math.stackexchange.com/questions/372469/solving-recurrence-equation-using-exponential-generating-functions
# Solving recurrence equation using exponential generating functions The recurrence is $a_n = (n-1) a_{n-1} + (n-2)a_{n-2}$. I tried using exponential generating functions and have problems with it (the second term mostly). Further, can this be solved without reducing it to a differential equation? - You could try $b_n=na_n$. Then your recurrence becomes $\frac{b_n}{n}=b_{n-1}+b_{n-2}$ –  vadim123 Apr 25 '13 at 13:58 That's the recurrence relation for the derangements problem (hatcheck problem), which is solved using calculus. I guess there is no way to circumvent that little devil, calculus that is.. –  Vigneshwaren Apr 25 '13 at 16:17 @Vigneshwaren, it is close to the derangements recurrence $D_n = (n - 1) (D_{n - 1} + D_{n - 2})$, but not the same. –  vonbrand Apr 25 '13 at 18:03 Define $A(z) = \sum_{n \ge 0} a_n \frac{z^n}{n!}$. Write: $$a_{n + 2} = (n + 1) a_{n + 1} + n a_n$$ From properties of exponential generating functions: \begin{align*} A''(z) &= (z \frac{d}{d z} + 1) A'(z) + z A'(z) \\ A''(z) &= z A''(z) + (1 + z) A'(z) \end{align*} This looks second order, but is a homogeneous linear first order differential equation for $A'(z)$. Courtesy of maxima we have for some $c_1$: $$A'(z) = \frac{c_1 e^{-z}}{(1 - z)^2}$$ This gets ugly... maxima integrates this to: $$A(z) = c_2 + \frac{c_1 E_2(z - 1)}{1 - z}$$ where: $$E_2(z) = z \int_z^\infty \frac{e^{-t} d t}{t}$$ NIST has details on this function. Expanding $A'(z)$ in series (multiply the exponential by the series for $(1 - z)^{-2}$) and integrating term by term might be a nicer bet... \begin{align*} A'(z) &= c_1 \sum_{k \ge 0} \binom{-2}{k} (-z)^k e^{-z} \\ &= c_1 \sum_{k \ge 0} \binom{k + 1}{1} z^k e^{-z} \\ &= c_1 \sum_{k \ge 0} (k + 1) z^k e^{-z} \\ A(z) &= c_2 - c_1 \sum_{k \ge 0} (k + 1) \int_z^\infty t^k e^{-t} d t \end{align*} Too bad that the integrals don't give any sort of nice polynomials... would need to set up a recurrence for those by integration by parts. Edit Do I feel dumb...
when I do know that if: $$C(z) = \sum_{n \ge 0} c_n z^n$$ then: $$\frac{C(z)}{1 - z} = \sum_{n \ge 0} \left( \sum_{0 \le k \le n} c_k \right) z^n$$ For an exponential generating function it is $a_0 = A(0)$, $a_1 = A'(0)$. So we can say: \begin{align*} A'(z) &= \frac{a_1 e^{-z}}{(1 - z)^2} \\ &= a_1 \sum_{n \ge 0} \left( \sum_{0 \le r \le n} \sum_{0 \le s \le r} \frac{(-1)^s}{s!} \right) z^n \end{align*} This gives: $$\frac{a_n}{n!} = [n = 0] a_0 + [n > 0] a_1 \sum_{0 \le r \le n - 1} \sum_{0 \le s \le r} \frac{(-1)^s}{s!}$$ Summing by parts: $$\sum_{0 \le r \le n - 1} \sum_{0 \le s \le r} \frac{(-1)^s}{s!} = n \sum_{0 \le s \le n} \frac{(-1)^s}{s!} - 0 \cdot 1 - n \cdot \frac{(-1)^n}{n!} = n \sum_{0 \le s \le n - 1} \frac{(-1)^s}{s!}$$ So: $$a_n = [n = 0] a_0 + a_1 n! \cdot n \sum_{0 \le s \le n - 1} \frac{(-1)^s}{s!}$$ (I think something isn't quite right here, but I'm tired...) - That was brilliant and helpful, and this is gonna take some of my time. Thanks –  Vigneshwaren Apr 26 '13 at 16:40 By vadim123's hint, the substitution $b_{n + 1} = n a_n$ gives: $$b_{n + 1} = n (b_n + b_{n - 1})$$ Defining the exponential generating function $B(z) = \sum_{n \ge 0} b_n \frac{z^n}{n!}$, by properties of exponential generating functions: \begin{align*} B'(z) &= z B'(z) + z \frac{d}{dz} \int B(z) \\ B'(z) (1 - z) &= z B(z) \\ B(z) &= \frac{c e^{-z}}{1 - z} \end{align*} From here: \begin{align*} b_n &= c n! \sum_{0 \le k \le n} \frac{(-1)^k}{k!} \\ a_n &= c (n + 1) (n - 1)! \sum_{0 \le k \le n - 1} \frac{(-1)^k}{k!} \end{align*} Ugly as they come... but at least $a_n \approx c (n + 1) (n - 1)! e^{-1}$ - The original problem which follows this recursion is: A good permutation of {1,2,..., n} is one which doesn't have two consecutive numbers (like {1,2} etc.). Find the number of such permutations. I figured the recursion, which was fun to construct. I was later told that this was somehow related to Euler's totient function $\phi$, which I don't see how though.
But most importantly the proof of this recursion is what makes it fun and interesting. - You should really add this as an edit to your question. –  vonbrand Apr 26 '13 at 18:54
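Numerically, everything in this thread ties together: with $D_m$ the derangement numbers, the recurrence with $a_1 = 1$ produces $a_n = D_{n+1}/n = 1, 1, 3, 11, 53, \ldots$, and brute force confirms that this counts the good permutations described above. A sanity-check sketch (function names are illustrative, not from the thread):

```python
from itertools import permutations
from math import factorial

def a_direct(n):
    """Iterate a_n = (n-1) a_{n-1} + (n-2) a_{n-2} with a_1 = a_2 = 1 (a_0 drops out)."""
    vals = [0, 1, 1]
    for m in range(3, n + 1):
        vals.append((m - 1) * vals[m - 1] + (m - 2) * vals[m - 2])
    return vals[n]

def subfactorial(m):
    """D_m = m! * sum_{k=0}^{m} (-1)^k / k!, the number of derangements of m objects."""
    return sum((-1) ** k * factorial(m) // factorial(k) for k in range(m + 1))

def count_good(n):
    """Brute-force count of permutations of {1,...,n} with no i immediately followed by i+1."""
    return sum(
        all(q - p != 1 for p, q in zip(perm, perm[1:]))
        for perm in permutations(range(1, n + 1))
    )

for n in range(1, 8):
    # recurrence, shifted-derangement closed form, and direct enumeration all agree
    assert a_direct(n) == subfactorial(n + 1) // n == count_good(n)
```

The chained assertion is the whole point: all three descriptions of the sequence coincide for every $n$ checked.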
https://socratic.org/questions/how-do-you-solve-x-11
# How do you solve |x|=11?

$x = \pm 11$

#### Explanation:

The absolute value sign (the two vertical lines that the $x$ is sitting between) means that whatever the value of $x$ is, whether positive or negative, the result of the operation is positive. For instance, if $x = 11$, then $|11| = 11$. However, if we set $x = -11$, then $|-11| = 11$. So $x$ can be either $11$ or $-11$.
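For what it's worth, the answer can be confirmed mechanically (a trivial sketch):

```python
# the only integers in a sample range satisfying |x| = 11 are -11 and 11
solutions = sorted(x for x in range(-100, 101) if abs(x) == 11)
print(solutions)  # [-11, 11]
```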
https://en.wikipedia.org/wiki/Sagnac_effect
# Sagnac effect

Figure 1. Schematic representation of a Sagnac interferometer.

The Sagnac effect, also called Sagnac interference, named after French physicist Georges Sagnac, is a phenomenon encountered in interferometry that is elicited by rotation. The Sagnac effect manifests itself in a setup called a ring interferometer. A beam of light is split and the two beams are made to follow the same path but in opposite directions. On return to the point of entry the two light beams are allowed to exit the ring and undergo interference. The relative phases of the two exiting beams, and thus the position of the interference fringes, are shifted according to the angular velocity of the apparatus. In other words, when the interferometer is at rest, the two beams take equal times to travel around the ring. However, when the system is spun, the beam travelling in the direction of rotation must go farther to complete a circuit than the counter-propagating beam, so the two beams return with a relative delay. This arrangement is also called a Sagnac interferometer. Georges Sagnac set up this experiment to prove the existence of the aether that Einstein's theory of special relativity had discarded.[1][2]

A gimbal mounted mechanical gyroscope remains pointing in the same direction after spinning up, and thus can be used as a rotational reference for an inertial navigation system. With the development of so-called laser gyroscopes and fiber optic gyroscopes based on the Sagnac effect, the bulky mechanical gyroscope is replaced by one having no moving parts in many modern inertial navigation systems. The principles behind the two devices are different, however. A conventional gyroscope relies on the principle of conservation of angular momentum whereas the sensitivity of the ring interferometer to rotation arises from the invariance of the speed of light for all inertial frames of reference.

## Description and operation

Figure 2.
A guided wave Sagnac interferometer, or fibre optic gyroscope, can be realized using an optical fiber in a single or multiple loops.

Typically three or more mirrors are used, so that counter-propagating light beams follow a closed path such as a triangle or square (Fig. 1). Alternatively, fiber optics can be employed to guide the light through a closed path (Fig. 2). If the platform on which the ring interferometer is mounted is rotating, the interference fringes are displaced compared to their position when the platform is not rotating. The amount of displacement is proportional to the angular velocity of the rotating platform. The axis of rotation does not have to be inside the enclosed area. The phase shift of the interference fringes is proportional to the platform's angular velocity ${\displaystyle {\boldsymbol {\omega }}}$ and is given by a formula originally derived by Sagnac:

${\displaystyle \Delta \phi \approx {\frac {8\pi }{\lambda c}}{\boldsymbol {\omega }}\cdot \mathbf {A} }$

where ${\displaystyle \mathbf {A} }$ is the oriented area of the loop and ${\displaystyle \lambda }$ the wavelength of light. The effect is a consequence of the different times it takes right and left moving light beams to complete a full round trip in the interferometer ring. The difference in travel times, when multiplied by the optical frequency ${\displaystyle c/\lambda }$, determines the phase difference ${\displaystyle \Delta \phi }$. The rotation thus measured is an absolute rotation, that is, the platform's rotation with respect to an inertial reference frame.

## History of aether experiments

Early suggestions to build a giant ring interferometer to measure the rotation of the Earth were made by Oliver Lodge in 1897, and then by Albert Abraham Michelson in 1904. They hoped that with such an interferometer, it would be possible to decide between the idea of a stationary aether, and an aether which is completely dragged by the Earth.
That is, if the hypothetical aether were carried along by the Earth (or by the interferometer), the result would be negative, while a stationary aether would give a positive result.[3][4][5] An experiment conducted in 1911 by Franz Harress, aimed at making measurements of the Fresnel drag of light propagating through moving glass, was in 1920 recognized by Laue as actually constituting a Sagnac experiment. Not aware of the Sagnac effect, Harress had realized the presence of an "unexpected bias" in his measurements, but was unable to explain its cause.[6] The first description of the Sagnac effect in the framework of special relativity was given by Max von Laue in 1911,[7][8] two years before Sagnac conducted his experiment. Continuing the theoretical work of Michelson (1904), von Laue confined himself to an inertial frame of reference (which he called a "valid" reference frame), and in a footnote he wrote "a system which rotates in respect to a valid system ${\displaystyle K^{0}}$ is not valid".[7] Assuming constant light speed ${\displaystyle c}$, and setting the rotational velocity as ${\displaystyle \omega }$, he computed the propagation time ${\displaystyle \tau _{+}}$ of one ray and ${\displaystyle \tau _{-}}$ of the counter-propagating ray, and consequently obtained the time difference ${\displaystyle \Delta \tau =\tau _{+}-\tau _{-}}$. He concluded that this interferometer experiment would indeed produce (when restricted to terms of first order in ${\displaystyle v/c}$) the same positive result for both special relativity and the stationary aether (the latter he called "absolute theory" in reference to the 1895 theory of Lorentz). He also concluded that only complete-aether-drag models (such as the ones of Stokes or Hertz) would give a negative result.[7] In practice, the first interferometry experiment aimed at observing the correlation of angular velocity and phase-shift was performed by the French scientist Georges Sagnac in 1913.
Its purpose was to detect "the effect of the relative motion of the ether".[1][2] Sagnac believed that his results constituted proof of the existence of a stationary aether. However, as explained above, Max von Laue already showed in 1911 that this effect is consistent with special relativity.[7][8] Unlike the carefully prepared Michelson–Morley experiment which was set up to prove an aether wind caused by earth drag, the Sagnac experiment could not prove this type of aether wind, because a universal aether would affect all parts of the rotating light path equally. Einstein was fully aware of the phenomenon of the Sagnac effect through the earlier experimentation of Franz Harress, mathematically analyzed in an article by Paul Harzer, entitled "Dragging of Light in Glass and Aberration" in 1914.[9] This was rebutted by Einstein in his articles "Observation on P. Harzer's Article: Dragging of Light in Glass and Aberration"[10] and "Answer to P. Harzer's Reply".[11] After the mathematical argument in the first article, Einstein replied: "As I have shown, the frequency of the light relative to the medium through which it is applied is decisive for the magnitude k; because this determines the speed of the light relative to the medium. In our case, it is a light process which, in relation to the rotating prism system, is to be understood as a stationary process. From this it follows that the frequency of the light relative to the moving prisms, and also the magnitude k, is the same for all prisms. This repudiates Mr Harzer's reply." (1914) In 1920 von Laue continued his own theoretical work of 1911, describing the Harress experiment and showing the role of the Sagnac effect in this experiment.[6] Laue said that in the Harress experiment (in which light traverses glass) there was a calculable difference in time due to both the dragging of light (which follows from the relativistic velocity addition in moving media, i.e.
in moving glass) and "the fact that every part of the rotating apparatus runs away from one ray, while it approaches the other one", i.e. the Sagnac effect. He acknowledged that this latter effect alone could cause the time variance and, therefore, "the accelerations connected with the rotation in no way influence the speed of light."[6] While Laue's explanation is based on inertial frames, Paul Langevin (1921, 1937) and others described the same effect when viewed from rotating reference frames (in both special and general relativity, see Born coordinates). So when the Sagnac effect should be described from the viewpoint of a corotating frame, one can use ordinary rotating cylindrical coordinates and apply them to the Minkowski metric, which results in the so-called Born metric or Langevin metric.[12][13][14] From these coordinates, one can derive the different arrival times of counter-propagating rays, an effect which was shown by Paul Langevin (1921).[15] Or when these coordinates are used to compute the global speed of light in rotating frames, different apparent light speeds are derived depending on the orientation, an effect which was shown by Langevin in another paper (1937).[16] It should be noted that this does not contradict special relativity and the above explanation by von Laue that the speed of light is not affected by accelerations: this apparent variable light speed in rotating frames arises only if rotating coordinates are used, whereas if the Sagnac effect is described from the viewpoint of an external inertial coordinate frame the speed of light of course remains constant – so the Sagnac effect arises no matter whether one uses inertial coordinates (see the formulas in section #Theories below) or rotating coordinates (see the formulas in section #Reference frames below). That is, special relativity in its original formulation was adapted to inertial coordinate frames, not rotating frames.
Einstein in his paper introducing special relativity stated, "light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body."[17] Einstein specifically stated that light speed is only constant in the vacuum of empty space, using equations that only held in linear and parallel inertial frames. However, when Einstein started to investigate accelerated reference frames, he noticed that "the principle of the constancy of light must be modified" for accelerating frames of reference.[18] Max von Laue in his 1920 paper gave serious consideration to the effect of general relativity on the Sagnac effect, stating: "General relativity would of course be capable of giving some statements about it, and we want to show at first that no noticeable influences of acceleration are expected according to it." He makes a footnote regarding discussions with German physicist Wilhelm Wien.[6] The reason for looking at general relativity is that Einstein's theory of general relativity predicted that light would slow down in a gravitational field, which is why it could predict the curvature of light around a massive body. Under general relativity, there is the equivalence principle, which states that gravity and acceleration are equivalent. Spinning or accelerating an interferometer creates a gravitational effect. "There are, however, two different types of such [non-inertial] motion; it may for instance be acceleration in a straight line, or circular motion with constant speed."[19] Also, Irwin Shapiro in 1964 explained general relativity saying, "the speed of a light wave depends on the strength of the gravitational potential along its path."
This is called the Shapiro delay.[20] However, since the gravitational field would have to be significant, Laue (1920) concluded it is more likely that the effect is a result of changing the distance of the path by its movement through space.[6] "The beam traveling around the loop in the direction of rotation will have farther to go than the beam traveling counter to the direction of rotation, because during the period of travel the mirrors and detector will all move (slightly) toward the counter-rotating beam and away from the co-rotating beam. Consequently the beams will reach the detector at slightly different times, and slightly out of phase, producing optical interference 'fringes' that can be observed and measured."[21] In 1926, an ambitious ring interferometry experiment was set up by Albert Michelson and Henry Gale. The aim was to find out whether the rotation of the Earth has an effect on the propagation of light in the vicinity of the Earth. The Michelson–Gale–Pearson experiment was a very large ring interferometer (a perimeter of 1.9 kilometers), large enough to detect the angular velocity of the Earth. The outcome of the experiment was that the angular velocity of the Earth as measured by astronomy was confirmed to within measuring accuracy. The ring interferometer of the Michelson–Gale experiment was not calibrated by comparison with an outside reference (which was not possible, because the setup was fixed to the Earth). From its design it could be deduced where the central interference fringe ought to be if there were zero shift. The measured shift was 230 parts in 1000, with an accuracy of 5 parts in 1000. The predicted shift was 237 parts in 1000.[22]

### The Wang experiment

Modified versions of the Sagnac experiment have been made by Wang et al.[23] in configurations similar to those shown in Fig. 3.

Figure 3. A rigid Sagnac interferometer, shown on the left, versus a deformable Wang interferometer shown on the right.
The Wang interferometer does not move like a rigid body, and Sagnac's original formula does not apply, as the angular frequency of rotation ${\displaystyle \omega }$ is not defined. Wang et al. verified experimentally that a generalized Sagnac formula applies:

${\displaystyle \Delta \phi \approx {\frac {4\pi }{\lambda c}}\oint \mathbf {v} \cdot d\mathbf {x} }$

## Relativistic derivation of Sagnac formula

Fig. 4: A closed optical fiber moving arbitrarily in space without stretching.

Consider a ring interferometer where two counter-propagating light beams share a common optical path determined by a loop of an optical fiber, see Figure 4. The loop may have an arbitrary shape, and can move arbitrarily in space. The only restriction is that it is not allowed to stretch. (The case of a circular ring interferometer rotating about its center in free space is recovered by taking the index of refraction of the fiber to be 1.) Consider a small segment of the fiber, whose length in its rest frame is ${\displaystyle d\ell '}$. The time intervals, ${\displaystyle dt'_{\pm }}$, it takes the left and right moving light rays to traverse the segment in the rest frame coincide and are given by

${\displaystyle dt'_{\pm }={n \over c}d\ell '}$

Let ${\textstyle d\ell =|d\mathbf {x} |}$ be the length of this small segment in the lab frame. By the relativistic length contraction formula, ${\textstyle d\ell '=\gamma d\ell \approx d\ell }$ correct to first order in the velocity ${\displaystyle \mathbf {v} }$ of the segment. The time intervals ${\displaystyle dt_{\pm }}$ for traversing the segment in the lab frame are given by the Lorentz transformation as:

${\displaystyle dt_{\pm }=\gamma \left(dt'\pm {\frac {\mathbf {v} \cdot d\mathbf {x} '}{c^{2}}}\right)\approx {\frac {n}{c}}d\ell \pm {\frac {\mathbf {v} }{c^{2}}}\cdot d\mathbf {x} }$

correct to first order in the velocity ${\displaystyle \mathbf {v} }$.
In general, the two beams will visit a given segment at slightly different times, but, in the absence of stretching, the length ${\textstyle d\ell }$ is the same for both beams. It follows that the time difference for completing a cycle for the two beams is

${\displaystyle \Delta T=\int \left(dt_{+}-dt_{-}\right)\approx {\frac {2}{c^{2}}}\oint \mathbf {v} \cdot d\mathbf {x} }$

Remarkably, the time difference is independent of the refraction index ${\displaystyle n}$ and the velocity of light in the fiber. Imagine a screen for viewing fringes placed at the light source (alternatively, use a beamsplitter to send light from the source point to the screen). Given a steady light source, interference fringes will form on the screen with a fringe displacement given by ${\textstyle \Delta \phi \approx {\frac {2\pi c}{\lambda }}\Delta T}$, where the factor ${\textstyle 2\pi c/\lambda }$ is the angular frequency of the light. This gives the generalized Sagnac formula[24]

${\displaystyle \Delta \phi \approx {\frac {4\pi }{\lambda c}}\oint \mathbf {v} \cdot d\mathbf {x} }$

In the special case that the fiber moves like a rigid body with angular frequency ${\displaystyle {\boldsymbol {\omega }}}$, the velocity is ${\textstyle \mathbf {v} ={\boldsymbol {\omega }}\times \mathbf {x} }$ and the line integral can be computed in terms of the area of the loop:

${\displaystyle \oint \mathbf {v} \cdot d\mathbf {x} =\oint {\boldsymbol {\omega }}\times \mathbf {x} \cdot d\mathbf {x} =\oint {\boldsymbol {\omega }}\cdot \mathbf {x} \times d\mathbf {x} =2\oint {\boldsymbol {\omega }}\cdot d\mathbf {A} =2{\boldsymbol {\omega }}\cdot \mathbf {A} }$

This gives the Sagnac formula for ring interferometers of arbitrary shape and geometry:

${\displaystyle \Delta \phi \approx {\frac {8\pi }{\lambda c}}{\boldsymbol {\omega }}\cdot \mathbf {A} }$

If one also allows for stretching one recovers the Fizeau interference formula.[24] The Sagnac effect has stimulated a century-long debate on its meaning and interpretation,[25][26][27] much of this
debate being surprising since the effect is perfectly well understood in the context of special relativity.

### Other generalizations

A relay of pulses that circumnavigates the Earth, verifying precise synchronization, is also recognized as a case requiring correction for the Sagnac effect. In 1984 a verification was set up that involved three ground stations and several GPS satellites, with relays of signals both going eastward and westward around the world.[28] In the case of a Sagnac interferometer a measure of difference in arrival time is obtained by producing interference fringes, and observing the fringe shift. In the case of a relay of pulses around the world the difference in arrival time is obtained directly from the actual arrival time of the pulses. In both cases the mechanism of the difference in arrival time is the same: the Sagnac effect. The Hafele–Keating experiment is also recognized as a counterpart to Sagnac effect physics.[28] In the actual Hafele–Keating experiment[29] the mode of transport (long-distance flights) gave rise to time dilation effects of its own, and calculations were needed to separate the various contributions. For the (theoretical) case of clocks that are transported so slowly that time dilation effects arising from the transport are negligible, the amount of time difference between the clocks when they arrive back at the starting point will be equal to the time difference that is found for a relay of pulses that travels around the world: 207 nanoseconds.

### Practical uses

The Sagnac effect is employed in current technology. One use is in inertial guidance systems. Ring laser gyroscopes are extremely sensitive to rotations, which need to be accounted for if an inertial guidance system is to return accurate results. The ring laser can also detect the sidereal day, a mode of operation sometimes termed "mode 1".
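Putting numbers to the formulas above makes them concrete. The sketch below (loop dimensions are illustrative assumptions, not from the article; physical constants are standard values) numerically checks the rigid-rotation identity $\oint \mathbf{v}\cdot d\mathbf{x} = 2\,\boldsymbol{\omega}\cdot\mathbf{A}$, reproduces the ~207 ns around-the-world figure from $\Delta T = 2\,\boldsymbol{\omega}\cdot\mathbf{A}/c^{2}$, and estimates the phase shift a small fiber coil would accumulate from Earth's rotation:

```python
import math

C = 299_792_458.0            # speed of light, m/s
OMEGA_EARTH = 7.292115e-5    # Earth's sidereal rotation rate, rad/s

# 1) Rigid rotation: integrate v . dx around a circular loop of radius R
#    spinning at omega about its center; the result should equal 2*omega*(pi R^2).
R, omega, N = 1.0, 0.3, 100_000
integral = 0.0
for k in range(N):
    t0 = 2 * math.pi * k / N
    t1 = 2 * math.pi * (k + 1) / N
    x0, y0 = R * math.cos(t0), R * math.sin(t0)
    x1, y1 = R * math.cos(t1), R * math.sin(t1)
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    vx, vy = -omega * ym, omega * xm           # v = omega x r for rotation about z
    integral += vx * (x1 - x0) + vy * (y1 - y0)
assert abs(integral - 2 * omega * math.pi * R**2) < 1e-6

# 2) Around-the-world time difference, taking the loop to be the equator:
#    Delta T = 2 * omega . A / c^2 with A = pi * R_E^2.
R_E = 6.378137e6                               # Earth's equatorial radius, m
dt = 2 * OMEGA_EARTH * math.pi * R_E**2 / C**2
# dt comes out near 2.07e-7 s, i.e. the ~207 ns discussed above

# 3) Phase shift for a hypothetical 100-turn fiber coil of radius 5 cm at
#    1550 nm, sensing the full Earth rate: on the order of microradians.
lam = 1550e-9
A_coil = 100 * math.pi * 0.05**2
dphi = 8 * math.pi * OMEGA_EARTH * A_coil / (lam * C)
```

The microradian-scale result in part 3 is why practical fiber gyroscopes multiply the effective area with many fiber turns, as described above.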
Global navigation satellite systems (GNSSs), such as GPS, GLONASS, COMPASS or Galileo, need to take the rotation of the Earth into account in the procedures of using radio signals to synchronize clocks.

### Ring lasers

Figure 6. Schematic representation of a ring laser setup.

Fibre optic gyroscopes are sometimes referred to as 'passive ring interferometers'. A passive ring interferometer uses light entering the setup from outside. The interference pattern that is obtained is a fringe pattern, and what is measured is a phase shift. It is also possible to construct a ring interferometer that is self-contained, based on a completely different arrangement. This is called a ring laser or ring laser gyroscope. The light is generated and sustained by incorporating laser excitation in the path of the light. To understand what happens in a ring laser cavity, it is helpful to discuss the physics of the laser process in a laser setup with continuous generation of light. As the laser excitation is started, the molecules inside the cavity emit photons, but since the molecules have a thermal velocity, the light inside the laser cavity is at first a range of frequencies, corresponding to the statistical distribution of velocities. The process of stimulated emission makes one frequency quickly outcompete other frequencies, and after that the light is very close to monochromatic.

Figure 7. Schematic representation of the frequency shift when a ring laser interferometer is rotating. Both the counterpropagating light and the co-propagating light go through 12 cycles of their frequency.

For the sake of simplicity, assume that all emitted photons are emitted in a direction parallel to the ring. Fig. 7 illustrates the effect of the ring laser's rotation. In a linear laser, an integer multiple of the wavelength fits the length of the laser cavity. This means that in traveling back and forth the laser light goes through an integer number of cycles of its frequency.
In the case of a ring laser the same applies: the number of cycles of the laser light's frequency is the same in both directions. This quality of the same number of cycles in both directions is preserved when the ring laser setup is rotating. The image illustrates that there is wavelength shift (hence a frequency shift) in such a way that the number of cycles is the same in both directions of propagation. By bringing the two frequencies of laser light to interference a beat frequency can be obtained; the beat frequency is the difference between the two frequencies. This beat frequency can be thought of as an interference pattern in time. (The more familiar interference fringes of interferometry are a spatial pattern.) The period of this beat frequency is linearly proportional to the angular velocity of the ring laser with respect to inertial space. This is the principle of the ring laser gyroscope, widely used in modern inertial navigation systems.

#### Zero point calibration

Figure 8. The red and blue dots represent counter-propagating photons, the grey dots represent molecules in the laser cavity.

In passive ring interferometers, the fringe displacement is proportional to the first derivative of angular position; careful calibration is required to determine the fringe displacement that corresponds to zero angular velocity of the ring interferometer setup. On the other hand, ring laser interferometers do not require calibration to determine the output that corresponds to zero angular velocity. Ring laser interferometers are self-calibrating. The beat frequency will be zero if and only if the ring laser setup is non-rotating with respect to inertial space. Fig. 8 illustrates the physical property that makes the ring laser interferometer self-calibrating. The grey dots represent molecules in the laser cavity that act as resonators. Along every section of the ring cavity, the speed of light is the same in both directions.
When the ring laser device is rotating, it rotates with respect to that background. In other words: invariance of the speed of light provides the reference for the self-calibrating property of the ring laser interferometer.

#### Lock-in

Ring laser gyroscopes suffer from an effect known as "lock-in" at low rotation rates (less than 100°/h). At very low rotation rates, the frequencies of the counter-propagating laser modes become almost identical. In this case, crosstalk between the counter-propagating beams can result in injection locking, so that the standing wave "gets stuck" in a preferred phase, locking the frequency of each beam to each other rather than responding to gradual rotation. By rotationally dithering the laser cavity back and forth through a small angle at a rapid rate (hundreds of hertz), lock-in will only occur during the brief instances where the rotational velocity is close to zero; the errors thereby induced approximately cancel each other between alternating dead periods.

#### Fibre optic gyroscopes versus ring laser gyroscopes

Fibre optic gyros (FOGs) and ring laser gyros (RLGs) both operate by monitoring the difference in propagation time between beams of light traveling in clockwise and counterclockwise directions about a closed optical path. They differ considerably in cost, reliability, size, weight, power, and other performance characteristics that need to be considered when evaluating these distinct technologies for a particular application. RLGs require accurate machining, use of precision mirrors, and assembly under clean room conditions.
Their mechanical dithering assemblies add slightly to their weight.[citation needed] RLGs are capable of logging in excess of 100,000 hours of operation in near-room temperature conditions.[citation needed] Their lasers have relatively high power requirements.[30] Interferometric FOGs are purely solid-state, require no mechanical dithering components, do not require precision machining, are not subject to lock-in, have a flexible geometry, and can be made very small. They use many standard components from the telecom industry. In addition, the major optical components of FOGs have proven performance in the telecom industry, with lifespans measured in decades.[31] However, the assembly of multiple optical components into a precision gyro instrument is costly. Analog FOGs offer the lowest possible cost but are limited in performance; digital FOGs offer the wide dynamic ranges and accurate scale factor corrections required for stringent applications.[32] Use of longer and larger coils increases sensitivity at the cost of greater sensitivity to temperature variations and vibrations.

### Zero-area Sagnac interferometer and gravitational wave detection

The Sagnac topology was actually first described by Michelson in 1886,[33] who employed an even-reflection variant of this interferometer in a repetition of the Fizeau experiment.[34] Michelson noted the extreme stability of the fringes produced by this form of interferometer: white-light fringes were observed immediately upon alignment of the mirrors. In dual-path interferometers, white-light fringes are difficult to obtain since the two path lengths must be matched to within a couple of micrometers (the coherence length of the white light). However, being a common path interferometer, the Sagnac configuration inherently matches the two path lengths.
Likewise, Michelson observed that the fringe pattern would remain stable even while holding a lighted match below the optical path; in most interferometers the fringes would shift wildly due to the refractive index fluctuations from the warm air above the match. Sagnac interferometers are almost completely insensitive to displacements of the mirrors or beam-splitter.[35] This characteristic of the Sagnac topology has led to their use in applications requiring exceptionally high stability.

Figure 9. Zero-area Sagnac interferometer.

The fringe shift in a Sagnac interferometer due to rotation has a magnitude proportional to the enclosed area of the light path, and this area must be specified in relation to the axis of rotation. Thus the sign of the area of a loop is reversed when the loop is wound in the opposite direction (clockwise or anti-clockwise). A light path that includes loops in both directions, therefore, has a net area given by the difference between the areas of the clockwise and anti-clockwise loops. The special case of two equal but opposite loops is called a zero-area Sagnac interferometer. The result is an interferometer that exhibits the stability of the Sagnac topology while being insensitive to rotation.[36] The Laser Interferometer Gravitational-Wave Observatory (LIGO) consisted of two 4-km Michelson–Fabry–Pérot interferometers, and operated at a power level of about 100 watts of laser power at the beam splitter. After an upgrade to Advanced LIGO several kilowatts of laser power are required. A variety of competing optical systems are being explored for third generation enhancements beyond Advanced LIGO.[37] One of these competing proposals is based on the zero-area Sagnac design. With a light path consisting of two loops of the same area, but in opposite directions, an effective area of zero is obtained, thus canceling the Sagnac effect in its usual sense.
Although insensitive to low frequency mirror drift, laser frequency variation, reflectivity imbalance between the arms, and thermally induced birefringence, this configuration is nevertheless sensitive to passing gravitational waves at frequencies of astronomical interest.[36] However, many considerations are involved in the choice of an optical system, and despite the zero-area Sagnac's superiority in certain areas, there is as yet no consensus choice of optical system for third generation LIGO.[38][39]

## References

1. ^ a b Sagnac, Georges (1913), "L'éther lumineux démontré par l'effet du vent relatif d'éther dans un interféromètre en rotation uniforme" [The demonstration of the luminiferous aether by an interferometer in uniform rotation], Comptes Rendus, 157: 708–710
2. ^ a b Sagnac, Georges (1913), "Sur la preuve de la réalité de l'éther lumineux par l'expérience de l'interférographe tournant" [On the proof of the reality of the luminiferous aether by the experiment with a rotating interferometer], Comptes Rendus, 157: 1410–1413
3. ^ Anderson, R.; Bilger, H.R.; Stedman, G.E. (1994). "Sagnac effect: A century of Earth-rotated interferometers". Am. J. Phys. 62 (11): 975–985. Bibcode:1994AmJPh..62..975A. doi:10.1119/1.17656.
4. ^ Lodge, Oliver (1897). "Experiments on the Absence of Mechanical Connexion between Ether and Matter". Philos. Trans. R. Soc. 189: 149–166. Bibcode:1897RSPTA.189..149L. doi:10.1098/rsta.1897.0006.
5. ^ Michelson, A.A. (1904). "Relative Motion of Earth and Aether". Philosophical Magazine. 8 (48): 716–719. doi:10.1080/14786440409463244.
6. Laue, Max von (1920). "Zum Versuch von F. Harress". Annalen der Physik. 367 (13): 448–463. Bibcode:1920AnP...367..448L. doi:10.1002/andp.19203671303. English translation: On the Experiment of F. Harress
7. ^ a b c d Laue, Max von (1911). "Über einen Versuch zur Optik der bewegten Körper". Münchener Sitzungsberichte: 405–412. English translation: On an Experiment on the Optics of Moving Bodies
8.
^ a b Pauli, Wolfgang (1981). Theory of Relativity. New York: Dover. ISBN 0-486-64152-X. 9. ^ List of scientific publications by Albert Einstein 10. ^ Astronomische Nachrichten, 199, 8–10 11. ^ Astronomische Nachrichten, 199, 47–48 12. ^ Guido Rizzi; Matteo Luca Ruggiero (2003). "The relativistic Sagnac Effect: two derivations". In G. Rizzi; M.L. Ruggiero. Relativity in Rotating Frames. Dordrecht: Kluwer Academic Publishers. arXiv:gr-qc/0305084. Bibcode:2003gr.qc.....5084R. ISBN 0-486-64152-X. 13. ^ Ashby, N. (2003). "Relativity in the Global Positioning System". Living Rev. Relativ. 6. Bibcode:2003LRR.....6....1A. doi:10.12942/lrr-2003-1. (Open access) 14. ^ L.D. Landau, E.M. Lifshitz, (1962). "The Classical Theory of Fields". 2nd edition, Pergamon Press, pp. 296–297. 15. ^ Langevin, Paul (1921). "Sur la théorie de la relativité et l'expérience de M. Sagnac". Comptes Rendus. 173: 831–834. 16. ^ Langevin, Paul (1937). "Sur l'expérience de M. Sagnac". Comptes Rendus. 205: 304–306. 17. ^ Albert Einstein, 1905, "On the Electrodynamics of Moving Bodies." http://www.fourmilab.ch/etexts/einstein/specrel/www/ 18. ^ A. Einstein, ‘Generalized theory of relativity’, 94; the anthology ‘The Principle of Relativity’, A. Einstein and H. Minkowski, University of Calcutta, 1920 19. ^ "General Relativity", Lewis Ryder, Cambridge University Press (2009). P.7 21. ^ http://www.mathpages.com/rr/s2-07/2-07.htm 22. ^ Michelson, Albert Abraham; Gale, Henry G. (1925). "The Effect of the Earths Rotation on the Velocity of Light, II". The Astrophysical Journal. 61: 140–145. Bibcode:1925ApJ....61..140M. doi:10.1086/142879. 23. ^ Wang, R.; Zheng, Y.; Yao, A.; Langley, D (2006). "Modified Sagnac experiment for measuring travel-time difference between counter-propagating light beams in a uniformly moving fiber". Physics Letters A. 312: 7–10. arXiv:physics/0609222. Bibcode:2003PhLA..312....7W. doi:10.1016/S0375-9601(03)00575-9. 24. ^ a b Ori, A. (2016). "Generalized Sagnac-Wang-Fizeau formula". 
Physical Review A. 94 (6). arXiv:1601.01448. Bibcode:2016PhRvA..94f3837O. doi:10.1103/physreva.94.063837. 25. ^ Stedman, G. E. (1997). "Ring-laser tests of fundamental physics and geophysics". Rep. Prog. Phys. 60: 615–688. Bibcode:1997RPPh...60..615S. CiteSeerX 10.1.1.128.191. doi:10.1088/0034-4885/60/6/001. 26. ^ Malykin, G. B. (2002). "Sagnac effect in a rotating frame of reference. Relativistic Zeno paradox" (PDF). Physics-Uspekhi. 45 (8): 907–909. Bibcode:2002PhyU...45..907M. doi:10.1070/pu2002v045n08abeh001225. Retrieved 15 February 2013. 27. ^ Tartaglia, A.; Ruggiero, M. L. (2004). "Sagnac effect and pure geometry". arXiv:gr-qc/0401005. 28. ^ a b Allan, D. W., Weiss, M. A., & Ashby, N. (1985). "Around-the-World Relativistic Sagnac Experiment". Science. 228 (4695): 69–71. Bibcode:1985Sci...228...69A. doi:10.1126/science.228.4695.69. 29. ^ Hafele J., Keating, R. (1972-07-14). "Around the world atomic clocks:predicted relativistic time gains". Science. 177 (4044): 166–168. Bibcode:1972Sci...177..166H. doi:10.1126/science.177.4044.166. PMID 17779917. Retrieved 2006-09-18. 30. ^ Juang, J.-N.; Radharamanan, R. "Evaluation of Ring Laser and Fiber Optic Gyroscope Technology" (PDF). Retrieved 15 February 2013. 31. ^ Napolitano, F. "Fiber-Optic Gyroscopes Key Technological Advantages" (PDF). iXSea. Archived from the original (PDF) on 5 March 2012. Retrieved 15 February 2013. 32. ^ Udd, E.; Watanabe, S. F.; Cahill, R. F. "Comparison of ring laser and fiber-optic gyro technology". McDonnell-Douglas. Bibcode:1986gosm.agar.....U. 33. ^ Hariharan, P. (1975). "Sagnac or Michelson–Sagnac interferometer?". Applied Optics. 14 (10): 2319_1–2321. Bibcode:1975ApOpt..14.2319H. doi:10.1364/AO.14.2319_1. 34. ^ Michelson, A. A. & Morley, E.W. (1886). "Influence of Motion of the Medium on the Velocity of Light". Am. J. Sci. 31: 377–386. Bibcode:1886AmJS...31..377M. doi:10.2475/ajs.s3-31.185.377. 35. ^ Hariharan, P. (2003). Optical Interferometry (Second ed.). Academic Press. pp. 28–29. 
ISBN 0-12-311630-9. 36. ^ a b Sun, K-X.; Fejer, M.M.; Gustafson, E.; Byer R.L. (1996). "Sagnac Interferometer for Gravitational-Wave Detection" (PDF). Physical Review Letters. 76 (17): 3053–3056. Bibcode:1996PhRvL..76.3053S. doi:10.1103/PhysRevLett.76.3053. Retrieved 31 March 2012. 37. ^ Punturo, M.; Abernathy, M.; Acernese, F.; Allen, B.; Andersson, N.; Arun, K.; Barone, F.; Barr, B.; Barsuglia, M.; Beker, M.; Beveridge, N.; Birindelli, S.; Bose, S.; Bosi, L.; Braccini, S.; Bradaschia, C.; Bulik, T.; Calloni, E.; Cella, G.; Chassande Mottin, E.; Chelkowski, S.; Chincarini, A.; Clark, J.; Coccia, E.; Colacino, C.; Colas, J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Danilishin, S. (2010). "The third generation of gravitational wave observatories and their science reach". Classical and Quantum Gravity. 27 (8): 084007. Bibcode:2010CQGra..27h4007P. doi:10.1088/0264-9381/27/8/084007. 38. ^ Freise, A.; Chelkowski, S.; Hild, S.; Pozzo, W. D.; Perreca, A.; Vecchio, A. (2009). "Triple Michelson interferometer for a third-generation gravitational wave detector". Classical and Quantum Gravity. 26 (8): 085012. arXiv:0804.1036. Bibcode:2009CQGra..26h5012F. doi:10.1088/0264-9381/26/8/085012. 39. ^ Eberle, T.; Steinlechner, S.; Bauchrowitz, J. R.; Händchen, V.; Vahlbruch, H.; Mehmet, M.; Müller-Ebhardt, H.; Schnabel, R. (2010). "Quantum Enhancement of the Zero-Area Sagnac Interferometer Topology for Gravitational Wave Detection". Physical Review Letters. 104 (25): 251102. arXiv:1007.0574. Bibcode:2010PhRvL.104y1102E. doi:10.1103/PhysRevLett.104.251102. PMID 20867358.
https://www.physicsforums.com/threads/rotation-between-meter-stick-and-a-can.554886/
Rotation between meter stick and a can

1. Nov 28, 2011 Smartguy94

1. The problem statement, all variables and given/known data
The tip of a meterstick rests on the can. The stick is pushed horizontally so that the can rolls on the table, with no slipping between the can and the table or between the can and the meterstick. The push continues until the can makes one complete rotation. During the roll, the displacement of the meterstick is equal to

3. The attempt at a solution
Since there is no slipping between the can and the table and the meterstick, wouldn't the answer just be the can's circumference? Because one rotation of the can is equal to the circumference of the can? Therefore the meterstick displacement is equal to the circumference of the can? But I got it wrong. Can anybody help and explain?

2. Nov 28, 2011 Spinnor
Does the can's center move as fast as the stick? Do the experiment: get a beer bottle, rubber bands, and a ruler. Wrap the rubber bands around the bottle (the ruler does not slip on the rubber bands as it does on the glass), and move the bottle one revolution with the ruler.

3. Nov 28, 2011 Smartguy94
I did it with a battery and a ruler. The battery's diameter is 1.4 cm, so the circumference is 1.4π ≈ 4.398 cm, and then I rolled the battery with my ruler and got 4.4 cm for one revolution. So it is the circumference of the battery... but apparently that's not the answer.

4. Nov 28, 2011 Spinnor
The ruler should move about twice as far as the can.

5. Nov 28, 2011 Smartguy94
Could you please explain why and how you get this?

6. Nov 29, 2011 Spinnor

7. Nov 29, 2011 Staff: Mentor
Try it again. Have one end of the ruler on the battery, but allow the other end of the ruler to drag along the table. It is this dragging end of the ruler that you focus on. Mark the start and finish points of that dragging end, then measure their distance apart.
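The factor of two follows from rolling without slipping: the can's contact point with the table is momentarily at rest, its center moves at ωR, and its top — where the stick rides — moves at 2ωR. A quick sketch (function names are illustrative):

```python
import math

# Sketch (names mine) of the rolling-without-slipping kinematics:
# contact point at rest, center at omega*R, top of can at 2*omega*R.
def displacements(radius, revolutions=1.0):
    """Return (center travel, stick travel) for a can rolling without
    slipping on the table and under the stick."""
    center = 2.0 * math.pi * radius * revolutions  # one circumference/rev
    stick = 2.0 * center  # the stick rides the top of the can
    return center, stick

r = 0.007  # 1.4 cm diameter battery -> radius 0.7 cm
center, stick = displacements(r)
assert stick == 2.0 * center  # "the ruler should move about twice as far"
```

For the 1.4 cm battery, the stick end should travel about 8.8 cm, twice the circumference the student measured.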
http://math.stackexchange.com/questions/177712/exist-problem-using-probability
# Existence problem using probability

Here is the question, but I don't even understand what a $k$-sample is... It seems very abstract to me somehow...

Let $S$ be a set of binary strings $a_1 \cdots a_n$ of length $n$ (where juxtaposition means concatenation). We call $S$ $k$-complete if for any indices $1 \le i_1 < \cdots < i_k \le n$ and any binary string $b_1 \cdots b_k$ of length $k$, there is a string $s_1 \cdots s_n$ in $S$ such that $s_{i_1}s_{i_2} \cdots s_{i_k} = b_1 b_2 \cdots b_k$. For example, for $n = 3$, the set $$S = \{001, 010, 011, 100, 101, 110\}$$ is $2$-complete since all $4$ patterns of $0$'s and $1$'s of length $2$ can be found in any $2$ positions. Show that if $$C(n, k)\, 2^k (1-2^{-k})^m < 1,$$ then there exists a $k$-complete set of size at most $m$.

- i don't see probability anywhere... Further, the problem could seem "very abstract", but the provided example should make it clear. – leonbloy Aug 1 '12 at 18:58
You say you don't understand "$k$-sample", but there seems to be no mention of a $k$-sample in the question? – joriki Aug 1 '12 at 20:09
@leonbloy: Perhaps "with probability" was a bad way of saying "using the probabilistic method" -- see my answer. – joriki Aug 1 '12 at 20:53

Choose $m$ binary strings of length $n$ randomly with independently uniformly distributed digits (i.e. throw a coin for each digit of each string). Each string has probability $2^{-k}$ of covering any given pattern of length $k$. There are $\binom nk2^k$ such patterns, so the expected number of patterns not covered is $\binom nk2^k(1-2^{-k})^m$. If this is less than $1$, that means that there must be at least one set of $m$ strings that leaves no patterns uncovered.

@user1489975: Sorry, I don't understand the question in your first comment; please rephrase it. Regarding the second comment, which part of that statement do you find confusing? By the way, note that you can use $\TeX$ on this site to make your mathematical notation more readable.
Enclose it in single dollar signs for inline formulas or double dollar signs for displayed equations. If you don't know how to format something, you can get the code for any math you see on this site by right-clicking on it and selecting "Show Math As: TeX Commands". – joriki Aug 2 '12 at 14:06
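The expectation in the answer is easy to evaluate; a small sketch (function names mine) finds the smallest $m$ the bound guarantees for given $n$ and $k$:

```python
from math import comb

# Sketch (function names mine) of the expectation bound: each of the
# C(n,k)*2**k (index set, pattern) pairs is missed by one uniform random
# string with probability 1 - 2**(-k), independently across the m strings.
def expected_uncovered(n, k, m):
    return comb(n, k) * 2**k * (1 - 2**-k) ** m

def min_m(n, k):
    """Smallest m for which the expectation drops below 1, so a
    k-complete set of size m must exist."""
    m = 1
    while expected_uncovered(n, k, m) >= 1:
        m += 1
    return m

print(min_m(3, 2))  # -> 9
```

Note the bound is not tight: for $n=3$, $k=2$ it guarantees a $2$-complete set of size $9$, while the explicit example in the question achieves size $6$.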
http://math.stackexchange.com/questions/207379/proving-the-limit-of-an-improper-integral-of-a-sequence-of-functions?answertab=oldest
# Proving the limit of an improper integral of a sequence of functions

I was trying to prove that the following limit $$\lim_{n\to\infty}\int^{\infty}_{1}\frac{\sin{x}}{x^{n+1}}\mathrm{d}x$$ is equal to $0$. I believe that the easiest option in similar cases - and the only one I know... - is proving that $f_{n}$ converges uniformly to $f$ on the interval of integration. However, this is not the case here, as for $x=1$ we have $\lim_{n\to\infty}f_{n}(1)=\sin 1\neq 0$ and for $x>1$, $\lim_{n\to\infty}f_{n}=0$. I would be very thankful for thoughts on how this should be proven. - Are you familiar with the monotone convergence theorem for integrals? –  Christopher A. Wong Oct 4 '12 at 19:59 The problem that I see with applying it here is that our function $f$, to which $f_{n}$ converges, is a branched function... and I guess even that wouldn't be a problem, if it wasn't for the fact that $f(1)=\sin 1\neq 0$. –  Johnny Westerling Oct 4 '12 at 20:27 @JohnnyWesterling You might add where the problem comes from. I assume your definition of integral is that of Riemann? –  AD. Oct 5 '12 at 5:47 @AD. Oh, it's just a set of problems I have, not even in English, and I don't think it has some online source either, but it got into my hands and I thought of solving it. The question doesn't ask for more than what I wrote, and does not set forth any assumptions. –  Johnny Westerling Oct 5 '12 at 14:19 Hint: What is $\int_1^\infty \dfrac{dx}{x^{n+1}}$ ? - That gives us $1/n$, which obviously tends to 0. Thus, I believe we can write that $\int^{\infty}_{1}\frac{\sin{x}}{x^{n+1}}dx\leq\int^{\infty}_{1}\frac{dx}{x^{n+1}}$. Now, do we need to bound our integral by something from below or is that sufficient? –  Johnny Westerling Oct 4 '12 at 20:11 @JohnnyWesterling Approximate the integral of $|\sin x/ x^{n+1}|$. –  David Mitra Oct 4 '12 at 20:16 Sorry, but I am not exactly sure what you mean by "approximate" in this case...
What I do see is that for any $x\geq{1}$ and $n\geq{1}$, we can write $0\leq|\sin{x}/x^{n+1}|\leq1/x^{n+1}$... –  Johnny Westerling Oct 4 '12 at 20:23 @JohnnyWesterling Sorry, that wasn't what I meant to say; but I think you have the idea... –  David Mitra Oct 4 '12 at 20:30 Hint: use $\triangle$ –  AD. Oct 4 '12 at 20:33 Your idea almost works, to proceed with it you might first consider the sequence $f_n$ on $[1+\varepsilon,\infty)$ for fixed $\varepsilon>0$. Do you see the next step? - Well, our sequence is indeed uniformly convergent on $[1+\varepsilon,\infty)$. Now we would have to check our integral on $[1,1+\varepsilon]$. Well, in this case I believe we should consider $\varepsilon\to{1}^{+}$... is that the correct direction...? –  Johnny Westerling Oct 4 '12 at 20:16 sure, what can you say about $f_n$ there ? –  AD. Oct 4 '12 at 20:30 Well, we know that its integral on this interval would be equal to $F_{n}(\varepsilon)-F_{n}(1)$ where $F_{n}$ is the antiderivative of $f_{n}$, which must be continuous on $[1,1+\varepsilon]$; thus, we can conclude that $\lim_{\varepsilon\to{1}^{+}}(F(\varepsilon)-F(1))=0$. Is that correct? –  Johnny Westerling Oct 4 '12 at 20:37 There is a problem with when to switch limits... (btw I would stick to the less confusing $1+\varepsilon$). –  AD. Oct 4 '12 at 20:49 I would try to estimate the integral, it is like the area of a thin rectangle. –  AD. 
Oct 4 '12 at 20:53

Step 1: Note that, since $x\geq 1$, $$\left|\frac{\sin x}{x^{n+1}}\right|\leq \frac{1}{x^{n+1}}.$$ Step 2: Taking the integral, $$\int_{1}^{\infty}\left|\frac{\sin x}{x^{n+1}}\right| dx\leq \int_{1}^{\infty}\frac{dx}{x^{n+1}}=\lim_{t\to\infty}\left(\frac{x^{-n}}{-n}\right)\Big|_{1}^t=\frac{1}{n}.$$ Step 3: Taking the limit as $n\to\infty$, $$0\leq\lim_{n\to\infty}\left|\int_{1}^{\infty}\frac{\sin x}{x^{n+1}}\mathrm{d}x\right|\leq \lim_{n\to\infty}\int_{1}^{\infty}\left|\frac{\sin x}{x^{n+1}}\right| dx\leq\lim_{n\to \infty}\frac{1}{n}=0.$$ Therefore $$\lim_{n\to\infty}\left|\int_{1}^{\infty}\frac{\sin x}{x^{n+1}}\mathrm{d}x\right|=0,$$ and so $$\lim_{n\to\infty}\int_{1}^{\infty}\frac{\sin x}{x^{n+1}}\mathrm{d}x=0.$$ - $\sin{x}$ can indeed take on negative values, while for $x\geq{1}$ the denominator can't. Why would you say then in step three that this integral is always greater than or equal to $0$? –  Johnny Westerling Nov 3 '12 at 6:15
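The $1/n$ bound from Step 2 can be checked numerically. This is a rough midpoint-rule sketch (the truncation point and function name are my choices; the discarded tail is bounded by $\text{upper}^{-n}/n$, negligible here):

```python
import math

# Rough numerical check (parameters mine): midpoint rule on [1, upper];
# the discarded tail of the integral is bounded by upper**(-n) / n.
def tail_integral(n, upper=200.0, steps=200000):
    h = (upper - 1.0) / steps
    total = 0.0
    for i in range(steps):
        x = 1.0 + (i + 0.5) * h
        total += math.sin(x) / x ** (n + 1)
    return total * h

vals = {n: tail_integral(n) for n in (1, 5, 20)}
for n, v in vals.items():
    assert abs(v) <= 1.0 / n + 1e-2  # the 1/n bound from Step 2
```

The estimates shrink with $n$, consistent with the squeeze argument above.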
http://mathhelpforum.com/algebra/99926-express-simplest-form.html
# Thread: Express in Simplest Form

1. ## Express in Simplest Form

Express [f(x + h) - f(x)]/h, where h ≠ 0, when f(x) = 5.

The answer is 0 but I have no idea how to solve it. Believe it or not, this question comes from a pre-algebra textbook, NOT calculus.

2. Originally Posted by sharkman
Express [f(x + h) - f(x)]/h, where h ≠ 0, when f(x) = 5.
The answer is 0 but I have no idea how to solve it. Believe it or not, this question comes from a pre-algebra textbook, NOT calculus.

Hi
Since f(x) = 5 (a constant), f(x + h) = 5 too, so
(5 - 5)/h = 0/h = 0.

3. ## ok...

Originally Posted by mathaddict
Hi
Since f(x) = 5 (a constant), f(x + h) = 5 too, so
(5 - 5)/h = 0/h = 0.

After playing with it further, I also determined that f(x + h) = 5. Thanks
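The constant-function argument above can be stated as a tiny sketch:

```python
# Tiny sketch: for a constant function the difference quotient is 0
# for every x and every h != 0.
def f(x):
    return 5  # constant function from the problem

def difference_quotient(func, x, h):
    assert h != 0
    return (func(x + h) - func(x)) / h

assert difference_quotient(f, 3.0, 0.1) == 0.0
assert difference_quotient(f, -7.0, 2.5) == 0.0
```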
https://rupress.org/jem/article/173/2/511/24481/Human-immunodeficiency-virus-HIV-infection-in-CD4
In the present study, we demonstrated that expression of the LFA-1 molecule is necessary for cell fusion and syncytia formation in human immunodeficiency virus (HIV)-infected CD4+ T lymphocytes. In contrast, the lack of expression of LFA-1 does not significantly influence cell-to-cell transmission of HIV. In fact, LFA-1- T lymphocytes obtained from a leukocyte adhesion deficiency patient were unable to fuse and form syncytia when infected with HIV-1 or HIV-2, despite the fact that the efficiency of HIV infection (i.e., virus entry, HIV spreading, and levels of virus replication) was comparable with that observed in LFA-1+ T lymphocytes. In addition, we provide evidence that LFA-1, by mediating cell fusion, contributes to the depletion of HIV-infected CD4+ T lymphocytes in vitro.
http://mathhelpforum.com/advanced-applied-math/19537-physics-help.html
1. ## [physics] Help

Hello. Two particles A and B, of masses m_A and m_B respectively, are connected by an inextensible wire of negligible mass; the wire passes over the groove of a pulley whose mass is assumed negligible. At the instant t0 = 0 the system S (pulley, masses, wire) is released without initial speed, particle B being at a distance H below the axis of the pulley. The reference level of the gravitational potential energy is the horizontal plane containing the axis of the pulley and particle A. Friction is neglected.

a> Calculate, at t0 = 0, the mechanical energy of the system (S, Earth) in terms of m_B, g and H.
b> Calculate the mechanical energy of the system (S, Earth) after A has travelled a distance x.
c> By applying conservation of mechanical energy, calculate the speed v of A and B in terms of x, m_A, m_B and g. Deduce the acceleration of the motion.

I tried but got bad results. For the first question I got m_B·v²·g·h, and for the second v²·h.

Thanks

2. ok ok i Got it now just spent 1 Hour and Fixed Thanks I LIKE THIS GREAT FORUM WITH GREAT MEMBERS
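The geometry is not stated unambiguously, but assuming the standard setup (B hangs and drives the system while A moves at the same speed, with only B losing height), conservation of energy gives v² = 2 m_B g x / (m_A + m_B). A hedged sketch (the setup and all names are my assumptions):

```python
import math

# Sketch under an ASSUMED standard Atwood-style setup: only B loses
# height, both particles share one speed v after travelling x.
def atwood_speed(m_a, m_b, x, g=9.81):
    """Energy conservation: m_b*g*x = 0.5*(m_a + m_b)*v**2  ->  v(x)."""
    return math.sqrt(2 * m_b * g * x / (m_a + m_b))

def atwood_acceleration(m_a, m_b, g=9.81):
    """Constant acceleration implied by v**2 = 2*a_acc*x."""
    return m_b * g / (m_a + m_b)

v = atwood_speed(1.0, 2.0, 0.5)
a_acc = atwood_acceleration(1.0, 2.0)
assert abs(v**2 - 2 * a_acc * 0.5) < 1e-12  # v^2 = 2*a*x consistency
```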
http://physics.stackexchange.com/questions/47155/uniform-chain-falls-off-table-diff-eq
# Uniform chain falls off table Diff EQ

I really need some assistance setting up this problem; any assistance would be a Godsend. A uniform heavy chain of length a initially has length b hanging off of a table. The remaining part of the chain, a - b, is coiled on the table. Show that if the chain is released, the velocity of the chain when the last link leaves the table is $\sqrt{2g\frac{a^3 - b^3}{3a^2}}$

Okay, so this is a variable mass problem, so momentum is constantly changing: $$F_{ext}=m(t)g=\frac{dP}{dt}= \frac{d(m(t)v(t))}{dt}=ma +v\dot{m}$$ $$mg = ma + v\dot{m}$$ Gravity acts on the mass hanging off of the table; mass can be written as a function of length, as can velocity (where $\lambda$ is the linear mass density): $m(t) = \lambda l(t)$, $\dot{m}(t) = \lambda v(t)=\lambda \dot{l}(t)$, $v(t) = \dot{l}(t)$, $a(t) = \ddot{l}(t)$, so $$\lambda l(t)g=\lambda l(t) \ddot{l}(t) + \dot{l}(t) \lambda \dot{l}(t).$$ Assuming this all to be correct $\implies$ $l(t)(g -\ddot{l}(t)) =\dot{l}(t)^2$. I've tried to solve this DE, but I don't know many methods for nonlinear DEs. - Quick follow up question from a physics student - when I saw this problem, my first instinct was to use conservation of energy (compare initial and final potential energies of the chain in terms of the length hanging over). The result by doing the problem this way, however, is incorrect. Why does conservation of energy not apply? –  Draksis Dec 18 '12 at 23:57 Energy would not be easy to deal with because both kinetic and potential energy depend upon the mass, which is changing with time. Potential energy also depends upon the center of mass which is also changing in time. In theory, energy would work (lol) but it would not be practical. –  Cactus BAMF Dec 19 '12 at 0:04 Are you sure there is no missing information? As Draksis points out, this is really very easy to set up in terms of energy, but the velocity then comes out to be $v=\sqrt{g(a^2-b^2) / a}$. 
–  Jaime Dec 19 '12 at 1:38 I just threw together some quick Python code to solve Cactus BAMF's DE for a = 30 and b = 10, and the final answer (13.757104646790195) agreed well with Cactus BAMF's provided solution. The energy argument seems to therefore be flawed, but I don't see why it would be. –  Draksis Dec 19 '12 at 2:54 I gave this a shot with energy and got the same thing as Jaime... haven't checked the momentum method yet, but I'm suspecting there's something wrong there... –  Kyle Dec 19 '12 at 4:24 With $\ell(t)$ as the length of the chain hanging off the table, the differential equation $$\ell(g-\ddot \ell)=\dot \ell^2$$ from the question can be rewritten as $$y\dot y=g\ell^2 \dot\ell,$$ where $y=\ell \dot\ell$. Then, integrating over the appropriate time interval will yield a final velocity of$$v_f=\sqrt{\frac{2g}{3}\frac{a^3-b^3}{a^2}}.$$ However, the differential equation above is incorrect, as it fails to take into account the tension in the chain. The correct equation should be $$\ddot\ell=\frac{g}{a}\ell,$$ giving a final velocity of $$v_f=\sqrt{\frac{g}{a}(a^2-b^2)},$$ which is in agreement with the conservation of energy. - Beat me to it :) Glad to see my intuition was correct. –  Kyle Dec 19 '12 at 16:18 I'll add the conservation of energy based solution just for completeness. I wrote the change in gravitational potential energy based on the picture that the piece of chain that begins on the table ends hanging vertically from a point $b$ below the table (and the rest of the chain is unchanged). $\Delta U_g = -g\lambda\int_0^{a-b}(b+x)dx$ $\Delta K = \frac{1}{2}\lambda av^2$ Where $\lambda$ is mass per unit length of the chain. $\frac{1}{2}av^2 = g[bx + \frac{1}{2}x^2]_0^{a-b}$ $v^2 = \frac{2g}{a}(ba-b^2+\frac{1}{2}a^2-ab+\frac{1}{2}b^2)$ $v = \sqrt{\frac{g}{a}(a^2-b^2)}$ -
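The corrected equation $\ddot\ell = (g/a)\ell$ from the accepted answer can be integrated numerically, in the spirit of the numeric check Draksis describes. This sketch (step size and names mine) confirms it reproduces the energy result $v=\sqrt{g(a^2-b^2)/a}$:

```python
import math

# Numerical check (step size and names mine) of the corrected equation
# l'' = (g/a) * l, integrated from l = b at rest until the last link
# leaves the table at l = a.
def final_speed(a, b, g=9.81, dt=1e-5):
    l, v = b, 0.0
    while l < a:
        v += (g * l / a) * dt  # semi-implicit Euler step
        l += v * dt
    return v

a, b = 30.0, 10.0
v_sim = final_speed(a, b)
v_energy = math.sqrt(9.81 * (a**2 - b**2) / a)  # conservation of energy
assert abs(v_sim - v_energy) / v_energy < 2e-3
```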
http://community.thomsonreuters.com/t5/EndNote-Styles-Filters-and/Bullet-points-added-to-bibliography-each-time-it-is-refreshed/m-p/136025
## EndNote Styles, Filters, and Connections

New User Posts: 1 Registered: ‎11-23-2016

# Bullet points added to bibliography each time it is refreshed

Using Word on a Mac, my EndNote X7 bibliography always has very strange formatting: whatever style I use, it comes out as a bulleted list. If I use a style with a numbered list, I still get a bullet point followed by a number. I can remove the bullet points manually, but every time the bibliography is updated the bullet points appear again. Is there a way to stop this from happening?

Mentor Posts: 7,343 Registered: ‎04-10-2008

## Re: Bullet points added to bibliography each time it is refreshed

I thought they fixed this, but this can happen if there is a paragraph at the end of your document with that formatting, and when EndNote inserts the bibliography it adopts that "normal" formatting for the bibliography style. Make a copy of your document, and in Word, from the EndNote ribbon, choose "Convert Citations and Bibliography" (dropdown) > "Convert to Unformatted Citations", and remove any residual bibliography. Make sure there is a carriage return at the end of the document and clear the formatting from that paragraph, or assign it as "Normal" so it has the correct font information compatible with the rest of your document. Now "Update Citations and Bibliography" and see if this stops happening; you may need to turn "Instant Formatting" on. (long time EndNote user)

New User Posts: 1 Registered: ‎09-13-2017

## Re: Bullet points added to bibliography each time it is refreshed

Hi, This has been happening to me also, and the problem persists even after clearing the formatting and following the steps you suggested. I cannot figure out what else I can do to remove this, any alternative thoughts? 
Thanks, Marie

Mentor Posts: 7,343 Registered: ‎04-10-2008

## Re: Bullet points added to bibliography each time it is refreshed

Have you looked at the "EndNote Bibliography" Word style that EndNote creates? It might have picked up that formatting for this paper. After doing the steps above, delete that Word style. Then "Update Citations and Bibliography" and see if it now creates a new one with the appropriate font and paragraph settings. (long time EndNote user)
https://socratic.org/questions/how-do-you-simplify-3w-3z-8-5#597592
# How do you simplify (-3w^3z^8)^5?

${\left(- 3\right)}^{5} {w}^{15} {z}^{40} = - 243 {w}^{15} {z}^{40}$

It is a big number, but there is no other way to simplify it any further. You simply multiply every exponent by 5, including the exponent on the $-3$: imagine that the $-3$ is actually $-3$ to the first power. Multiplying every exponent by 5 then gives ${\left(- 3\right)}^{5} {w}^{15} {z}^{40}$. If you have any questions, please feel free to ask.
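As a quick sanity check (not part of the original answer), the identity can be verified numerically at an arbitrary test point; the values of w and z below are my own choices, not from the problem:

```python
# Spot-check (-3 w^3 z^8)^5 = -243 w^15 z^40 at an arbitrary integer point.
w, z = 2, 3  # arbitrary test values
lhs = (-3 * w**3 * z**8) ** 5
rhs = -243 * w**15 * z**40
print(lhs == rhs)  # True (exact integer arithmetic in Python)
```

Any other test point works equally well, since the power rule $(ab)^n = a^n b^n$ is an identity.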
https://sites.ualberta.ca/~gingrich/courses/phys512/node50.html
# Proof of Covariance

To prove Lorentz covariance two conditions must be satisfied:

1. If $\left(i\gamma^{\mu}\partial_{\mu}-m\right)\psi(x)=0$ then $\left(i\gamma'^{\mu}\partial'_{\mu}-m\right)\psi'(x')=0$.
2. Given $\psi(x)$ of observer $O$, there must be a prescription for observer $O'$ to compute $\psi'(x')$, which describes to $O'$ the same physical state.

It can be shown that all sets of matrices $\gamma'^{\mu}$ satisfying $\{\gamma'^{\mu},\gamma'^{\nu}\}=2g^{\mu\nu}$ (with $\gamma'^{0}$ hermitian and $\gamma'^{i}$ anti-hermitian) are equivalent up to a unitary transformation:

$$\gamma'^{\mu}=U^{\dagger}\gamma^{\mu}U, \qquad (5.113)$$

where $U^{\dagger}=U^{-1}$. We drop the distinction between $\gamma'^{\mu}$ and $\gamma^{\mu}$ and write

$$\left(i\gamma^{\mu}\partial'_{\mu}-m\right)\psi'(x')=0, \qquad (5.114)$$

where $\partial'_{\mu}=\partial/\partial x'^{\mu}$ and $x'=\Lambda x$. We require that the transformation between $\psi$ and $\psi'$ be linear, since the Dirac equation and the Lorentz transformation are linear:

$$\psi'(x')=\psi'(\Lambda x)=S(\Lambda)\psi(x)=S(\Lambda)\psi(\Lambda^{-1}x'). \qquad (5.115)$$

$S(\Lambda)$ is a $4\times 4$ matrix which depends only on the relative velocities of $O$ and $O'$. $S(\Lambda)$ has an inverse if $O\rightarrow O'$ and also $O'\rightarrow O$. The inverse is

$$\psi(x)=S^{-1}(\Lambda)\psi'(x'), \qquad (5.116)$$

or we could write

$$\psi(x)=S(\Lambda^{-1})\psi'(\Lambda x), \qquad (5.117)$$

so that

$$S(\Lambda^{-1})=S^{-1}(\Lambda). \qquad (5.118)$$

We can now write

$$\left(i\gamma^{\mu}\partial_{\mu}-m\right)S^{-1}(\Lambda)\psi'(x')=0. \qquad (5.119)$$

Using $\partial_{\mu}=\Lambda^{\nu}{}_{\mu}\partial'_{\nu}$ we have

$$\left(iS(\Lambda)\gamma^{\mu}S^{-1}(\Lambda)\Lambda^{\nu}{}_{\mu}\partial'_{\nu}-m\right)\psi'(x')=0. \qquad (5.120)$$

Therefore we require

$$S(\Lambda)\gamma^{\mu}S^{-1}(\Lambda)\Lambda^{\nu}{}_{\mu}=\gamma^{\nu}, \qquad (5.121)$$

or

$$S^{-1}(\Lambda)\gamma^{\nu}S(\Lambda)=\Lambda^{\nu}{}_{\mu}\gamma^{\mu}. \qquad (5.122)$$

This relationship defines $S$ only up to an arbitrary factor. This factor is further restricted to a sign if we require that the $S$ form a representation of the Lorentz group. We obtain thus the two-valued spinor representation, in agreement with our previous assumptions. A wave function transforming according to equation 5.115 and equation 5.116 by means of equation 5.122 is a four-component Lorentz spinor. Such a spinor is also frequently called a bi-spinor, since it consists of two 2-component spinors, known to us from the Pauli equation.

Consider an infinitesimal proper Lorentz transformation

$$\Lambda^{\nu}{}_{\mu}=g^{\nu}{}_{\mu}+\Delta\omega^{\nu}{}_{\mu}, \qquad (5.123)$$

where $\Delta\omega^{\nu\mu}=-\Delta\omega^{\mu\nu}$ is anti-symmetric for an invariant proper time interval. Each of the six independent non-vanishing $\Delta\omega^{\nu\mu}$ generates an infinitesimal Lorentz transformation:

$$\Delta\omega^{01}=-\Delta\beta \qquad (5.124)$$

for a transformation to a coordinate system moving with velocity $\Delta\beta$ along the $x$-direction, and

$$\Delta\omega^{12}=\Delta\varphi \qquad (5.125)$$

for a rotation through an angle $\Delta\varphi$ about the $z$-direction. We expand $S$ in powers of $\Delta\omega^{\mu\nu}$ to first order,

$$S=1-\frac{i}{4}\sigma_{\mu\nu}\,\Delta\omega^{\mu\nu}, \qquad (5.126)$$

$$S^{-1}=1+\frac{i}{4}\sigma_{\mu\nu}\,\Delta\omega^{\mu\nu}, \qquad (5.127)$$

with $\sigma_{\mu\nu}=-\sigma_{\nu\mu}$. We now solve for $\sigma_{\mu\nu}$.

Equation 5.122 becomes

$$\Delta\omega^{\nu}{}_{\mu}\,\gamma^{\mu}=-\frac{i}{4}\,\Delta\omega^{\alpha\beta}\left[\gamma^{\nu},\sigma_{\alpha\beta}\right]. \qquad (5.128)$$

Also

$$\Delta\omega^{\nu}{}_{\mu}\,\gamma^{\mu}=\frac{1}{2}\,\Delta\omega^{\alpha\beta}\left(g^{\nu}{}_{\alpha}\gamma_{\beta}-g^{\nu}{}_{\beta}\gamma_{\alpha}\right). \qquad (5.129)$$

Combining equation 5.128 and equation 5.129 gives

$$\left[\gamma^{\nu},\sigma_{\alpha\beta}\right]=2i\left(g^{\nu}{}_{\alpha}\gamma_{\beta}-g^{\nu}{}_{\beta}\gamma_{\alpha}\right). \qquad (5.130)$$

We must find six matrices $\sigma_{\alpha\beta}$ which satisfy the above equation. We try the anti-symmetric product of two $\gamma$ matrices,

$$\sigma_{\alpha\beta}=\frac{i}{2}\left[\gamma_{\alpha},\gamma_{\beta}\right]. \qquad (5.131)$$

Substituting (5.131) into the commutator on the left-hand side of (5.130) gives

$$\left[\gamma^{\nu},\frac{i}{2}\left[\gamma_{\alpha},\gamma_{\beta}\right]\right]=2i\left(g^{\nu}{}_{\alpha}\gamma_{\beta}-g^{\nu}{}_{\beta}\gamma_{\alpha}\right), \qquad (5.132)$$

which is the right-hand side of (5.130). Therefore

$$\sigma_{\alpha\beta}=\frac{i}{2}\left[\gamma_{\alpha},\gamma_{\beta}\right] \qquad (5.133)$$

is a solution to equation 5.130. Thus

$$S=1-\frac{i}{4}\,\sigma_{\mu\nu}\,\Delta\omega^{\mu\nu}. \qquad (5.134)$$

We now construct finite proper transformations. We define

$$\Delta\omega^{\mu\nu}=\Delta\omega\,(I_{n})^{\mu\nu}, \qquad (5.135)$$

where $\Delta\omega$ is an infinitesimal parameter of the Lorentz group. $(I_{n})^{\mu\nu}$ is a $4\times 4$ matrix for a general unit space-time rotation around an axis in the direction labelled by $n$. For proper Lorentz transformations $(I_{n})^{\mu\nu}$ has the property $(I_{n})^{\mu\nu}=-(I_{n})^{\nu\mu}$. $\mu$ labels the row and $\nu$ labels the column. We can write the finite transformation using $\omega=N\Delta\omega$ as

$$\Lambda=\lim_{N\rightarrow\infty}\left(1+\frac{\omega}{N}I_{n}\right)^{N}=e^{\omega I_{n}}. \qquad (5.136)$$

For Lorentz boosts, $I_{n}^{3}=I_{n}$, and we can write

$$\Lambda=e^{\omega I_{n}}=1+I_{n}\sinh\omega+I_{n}^{2}\left(\cosh\omega-1\right). \qquad (5.137)$$

Similarly for space-rotations, $I_{n}^{3}=-I_{n}$, and we can write

$$\Lambda=e^{\omega I_{n}}=1+I_{n}\sin\omega+I_{n}^{2}\left(1-\cos\omega\right). \qquad (5.138)$$

Turning now to the construction of a finite spinor transformation $S$, we have

$$S=\lim_{N\rightarrow\infty}\left(1-\frac{i}{4}\,\sigma_{\mu\nu}\,\frac{\omega}{N}(I_{n})^{\mu\nu}\right)^{N}=\exp\left(-\frac{i}{4}\,\omega\,\sigma_{\mu\nu}(I_{n})^{\mu\nu}\right). \qquad (5.139)$$

The following sections consider finite transformations for a rotation in 3-space, a general Lorentz boost, and spatial inversion.

Douglas M. Gingrich (gingrich@ualberta.ca) 2004-03-18
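The commutator identity 5.130 and its solution 5.133 can be checked numerically. The sketch below is my own illustration (not part of the original notes); it uses the Dirac representation of the gamma matrices and verifies the identity for all index combinations:

```python
import numpy as np

# Pauli matrices and the Dirac representation of the gamma matrices
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g = np.diag([1.0, -1.0, -1.0, -1.0])  # metric g_{mu nu}

gamma_up = [
    np.block([[I2, Z2], [Z2, -I2]]),   # gamma^0
    np.block([[Z2, sx], [-sx, Z2]]),   # gamma^1
    np.block([[Z2, sy], [-sy, Z2]]),   # gamma^2
    np.block([[Z2, sz], [-sz, Z2]]),   # gamma^3
]
# lower an index with the diagonal metric: gamma_mu = g_{mu mu} gamma^mu
gamma_dn = [g[m, m] * gamma_up[m] for m in range(4)]

def comm(a, b):
    return a @ b - b @ a

def sigma(a, b):
    # sigma_{ab} = (i/2) [gamma_a, gamma_b]   (equation 5.133)
    return 0.5j * comm(gamma_dn[a], gamma_dn[b])

# check [gamma^nu, sigma_{ab}] = 2i (g^nu_a gamma_b - g^nu_b gamma_a)  (5.130)
ok = True
for nu in range(4):
    for a in range(4):
        for b in range(4):
            lhs = comm(gamma_up[nu], sigma(a, b))
            rhs = 2j * ((1.0 if nu == a else 0.0) * gamma_dn[b]
                        - (1.0 if nu == b else 0.0) * gamma_dn[a])
            ok = ok and np.allclose(lhs, rhs)
print(ok)  # True
```

The check is representation-independent in principle, since any set of matrices satisfying the Clifford algebra would do.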
https://electronics.stackexchange.com/questions/446529/ltspice-how-to-setup-sinusoidal-or-exponential-voltage-source
# LTSpice: how to set up a sinusoidal or exponential voltage source?

My textbook asks me to calculate the power for a voltage source with a sinusoidal or exponential waveform. Probably my question is very silly, but I really don't know how to input the textbook's equations using the LTspice independent voltage source setup menu:

v(t) = 10*sin(24t) + 14

and

v(t) = 12 + 15*exp(-200t)

What are DC offset, amplitude, frequency, Tdelay, Theta, Rise delay, Rise tau, etc.? How do I proceed if my voltage source has a cosine (not sine) waveform?

Why not use an Arbitrary Behavioral Voltage source? In LTspice, you can enter the equations directly by adjusting the V=F(...) expression to V=10*sin(24*time)+14, and for the other source V=12+15*exp(-200*time). (Note that you should use "time", not "t".)

• Thank you! Finally the simplest solution! Jul 3 '19 at 9:40

For the exponential voltage source, it's a bit trickier, since the usual parameters aren't the ones you're used to. LTspice seems to be using these equations. When t is between the Rise Delay and the Fall Delay:

$$V(t) = V_{initial}+(V_{pulsed}-V_{initial})*(1-e^{-\dfrac{t-T_{Rise Delay}}{\tau_{Rise}}})$$

This assumes the Rise Delay is lower than the Fall Delay. If that's not the case, swap the Rise Delay and Rise Tau for the Fall Delay and Fall Tau. When t is larger than both delays:

$$V(t) = V_{initial}+(V_{pulsed}-V_{initial})*(1-e^{-\dfrac{t-T_{Rise Delay}}{\tau_{Rise}}}) \\-(V_{pulsed}-V_{initial})*(1-e^{-\dfrac{t-T_{Fall Delay}}{\tau_{Fall}}})$$

Of course, when t is lower than both delays, the voltage source outputs the initial voltage. Here is how you would set up the parameters for your source:

• I got it. This is my "normal" equation: v(t) = 12 + 15*exp(-200t). According to your formulas: Vinitial = 12; Vpulsed - Vinitial = 15; ==> Vpulsed = 27. My goodness, why so complicated! I even don't mention the exponential part. Jul 3 '19 at 9:10

To add phase to your sine voltage source in LTspice, use Phi[deg].
Cosine is basically sine with a phase: a phase of 90° turns sine into cosine, and a phase of 180° will completely invert your signal. DC Offset adds a DC level to your sine-wave signal, e.g. a 1 V DC offset means your sine wave will "oscillate" around 1 V rather than 0 V. Frequency is the number of cycles the sine wave completes in a second. Tdelay adds a delay before starting the source. I would say you should experiment with these values and you will learn a lot more. LTspice is free to use, and it should not take you long to set up a basic circuit with a resistor and a source to play around with these values.

v(t) = 10*sin(24t) + 14

V = A * sin(2 * pi * F * t) + DC_OFFSET

For the signal above the amplitude A is 10, and the frequency comes from the factor multiplying t inside the brackets:

24 = 2*pi*F
F = 24/(2*pi)

• What about the exponent? Jul 3 '19 at 8:23
• I suggest you download LTspice and experiment, it really is helpful for learning. Jul 3 '19 at 8:26
• I have it. The problem is that I can't blindly input v(t) = 12 + 15*exp(-200t) into the voltage source menu because 1) I don't know what rise delay, rise tau, and all these values are (I'm good at math, but my calculus textbook doesn't mention these concepts either); 2) I don't have any standard curve to compare against. Let's say, if the LTspice Help could provide a graph for, say, v(t) = exp(At), I could find out how to set up A. Jul 3 '19 at 8:32
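The piecewise EXP-source description above can be sketched in a few lines of Python (my own illustration; the parameter names follow the dialog labels). Note that one consistent mapping for the decaying waveform v(t) = 12 + 15*exp(-200t) is Vinitial = 27 falling toward Vpulsed = 12 with tau = 1/200 s:

```python
import math

def exp_source(t, v_initial, v_pulsed, rise_delay, rise_tau, fall_delay, fall_tau):
    """Value of an LTspice-style EXP source, per the piecewise formulas above.

    Assumes rise_delay <= fall_delay (otherwise swap the rise/fall pairs).
    """
    v = v_initial
    if t >= rise_delay:
        v += (v_pulsed - v_initial) * (1.0 - math.exp(-(t - rise_delay) / rise_tau))
    if t >= fall_delay:
        v -= (v_pulsed - v_initial) * (1.0 - math.exp(-(t - fall_delay) / fall_tau))
    return v

# v(t) = 12 + 15*exp(-200t): start at 27 V, decay toward 12 V with tau = 5 ms.
# A very large fall delay keeps the "fall" term out of the simulated window.
for t in (0.0, 0.005, 0.02):
    print(t, exp_source(t, 27.0, 12.0, 0.0, 1.0 / 200.0, 1e9, 1.0))
```

Plotting this alongside the textbook expression is an easy way to convince yourself that the parameter mapping is right before entering it into the source dialog.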
http://cognet.mit.edu/journal/10.1162/089976698300017485
## Neural Computation May 15, 1998, Vol. 10, No. 4, Pages 807-814 (doi: 10.1162/089976698300017485) © 1998 Massachusetts Institute of Technology Weight-Value Convergence of the SOM Algorithm for Discrete Input Article PDF (135.4 KB) Abstract Some insights on the convergence of the weight values of the self-organizing map (SOM) to a stationary state in the case of discrete input are provided. The convergence result is obtained by applying the Robbins-Monro algorithm and is applicable to input-output maps of any dimension.
http://www.rzuser.uni-heidelberg.de/~as3/SolveMeas.html
May 2005 - last revised October 2009

## How decoherence can solve the measurement problem

### H. D. Zeh

Decoherence may be defined as the uncontrollable dislocalization of quantum mechanical superpositions. It is an unavoidable consequence of the interaction of all local systems with their environments according to the Schrödinger equation. Since the dislocalization propagates in general without bounds, this concept of decoherence does not depend on any precise definition of (sub)systems. All systems should thus be entangled with their growing environments, and generically cannot possess pure quantum states of their own. They may then formally be described by a reduced density matrix $\rho$ that represents a "mixed state", with a von Neumann entropy $-\mathrm{trace}(\rho \ln \rho)$ that grows in time (unless there were "advanced entanglement"). This reduced density matrix is operationally indistinguishable (by means of local operations) from one describing an ensemble of states – as though some really existing pure state were just incompletely known. One of the states diagonalizing this density matrix could then be selected by a mere increase of knowledge. For this reason, the mixed state arising from entanglement is often erroneously identified with such an ensemble. Since the dynamical situation of increasing entanglement applies in particular to systems representing macroscopic outcomes of quantum measurements ("pointer positions"), decoherence has occasionally been claimed to explain the probabilistic nature of quantum mechanics ("quantum indeterminism"). However, such a conclusion would evidently contradict the determinism of the assumed global unitary dynamics. (Note that this claimed solution – even if correct – would require decoherence to be irreversible, as the measurement could otherwise be undone or "erased"; see Quantum teleportation and other quantum misnomers.) Although the claim would then be operationally unassailable, it is wrong.
The very concept of a density matrix is already based on local operations (measurements), which presume the probability interpretation to replace global unitarity at some point. Because of the popularity of this "naive" misinterpretation of decoherence, I have often emphasized that the latter does "not by itself solve the measurement problem". This remark has in turn been quoted to argue that decoherence is irrelevant for the understanding of quantum measurements. This argument has mainly been used by physicists who insist on a traditional solution: by means of a stochastic interpretation that has to complement unitary dynamics. Their hope can indeed not be fulfilled by decoherence. In particular, "epistemic" interpretations of the wave function (as merely representing incomplete knowledge) usually remain silent about what this missing knowledge is about, in order to avoid inconsistencies. A stochastic collapse of the wave function, on the other hand, would require a fundamental non-linear modification of the Schrödinger equation. Since, in Tegmark's words, decoherence "looks and smells like a collapse", it is instructive first to ask in what sense such collapse theories would solve the measurement problem if their prospective non-linear dynamics were ever confirmed empirically (for example, by studying systems that are completely shielded against decoherence – a very difficult condition to be achieved in practice). Some physicists prefer the questionable alternative that the Schrödinger equation is exact but applicable only between the "preparation" and "measurement" of a quantum state. However, it appears absurd to assume that the wave function exists only for the purpose of experimental physicists to make predictions for their experiments. It would then remain completely open how macroscopic objects, including preparation and measurement devices themselves, could ever be consistently described as physical systems consisting of atoms. 
It is well known that superpositions of two or more possible states may represent (new) individual physical properties as long as the system remains isolated, while they seem to turn into statistical ensembles when measured and hence subjected to decoherence. (To my knowledge, no "real", that is, irreversible, measurement has ever been performed in the absence of decoherence.) So what would it mean if appropriate non-linear collapse terms in the dynamics were confirmed to exist? These theories require that any superposition of different positions of a macroscopic pointer (or any other macroscopic variables) indeterministically evolves or jumps into one of many possible narrow wave packets that may represent pointer states with an uncertainty that is negligible but quite natural for wave packets. These wave packets resemble Schrödinger's coherent states, which he once used to describe quasi-classical oscillators, and which he hoped to be representative for all quasi-classical objects (apparent particles, in particular). His hope failed for microscopic particles because of the dynamical dispersion of the wave packet under the Schrödinger equation, while coherent states do successfully describe time-dependent quasi-classical states of electromagnetic field modes, which interact very weakly with their environment. The ensemble of all possible outcomes of the postulated collapse into such wave packets, weighted by the empirical Born probabilities, can be described by a density matrix that is essentially the same as the reduced density matrix arising from decoherence. The collapse assumption would mean, though, that no fundamental classical concepts are needed any more for an interpretation of quantum mechanics. Since macroscopic pointer states are assumed to collapse into narrow wave packets in their position representation, there is no eigenvalue-eigenfunction link problem of the kind that is claimed to arise in epistemic interpretations.
Observables are no longer fundamental concepts, as they can be derived from the specific measurement interaction Hamiltonian. As an example, consider the particle track arising in a Wilson or bubble chamber, described by a succession of collapse events. All the little droplets (or bubbles in a bubble chamber) can be interpreted as macroscopic "pointers" (or documents). They can themselves be observed without being disturbed by means of "ideal measurements". According to Mott's unitary description, the state of the apparently observed "particle" (its wave function) becomes entangled with all these pointer states in a way that describes a superposition of many different tracks, each one consisting of a number of droplets at correlated positions. This superposition would disappear according to the collapse, which is assumed to remove all but one of the tracks. Individual tracks are globally described by wave packets that approximately factorize into localized final states of the particle, droplet positions, and their environment. So one assumes that the kinematical concept of a wave function is complete, which means that there are no particles. In contrast, many interpretations of quantum theory, such as the Copenhagen interpretation or those based on Feynman paths or Bohm trajectories, all entertain the prejudice that classical concepts are fundamental at some level. As mentioned above, decoherence leads to the same reduced density matrix (for the combined system of droplets and "particle"), which therefore seems to represent an ensemble of tracks. This was all known to Mott in the early days of quantum mechanics, but he did not yet take into account the subsequent and unavoidable process of decoherence of the droplet positions by their environment. Mott did not see the need to solve any measurement problem, as he accepted the probability interpretation in terms of classical variables.
In a global unitary quantum description, however, there is still just one global superposition of all "potential" tracks consisting of droplets, entangled with the particle wave function and the environment: a universal Schrödinger cat. Since one does not obtain a genuine ensemble of pointer states, one cannot select one of its members by a mere increase of information. Since such a selection seems to occur in a measurement, it is this apparent increase of information that requires further analysis. For this purpose, one has to include an observer of the pointer or the Wilson tracks in the description. According to the Schrödinger equation, he, too, would necessarily become part of the entanglement with the "particle", the device, and the environment. Clearly, the phase relations originating from the initial superposition have now been irreversibly dislocalized (they have become an uncontrollable property of the state of the whole universe). They can never be experienced any more by an observer who is assumed to be local for dynamical reasons. This dynamical locality also means that decohered components of the universal wave function are dynamically autonomous (see Quantum nonlocality vs. Einstein locality). The branches of the global wave function that arise in this way form entirely independent "worlds", which may contain different states of all observers who are involved in the process. If we intend to associate unique contents of consciousness with physical states of local observers, we can do this only separately with their thus dynamically defined component states. The observed quantum indeterminism must then be attributed to the indeterministic history of these quasi-classical branch wave functions with their internal observers. No indeterminism is required for the global quantum state.
This identification of observers with states existing only in certain branching components of the global wave function is the only novel element that has to be added to the quantum formalism for a solution of the measurement problem. Different observers of the same measurement result living in the same branch world are consistently correlated with one another in a similar way as the positions of different droplets forming an individual track in the Wilson chamber. Redefining the very concept of reality operationally (that is, applying it only to "our" branch) would eliminate from reality most of what we just concluded to exist according to the unitary dynamics! The picture of branching "worlds" perfectly describes quantum measurements – although in an unconventional manner. Decoherence may be regarded as a "collapse without a collapse". (Note, however, that decoherence occurring in quantum processes in the brain must be expected to lead to further indeterministic branching even after the information about a measurement result has arrived at the sensorial system already in a quasi-classical form.) Why should we reject the consequence of the Schrödinger equation that there must be myriads of (by us) unobserved quasi-classical worlds, or why should we insist on the existence of fundamental classical objects that we seem to observe, but that we don't need at all for a consistent physical description of our observations? Collapse theories (when formulated by means of fundamental stochastic quantum Langevin equations) would not only have to postulate the indeterministic transition of quantum states into definite component states, but also their relative probabilities according to the Born rules. While, even without a collapse, the relevant components (or robust "branches" of the wave function) can be dynamically justified by the dislocalization of superpositions (decoherence) as described above, the probabilities themselves can not.
Since all outcomes are assumed to exist in this picture, all attempts to derive the empirical probabilities are doomed to remain circular. According to Graham, one may derive the observed relative frequencies of measurement outcomes (their statistical distribution) by merely assuming that our final (presently experienced) branch of the universal wave function (in which "we" happen to live) does not have an extremely small norm in comparison to the others. Although the choice of the norm is here completely equivalent to assuming the Born probabilities for all individual branchings, it is a natural choice for such a postulate, since the norm is conserved under the Schrödinger equation (just as phase space is conserved in classical theories, where it likewise serves as an appropriate probability measure). Nonetheless, most physicists seem to insist on a metaphysical (pre-Humean) concept of dynamical probabilities, which would explain the observed frequencies of measurement results in a "causal" manner. However, this metaphysics seems to represent a prejudice resulting from our causal experience of the classical world. There is now a wealth of observed mesoscopic realizations of "Schrödinger cats", produced according to a general Schrödinger equation. They include superpositions of different states of electromagnetic fields, interference between partial waves describing biomolecules passing through different slits of an appropriate device, and superpositions of currents consisting of millions of electrons moving collectively in opposite directions. They can all be used to demonstrate their gradual decoherence by interaction with the environment (in contrast to previously assumed spontaneous quantum jumps), while there is so far no indication whatsoever of a genuine collapse. However, complex biological systems (living beings) can hardly ever be sufficiently isolated, since they have to permanently get rid of entropy.
Such systems depend essentially on the arrow of time that is manifest in the growing correlations (most importantly in the form of quantum entanglement, and hence decoherence). Only in a Gedanken Experiment may we conceive of an isolated observer, who for some interval of time interacts with an also isolated measurement device, or even directly with a microscopic system (by absorbing a single photon, for example). One may similarly imagine an observer who is himself passing through an interference device while being aware of the slit he passes through. What would that mean according to a universal Schrödinger equation? Since the observer's internal state of knowledge must become entangled with the variables that he has observed, or with his path through the slits, he would subjectively believe to pass through one slit only. Could we confirm such a prediction in principle? If we observed the otherwise isolated observer from outside, he should behave just as any microscopic system – thus allowing for interference when "erasing" his memory. So he would have to lose all his memory about what he experienced in order to restore the complete superposition locally. Can we then not ask him before this recoherence occurs? This would require him to emit information in some physical form, thereby preventing recoherence and interference. An observer in a state that allows interference could never tell us which passage he was aware of! This demonstrates that the Everett branching is ultimately subjective, although we may always assume it to happen objectively as soon as decoherence has become irreversible for all practical purposes. As this usually occurs in the apparatus of measurement, this description justifies the pragmatic Copenhagen rules – albeit in a conceptually consistent manner and without presuming classical terms. See also "Quantum discreteness is an illusion" (in particular Sects. 3 and 4) or "Roots and Fruits of Decoherence" (in particular Sects. 3, 5 and 6). 
https://en.wikibooks.org/wiki/Beginning_Rigorous_Mathematics/Sets_and_Functions
# Beginning Rigorous Mathematics/Sets and Functions

We will now discuss some important set operations. Recall from the preliminaries that we only intuitively define a set to be a collection of distinct mathematical objects. There are much better and more rigorous definitions of what sets are, which form a subject of study all by itself, into which we will not delve. Recall the meaning of the symbol "${\displaystyle \in }$", which reads "is an element of", so that if ${\displaystyle A}$ is a set and ${\displaystyle x}$ an object, then ${\displaystyle x\in A}$ is a statement (which is either true or false, depending on whether ${\displaystyle x}$ is an element of ${\displaystyle A}$).

# Sets and Set operations

In the following discussion, ${\displaystyle A}$ and ${\displaystyle B}$ denote any sets.

## Definitions

### The empty set: ${\displaystyle \emptyset }$

We will assume that the empty set exists, and is denoted by ${\displaystyle \emptyset }$. As the name suggests, the empty set contains no elements, so that for any object ${\displaystyle x}$ the statement ${\displaystyle x\in \emptyset }$ is false.

### Set equality

Usually there is no ambiguity when we use the symbol "=" to refer to equality between sets. It is important to note that equality between sets is completely different from equality between numbers. We define the logical statement "${\displaystyle A=B}$" to be true by definition when the statement "${\displaystyle x\in A\Leftrightarrow x\in B}$" (which reads "${\displaystyle x}$ is contained in ${\displaystyle A}$ if and only if ${\displaystyle x}$ is contained in ${\displaystyle B}$") is true, and false otherwise. Intuitively this means that sets are equal if and only if they contain exactly the same elements. For example, ${\displaystyle \{1,2,3\}=\{1,2\}}$ is false since "${\displaystyle 3\in \{1,2\}}$" is false. It might be helpful to check the truth table to see that "${\displaystyle 3\in \{1,2,3\}\Leftrightarrow 3\in \{1,2\}}$" is a false statement.
It should then be clear that "${\displaystyle \{5,6,7\}=\{5,6,7\}}$" is true.

### Subsets

If every element of the set ${\displaystyle A}$ is an element of ${\displaystyle B}$, we then say that ${\displaystyle A}$ is a subset of ${\displaystyle B}$. Rigorously, we say that the statement "${\displaystyle A\subset B}$" is true by definition when the statement "${\displaystyle x\in A\Rightarrow x\in B}$" (which reads "If ${\displaystyle x}$ is contained in ${\displaystyle A}$ then ${\displaystyle x}$ is contained in ${\displaystyle B}$") is true. We have seen previously that the statement ${\displaystyle \{1,2\}=\{1,2,3\}}$ is false; however, the statement ${\displaystyle \{1,2\}\subset \{1,2,3\}}$ is true. It should be clear that ${\displaystyle \{1,2,3,5\}\subset \{1,2,3,4\}}$ is false, since the statement "${\displaystyle 5\in \{1,2,3,5\}\Rightarrow 5\in \{1,2,3,4\}}$" is false.

### Intersection

We define the intersection of sets by the symbol "${\displaystyle \cap }$". Rigorously we write "${\displaystyle A\cap B:=\{x|x\in A\land x\in B\}}$", which reads "The intersection of the sets ${\displaystyle A}$ and ${\displaystyle B}$ is by definition equal to the set which contains exactly the elements which are contained in both ${\displaystyle A}$ and ${\displaystyle B}$". For example, ${\displaystyle \{1,2,3,4\}\cap \{1,4,5\}=\{1,4\}}$. We say that ${\displaystyle A}$ and ${\displaystyle B}$ are disjoint when ${\displaystyle A\cap B=\emptyset }$.

### Union

We define the union of sets by the symbol "${\displaystyle \cup }$". Rigorously we write "${\displaystyle A\cup B:=\{x|x\in A\lor x\in B\}}$", which reads "The union of the sets ${\displaystyle A}$ and ${\displaystyle B}$ is by definition equal to the set which contains exactly the elements which are contained in at least one of ${\displaystyle A}$ and ${\displaystyle B}$". For example, ${\displaystyle \{1,2,3,4\}\cup \{1,4,5\}=\{1,2,3,4,5\}}$.
### Complement To define the complement of the set ${\displaystyle A}$ we assume that the set ${\displaystyle A}$ is a subset of some universal set ${\displaystyle X}$. We say "${\displaystyle A}$ lives in ${\displaystyle X}$". Often the universal set is implicitly clear; for example, when studying real analysis we often just assume ${\displaystyle X=\mathbb {R} }$, and when studying complex analysis we assume ${\displaystyle X=\mathbb {C} }$. We denote the complement of a set by the superscript "${\displaystyle c}$". Rigorously, "${\displaystyle A^{c}:=\{x\in X|\lnot x\in A\}}$", which reads "the complement of ${\displaystyle A}$ in ${\displaystyle X}$ is the set of all elements which are contained in ${\displaystyle X}$ and not in ${\displaystyle A}$". For example, if we assume ${\displaystyle X=\{1,2,3,4,5\}}$ then ${\displaystyle \{1,5\}^{c}=\{2,3,4\}}$. ### Relative complement We denote the relative complement of sets by the symbol "${\displaystyle \backslash }$". Rigorously, "${\displaystyle A\backslash B:=\{x\in A|\lnot x\in B\}}$", which reads "the relative complement of ${\displaystyle B}$ in ${\displaystyle A}$ is by definition equal to the set containing all the elements which are contained in ${\displaystyle A}$ and not contained in ${\displaystyle B}$". For example, ${\displaystyle \{1,2,3,4\}\backslash \{1,4,5\}=\{2,3\}}$. ## Some basic results ### Lemma 1 ${\displaystyle (A=B)\Leftrightarrow (A\subset B)\land (B\subset A)}$, which reads "A equals B if and only if A is a subset of B AND B is a subset of A". proof As explained in the previous chapter, ${\displaystyle (A=B)\Leftrightarrow (A\subset B)\land (B\subset A)}$ will be true by adjunction when both ${\displaystyle (A=B)\Rightarrow (A\subset B)\land (B\subset A)}$ and ${\displaystyle (A\subset B)\land (B\subset A)\Rightarrow (A=B)}$ are true. We prove first ${\displaystyle (A=B)\Rightarrow (A\subset B)\land (B\subset A)}$. 
Let ${\displaystyle A=B}$, then by definition we have ${\displaystyle x\in A\Leftrightarrow x\in B}$, which is logically equivalent to ${\displaystyle (x\in A\Rightarrow x\in B)\land (x\in B\Rightarrow x\in A)}$. By simplification we have that ${\displaystyle x\in A\Rightarrow x\in B}$ is true, and that ${\displaystyle x\in B\Rightarrow x\in A}$ is true. Therefore by definition ${\displaystyle A\subset B}$ and ${\displaystyle B\subset A}$ are both true. By adjunction ${\displaystyle (A\subset B)\land (B\subset A)}$ is true. Therefore ${\displaystyle (A=B)\Rightarrow (A\subset B)\land (B\subset A)}$ is true. Conversely, we prove ${\displaystyle (A\subset B)\land (B\subset A)\Rightarrow (A=B)}$. Let ${\displaystyle (A\subset B)\land (B\subset A)}$. Then, by simplification, both ${\displaystyle A\subset B}$ and ${\displaystyle B\subset A}$ are true. By definition both ${\displaystyle x\in A\Rightarrow x\in B}$ and ${\displaystyle x\in B\Rightarrow x\in A}$ are true. By adjunction, ${\displaystyle (x\in A\Rightarrow x\in B)\land (x\in B\Rightarrow x\in A)}$ is true, which is logically equivalent to ${\displaystyle x\in A\Leftrightarrow x\in B}$. Then by definition ${\displaystyle A=B}$ is true. Therefore ${\displaystyle (A\subset B)\land (B\subset A)\Rightarrow (A=B)}$ is true. QED. ### Lemma 2 ${\displaystyle A\subset A\cup B}$ ### Lemma 3 ${\displaystyle A\cap B\subset A}$ and ${\displaystyle A\cap B\subset B}$
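The three lemmas can be spot-checked exhaustively on a small universe (a sketch, not part of the original text; Python's `<=` on sets plays the role of the subset relation):

```python
import itertools

# All subsets of the universe {1, 2, 3}, as frozensets so they can be reused.
subsets = [frozenset(c) for r in range(4)
           for c in itertools.combinations({1, 2, 3}, r)]

for A in subsets:
    for B in subsets:
        # Lemma 1: A = B iff (A subset of B) and (B subset of A).
        assert (A == B) == (A <= B and B <= A)
        # Lemma 2: A is a subset of A union B.
        assert A <= (A | B)
        # Lemma 3: A intersect B is a subset of both A and B.
        assert (A & B) <= A and (A & B) <= B

print("all lemma checks passed")
```

A check over one small universe is of course not a proof, but it is a useful sanity check of the statements.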
http://www.ki-net.umd.edu/activities/abstract.php?user_id=4458&event_id=780
## Young Researchers Workshop: Current trends in kinetic theory ### Compactness and weak solutions of time fractional PDEs Lei Li Duke University Abstract: The Aubin-Lions lemma and its variants play crucial roles in proving the existence of weak solutions of nonlinear evolutionary PDEs. I will talk about some compactness criteria that are analogues of the Aubin-Lions lemma for the existence of weak solutions to time fractional PDEs. The existence of weak solutions for a special case of time fractional compressible Navier-Stokes equations with constant density and for time fractional Keller-Segel equations in $\mathbb{R}^2$ is then studied as a set of model problems.
https://arxiv.org/abs/1902.01881
# Title: So close, so different: characterization of the K2-36 planetary system with HARPS-N Abstract: K2-36 is a K dwarf orbited by two small ($R_{\rm b}=1.43\pm0.08$ $R_\oplus$ and $R_{\rm c}=3.2\pm0.3$ $R_\oplus$), close-in ($a_{\rm b}$=0.022 AU and $a_{\rm c}$=0.054 AU) transiting planets discovered by Kepler/K2. They are representatives of two families of small planets ($R_{\rm p}$<4 $R_\oplus$) that recently emerged from the analysis of Kepler data, likely with different structures, compositions and evolutionary pathways. We revise the fundamental stellar parameters and the sizes of the planets, and provide the first measurement of their masses and bulk densities, which we use to infer their structure and composition. We observed K2-36 with the HARPS-N spectrograph over $\sim$3.5 years, collecting 81 useful radial velocity measurements. The star is active, with evidence for increasing levels of magnetic activity during the observing time span. The radial velocity scatter is $\sim$17 m s$^{-1}$ due to the stellar activity contribution, which is much larger than the semi-amplitudes of the planetary signals. We tested different methods for mitigating the stellar activity contribution to the radial velocity time variations and measuring the planet masses with good precision. We found that K2-36 is likely a $\sim$1 Gyr old system, and by treating the stellar activity through a Gaussian process regression, we measured the planet masses $m_{\rm b}$=3.9$\pm$1.1 $M_\oplus$ and $m_{\rm c}$=7.8$\pm$2.3 $M_\oplus$. The derived planet bulk densities $\rho_{\rm b}$=7.2$^{+2.5}_{-2.1}$ $g/cm^{3}$ and $\rho_{\rm c}$=1.3$^{+0.7}_{-0.5}$ $g/cm^{3}$ indicate that K2-36\,b has a rocky, Earth-like composition, and K2-36\,c is a low-density sub-Neptune. 
Composed of two planets with similar orbital separations but different densities, K2-36 represents an optimal laboratory for testing the role of the atmospheric escape in driving the evolution of close-in, low-mass planets after $\sim$1 Gyr from their formation. Comments: Accepted for publication on Astronomy $\&$ Astrophysics Subjects: Earth and Planetary Astrophysics (astro-ph.EP) Cite as: arXiv:1902.01881 [astro-ph.EP] (or arXiv:1902.01881v1 [astro-ph.EP] for this version) ## Submission history From: Mario Damasso [view email] [v1] Tue, 5 Feb 2019 19:32:00 UTC (2,739 KB)
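As a quick sanity check, the quoted bulk densities follow from the measured masses and radii via $\rho = \rho_\oplus\,(m/M_\oplus)/(R/R_\oplus)^3$. A sketch (not from the paper; Earth's mean density is taken here as 5.51 g cm⁻³):

```python
RHO_EARTH = 5.51  # mean density of Earth, g/cm^3 (assumed reference value)

def bulk_density(mass_earth_units, radius_earth_units):
    """Bulk density in g/cm^3 from mass and radius in Earth units."""
    return RHO_EARTH * mass_earth_units / radius_earth_units**3

# K2-36 b: m = 3.9 Earth masses, R = 1.43 Earth radii.
print(round(bulk_density(3.9, 1.43), 1))  # close to the quoted 7.2 (+2.5/-2.1)
# K2-36 c: m = 7.8 Earth masses, R = 3.2 Earth radii.
print(round(bulk_density(7.8, 3.2), 1))   # close to the quoted 1.3 (+0.7/-0.5)
```

The small residual differences come from rounding and from the exact Earth-density value used.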
https://gateoverflow.in/369487/exam-queries?show=371422
1,190 views Is it necessary to solve the questions in standard books (i.e., the exercise questions), or is solving PYQs sufficient? Nothing is necessary and nothing is sufficient. @raja11sep then tell far from this? In general, solving selected questions from textbooks really helps. For example, 1. You can use the Forouzan CN book for data link layer questions (also read specific parts of it from the book) 2. Solve DMA questions from the Tanenbaum COA book. I think the GATE exam checks everything. @dd sir, can you please tell us which topics we need to study from Forouzan for CN? @raja11sep ## https://dhruvil16.github.io/resource_gate.html See, if you solve all the questions of all the standard books but without understanding the concepts, it would be worthless. But if you solve them while grasping the concepts, it will help build up your logic, with which you can solve other questions related to the concept too. Try to understand why that question is asked, what the examiner's motive in asking it is, and what concept he is trying to check. This should help. ## This will help them to know about the structure of questions asked in the entrance.... ### 5. https://gatecse.in/i_want_to_crack_gate_in_cse Sir, I want to ask: in each year's GATE CSE paper, what percentage of questions (on average) comes from the trends of previous years' papers and modified versions of those concepts, and what percentage from standard textbooks (concepts, examples, and exercises) according to the GATE syllabus? One suggestion is enough for GATE; why do you give many suggestions? The candidate just gets confused by many suggestions. Bro, I am not confused; I just want an easier and more comprehensive strategy ☺ @Sonu12345
https://www.physicsforums.com/threads/electron-negativity.83245/
# Electron Negativity 1. Jul 26, 2005 ### juskovin Why are electrons negative? 2. Jul 26, 2005 ### El Hombre Invisible Because they are uncertain as to where they are and where they are going? Well, from a charge point of view it's arbitrary - we could easily call electrons positive and protons negative and the show will go on. But historically, current was observed before electrons were known about. Current flows from higher to lower potentials, and it probably would not have occurred to anyone to call this 'negative current' in absence of any corresponding current flowing the other way, so this was chosen as positive. Electrons flow in the opposite direction to conventional current flow, so are negative.
http://www.chegg.com/homework-help/questions-and-answers/leaf-maintain-temperature-30-c-reasonable-leaf-lose-280-w-m-2-transpiration-evaporative-he-q2685809
## Maintaining Temperature. If a leaf is to maintain a temperature of 30° C (reasonable for a leaf), it must lose 280 W/m^2 by transpiration (evaporative heat loss). Note that the leaf also loses heat by radiation, but we will neglect this. How much water is lost after 1 h through transpiration only? The area of the leaf is 0.007 m^2. _____g I got 2.24 the first time, but it was incorrect.
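A worked sketch of the calculation (not from the original post): the transpired mass is the energy lost over one hour divided by the latent heat of vaporization of water. The latent heat value below (≈2.43 × 10⁶ J/kg near 30 °C) is an assumption; the exact constant your course uses may differ slightly, which shifts the answer.

```python
flux = 280.0    # evaporative heat loss, W/m^2
area = 0.007    # leaf area, m^2
t = 3600.0      # one hour, in seconds
L_v = 2.43e6    # latent heat of vaporization near 30 C, J/kg (assumed value)

energy = flux * area * t          # total energy lost in one hour, J
mass_g = energy / L_v * 1000.0    # transpired water, converted kg -> g
print(round(mass_g, 2))           # roughly 2.9 g
```

With the room-temperature value 2.45 × 10⁶ J/kg the result is essentially the same (~2.88 g), so the posted answer of 2.24 g appears to use a different constant or an arithmetic slip.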
http://math.stackexchange.com/questions/110762/arranging-couples-in-a-row
Arranging Couples in a Row Three couples are sitting in a row. Compute the number of arrangements in which no person is sitting next to his or her partner. Answer is 240. From Wikipedia: this problem is called the ménage problem, which has a formula known as Touchard's formula. Let $M_n$ denote the number of seating arrangements for n couples. Touchard (1934) derived the formula $M_n = 2 \cdot n! \sum_{k=0}^n (-1)^k \frac{2n}{2n-k} {2n-k\choose k} (n-k)!$ How does one manage to prove it? - Touchard's formula, see en.wikipedia.org/wiki/Ménage_problem, applies to couples sitting around a table, not couples sitting in a row. Try it for $n=3$; do you get 240? –  Gerry Myerson Feb 19 '12 at 1:07 There's a very nice discussion of Touchard at math.dartmouth.edu/~doyle/docs/menage/menage/menage.html. As I say, Touchard isn't what you want, but the ideas at that site might apply to the row situation. –  Gerry Myerson Feb 19 '12 at 1:12 –  Henry Feb 19 '12 at 1:22 As suggested by Gerry, this can be related to the relaxed Menage problem as described by Bogart at the Dartmouth link. Let $m_n$ be the result of the relaxed Menage problem as described by Bogart, and let $M_n$ be as defined by the OP. Then $M_n=nM_{n-1}+m_n$ for $n\ge 1$. This is because every solution of the relaxed menage problem can be made into a solution of the OP's problem by "cutting" the circular table at a fixed point, but in addition to these solutions we have solutions where the two members of a couple are seated at opposite ends of the linear table. –  Ben Crowell Feb 19 '12 at 2:16 Oops, the above should be $M_n=2nM_{n-1}+m_n$. –  Ben Crowell Feb 19 '12 at 2:33 4 Answers There are $(2n)!$ ways of getting $2n$ people to sit in a row. If we wanted all of them to sit as couples there would be $n!$ ways of arranging the couples and $2^n$ ways of arranging within the couples. If we require $k$ named couples to sit together then this becomes $(2n-k)$ units to arrange, giving $(2n-k)!$ possibilities. 
There are $2^k$ ways to arrange within the named couples. But there are ${n \choose k}$ ways of naming the couples. So this gives $2^k{n \choose k}(2n-k)!$. That previous figure will involve double counting, so we need to use inclusion-exclusion to answer your question and get $$(2n)! - 2^1{n \choose 1} (2n-1)! + 2^2{n \choose 2} (2n-2)! - \cdots + (-2)^k{n \choose k} (2n-k)! + \cdots + (-2)^n n!$$ $$=\sum_{k=0}^n (-2)^k{n \choose k} (2n-k)!$$ and for $n=3$ this gives $720-720+288-48=240$. - Since we want to seat couples and not people, why wouldn't it be $n!$ minus everything else? Also, is this equal to $\frac {(2n)!} {e}$? –  kuhaku Apr 14 at 16:15 @kuhaku: We want to seat people (explicitly not couples) and there are $2n$ people. The number of ways is an integer so not exactly $\dfrac{(2n)!}{e}$, though for large $n$ the proportion with no couples approaches $\dfrac1e$. See another question for a proof –  Henry Apr 14 at 16:29 If we wanted to seat couples, would it be $n!$ then? –  kuhaku Apr 14 at 16:48 @kuhaku: More like $2^n n!$ as each couple can sit in two ways (swap the two individuals round) –  Henry Apr 14 at 16:50 I meant seating $n$ couples such that no woman sits with her spouse. –  kuhaku Apr 14 at 16:51 This is (the $n=3$ term in) http://oeis.org/A007060, "Number of ways n couples can sit in a row without any spouses next to each other." There are some formulas and links there that may be helpful. - [EDIT] The OP originally posted the $n=3$ case, then changed the question to ask for a general result. This was in response to the $n=3$ question. Here is a method with slightly less brute force than Ross Millikan's. Impose two extra rules: (1) that each couple must be introduced in alphabetical order, and (2) that each woman must come before her partner. Imposing these rules cuts the number of possibilities by a factor of $6\times 8$. 
Under these rules, there are two possible patterns for the sexes: FFFMMM (one possibility), and FFMFMM (four possibilities, since there are two choices for the fourth person, and then two ways of ordering the remaining two). The total number of possibilities is then $6\times8\times(1+4)=240$. - I don't know how to do it without a certain amount of brute force, but you can reduce the possibilities. Call the six Mr. and Mrs. A, B, C. By symmetry, it doesn't matter who starts, so let's say Mr. A and multiply by 6 at the end. Then you have 4 choices for the second slot and they are all the same, so say Mrs. B. For the third slot you have two types of possibilities: you can have Mrs. A (one choice) or Mr. or Mrs. C (two choices). The difference is that Mrs. A imposes no restriction on the fourth seat. If it is Mrs. A, you have two ways to fill the remaining seats because you have BCC left and have to separate the C's. If one of the C's, you have 4 because you have ABC left and can't start with C. So there are a total of $6\times4\times(1\times2+2\times4)=240$ possibilities. -
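Both the inclusion-exclusion formula and the n = 3 answer can be checked by brute force (a sketch, not from the thread; partners are encoded as consecutive integers, so person i's partner is i XOR 1):

```python
from itertools import permutations
from math import comb, factorial

def by_formula(n):
    """Inclusion-exclusion count of row seatings with no adjacent partners."""
    return sum((-2)**k * comb(n, k) * factorial(2*n - k) for k in range(n + 1))

def by_brute_force(n):
    """Directly count permutations of 2n people with no adjacent partners."""
    return sum(
        all(p[i] ^ 1 != p[i + 1] for i in range(2*n - 1))
        for p in permutations(range(2*n))
    )

print(by_formula(3), by_brute_force(3))  # 240 240
```

The brute-force count also matches OEIS A007060 for small n (e.g. 8 arrangements for two couples).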
https://www.ps.uni-saarland.de/Publications/details/Forster:2021:CT_Coq.html
# Publication details ## Church’s Thesis and Related Axioms in Coq’s Type Theory Yannick Forster 29th EACSL Annual Conference on Computer Science Logic (CSL 2021), Leibniz International Proceedings in Informatics (LIPIcs), January 2021 Church's thesis (CT) as an axiom in constructive logic states that every total function of type $\mathbb{N} \to \mathbb{N}$ is computable, i.e. definable in a model of computation. CT is inconsistent in both classical mathematics and in Brouwer's intuitionism since it contradicts Weak König's Lemma and the fan theorem, respectively. Recently, CT was proved consistent for (univalent) constructive type theory. Since neither Weak König's Lemma nor the fan theorem is a consequence of just logical axioms or just choice-like axioms assumed in constructive logic, it seems likely that CT is only inconsistent with a combination of classical logic and choice axioms. We study consequences of CT and its relation to several classes of axioms in Coq's type theory, a constructive type theory with a universe of propositions which proves neither classical logical axioms nor strong choice axioms. We thereby provide a partial answer to the question of which axioms may preserve the computational intuitions inherent to type theory, and which certainly do not. The paper can also be read as a broad survey of axioms in type theory, with all results mechanised in the Coq proof assistant. Coq Development
https://www.physicsforums.com/threads/cross-products-vs-dot-products.540717/
# Cross products vs. dot products 1. Oct 15, 2011 ### Joyci116 I have a question regarding cross products and dot products. What is the difference, and are there any similarities? What ARE cross products and what are their functions? What ARE dot products and what are their functions? What do we use them for? Thank you, Joyci116 2. Oct 16, 2011 ### paul2211 Well, first of all, you use cross products and dot products when you work with vectors, and I would say (at least how I convinced myself) they are something made up by mathematicians to make vectors useful. A cross product between two vectors produces another vector that is orthogonal to the original two. For example, the cross product can be used to find a moment: M = r x F A dot product, on the other hand, between two vectors produces a scalar, basically a number. The dot product can be used to calculate the work done by a force: W = F · d P.S. The bold letters are vectors, and the unbold ones are scalars. 3. Oct 16, 2011 ### Joyci116 So what you are trying to convey is that you multiply with cross products and dot products, but it is WHAT they produce that determines whether it is a dot or cross product? A scalar result always means dot product and a vector result always means cross product?
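The distinction in the replies above can be made concrete in a few lines (a sketch, not from the thread; plain 3-component tuples stand in for vectors):

```python
def dot(a, b):
    """Dot product: returns a scalar, as in work W = F . d."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product: returns a vector orthogonal to both inputs,
    as in a moment M = r x F."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

r, F = (1, 0, 0), (0, 1, 0)
M = cross(r, F)
print(M)                      # (0, 0, 1): perpendicular to both r and F
print(dot(M, r), dot(M, F))   # 0 0, confirming orthogonality
```

So the two operations differ in what they *produce* (a scalar versus a vector), not in what they accept: both take two vectors as input.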
https://rdrr.io/cran/freqtables/f/inst/doc/descriptive_analysis.Rmd
# Descriptive Analysis with freqtables In freqtables: Make Quick Descriptive Tables for Categorical Variables Table of contents: Univariate percentages and 95% log transformed confidence intervals Univariate percentages and 95% Wald confidence intervals Bivariate percentages and 95% log transformed confidence intervals Interpretation of confidence intervals # Overview The freqtables package is designed to quickly make tables of descriptive statistics for categorical variables (i.e., counts, percentages, confidence intervals). This package is designed to work in a Tidyverse pipeline, and consideration has been given to getting results from R to Microsoft Word ® with minimal pain. The package currently consists of the following functions: 1. freq_table(): Estimate Percentages and 95 Percent Confidence Intervals in dplyr Pipelines. 2. freq_test(): Hypothesis Testing For Frequency Tables. 3. freq_format(): Format freq_table Output for Publication and Dissemination. 4. freq_group_n(): Formatted Group Sample Sizes for Tables This vignette is not intended to be representative of every possible descriptive analysis that one may want to carry out on a given data set. Rather, it is intended to be representative of descriptive analyses that are commonly used when conducting epidemiologic research. library(dplyr) library(freqtables) data(mtcars) # Univariate percentages and 95% confidence intervals {#one-way-log} In this section we provide an example of calculating common univariate descriptive statistics for a single categorical variable. We are assuming that we are working in a dplyr pipeline and that we are passing a grouped data frame to the freq_table() function. ## Logit transformed confidence intervals The default confidence intervals are logit transformed - matching the method used by Stata: https://www.stata.com/manuals13/rproportion.pdf mtcars %>% freq_table(am) Interpretation of results: var contains the name of the variable passed to the freq_table() function. 
cat contains the unique levels (values) of the variable in var. n contains a count of the number of rows in the data frame that have the value cat for the variable var. n_total contains the sum of n. percent = n / n_total. se = $\sqrt{proportion * (1 - proportion) / (n - 1)}$ t_crit is the critical value from Student's t distribution with n_total - 1 degrees of freedom. The default probability value is 0.975, which corresponds to an alpha of 0.05. lcl is the lower bound of the confidence interval. By default, it is a 95% confidence interval. ucl is the upper bound of the confidence interval. By default, it is a 95% confidence interval. Compare to Stata: ## Wald confidence intervals {#one-way-wald} Optionally, the ci_type = "wald" argument can be used to calculate Wald confidence intervals that match those returned by SAS. The exact methods are documented here: https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_surveyfreq_a0000000221.htm https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_surveyfreq_a0000000217.htm mtcars %>% freq_table(am, ci_type = "wald") Compare to SAS: ## Returning arbitrary confidence intervals The default behavior of freq_table() is to return 95% confidence intervals (two-sided). However, this behavior can be adjusted to return any alpha level. For example, to return 99% confidence intervals instead just pass 99 to the percent_ci parameter of freq_table() as demonstrated below. mtcars %>% freq_table(am, percent_ci = 99) Notice that the lower bounds of the 99% confidence limits (34.88730 and 20.05315) are less than the lower bounds of the 95% confidence limits (40.94225 and 24.50235). 
Likewise, the upper bounds of the 99% confidence limits (79.94685 and 65.11270) are greater than the upper bounds of the 95% confidence limits (75.49765 and 59.05775). # Bivariate percentages and 95% log transformed confidence intervals {#two-way-log} In this section we provide an example of calculating common bivariate descriptive statistics for categorical variables. Currently, all confidence intervals for (grouped) row percentages are logit transformed - matching the method used by Stata: https://www.stata.com/manuals13/rproportion.pdf At this time, you may pass two variables to the ... argument of the freq_table() function. The first variable is labeled row_var in the resulting frequency table. The second variable passed to freq_table() is labeled col_var in the resulting frequency table. These labels are somewhat arbitrary and uninformative, but are used to match common naming conventions. Having said that, they may change in the future. The resulting frequency table is organized so that the n and percent of observations where the value of col_var equals col_cat is calculated within levels of row_cat. For example, the frequency table below tells us that there are 11 rows (n_row) with a value of 4 (row_cat) for the variable cyl (row_var). Among those 11 rows only, there are 3 rows (n) with a value of 0 (col_cat) for the variable am (col_var), and 8 rows (n) with a value of 1 (col_cat) for the variable am (col_var). mtcars %>% freq_table(cyl, am) Interpretation of results: row_var contains the name of the first variable passed to the ... argument of the freq_table() function. row_cat contains the levels (values) of the variable in row_var. col_var contains the name of the second variable passed to the ... argument of the freq_table() function. col_cat contains the levels (values) of the variable in col_var. 
- n contains a count of the number of rows in the data frame that have the value row_cat for the variable row_var AND the value col_cat for the variable col_var.
- n_row contains the sum of n for each level of row_cat.
- n_total contains the sum of n.
- percent_total = n / n_total.
- se_total = $\sqrt{proportion_{overall} * (1 - proportion_{overall}) / (n_{overall} - 1)}$
- t_crit_total is the critical value from Student's t distribution with n_total - 1 degrees of freedom. The default probability value is 0.975, which corresponds to an alpha of 0.05.
- lcl_total is the lower bound of the confidence interval around percent_total. By default, it is a 95% confidence interval.
- ucl_total is the upper bound of the confidence interval around percent_total. By default, it is a 95% confidence interval.
- percent_row = n / n_row.
- se_row = $\sqrt{proportion_{row} * (1 - proportion_{row}) / (n_{row} - 1)}$
- t_crit_row is the critical value from Student's t distribution with n_total - 1 degrees of freedom. The default probability value is 0.975, which corresponds to an alpha of 0.05.
- lcl_row is the lower bound of the confidence interval around percent_row. By default, it is a 95% confidence interval.
- ucl_row is the upper bound of the confidence interval around percent_row. By default, it is a 95% confidence interval.

Compare to Stata:

These estimates do not match those generated by SAS, which uses a different variance estimation method (https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_surveyfreq_a0000000217.htm).

# Interpretation of confidence intervals {#interpretation}

The following are frequentist interpretations for 95% confidence intervals taken from relevant texts and peer-reviewed journal articles.
## Biostatistics: A foundation for analysis in the health sciences

In repeated sampling, from a normally distributed population with a known standard deviation, 95% of all intervals will in the long run include the population's mean.

Daniel, W. W., & Cross, C. L. (2013). Biostatistics: A foundation for analysis in the health sciences (Tenth). Hoboken, NJ: Wiley.

## Fundamentals of biostatistics

You may be puzzled at this point as to what a CI is. The parameter µ is a fixed unknown constant. How can we state that the probability that it lies within some specific interval is, for example, 95%? The important point to understand is that the boundaries of the interval depend on the sample mean and sample variance and vary from sample to sample. Furthermore, 95% of such intervals that could be constructed from repeated random samples of size n contain the parameter µ. The idea is that over a large number of hypothetical samples of size 10, 95% of such intervals contain the parameter µ. Any one interval from a particular sample may or may not contain the parameter µ. In Figure 6.7, by chance all five intervals contain the parameter µ. However, with additional random samples this need not be the case. Therefore, we cannot say there is a 95% chance that the parameter µ will fall within a particular 95% CI. However, we can say the following: The length of the CI gives some idea of the precision of the point estimate x. In this particular case, the length of each CI ranges from 20 to 47 oz, which makes the precision of the point estimate x doubtful and implies that a larger sample size is needed to get a more precise estimate of µ.

Rosner, B. (2015). Fundamentals of biostatistics (Eighth). MA: Cengage Learning.

## Statistical modeling: A fresh approach

Treat the confidence interval just as an indication of the precision of the measurement.
If you do a study that finds a statistic of 17 ± 6 and someone else does a study that gives 23 ± 5, then there is little reason to think that the two studies are inconsistent. On the other hand, if your study gives 17 ± 2 and the other study is 23 ± 1, then something seems to be going on; you have a genuine disagreement on your hands.

Kaplan, D. T. (2017). Statistical modeling: A fresh approach (Second). Project MOSAIC Books.

## Modern epidemiology

If the underlying statistical model is correct and there is no bias, a confidence interval derived from a valid test will, over unlimited repetitions of the study, contain the true parameter with a frequency no less than its confidence level. This definition specifies the coverage property of the method used to generate the interval, not the probability that the true parameter value lies within the interval. For example, if the confidence level of a valid confidence interval is 90%, the frequency with which the interval will contain the true parameter will be at least 90%, if there is no bias. Consequently, under the assumed model for random variability (e.g., a binomial model, as described in Chapter 14) and with no bias, we should expect the confidence interval to include the true parameter value in at least 90% of replications of the process of obtaining the data.

Unfortunately, this interpretation for the confidence interval is based on probability models and sampling properties that are seldom realized in epidemiologic studies; consequently, it is preferable to view the confidence limits as only a rough estimate of the uncertainty in an epidemiologic result due to random error alone. Even with this limited interpretation, the estimate depends on the correctness of the statistical model, which may be incorrect in many epidemiologic settings (Greenland, 1990). Furthermore, exact 95% confidence limits for the true rate ratio are 0.7–13.
The fact that the null value (which, for the rate ratio, is 1.0) is within the interval tells us the outcome of the significance test: The estimate would not be statistically significant at the 1 - 0.95 = 0.05 alpha level. The confidence limits, however, indicate that these data, although statistically compatible with no association, are even more compatible with a strong association — assuming that the statistical model used to construct the limits is correct. Stating the latter assumption is important because confidence intervals, like P-values, do nothing to address biases that may be present.

Indeed, because statistical hypothesis testing promotes so much misinterpretation, we recommend avoiding its use in epidemiologic presentations and research reports. Such avoidance requires that P-values (when used) be presented without reference to alpha levels or “statistical significance,” and that careful attention be paid to the confidence interval, especially its width and its endpoints (the confidence limits) (Altman et al., 2000; Poole, 2001c).

An astute investigator may properly ask what frequency interpretations have to do with the single study under analysis. It is all very well to say that an interval estimation procedure will, in 95% of repetitions, produce limits that contain the true parameter. But in analyzing a given study, the relevant scientific question is this: Does the single pair of limits produced from this one study contain the true parameter? The ordinary (frequentist) theory of confidence intervals does not answer this question. The question is so important that many (perhaps most) users of confidence intervals mistakenly interpret the confidence level of the interval as the probability that the answer to the question is “yes.” It is quite tempting to say that the 95% confidence limits computed from a study contain the true parameter with 95% probability.
Unfortunately, this interpretation can be correct only for Bayesian interval estimates (discussed later and in Chapter 18), which often diverge from ordinary confidence intervals.

Rothman, K. J., Greenland, S., & Lash, T. L. (2008). Modern epidemiology (Third). Philadelphia, PA: Lippincott Williams & Wilkins.

## Greenland, 2016

The specific 95% confidence interval presented by a study has a 95% chance of containing the true effect size. No! A reported confidence interval is a range between two numbers. The frequency with which an observed interval (e.g., 0.72–2.88) contains the true effect is either 100% if the true effect is within the interval or 0% if not; the 95% refers only to how often 95% confidence intervals computed from very many studies would contain the true size if all the assumptions used to compute the intervals were correct.

The 95% confidence intervals from two subgroups or studies may overlap substantially and yet the test for difference between them may still produce P < 0.05. Suppose, for example, two 95% confidence intervals for means from normal populations with known variances are (1.04, 4.96) and (4.16, 19.84); these intervals overlap, yet the test of the hypothesis of no difference in effect across studies gives P = 0.03. As with P values, comparison between groups requires statistics that directly test and estimate the differences across groups. It can, however, be noted that if the two 95% confidence intervals fail to overlap, then when using the same assumptions used to compute the confidence intervals we will find P > 0.05 for the difference; and if one of the 95% intervals contains the point estimate from the other group or study, we will find P > 0.05 for the difference.

Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. European Journal of Epidemiology, 31(4), 337–350.
https://doi.org/10.1007/s10654-016-0149-3

## Bottom Line

- Give the point estimate along with the 95% confidence interval.
- Say NOTHING about statistical significance.
- Write some kind of statement about the data's compatibility with the model. For example, "the confidence limits, however, indicate that these data, although statistically compatible with no association, are even more compatible with a strong association — assuming that the statistical model used to construct the limits is correct."

freqtables documentation built on July 20, 2020, 9:06 a.m.
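The repeated-sampling language in the quotations above can be made concrete with a small simulation. This is a hypothetical sketch unrelated to the freqtables package; the t critical value for 29 degrees of freedom is hardcoded, and the true mean and spread are made up.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(1)
mu, sigma, n = 10.0, 3.0, 30        # true (but "unknown") population values
t_crit = 2.0452                     # roughly qt(0.975, df = 29)

trials, covered = 2000, 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    half_width = t_crit * stdev(sample) / sqrt(n)
    covered += (mean(sample) - half_width) <= mu <= (mean(sample) + half_width)

coverage = covered / trials          # close to 0.95 in the long run
```

Any single interval either contains mu or it does not; the 95% describes the long-run frequency across repeated intervals, which is exactly the point the quoted texts make.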
https://phys.libretexts.org/Bookshelves/Electricity_and_Magnetism/Book%3A_Electromagnetics_I_(Ellingson)/08%3A_Time-Varying_Fields/8.08%3A_The_Maxwell-Faraday_Equation
In this section, we generalize Kirchhoff’s Voltage Law (KVL), previously encountered as a principle of electrostatics in Sections 5.10 and 5.11. KVL states that in the absence of a time-varying magnetic flux, the electric potential accumulated by traversing a closed path $$\mathcal{C}$$ is zero. Here is that idea in mathematical form: $V = \oint_{\mathcal{C}}{ {\bf E} \cdot d{\bf l} } = 0$ Now recall Faraday’s Law (Section [m0055_Faradays_Law]): $V = - \frac{\partial}{\partial t} \Phi = - \frac{\partial}{\partial t} \int_{\mathcal{S}} { {\bf B} \cdot d{\bf s} }$ Here, $$\mathcal{S}$$ is any open surface that intersects all magnetic field lines passing through $$\mathcal{C}$$, with the relative orientations of $$\mathcal{C}$$ and $$d{\bf s}$$ determined in the usual way by the Stokes’ Theorem convention. Note that Faraday’s Law agrees with KVL in the magnetostatic case. If magnetic flux is constant, then Faraday’s Law says $$V=0$$. However, Faraday’s Law is very clearly not consistent with KVL if magnetic flux is time-varying. The correction is simple enough; we can simply set these expressions to be equal. Here we go: $\boxed{ \oint_{\mathcal{C}}{ {\bf E} \cdot d{\bf l} } = - \frac{\partial}{\partial t} \int_{\mathcal{S}} { {\bf B} \cdot d{\bf s} } } \label{m0050_eMFEI}$ This general form is known by a variety of names; here we refer to it as the Maxwell-Faraday Equation (MFE). The integral form of the Maxwell-Faraday Equation (Equation \ref{m0050_eMFEI}) states that the electric potential associated with a closed path $$\mathcal{C}$$ is due entirely to electromagnetic induction, via Faraday’s Law. Despite the great significance of this expression as one of Maxwell’s Equations, one might argue that all we have done is simply to write Faraday’s Law in a slightly more verbose way. This is true. The real power of the MFE is unleashed when it is expressed in differential, as opposed to integral, form. Let us now do this.
We can transform the left-hand side of Equation \ref{m0050_eMFEI} into an integral over $$\mathcal{S}$$ using Stokes’ Theorem. Applying Stokes’ theorem on the left, we obtain $\int_{\mathcal{S}} { \left( \nabla \times {\bf E} \right) \cdot d{\bf s} } = - \frac{\partial}{\partial t} \int_{\mathcal{S}} { {\bf B} \cdot d{\bf s} }$ Now exchanging the order of integration and differentiation on the right-hand side: $\int_{\mathcal{S}} { \left( \nabla \times {\bf E} \right) \cdot d{\bf s} } = \int_{\mathcal{S}} { \left( - \frac{\partial}{\partial t}{\bf B} \right) \cdot d{\bf s} }$ The surface $$\mathcal{S}$$ on both sides is the same, and we have not constrained $$\mathcal{S}$$ in any way. $$\mathcal{S}$$ can be any mathematically-valid open surface anywhere in space, having any size and any orientation. The only way the above expression can be universally true under these conditions is if the integrands on each side are equal at every point in space. Therefore, $\boxed{ \nabla \times {\bf E} = - \frac{\partial}{\partial t}{\bf B} } \label{m0050_eMFED}$ which is the MFE in differential form. What does this mean? Recall that the curl of $${\bf E}$$ is a way to take a derivative of $${\bf E}$$ with respect to position (Section 4.8). Therefore the MFE constrains spatial derivatives of $${\bf E}$$ to be simply related to the rate of change of $${\bf B}$$. Said plainly: The differential form of the Maxwell-Faraday Equation (Equation \ref{m0050_eMFED}) relates the change in the electric field with position to the change in the magnetic field with time. Now that is arguably new and useful information. We now see that electric and magnetic fields are coupled not only for line integrals and fluxes, but also at each point in space.
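The differential form can be spot-checked numerically. The example below is hypothetical, not from the text: for a uniform field growing as $B_z = B_0 t$, one consistent electric field is ${\bf E} = (B_0/2)(y, -x, 0)$, whose curl's $z$-component should equal $-\partial B_z/\partial t = -B_0$. A central finite difference confirms this.

```python
B0 = 2.0        # B_z = B0 * t, arbitrary units (hypothetical field)
h = 1e-6        # finite-difference step

def Ex(x, y):   # x-component of E = (B0/2) * (y, -x, 0)
    return (B0 / 2) * y

def Ey(x, y):   # y-component of the same field
    return (B0 / 2) * (-x)

x, y = 0.3, -0.7   # arbitrary test point
# (curl E)_z = dEy/dx - dEx/dy via central differences
curl_z = ((Ey(x + h, y) - Ey(x - h, y)) - (Ex(x, y + h) - Ex(x, y - h))) / (2 * h)
# Per the MFE, curl_z should match -dB_z/dt = -B0 at every point.
```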
https://www.physicsforums.com/threads/impulse-change-in-momentum-vs-ft.826890/
# Impulse. Change in Momentum vs Ft 1. Aug 9, 2015 ### Ocata My book states that impulse = Ft = mv_f - mv_i But it doesn't make sense to me because what if a force of 5 N is applied to an object for 10 seconds? Then I = Ft = 5N(10s) = 50 Ns. So if I apply 5N to an object with a friction force of 5N for 10s so that the object is traveling at a constant velocity, say 3m/s, for that period of time, then nothing has changed in the calculation because Ft = 5N(10s). However, if I make the same calculation using change of momentum, I get Impulse = mvf - mvi = m(3m/s) - m(3m/s) = 0. So why the different results if Ft = mv_f - mv_i ? 2. Aug 9, 2015 ### Orodruin Staff Emeritus The impulse formula using the initial and final velocities is for the total impulse. If you have a friction force and travel at constant velocity, then it is not the only force acting on the object. The impulse from friction is Ft but there will be an equal but opposite impulse from the other force, making the total impulse zero. If you only have the friction force, you will not travel at constant velocity. 3. Aug 9, 2015 ### Ocata If I understand correctly, when the physics book (non calculus, applied physics) that I'm currently reading says Ft = mv_f - mv_i, what they specifically mean is F_net(Δt) = m(Δv)? So that when Force applied = Force of friction, Fnet = 0, so F(t) = [(F_a - F_f) Δt = m(v_f - v_i)] = [(5N - 5N)10s = m(3m/s - 3m/s)] = [0 = 0] That is, Ft = mvf - mvi = 0 when velocity remains constant due to Fnet = 0. 4. Aug 9, 2015 ### Orodruin Staff Emeritus Correct. 5. Aug 9, 2015 ### Ocata Thank you Orodruin
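The thread's resolution can be checked with its own numbers. This is an illustrative sketch; the 2 kg mass is made up, everything else follows the posts.

```python
F_applied, F_friction, t = 5.0, 5.0, 10.0   # newtons, newtons, seconds
m, v = 2.0, 3.0                             # kg (hypothetical), constant m/s

impulse_applied = F_applied * t             # 50 N·s from one force alone
impulse_net = (F_applied - F_friction) * t  # net impulse is zero
delta_p = m * v - m * v                     # momentum change at constant v
# Ft = m*v_f - m*v_i holds for the NET force: both sides are zero here,
# even though the applied force alone delivers 50 N·s.
```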
http://mathhelpforum.com/differential-geometry/91310-parseval-relation.html
# Math Help - Parseval Relation 1. ## Parseval Relation Derive from $\sum_{k} \vert \langle x, e_k \rangle \vert ^2 = \|x\|^2$ the following formula, which is called the Parseval relation: $\langle x, y \rangle = \sum_k \langle x, e_k \rangle \overline{\langle y, e_k \rangle}.$ 2. Use the formula $\langle x,y\rangle=\frac{1}{4}\left(||x+y||^2-||x-y||^2\right)-\frac{{\rm i}}{4}\left(||x+{\rm i}y||^2-||x-{\rm i}y||^2\right)$. 3. thank you for the hint! I'll see if I can use it to find the solution to get from A to B. If I get stuck I'll let you know.
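The hinted polarization identity is easy to sanity-check numerically. The sketch below uses the convention $\langle x,y\rangle=\sum_k \overline{x_k}\,y_k$ (conjugate-linear in the first slot), which is the convention under which the hint's signs work out; the test vectors are made up.

```python
def inner(x, y):
    # <x, y> = sum conj(x_k) * y_k  (conjugate-linear in the first argument)
    return sum(a.conjugate() * b for a, b in zip(x, y))

def norm_sq(x):
    # ||x||^2 = <x, x>, which is real
    return inner(x, x).real

def comb(x, y, c):
    # componentwise x + c*y
    return [a + c * b for a, b in zip(x, y)]

x = [1 + 2j, -1j, 0.5]      # arbitrary vectors in C^3
y = [2 - 1j, 3, 1 + 1j]

lhs = inner(x, y)
rhs = (norm_sq(comb(x, y, 1)) - norm_sq(comb(x, y, -1))) / 4 \
    - 1j * (norm_sq(comb(x, y, 1j)) - norm_sq(comb(x, y, -1j))) / 4
# lhs and rhs agree, confirming the polarization identity in the hint
```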
https://physicscup.ee/physics-cup-taltech-2023-problem-1/
# Physics Cup – TalTech 2023 – Problem 1 By Jaan Kalda (TalTech). A solid cylinder of radius $R$ and height $H$ has density $\rho_c$ and is immersed in water of density $\rho_w$. The cylinder is initially kept at rest so that its axis is vertical, and the distance between its bottom face and the bottom of the water container is $h$. The water container has a flat rigid bottom, and the depth of water in it is larger than $H+h$. At a certain moment, the cylinder is released and starts falling. How long will it take for the cylinder to hit the bottom? Neglect viscosity. Assume that $\rho_c>\rho_w$, $H\ll R$, and $10\rho_c h\ll \rho_w R$. Chinese translation of problem 1 (下沉的圆柱, "the sinking cylinder") Please submit the solution of this problem via e-mail to [email protected]. First intermediate results will be published on 20th November, 13:00 GMT. First hints will appear here on 27th November 2022, 13:00 GMT. After the publication of the first hint, the base score is reduced to 0.9 pts. For full regulations, see the “Participate” tab.
https://www.tutorialspoint.com/statistics/kolmogorov_smirnov_test.htm
# Statistics - Kolmogorov Smirnov Test

This test is used in situations where a comparison has to be made between an observed sample distribution and a theoretical distribution.

## K-S One Sample Test

This test is used as a test of goodness of fit and is ideal when the size of the sample is small. It compares the cumulative distribution function for a variable with a specified distribution. The null hypothesis assumes no difference between the observed and theoretical distribution, and the value of the test statistic 'D' is calculated as:

## Formula

$D = Maximum |F_o(X)-F_r(X)|$

Where −

• ${F_o(X)}$ = Observed cumulative frequency distribution of a random sample of n observations.

• and ${F_o(X) = \frac{k}{n}}$ = (No. of observations ≤ X)/(Total no. of observations).

• ${F_r(X)}$ = The theoretical frequency distribution.

The critical value of ${D}$ is found from the K-S table values for the one sample test.

Acceptance Criteria: If the calculated value is less than the critical value, accept the null hypothesis. Rejection Criteria: If the calculated value is greater than the table value, reject the null hypothesis.

### Example

Problem Statement: In a study done from various streams of a college, 60 students, with an equal number of students drawn from each stream, were interviewed and their intention to join the Drama Club of the college was noted.

| Stream | B.Sc. | B.A. | B.Com | M.A. | M.Com |
| --- | --- | --- | --- | --- | --- |
| No. in each class | 5 | 9 | 11 | 16 | 19 |

It was expected that 12 students from each class would join the Drama Club. Use the K-S test to find if there is any difference among student classes with regard to their intention of joining the Drama Club.

Solution: ${H_o}$: There is no difference among students of different streams with respect to their intention of joining the drama club. We develop the cumulative frequencies for observed and theoretical distributions.

| Stream | Observed (O) | Theoretical (T) | ${F_O(X)}$ | ${F_T(X)}$ | ${\vert F_O(X)-F_T(X) \vert}$ |
| --- | --- | --- | --- | --- | --- |
| B.Sc. | 5 | 12 | 5/60 | 12/60 | 7/60 |
| B.A. | 9 | 12 | 14/60 | 24/60 | 10/60 |
| B.Com | 11 | 12 | 25/60 | 36/60 | 11/60 |
| M.A. | 16 | 12 | 41/60 | 48/60 | 7/60 |
| M.Com | 19 | 12 | 60/60 | 60/60 | 0/60 |
| Total | n = 60 | | | | |

The test statistic ${|D|}$ is calculated as:

$D = Maximum {|F_O (X)-F_T (X)|} \\[7pt] \, = \frac{11}{60} \\[7pt] \, = 0.183$

The table value of D at the 5% significance level is given by

${D_{0.05} = \frac{1.36}{\sqrt{n}}} \\[7pt] \, = \frac{1.36}{\sqrt{60}} \\[7pt] \, = 0.175$

Since the calculated value is greater than the critical value, we reject the null hypothesis and conclude that there is a difference among students of different streams in their intention of joining the Club.

## K-S Two Sample Test

When, instead of one, there are two independent samples, the K-S two sample test can be used to test the agreement between the two cumulative distributions. The null hypothesis states that there is no difference between the two distributions. The D-statistic is calculated in the same manner as in the K-S One Sample Test.

## Formula

${D = Maximum |{F_n}_1(X)-{F_n}_2(X)|}$

Where −

• ${n_1}$ = Observations from the first sample.

• ${n_2}$ = Observations from the second sample.

When the cumulative distributions show a large maximum deviation ${|D|}$, this indicates a difference between the two sample distributions. For samples where ${n_1 = n_2} \le 40$, the K-S table for the two sample case is used. When ${n_1}$ and/or ${n_2}$ > 40, the K-S table for large samples of the two sample test should be used. The null hypothesis is accepted if the calculated value is less than the table value, and vice-versa.

Thus, use of any of these nonparametric tests helps a researcher to test the significance of his results when the characteristics of the target population are unknown or no assumptions have been made about them.
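The one-sample calculation above can be reproduced in a few lines. This is an illustrative sketch (not from the tutorial) using the same drama-club counts:

```python
from math import sqrt

observed = {"B.Sc.": 5, "B.A.": 9, "B.Com": 11, "M.A.": 16, "M.Com": 19}
expected_per_class = 12                  # theoretical count under H0
n = sum(observed.values())               # 60 students in total

cum_obs = cum_exp = 0
d = 0.0
for count in observed.values():
    cum_obs += count
    cum_exp += expected_per_class
    d = max(d, abs(cum_obs - cum_exp) / n)   # max |F_o(X) - F_r(X)|

d_crit = 1.36 / sqrt(n)                  # 5% critical value for large n
reject_h0 = d > d_crit                   # 0.183 > 0.175, so H0 is rejected
```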
http://gfm.cii.fc.ul.pt/events/seminars/20061031-arnaudon
# GFM

# Concentration of the Brownian bridge on Cartan-Hadamard manifolds with pinched negative sectional curvature

GFM seminar, CIUL, B1-01, 2006-10-31 14:00 .. 15:00

by M. Arnaudon (Univ. Poitiers, France)

We study the rate of concentration of a Brownian bridge in time one around the corresponding geodesical segment on a Cartan-Hadamard manifold with pinched negative sectional curvature, when the distance between the two extremities tends to infinity. This improves on previous results by A. Eberle, and T. Simon. Along the way, we derive a new asymptotic estimate for the logarithmic derivative of the heat kernel on such manifolds, in bounded time and with one space parameter tending to infinity, which can be viewed as a counterpart to Bismut's asymptotic formula in small time.
http://math.stackexchange.com/questions/127070/how-to-describe-a-sequence-of-integers-with-an-unique-number
# How to describe a sequence of integers with a unique number? I have a sequence of motions from images and I need to store all these values. For example: • a) 1, 2, 3, 4, 3, 2, 1 • b) 1, 2, 3, 2, 3, 4, 5, 4, 3, 2, 1 In order to save space, it would be necessary to have some function that could generate unique numbers to represent each of these sequences. But also, there is another problem: I need representations which can be compared. So, suppose I have • a) 1, 2, 3, 4, 4, 3, 2, 1 • b) 1, 2, 3, 4, 3, 2, 1 The motion in a) is very similar to b), so I would need some way to represent similarity. There are two patterns for increment/decrement in these sequences: either by 1 or 0, or by any other integer value. I am starting to read about pairing functions and some other similar topics here but I am not sure if I can apply them to this (like this topic). - I notice that in your examples, the differences are $1$, $0$, or $-1$. Is that always the case? Is the length of these sequences bounded? –  Lubin Apr 1 '12 at 23:46 Consider $1, 2, 3, 4, 4, 3, 2, 1.$ (resp. $1, 2, 3, 4, 3, 2, 1.$) You only need to know length $8$ (resp. $7.$) The sequence is $1, 2, \ldots, i-1, i, j, j-1, j-2,\ldots 2, 1.$ If length $= n$ is odd, then $i = 1 + \lfloor n/2 \rfloor$ and $j = i+1.$ If length $= n$ is even, then $i = j = n/2.$ Only store $n.$ –  user2468 Apr 1 '12 at 23:54 @Lubin Edited the question, thanks. –  IHawk Apr 1 '12 at 23:56 For a unique number to represent the sequence, you could consider a hash function. However, hash functions will give you ugly output that won't be readable and may not be easily reversible. It doesn't seem like that's what you're looking for. If you can't say for sure that your function will have integers, a hash table is a good way to do this. But, the hash values still won't be directly comparable in the way that you seem to want. For the sequence 1, 2, 3, 4, 3, 2, 1, you could simply consider the unique number 1234321.
This might depend on the range of your numbers - it would help/may only be reasonable if you can put some bound on the numbers in your sequence. (If you know all the numbers are integers between 0 and 9, this is fine - we can extend this to numbers between 0 and N (or a and a+N) by representing the sequence in base N+1.) This may not save you any space, though - certainly not compared to a hash table.

If you know for sure that your terms will only go up and down by 0 or 1, you could represent them by the string $+++0−−0−$, for example, or equivalently as the number $22201101$ (where 2 is up, 1 is down). As an aside, this could be stored very efficiently by working in base 3. This seems like a good solution for you.

Similarity can be measured in several ways (e.g. Hamming distance). In the case of your example, where you have a "deletion" (a 4 has disappeared), it seems sensible to consider the Levenshtein distance. Levenshtein distance applies to strings of text characters, really, but you can easily represent your sequence this way as described in the paragraph above.

It's difficult to understand exactly what you want without more context, but I hope this will help you to research what you need! Further comments welcomed.

-

If you have some upper bound $n$ on the length of the sequences, then you could store them using the Cantor pairing function as follows. Let $\langle x,y\rangle$ denote the output of the pairing function given $x,y$. Then let $$f(x_1,x_2,\ldots,x_k)=\langle \underset{n+1\text{ times}}{\cdots} \langle k,x_1\rangle,x_2\rangle\cdots,x_k\rangle,0\rangle,\cdots,0\rangle$$ which allows you to code for both the length of the sequence and its terms by iterating the standard pairing function until it has coded for all the information. However, if you don't have such an upper bound $n$, the only approach I know of is to let $$f(x_1,x_2,\ldots,x_k)=p_1^{x_1}p_2^{x_2}\cdots p_k^{x_k}$$ where $p_i$ denotes the $i$th prime number.
Obviously this is much less efficient.

I know, which is why I said "much less efficient". Side note: you should see some of the algorithms used in computability theory. It is not uncommon to see something along the lines of "Search over integers $n$. Factor each $n$, then factor each exponent, then factor each exponent of the result. Using [pre-determined map] interpret this as a program and run." –  Alex Becker Apr 2 '12 at 0:07
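The encodings discussed in the answers (base-3 step digits, prime exponents) and the Levenshtein comparison are easy to try out. Here is a hedged Python sketch; the function names are mine, not from the thread, and the trial-division prime generator is only meant for short sequences:

```python
def encode_steps(seq):
    """Base-3 coding of a sequence whose consecutive differences are
    -1, 0 or +1: store the length, then one digit per step
    (0 = down, 1 = flat, 2 = up)."""
    code = len(seq)
    for prev, cur in zip(seq, seq[1:]):
        step = cur - prev
        assert step in (-1, 0, 1), "steps must be -1, 0 or +1"
        code = code * 3 + (step + 1)
    return code

def primes():
    """Yield 2, 3, 5, ... by trial division (fine for short sequences)."""
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_encode(seq):
    """f(x1, ..., xk) = p1^x1 * p2^x2 * ... * pk^xk.  No length bound is
    needed, but trailing zero terms are lost, which is one reason the
    Cantor-pairing approach also codes for the length."""
    code = 1
    for p, x in zip(primes(), seq):
        code *= p ** x
    return code

def godel_decode(code):
    """Recover the sequence by dividing out each prime in turn."""
    seq = []
    for p in primes():
        if code == 1:
            break
        exp = 0
        while code % p == 0:
            code //= p
            exp += 1
        seq.append(exp)
    return seq

def levenshtein(a, b):
    """Edit distance, for the similarity comparison (one-row DP)."""
    prev_row = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        row = [i]
        for j, y in enumerate(b, 1):
            row.append(min(prev_row[j] + 1,              # delete x
                           row[j - 1] + 1,               # insert y
                           prev_row[j - 1] + (x != y)))  # substitute
        prev_row = row
    return prev_row[-1]
```

For the two similar motions in the question, `levenshtein([1, 2, 3, 4, 4, 3, 2, 1], [1, 2, 3, 4, 3, 2, 1])` is 1 (a single deletion), which matches the intended notion of closeness.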
http://entropictalks.blogspot.com/2018/07/data-science-101-radar-charts.html
## Monday, July 9, 2018 ### Data Science 101: Radar Charts I actually wanted to write down what I don't like about radar charts, but then I found out that this has already been done. So I won't. But, adding to "Misreading 1: Area" in Ghosts on the Radar, I was thinking of the following puzzle: Suppose you have $n$ real-valued, non-negative measurements $x_1$ through $x_n$. You are a super-smart data scientist and you want to use a radar chart to trick your customer into believing that the measurement result is "good" or "bad" -- i.e., the area under the polygon obtained by connecting the points corresponding to these measurements should be large/small. Since the angle between the axes is fixed to $360/(n-1)$, you can influence the size of the polygonal area only by selecting the order in which you plot the measurements on the axes. What is the optimal assignment of measurement values to axes such that the area under the polygon is maximized/minimized? Edit on July 12th, 2018: Here is the solution to the problem of maximizing the area.
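The puzzle can be explored by brute force for small $n$. This sketch assumes a standard closed radar chart with $n$ equally spaced axes ($2\pi/n$ apart) and a polygon that closes back to the first axis; the function names are my own:

```python
import math
from itertools import permutations

def radar_area(values):
    """Area of the radar polygon when values[i] sits on axis i and the axes
    are 2*pi/n apart.  The polygon is the union of n triangles with apex at
    the origin, so area = 1/2 * sin(2*pi/n) * sum of neighbouring products
    (cyclically, since the last axis connects back to the first)."""
    n = len(values)
    weight = 0.5 * math.sin(2 * math.pi / n)
    return weight * sum(values[i] * values[(i + 1) % n] for i in range(n))

def best_ordering(values):
    """Exhaustive search over assignments of values to axes (small n only)."""
    return max(permutations(values), key=radar_area)
```

Running `best_ordering` on a few examples suggests the qualitative answer: to make the chart look "good", place the large measurements on neighbouring axes; to make it look "bad", interleave large and small values.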
https://hal.inria.fr/hal-01011420
# A Proof Theoretic Study of Soft Concurrent Constraint Programming Abstract : Concurrent Constraint Programming (CCP) is a simple and powerful model for concurrency where agents interact by telling and asking constraints. Since their inception, CCP-languages have been designed for having a strong connection to logic. In fact, the underlying constraint system can be built from a suitable fragment of intuitionistic (linear) logic --ILL-- and processes can be interpreted as formulas in ILL. Constraints as ILL formulas fail to represent accurately situations where "preferences" (called soft constraints) such as probabilities, uncertainty or fuzziness are present. In order to circumvent this problem, c-semirings have been proposed as algebraic structures for defining constraint systems where agents are allowed to tell and ask soft constraints. Nevertheless, in this case, the tight connection to logic and proof theory is lost. In this work, we give a proof theoretical interpretation to soft constraints: they can be defined as formulas in a suitable fragment of ILL with subexponentials (SELL) where subexponentials, ordered in a c-semiring structure, are interpreted as preferences. We hence achieve two goals: (1) obtain a CCP language where agents can tell and ask soft constraints and (2) prove that the language in (1) has a strong connection with logic. Hence we keep a declarative reading of processes as formulas while providing a logical framework for soft-CCP based systems. An interesting side effect of (1) is that one is also able to handle probabilities (and other modalities) in SELL, by restricting the use of the promotion rule for non-idempotent c-semirings.This finer way of controlling subexponentials allows for considering more interesting spaces and restrictions, and it opens the possibility of specifying more challenging computational systems. 
Document type: Preprint / working paper, 2014. https://hal.inria.fr/hal-01011420 Deposited by: Catuscia Palamidessi. Submitted: Monday, 23 June 2014. Last modified: Wednesday, 27 July 2016. ### Identifiers • HAL Id: hal-01011420, version 1 • arXiv: 1405.2329 ### Citation Elaine Pimentel, Carlos Olarte, Vivek Nigam. A Proof Theoretic Study of Soft Concurrent Constraint Programming. 2014. ⟨hal-01011420⟩
https://brilliant.org/problems/an-imaginary-triangle/
# An Imaginary Triangle Algebra Level 3 If the roots of $ax^3+ax^2+ax$, $a\neq0$, are plotted on the Argand plane, the area of the triangle enclosed by them is given by $\frac{\sqrt{b}}{c^2}$, where $b,c$ are positive, square-free integers. Find $b+c$.
http://openstudy.com/updates/5397a324e4b01c619df2b0a4
## cupcakelover2222: Find the x value for point F such that DF and EF form a 1:3 ratio. Choices: 0.75, −1, 2, −1.75

1. cupcakelover2222: [1 attachment]
2. myininaya: so you know the distance formula will be involved? and you want one distance to be 3 times the other, right?
3. cupcakelover2222: I'm guessing so! :) @myininaya
4. myininaya: great, can you set up the equation then?
6. cupcakelover2222: $\sqrt{(2-(-3))^{2}+(4-(-3))^{2}}$
7. cupcakelover2222: Is this correct? @myininaya
8. myininaya: ok, we have this line [drawing]. We want the distance from E to F to equal 3 times the distance from D to F. You should have an equation here; I just called F (x, y) since we don't know it yet, and we are looking for that value x.
9. cupcakelover2222: yes I understand so far!
12. myininaya: ok, try making an equation from that sentence
13. cupcakelover2222: I am not sure how to do this, which is why I asked for help. @myininaya
14. myininaya: I'm asking you to translate "the distance from E to F" is "3 times the distance from D to F"
15. myininaya: replace the word "is" with an equal sign; replace "the distance from E to F" with the distance formula with the points E and F entered; and do the same thing for D to F (which goes on the other side of the equation, because it is on the other side of the "is")
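The thread stops before the final equation; for reference, the internal-division (section) formula settles it directly. This sketch assumes, from the distance formula the asker typed, that the endpoints are $(-3, -3)$ and $(2, 4)$; which endpoint is D is not visible without the attachment:

```python
def section_point(p, q, m, n):
    """Point F on segment PQ dividing it internally so that PF:FQ = m:n."""
    return tuple((n * pi + m * qi) / (m + n) for pi, qi in zip(p, q))

D, E = (-3, -3), (2, 4)        # assumed from the asker's distance formula
F = section_point(D, E, 1, 3)  # DF : FE = 1 : 3  ->  F = (-1.75, -1.25)
# Swapping which endpoint is D gives x = 0.75, the other matching choice.
```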
https://www.physicsforums.com/threads/calulate-acceleration-from-friction.264126/
# Homework Help: Calculate Acceleration from Friction 1. Oct 13, 2008 ### halo9909 1. The problem statement, all variables and given/known data What would be the size and direction of the acceleration of the car? Why would it be constant? The coefficient of sliding friction between rubber tires and wet pavement is 0.50. The brakes are applied to a 710-kg car traveling 20 m/s, and the car skids to a stop. So m = 710 kg, vi = 20 m/s, vf = 0, coefficient of sliding friction = 0.50, a = ? 2. Relevant equations To get the friction force: Ff = 0.50 * 710 kg * (-9.8 m/s^2) = -3479 N 3. The attempt at a solution I have calculated the force of friction that the road exerts on the car, which is -3479 N. From here, how would you go about calculating the acceleration, since you do not have displacement or time? 2. Oct 13, 2008 ### nicksauce Remember, F=ma, so a = F/m. You have F and you have m. The velocity information seems superfluous here. 3. Oct 13, 2008 ### halo9909 Negative signs can make a little error; I had 4.9 instead of -4.9 the whole time. I never really thought of a = F/m; I just plug everything into F=ma in its base form.
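The thread's arithmetic can be checked in a few lines (taking the direction of motion as positive, so the friction force and acceleration come out negative):

```python
mu, m, g = 0.50, 710.0, 9.8     # friction coefficient, mass (kg), g (m/s^2)
v_i = 20.0                       # initial speed (m/s)

F_f = -mu * m * g                # kinetic friction force: -3479 N
a = F_f / m                      # Newton's second law: a = F/m = -mu*g = -4.9 m/s^2

# a is constant because mu, m and g are all constant, which is why the
# velocity data is superfluous for this part; it does give the stopping
# time and distance, though:
t_stop = -v_i / a                # about 4.1 s
d_stop = v_i**2 / (2 * mu * g)   # about 40.8 m
```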
http://mathhelpforum.com/algebra/200647-solving-exponential-variable.html
# Thread: solving an exponential as a variable 1. ## solving an exponential as a variable Hi, I have a simple question: 10^x = 10000. I know the answer is x = 4, but what I want to know is how to find the value of x algebraically. I looked on some websites, and they suggested finding the log, but I haven't done logs yet. Tren301. 2. ## Re: solving an exponential as a variable Originally Posted by Tren301 [quoted question] \displaystyle \begin{align*} 10^x &= 10\,000 \\ \log_{10}{\left(10^x\right)} &= \log_{10}{\left(10\,000\right)} \\ x &= \log_{10}{\left(10^{4}\right)} \\ x &= 4 \end{align*} Technically the logarithms are redundant though, since it can be seen by inspection that \displaystyle \begin{align*} 10^4 = 10\,000 \end{align*}.
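Once logarithms are available, the algebra in the reply is a single library call, and the change-of-base formula handles any base:

```python
import math

x = math.log10(10_000)            # solves 10**x == 10_000, giving x = 4.0

# General pattern: b**x = N  =>  x = log(N) / log(b), in any common base
x_general = math.log(10_000) / math.log(10)
```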
http://tex.stackexchange.com/questions/70531/vertically-aligned-legend-to-pgfplots-subplots
# Vertically aligned legend to PGFPlots subplots What I want to do is to have 3 PGFPlots subplots and the legend arranged in four equally sized boxes. However, when I use the following code, in the output the legend is not vertically aligned with the bottom left plot. What should I do to make it vertically aligned at the center of the bottom row, which in my opinion would be more aesthetically pleasing? \documentclass[12pt]{article} \usepackage{pgfplots} \begin{document} \begin{tabular}{cc} \begin{tikzpicture} \begin{axis}[width=0.45\textwidth, legend columns=1, legend entries={blahblahblah\\}, legend to name=legend:aligning-subplots-legend] \end{axis} \end{tikzpicture} & \begin{tikzpicture} \begin{axis}[width=0.45\textwidth] \end{axis} \end{tikzpicture} \\ \begin{tikzpicture} \begin{axis}[width=0.45\textwidth] \end{axis} \end{tikzpicture} & \ref{legend:aligning-subplots-legend} \end{tabular} - Do all of the plots have the same x and y axis ranges? –  Jake Sep 7 '12 at 5:03 In this case they do, which hopefully makes that easier... –  I Like to Code Sep 7 '12 at 16:07 I did manage to find another post that was useful to me, see Vertically center cells of a table? My solution and what the output looks like is below. \documentclass[a4paper]{article} \usepackage{array}% http://ctan.org/pkg/array \usepackage{pgfplots} \newcolumntype{M}{>{\centering\arraybackslash}m{\dimexpr.5\linewidth-2\tabcolsep}} \begin{document} \begin{figure}[h!] \begin{center} \begin{tabular}{MM} \begin{tikzpicture} \begin{axis}[width=0.45\textwidth, legend columns=1, legend entries={blahblahblah\\}, legend to name=legend:aligning-subplots-legend] \end{axis} \end{tikzpicture} & \begin{tikzpicture} \begin{axis}[width=0.45\textwidth] \end{axis} \end{tikzpicture} \\ \begin{tikzpicture} \begin{axis}[width=0.45\textwidth]
https://jpt.spe.org/marcellus-wells-ultimate-production-accurately-predicted-initial-production2320
# Marcellus Wells: Ultimate Production Accurately Predicted From Initial Production ## In this work, the authors perform automatic decline analysis on Marcellus Shale gas wells and predict ultimate recovery for each well. A minimal model is used that captures the basic physics and geometry of the extraction process. A key discovery is that wells can have their estimated ultimate recovery (EUR) predicted early in life with surprising accuracy. ## Introduction There are many challenges to production in the Marcellus, ranging from the regulatory and price environment to the highly variable responses of rock to hydrofracture treatment. The complete paper's primary goal is to generate scenarios for future gas production from the Marcellus by use of varying gas prices and other assumptions. The first step in this process is determining technically recoverable reserves, which requires deciding how much each well will produce over its lifetime. To do this, the authors use recovery-factor curves that describe total recovery in terms of two parameters: total gas in the stimulated reservoir volume (SRV) and the characteristic time to boundary-dominated flow (BDF). Most wells produce following these recovery curves, but not all. Upon examining production data, the authors found that several hundred wells have very slow decline for the first several years, slower than the production decline expected from linear flow. In discussions with multiple operators, the authors found that this is attributable to production choking. There have been several analytical and numerical models developed to predict production from horizontal, hydrofractured shale gas wells. This paper uses the scaling model developed by Patzek et al. (2013) to build recovery-factor curves to describe production for Marcellus wells. 
Fitting production of 5,275 wells, 404 wells are found that experience boundary-dominated flow. Among these wells, 175 are horizontally drilled. The original gas inside the SRV for these wells is compared with initial production, and a very high correlation is found. This correlation is used to estimate gas production for the remaining wells. Among the entire set of wells, an average EUR of 3.6 Bscf is estimated, with the middle 60% having EURs between 6.6 and 1.8 Bscf. The model for flow to the wellbore is detailed in the complete paper. ## Methods Production reports from wells in Pennsylvania are sparse, which makes decline analysis from publicly accessible data a difficult endeavor. Most of the analysis used in this paper relies upon having monthly production data, so semiannual and yearly production had to be converted into monthly production. Many Marcellus wells in northeastern Pennsylvania produce at rates far higher than those of any other shale gas field, with EURs predicted to be twice that of the best wells in the Haynesville Shale. In addition, there are infrastructure limits in some parts of the field because pipeline capacity has not kept up with production potential in Pennsylvania. In order to protect gathering equipment and not overwhelm available pipeline space, the production of these wells for the first 1 to 4 years is limited by use of a choke at the surface. This limits how quickly the flowing tubing pressure drops to its final value. There are two sweet spots in the Marcellus. In northeastern Pennsylvania, production is driven by a thick formation of dry gas. In southwestern Pennsylvania and West Virginia, production is driven by high porosity filled with high-Btu gas. There is also liquids production in southwestern Pennsylvania. Desorbing gas is another contributor to production. 
There is some debate about whether gas adsorption follows the Langmuir isotherm or the Brunauer-Emmett-Teller (BET) isotherm, but Langmuir isotherms are far more readily available for the Marcellus and have been used as the industry standard. The BET isotherm allows desorption in large quantities at far higher pressures than the Langmuir isotherm, owing to the ability of gas molecules to adsorb on multilayers with the BET model; the Langmuir model allows only a single layer to adsorb. Regardless of which isotherm is better suited for describing gas desorption, desorption contributes in several ways to gas production. First, it delays BDF. Second, production declines more slowly once it has entered BDF because gas desorption acts as an additional production drive, allowing pressure to fall at a slower rate as gas is produced than it otherwise would. Both of these effects increase the EUR. The authors create recovery-factor curves with 500-psi fracture-face pressures, initial reservoir pressures varying from 2,000 to 13,000 psi, Langmuir isotherms, and the two gas compositions specified in the complete paper. The curves were calculated by solving the pseudopressure diffusion equation and using Darcy's law at the fracture face. Next, the recovery-factor curves are fitted to cumulative production from wells, ignoring the first 4 months for unchoked wells. For choked wells, production is fitted to the period after they come off the choke, or at the end of their third year if they have not yet come off the choke, by use of the following method. To predict choked wells accurately, wells were fitted only after they returned to normal decline. The first step in this process is identifying choked wells. A well is choked if the second year's total gas production is more than 75% of the first year's production or its third or fourth year's production is more than 80% of the production of the preceding year. 
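The screening rule in the last sentence is simple enough to state as code; a sketch (the function name and the yearly-totals input format are mine, not from the paper):

```python
def is_choked(yearly_production):
    """Flag a well as choked, per the paper's screening rule: year 2 exceeds
    75% of year 1, or year 3 (or 4) exceeds 80% of the preceding year.
    `yearly_production` is a list of annual totals, index 0 = first year."""
    y = yearly_production
    if len(y) >= 2 and y[1] > 0.75 * y[0]:
        return True
    for k in (2, 3):  # third and fourth production years (0-indexed)
        if len(y) > k and y[k] > 0.80 * y[k - 1]:
            return True
    return False
```

A well on a typical unchoked decline, e.g. annual totals falling faster than these thresholds, is left alone; anything flatter gets fitted only after it returns to normal decline, as described above.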
There are 304 wells that have been choked for 3 years; cumulative-­production fitting starts at Month 34. There are 869 wells that have been choked for at least 2 years, and production fitting starts at Month 22. For wells that have not yet come off choke, constant production at the last reported rate is assumed until they reach 36 months of age; then, they are allowed to decline normally following the appropriate recovery-factor curve. A recovery-factor curve is selected for each well to match its reservoir pressure and presumed fluid composition. Then a least-squares minimizing package is used to minimize the error function, making it as close to zero as possible. The authors found 404 wells in BDF. Among those wells, there is a very high correlation between initial productivity and total gas in place. Fitting leads to the recognition that gas in place scales linearly with initial production. Each well’s recovery has two fitting parameters, and with gas in place and initial productivity known, it is a simple matter to calculate the characteristic time to BDF. With the latter parameter and the SRV determined from each well’s initial productivity and the regression equation provided in the complete paper, production can be forecast. If one uses the assumption that gas in place actually scales linearly with initial production, then this means a constant characteristic time of 4 years (48.4 months). Keep in mind that this does not mean that all wells in the Marcellus will enter BDF at 48.4 months. There is a wide probability distribution around this value, but the average value remains 48 months regardless of initial productivity. Stochastically, it is predicted that the average time to BDF will be 4 years for the field, but that of an individual well cannot be predicted until the interference is observed. ## Conclusions Ultimate recovery has been estimated for each Marcellus well that has more than 18 months of production data. 
The EUR and time to interference for each predicted well are given in Fig. 1. The mean well is expected to produce 3.62 Bcf over its 25-year lifetime and have a time to interference of 3.9 years. The standard deviations around these averages are 3.62 Bcf and 0.87 years, respectively. The authors forecast production for 5,275 horizontal Marcellus wells in Pennsylvania and West Virginia. There are two groups of wells that were fitted: those that have and those that have not entered BDF. Among the wells that have entered BDF, the most influential predictor of gas in place, and therefore EUR, is initial production. This knowledge can be leveraged into making production predictions by regression onto a very simple equation. From this, production was predicted for wells that have not entered BDF. The median EUR for all wells was 3.9 Bcf. This has several implications. First, wells in a mature field can have their production predicted with reasonable uncertainty from an early age. Second, and more importantly, the time to BDF does not vary systematically with initial production. Time to BDF depends upon hydraulic diffusivity and distance between adjacent producing fractures, so this likely means that the distance between fractures does not systematically affect time to BDF, nor vice versa. If this is the case, then the largest driver of EUR is the extent of the producing fracture network. Previously, it could be reasonably assumed that generating a more-extensive fracture network would increase the initial production but decrease time to interference, leading to roughly the same EUR. However, that trend was not observed in this study. This result could mean that there is a minimum interfracture distance that is achieved by many wells in the field. Further fracture creation, then, leads to a larger, not more densely fractured, SRV. This, in turn, leads to larger initial production with the same time to BDF and, therefore, to a larger EUR. 
## Reference Patzek, T.W., Male, F., and Marder, M. 2013. Gas Production in the Barnett Shale Obeys a Simple Scaling Theory. Proc., National Academy of Sciences 110 (49): 19,731–19,736. This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 180234, “Marcellus Wells: Ultimate Production Accurately Predicted From Initial Production,” by Frank Male, Michael P. Marder, John Browning, and Svetlana Ikonnikova, The University of Texas at Austin, and Tad Patzek, King Abdullah University of Science and Technology, prepared for the 2016 SPE Low-Permeability Symposium, Denver, 5–6 May. The paper has not been peer reviewed.
https://scholarship.rice.edu/browse?type=subject&value=13+TeV
Now showing items 1-2 of 2 • #### Pseudorapidity distribution of charged hadrons in proton–proton collisions at √s = 13 TeV  (2015) The pseudorapidity distribution of charged hadrons in pp collisions at √s = 13 TeV is measured using a data sample obtained with the CMS detector, operated at zero magnetic field, at the CERN LHC. The yield of primary charged long-lived hadrons produced in inelastic pp collisions is determined in the central region of the CMS pixel detector (|η| < 2) using ... • #### Search for Direct Top Squark Pair Production via the Fully Hadronic Final State from proton-proton Collisions at sqrt(s) = 13 TeV  (2017-04-21) A search for direct pair production of top squarks, the hypothetical supersymmetric partner to the top quark, in proton-proton collisions is presented for two scenarios each requiring jets and a large transverse momentum imbalance. The CMS detector observed the proton-proton collisions which were generated by the LHC with a center- of-mass energy of ...
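For context on the $|\eta| < 2$ acceptance quoted in the first abstract: pseudorapidity is defined from the polar angle $\theta$ (measured from the beam axis) as $\eta = -\ln\tan(\theta/2)$, so the cut corresponds to a wide angular window around $90°$. A small sketch:

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta / 2)) for a polar angle theta in radians."""
    return -math.log(math.tan(theta / 2))

# eta = 0 at 90 degrees to the beam; |eta| < 2 corresponds to polar angles
# between roughly 15.4 and 164.6 degrees.
theta_at_eta2 = 2 * math.atan(math.exp(-2.0))  # polar angle where eta = +2
```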
http://mathhelpforum.com/calculus/33074-differentiation-past-exam-question.html
# Math Help - differentiation past exam question

1. ## differentiation past exam question

Hello everybody!

1. If y = x(x^2 + 4), find dy/dx.
2. Then find the equation of the tangent to the curve where x = 1, giving the answer in the form y = mx + c.
3. The normal to the curve y = x^3 + ax at the point (2, b) has gradient 1/2. Find the values of the constants a and b.

My attempt:

1. dy/dx = 3x^2 + 4
2. I assume we would use y − y₁ = m(x − x₁), where the 1 is a subscript on the y and x, but we only have one coordinate, x = 1. So I am not sure about this.
3. I haven't got a clue what the 'normal to the curve' is.

2. For part 2, substitute $x=1$ into the equation $y=x(x^2+4)$. So your other coordinate is $y=5$.

For part 3, the normal to a curve at a given point P is the straight line perpendicular to the tangent at P and passing through P. If the tangent has gradient $m\ne0$, the gradient of the normal is $-\,\frac{1}{m}$. In your example, the gradient of the normal to the curve $y=x^3+ax$ at $x=2$ is $\frac{1}{2}$; hence $\frac{\mathrm{d}y}{\mathrm{d}x}=-2$ at $x=2$. You should be able to find a from here. Once you’ve found a, substitute $x=2$ into $y=x^3+ax$ to find the y-coordinate, which is b.

3. Originally Posted by JaneBennet
> In your example, the gradient of the normal to the curve $y=x^3+ax$ at $x=2$ is $\frac{1}{2}$; hence $\frac{\mathrm{d}y}{\mathrm{d}x}=-2$ at $x=2$.

I don't understand the above: if the gradient of the normal is −1/m and the normal's gradient is 1/2, then m = −2, right? And what have you differentiated to give dy/dx = −2?

4. Here's what I got anyway. For y = x^3 + ax, dy/dx = 3x^2 + a, and at x = 2 this must equal the tangent gradient −1/(1/2) = −2:

3(2)^2 + a = −2, so a = −14.

Then b = 2^3 + (−14)(2) = 8 − 28 = −20.
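Working the three parts in code makes them easy to check (a sketch; the derivatives are entered by hand, since the curves are simple polynomials):

```python
# Check the three exam parts numerically.

def f(x):            # problem 1 curve: y = x(x^2 + 4) = x^3 + 4x
    return x**3 + 4*x

def df(x):           # dy/dx = 3x^2 + 4
    return 3*x**2 + 4

# Problem 2: the tangent at x = 1 passes through (1, f(1)) = (1, 5) with
# gradient m = df(1) = 7, so y = mx + c with c = f(1) - m*1 = -2, i.e. y = 7x - 2.
m = df(1)
c = f(1) - m * 1

# Problem 3: the normal at (2, b) on y = x^3 + ax has gradient 1/2, so the
# tangent gradient there is -1/(1/2) = -2.  Since dy/dx = 3x^2 + a there,
# we need 3*(2**2) + a = -2:
a = -2 - 3 * 2**2        # a = -14
b = 2**3 + a * 2         # b = -20

print(m, c, a, b)
```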
https://hal-cea.archives-ouvertes.fr/cea-02421898
# Generalized Iterated Fission Probability for Monte Carlo eigenvalue calculations

Abstract: The so-called Iterated Fission Probability (IFP) method has provided a major breakthrough for the calculation of the adjoint flux, and more generally of adjoint-weighted scores, in Monte Carlo eigenvalue calculations. So far, IFP has been exclusively devoted to the analysis of the standard $k$-eigenvalue equation, by resorting to a formal identification between the adjoint fundamental eigenmode $\varphi ^\dagger_k$ and the neutron importance $I_k$. In this work, we extend the IFP method to the $\alpha$-eigenvalue equation, enabling the calculation of the adjoint fundamental eigenmode $\varphi ^\dagger_\alpha$ and the associated adjoint-weighted scores, including kinetics parameters. The generalized IFP method is first verified on a simple two-group infinite-medium transport problem, which admits analytical solutions. Then, $\alpha$-adjoint-weighted kinetics parameters are computed for a few reactor configurations with the Monte Carlo code Tripoli-4®, and compared to the $k$-adjoint-weighted kinetics parameters obtained by the standard IFP. The algorithms that we have developed might be of interest in the interpretation of reactivity measurements, in the context of reactor period calculations by Monte Carlo simulation.

Document type: Journal articles. Cited literature: [50 references].

### Citation

N. Terranova, A. Zoia. Generalized Iterated Fission Probability for Monte Carlo eigenvalue calculations. Annals of Nuclear Energy, Elsevier Masson, 2017, 108, pp. 57-66. ⟨10.1016/j.anucene.2017.04.014⟩. ⟨cea-02421898⟩
https://chem.libretexts.org/Courses/University_of_California_Davis/UCD_Chem_107B%3A_Physical_Chemistry_for_Life_Scientists/Chapters/1%3A_Properties_of_Gases/1.7%3A_The_Maxwell_Distribution_Laws
# 1.7: The Maxwell Distribution Laws

In the context of the Kinetic Molecular Theory of Gases, a gas contains a large number of particles in rapid motion. Each particle has a different speed, and each collision between particles changes the speeds of the particles. An understanding of the properties of the gas requires an understanding of the distribution of particle speeds.

### Many molecules, many velocities

At temperatures above absolute zero, all molecules are in motion. In the case of a gas, this motion consists of straight-line jumps whose lengths are quite great compared to the dimensions of the molecule. Although we can never predict the velocity of a particular individual molecule, the fact that we are usually dealing with a huge number of them allows us to know what fraction of the molecules have kinetic energies (and hence speeds) that lie within any given range.

The trajectory of an individual gas molecule consists of a series of straight-line paths interrupted by collisions. What happens when two molecules collide depends on their relative kinetic energies; in general, a faster or heavier molecule will impart some of its kinetic energy to a slower or lighter one. Two molecules having identical masses and moving in opposite directions at the same speed will momentarily remain motionless after their collision.

If we could measure the instantaneous velocities of all the molecules in a sample of a gas at some fixed temperature, we would obtain a wide range of values. A few would be zero, and a few would be very high, but the majority would fall into a more or less well-defined range. We might be tempted to define an average velocity for a collection of molecules, but here we would need to be careful: molecules moving in opposite directions have velocities of opposite signs.
Because the molecules in a gas are in random thermal motion, there will be just about as many molecules moving in one direction as in the opposite direction, so the velocity vectors of opposite signs would all cancel and the average velocity would come out to zero. Since this answer is not very useful, we need to do our averaging in a slightly different way.

The proper treatment is to average the squares of the velocities, and then take the square root of this value. The resulting quantity is known as the root-mean-square (RMS) velocity

$v_{rms} = \sqrt{\dfrac{\sum v^2}{n}}$

where $$n$$ is the number of molecules in the system. The formula relating the RMS velocity to the temperature and molar mass is surprisingly simple (derived below), considering the great complexity of the events it represents:

$v_{rms} = \sqrt{\dfrac{3RT}{M}} \label{rms1}$

where

• $$M$$ is the molar mass in kg mol–1, and
• $$R$$ is the gas constant.

Equation $$\ref{rms1}$$ can also be expressed as

$v_{rms} = \sqrt{\dfrac{3k_bT}{m}} \label{rms2}$

where

• $$m$$ is the molecular mass in kg, and
• $$k_b$$ is the Boltzmann constant, which is just the “gas constant per molecule”:

$k_b = \dfrac{R}{N_a} = \dfrac{R}{6.02 \times 10^{23}}$

Equation $$\ref{rms2}$$ is simply the per-molecule version of Equation $$\ref{rms1}$$, which is expressed per mole. Either equation will work.

Example $$\PageIndex{1}$$

What is the $$v_{rms}$$ of a nitrogen molecule at 300 K?

SOLUTION

The molar mass of $$N_2$$ is 28.01 g/mol.
Substituting in the above equation and expressing R in energy units, we obtain

$v^2 =\dfrac{(3)(8.31 \;J \; mol^{-1} \; K^{-1})(300\;K)}{28.01 \times 10^{-3}\; kg \; mol^{-1}} = 2.67 \times 10^5 \; \dfrac{J}{kg} \nonumber$

Recalling the definition of the joule (1 J = 1 kg m2 s–2) and taking the square root,

$v_{rms} = \sqrt{ \left(2.67 \times 10^5\; \dfrac{\cancel{J}}{\cancel{kg}}\right) \left( \dfrac{ 1 \;\cancel{kg}\; m^2\; s^{–2}}{ 1 \;\cancel{J }} \right)} = 517 \;m /s \nonumber$

or

$517\, \dfrac{\cancel{m}}{s} \left(\dfrac{1\; km}{10^3\; \cancel{m}} \right) \left(\dfrac{3600\; \cancel{s}}{1\; hr} \right) = 1860\; km/hr \nonumber$

Comment: this is fast! The velocity of a rifle bullet is typically 300-500 m s–1; convert to common units to see the comparison for yourself.

A simpler formula for estimating average molecular velocities than $$\ref{rms1}$$ is

$v_{rms} =157 \sqrt{\dfrac{T}{M}} \nonumber$

in which $$v_{rms}$$ is in meters per second, $$T$$ is the absolute temperature, and $$M$$ is the molar mass in grams per mole.

### The Boltzmann Distribution

If we were to plot the number of molecules whose velocities fall within a series of narrow ranges, we would obtain a slightly asymmetric curve known as a velocity distribution. The peak of this curve would correspond to the most probable velocity. This velocity distribution curve is known as the Maxwell-Boltzmann distribution, but is frequently referred to only by Boltzmann's name. The Maxwell-Boltzmann distribution law was first worked out around 1860 by the great Scottish physicist James Clerk Maxwell (1831-1879), who is better known for discovering the laws of electromagnetic radiation. Later, the Austrian physicist Ludwig Boltzmann (1844-1906) put the relation on a sounder theoretical basis and simplified the mathematics somewhat.
Boltzmann pioneered the application of statistics to the physics and thermodynamics of matter, and was an ardent supporter of the atomic theory of matter at a time when it was still not accepted by many of his contemporaries.

Figure $$\PageIndex{1}$$: Maxwell (left) and Boltzmann (right) are responsible for the velocity distribution of gas molecules.

The Maxwell-Boltzmann distribution is used to determine how many molecules are moving between velocities $$v$$ and $$v + dv$$. Assuming that the one-dimensional distributions are independent of one another (that the velocity in the y and z directions does not affect the x velocity, for example), the Maxwell-Boltzmann distribution for a single velocity component is given by

$\large \dfrac{dN}{N} = \left(\dfrac{m}{2\pi k_bT} \right)^{1/2}\exp\left [\dfrac{-mv^2}{2k_b T}\right] dv \label{1}$

where

• $$dN/N$$ is the fraction of molecules moving at velocity $$v$$ to $$v + dv$$,
• $$m$$ is the mass of the molecule,
• $$k_b$$ is the Boltzmann constant, and
• $$T$$ is the absolute temperature.1

Additionally, the function can be written in terms of the scalar quantity speed $$v$$ instead of the vector quantity velocity. This form of the function defines the distribution of gas molecules moving at different speeds, between $$v_1$$ and $$v_2$$:

$f(v)=4\pi v^2 \left (\dfrac{m}{2\pi k_b T} \right)^{3/2}\exp\left [\frac{-mv^2}{2k_bT}\right] \label{2}$

Finally, the Maxwell-Boltzmann distribution can be used to determine the distribution of the kinetic energy of a set of molecules; the distribution of the kinetic energy follows directly from the distribution of the speeds for a given gas at any temperature.2

The Maxwell-Boltzmann distribution is a probability distribution and, just like any such distribution, can be characterized in a variety of ways, including:

• Average Speed: The average speed is the sum of the speeds of all of the particles divided by the number of particles.
• Most Probable Speed: The most probable speed is the speed associated with the highest point in the Maxwell distribution. Only a small fraction of particles might have this speed, but it is more likely than any other speed.
• Width of the Distribution: The width of the distribution characterizes the most likely range of speeds for the particles. One measure of the width is the Full Width at Half Maximum (FWHM). To determine this value, find the height of the distribution at the most probable speed (this is the maximum height of the distribution). Divide the maximum height by two to obtain the half height, and locate the two speeds in the distribution that have this half-height value. One speed will be greater than the most probable speed and the other will be smaller. The full width is the difference between the two speeds at the half-maximum value.

### Velocity distributions depend on temperature and mass

Higher temperatures allow a larger fraction of molecules to acquire greater amounts of kinetic energy, causing the Boltzmann plots to spread out. Figure $$\PageIndex{2}$$ shows how the Maxwell-Boltzmann distribution is affected by temperature. At lower temperatures, the molecules have less energy. Therefore, the speeds of the molecules are lower and the distribution has a smaller range. As the temperature of the molecules increases, the distribution flattens out. Because the molecules have greater energy at higher temperature, the molecules are moving faster.

Figure $$\PageIndex{2}$$: The Maxwell distribution as a function of temperature for nitrogen molecules.

Notice how the left ends of the plots are anchored at zero velocity (the distribution goes to zero at $$v = 0$$: essentially no molecules are exactly at rest). As a consequence, the curves flatten out as the higher temperatures make additional higher-velocity states of motion more accessible. The area under each plot is the same for a constant number of molecules.
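That last claim, that the area under each plot stays the same, can be checked by numerically integrating the speed distribution of Equation $$\ref{2}$$ at two temperatures (a sketch: nitrogen values are assumed, and a simple midpoint Riemann sum stands in for the integral):

```python
import math

def f(v, T, m=28.01e-3 / 6.022e23):   # Eq. (2); m is the mass of one N2 molecule in kg
    kb = 1.380649e-23                 # Boltzmann constant, J/K
    return 4*math.pi*v**2 * (m/(2*math.pi*kb*T))**1.5 * math.exp(-m*v**2/(2*kb*T))

def area(T, vmax=5000.0, n=20000):
    """Midpoint Riemann-sum approximation of the total probability."""
    dv = vmax / n
    return sum(f((i + 0.5)*dv, T) * dv for i in range(n))

# The distribution is normalized at every temperature: both areas come out ~1.
print(round(area(300), 4), round(area(1000), 4))

# Higher temperature shifts the peak (most probable speed) to higher v.
vmp_300 = max(range(1, 3000), key=lambda v: f(float(v), 300))
vmp_1000 = max(range(1, 3000), key=lambda v: f(float(v), 1000))
print(vmp_300, vmp_1000)   # the peak speed grows with T
```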
All gases have the same average kinetic energy ($$mv^2/2$$) at the same temperature, so the fraction of molecules with higher velocities will increase as $$m$$, and thus the molecular weight, decreases. Figure $$\PageIndex{3}$$ shows the dependence of the Maxwell-Boltzmann distribution on molecular mass. On average, heavier molecules move more slowly than lighter molecules. Therefore, heavier molecules will have a smaller speed distribution, while lighter molecules will have a speed distribution that is more spread out.

Figure $$\PageIndex{3}$$: The speed probability density functions of the speeds of a few gases at a temperature of 298.15 K. The y-axis is in s/m so that the area under any section of the curve (which represents the probability of the speed being in that range) is dimensionless.

The Maxwell-Boltzmann equation, which forms the basis of the kinetic theory of gases, defines the distribution of speeds for a gas at a certain temperature. From this distribution function, the most probable speed, the average speed, and the root-mean-square speed can be derived. Usually, we are more interested in the speeds of molecules than in their component velocities. The Maxwell–Boltzmann distribution for the speed follows immediately from the distribution of the velocity vector, above. Note that the speed of an individual gas particle is

$v =\sqrt{v_x^2+ v_y^2 + v_z^2}$

Three speed expressions can be derived from the Maxwell-Boltzmann distribution:

• the most probable speed,
• the average speed, and
• the root-mean-square speed.

The most probable speed is the maximum value on the distribution plot (Figure $$\PageIndex{4}$$). It is established by finding the speed at which the derivative of Equation $$\ref{2}$$ is zero,

$\dfrac{df(v)}{dv} = 0$

which gives

$\color{red} v_{mp}=\sqrt {\dfrac {2RT}{M}} \label{3a}$

Figure $$\PageIndex{4}$$: The Maxwell-Boltzmann distribution is shifted to higher speeds and is broadened at higher temperatures. Image used with permission from OpenStax.
The speed at the top of the curve is called the most probable speed because the largest number of molecules have that speed.

The average speed is the sum of the speeds of all the molecules divided by the number of molecules:

$\color{red} v_{avg}= \bar{v} = \int_0^{\infty} v f(v) dv = \sqrt {\dfrac{8RT}{\pi M}} \label{3b}$

The root-mean-square speed is the square root of the mean of the squared speeds:

$\color{red} v_{rms}= \sqrt{\bar{v^2}} = \sqrt {\dfrac {3RT}{M}} \label{3c}$

where

• $$R$$ is the gas constant,
• $$T$$ is the absolute temperature, and
• $$M$$ is the molar mass of the gas.

It always follows that, for gases that follow the Maxwell-Boltzmann distribution,

$v_{mp}< v_{avg}< v_{rms} \label{4}$

### Problems

1. Using the Maxwell-Boltzmann function, calculate the fraction of argon gas molecules with a speed of 305 m/s at 500 K.
2. If the system in problem 1 has 0.46 moles of argon gas, how many molecules have the speed of 305 m/s?
3. Calculate the values of $$C_{mp}$$, $$C_{avg}$$, and $$C_{rms}$$ for xenon gas at 298 K.
4. From the values calculated above, label the Boltzmann distribution plot (Figure 1) with the approximate locations of $$C_{mp}$$, $$C_{avg}$$, and $$C_{rms}$$.
5. What will have a larger speed distribution, helium at 500 K or argon at 300 K? Helium at 300 K or argon at 500 K? Argon at 400 K or argon at 1000 K?

#### Answers

1. 0.00141
2. $$3.92 \times 10^{20}$$ argon molecules
3. $$C_{mp}$$ = 194.27 m/s, $$C_{avg}$$ = 219.21 m/s, $$C_{rms}$$ = 237.93 m/s
4. As stated above, $$C_{mp}$$ is the most probable speed, so it sits at the top of the distribution curve. To the right of the most probable speed is the average speed, followed by the root-mean-square speed.
5. Hint: use the related speed expressions to compare the distributions. Answers: helium at 500 K; helium at 300 K; argon at 1000 K.

### References

1. Dunbar, R.C. Deriving the Maxwell Distribution. J. Chem. Ed. 1982, 59, 22-23. 2.
Peckham, G.D.; McNaught, I.J. Applications of the Maxwell-Boltzmann Distribution. J. Chem. Ed. 1992, 69, 554-558. 3. Chang, R. Physical Chemistry for the Biosciences, 25-27.
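The three speed expressions and the xenon values in Answer 3 above can be reproduced in a few lines (a sketch; the molar mass of xenon, 131.29 g/mol, is assumed here since the chapter does not quote it):

```python
import math

R = 8.314          # gas constant, J mol^-1 K^-1
M = 131.29e-3      # molar mass of xenon, kg/mol (assumed value)
T = 298.0          # temperature, K

v_mp  = math.sqrt(2*R*T/M)             # most probable speed, Eq. (3a)
v_avg = math.sqrt(8*R*T/(math.pi*M))   # average speed, Eq. (3b)
v_rms = math.sqrt(3*R*T/M)             # root-mean-square speed, Eq. (3c)

print(round(v_mp, 2), round(v_avg, 2), round(v_rms, 2))

# Eq. (4): the ordering v_mp < v_avg < v_rms always holds.
assert v_mp < v_avg < v_rms
```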
https://www.physics-world.com/ap-physics-c-mechanics/newtons-laws-motion/
# Newton’s Laws of Motion

Newton’s First Law of Motion: Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. This we recognize as essentially Galileo’s concept of inertia, and it is often termed simply the “Law of Inertia”.

Newton’s laws of motion are three physical laws that, together, laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. They have been expressed in several different ways over nearly three centuries, and can be summarised as follows.

First law: When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force.

Second law: The vector sum of the external forces F on an object is equal to the mass m of that object multiplied by the acceleration vector a of the object: F = ma.

Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.

Course topics: Static Equilibrium (first law), Dynamics of a Single Particle (second law), Systems of Two or More Objects (third law).

The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687. Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler’s laws of planetary motion.
Newton’s laws were verified by experiment and observation for over 200 years, and they are excellent approximations at the scales and speeds of everyday life. Newton’s laws of motion, together with his law of universal gravitation and the mathematical techniques of calculus, provided for the first time a unified quantitative explanation for a wide range of physical phenomena.
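The second law, F = ma with F the vector sum of the external forces, lends itself to a short numerical sketch (the mass and forces below are illustrative, not from the text):

```python
# Second law: acceleration = (vector sum of external forces) / mass.
def acceleration(forces, m):
    """forces: list of (Fx, Fy) tuples in newtons; m: mass in kg."""
    fx = sum(f[0] for f in forces)
    fy = sum(f[1] for f in forces)
    return (fx / m, fy / m)

# A 2 kg body pulled by 10 N along +x and 4 N along -x, plus 6 N along +y:
a = acceleration([(10, 0), (-4, 0), (0, 6)], 2.0)
print(a)   # (3.0, 3.0) m/s^2

# Third law: the paired action and reaction forces sum to zero.
action, reaction = (10, 0), (-10, 0)
assert action[0] + reaction[0] == 0
```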
https://socialsci.libretexts.org/Bookshelves/Psychology/Book%3A_Psychology_(Noba)/00%3A_Front_Matter/01%3A_TitlePage
# TitlePage

Psychology (Noba)
https://math.stackexchange.com/questions/4017837/hypotheses-for-g%C3%B6dels-condensation-lemma
# Hypotheses for Gödel's condensation lemma

Gödel's condensation lemma says that given a suitable $$L_{\alpha}$$, and given $$X \prec L_{\alpha}$$, taking the Mostowski collapse of $$X$$ yields $$L_{\beta}$$ for some $$\beta < \alpha$$. I'm concerned about the exact requirements on $$\alpha$$ or $$L_{\alpha}$$.

The way I learnt it was that $$L_{\alpha} \vDash T$$ is required, where $$T$$ is the finite fragment of $$ZF-P$$ that is used to formulate model theory, definability, and the construction of the $$L$$-hierarchy. Then $$L_{\alpha} \vDash V = L$$, and $$L_{\alpha}$$ is correct about the levels of the $$L$$-hierarchy, so that $$(L)^{L_{\alpha}} = L_{\alpha}$$ and $$(L_{\gamma})^{L_{\alpha}} = L_{\gamma}$$ for $$\gamma < \alpha$$. Then by elementarity plus the isomorphism, the same holds for $$\pi(X)$$ ($$\pi$$ being the Mostowski collapse), and from this the result easily follows.

However, it seems that we can loosen the requirements beyond $$L_{\alpha} \vDash T$$. I've seen weakenings to $$\alpha$$ being a limit, or to $$\alpha$$ being such that $$\alpha = \omega \cdot \alpha$$. I'm not sure how this is supposed to work. If $$\alpha$$ is simply a limit, it seems to me that $$L_\alpha$$ might not satisfy a good chunk of $$ZF-P$$, and potentially not $$T$$. For instance, the proof of $$(Comprehension)^L$$ uses the reflection theorem scheme, and simply assuming $$\alpha$$ is a limit, it seems that $$(Comprehension)^{L_{\alpha}}$$ may not go through. Is Comprehension then not needed to formulate the model theory, the $$L$$-hierarchy, etc. needed in the proof above? If $$L_{\alpha} \not \vDash T$$, then it seems that the traditional argument (or the one I used above) doesn't go through.

So how far can the hypotheses on $$\alpha$$ be weakened? Can $$\alpha$$ really just be a limit ordinal, or satisfy $$\alpha = \omega \cdot \alpha$$? Does this follow simply from being very careful in observing what exactly is used in $$T$$ above and coming to the conclusion that limits are enough?
If that was the case, then I was hoping to see/understand this analysis and find out what exactly is needed, and what is the complexity of this when put as a sentence, etc. Or is it a different proof entirely? • It works if $\alpha$ is admissible, that is, if $L_\alpha$ is a model of $\mathsf{KP}$. Feb 8, 2021 at 16:53 • @HanulJeon It works for arbitrary limit $\alpha$ in fact, and with only the hypothesis of $\prec_1$ in place of $\prec$ - see Devlin's proof here (page $80$). Feb 8, 2021 at 16:59 • @NoahSchweber Thank you for your reply. I did know that condensation holds for Jensen's $J$-hierarchy with $\prec_1$, and $J_\alpha=L_\alpha$ if $\alpha$ is admissible. I did not know that condensation holds for every limit ordinal, however. Feb 8, 2021 at 17:05 • @Noah I thought it held for successor levels too (though with much more difficult proof and maybe not for just $\Sigma_1$). Or am I making that up? Feb 8, 2021 at 17:34 • @spaceisdarkgreen I believe it does, but since I don't have a source on-hand I didn't want to claim it. Feb 8, 2021 at 17:38 Suppose $$\alpha$$ is a limit ordinal and $$X\prec_1L_\alpha$$. Then $$X\cong L_\beta$$ for a unique ordinal $$\beta$$, and the Mostowski collapse provides the unique isomorphism. Here "$$\prec_1$$" refers to elementarity for $$\Sigma_1$$ formulas only: $$A\prec_1 B$$ iff $$A$$ is a substructure of $$B$$ and for every $$\Sigma_1$$ formula $$\varphi(\overline{x})$$ and every $$\overline{a}\in A$$ we have $$A\models\varphi(\overline{a})\iff B\models\varphi(\overline{a})$$. The proof isn't really different from the proof of the weaker argument that this holds with $$\prec$$ in place of $$\prec_1$$ and restricting attention to $$\alpha$$s such that $$L_\alpha$$ satisfies a reasonably strong set theory, it's just much more tedious. 
Part of this is essentially the realization that arbitrary limit levels of $$L$$ satisfy a larger fragment of set theory than one might expect; the other part is showing that the construction of the $$L$$-hierarchy is a $$\Sigma_0$$ process in a precise sense. These fine-grained analyses are moot if we assume that $$L_\alpha$$ satisfies a reasonable fragment of set theory. Devlin's presentation (page $$80$$) of this is quite good in my opinion. Devlin's book has serious flaws elsewhere, but this material is solid.

• Thanks for the link. So basically, by a painful analysis of the complexity of $T$, you can conclude that being a limit is enough and that the sentence capturing $T$ is $\Sigma_1$! I vastly overestimated how complex $T$ is, it seems. Feb 8, 2021 at 17:09
http://comunidadwindows.org/standard-error/standard-error-and-standard-deviation.php
# Standard Error And Standard Deviation

The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate. So the standard deviation describes the variability of the individual observations, while the standard error shows the variability of the estimator. The mean age was 23.44 years. Later sections will present the standard error of other statistics, such as the standard error of a proportion and the standard error of the difference of two means. As an example of the use of the relative standard error, consider two surveys of household income that both result in a sample mean of \$50,000. A quantitative measure of uncertainty is reported: a margin of error of 2%, or a confidence interval of 18 to 22. (See https://en.wikipedia.org/wiki/Standard_error)

## When To Use Standard Deviation Vs Standard Error

They report that, in a sample of 400 patients, the new drug lowers cholesterol by an average of 20 units (mg/dL).
The next graph shows the sampling distribution of the mean (the distribution of the 20,000 sample means) superimposed on the distribution of ages for the 9,732 women. The mean age was 33.88 years. The data set is ageAtMar, also from the R package openintro from the textbook by Dietz et al.[4] For the purpose of this example, the 5,534 women are the entire population. As would be expected, larger sample sizes give smaller standard errors. The confidence interval of 18 to 22 is a quantitative measure of the uncertainty: the possible difference between the true average effect of the drug and the estimate of 20 mg/dL.

From data (simulation): the next diagram takes random samples of values from the above population. Click Take Sample a few times and observe that the sample standard deviation varies from sample to sample. For the purpose of hypothesis testing or estimating confidence intervals, the standard error is primarily of use when the sampling distribution is normally distributed, or approximately normally distributed.

Standard deviation is a measure of the dispersion of the data from the mean. (Barde, M. (2012), "What to use to express the variability of data: Standard deviation or standard error of mean?")

## Standard Error In R

But technical accuracy should not be sacrificed for simplicity.
```r
# Standard normal curve annotated with the 95% and 99.7% coverage bands.
# The final text() call was truncated in the original; its "data within 2 sigma"
# label is completed here by analogy with the 3-sigma label above it.
plot(seq(-3.2, 3.2, length = 50), dnorm(seq(-3, 3, length = 50), 0, 1),
     type = "l", xlab = "", ylab = "", ylim = c(0, 0.5))
segments(x0 = c(-3, 3), y0 = c(-1, -1), x1 = c(-3, 3), y1 = c(1, 1))
text(x = 0, y = 0.45, labels = expression("99.7% of the data within 3" ~ sigma))
arrows(x0 = c(-2, 2), y0 = c(0.45, 0.45), x1 = c(-3, 3), y1 = c(0.45, 0.45))
segments(x0 = c(-2, 2), y0 = c(-1, -1), x1 = c(-2, 2), y1 = c(0.4, 0.4))
text(x = 0, y = 0.3, labels = expression("95% of the data within 2" ~ sigma))
```

The graph below shows the distribution of the sample means for 20,000 samples, where each sample is of size n=16. As the sample size increases, the sampling distribution becomes more narrow, and the standard error decreases. In other words, the standard error shows how close your sample mean is to the population mean.

## Standard Error Of Mean Versus Standard Deviation

In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation or the mean with the standard error. The sample mean will very rarely be equal to the population mean. Thus, in inferential statistics, the use of the SEM is valid, but the CI is more valuable. Consider the following scenarios. Sokal and Rohlf (1981)[7] give an equation of the correction factor for small samples of n < 20. (Dietz, David; Barr, Christopher; Çetinkaya-Rundel, Mine (2012), OpenIntro Statistics (Second ed.), openintro.org)

For illustration, the graph below shows the distribution of the sample means for 20,000 samples, where each sample is of size n=16. All journals should follow this practice. Competing interests: None declared.
The mean of these 20,000 samples from the age at first marriage population is 23.44, and the standard deviation of the 20,000 sample means is 1.18. If $\sigma$ is known, the standard error is calculated using the formula $\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$, where $\sigma$ is the standard deviation of the population and $n$ is the sample size. For a value that is sampled with an unbiased normally distributed error, the above depicts the proportion of samples that would fall between 0, 1, 2, and 3 standard deviations above and below the actual value.

My only comment was that, once you've already chosen to introduce the concept of consistency (a technical concept), there's no use in mis-characterizing it in the name of making the answer …

Br J Anaesth 2003; 90: 514–16. Keywords: statistics. Accepted for publication: December 3, 2002. When reporting data in biomedical research papers, authors often … As you collect more data, you'll assess the SD of the population with more precision. Naturally, the value of a statistic may vary from one sample to the next.
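The relation $\sigma_{\bar{x}} = \sigma/\sqrt{n}$ above is easy to verify by simulation. A minimal Python sketch (the synthetic population and the sample sizes are my own illustration, not the article's data):

```python
import math
import random
import statistics

random.seed(0)

# Synthetic population with standard deviation close to 1.
population = [random.gauss(0, 1) for _ in range(100_000)]
sigma = statistics.pstdev(population)

def empirical_se(n, trials=2000):
    """Standard deviation of the sample mean across many samples of size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

for n in (16, 64, 256):
    # The empirical standard error tracks sigma / sqrt(n)
    # and shrinks as the sample size grows.
    print(n, round(empirical_se(n), 3), round(sigma / math.sqrt(n), 3))
```

As expected, quadrupling the sample size roughly halves the standard error, while the population standard deviation itself does not change.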
https://cs.stackexchange.com/questions/2347/reduction-to-equipartition-problem-from-the-partition-problem
Reduction to equipartition problem from the partition problem?

Equipartition Problem:
Instance: $2n$ positive integers $x_1,\dots,x_{2n}$ such that their sum is even. Let $B$ denote half their sum, so that $\sum x_{i} = 2B$.
Query: Is there a subset $I \subseteq [2n]$ of size $|I| = n$ such that $\sum_{i \in I} x_{i} = B$?

Can the partition problem (the same as the above, but without the restriction on $|I|$) be reduced to the above problem?

migrated from cstheory.stackexchange.com Jun 12 '12 at 18:35. This question came from our site for theoretical computer scientists and researchers in related fields.

• Yes, Equipartition is NP-complete and there is a p-time reduction to it from the Partition problem. See SP12 in Garey and Johnson. – Mohammad Al-Turkistany Mar 23 '12 at 17:51

As noted in the comments, the EQUIPARTITION problem is also NP-complete. The basic idea of reducing PARTITION to it is to add elements so that they can be partitioned complementarily to the original elements (thus yielding an equipartition), but in a way that ensures that new and old elements cannot cancel each other out. The latter can be achieved by choosing the new elements sufficiently large.

Let $A = \{a_1, \dots, a_n\} \subseteq \mathbb{N}_+$ be an instance of PARTITION. Construct an instance $f(A)$ of EQUIPARTITION by

$\qquad \displaystyle f(A) = A \cup \{ a_{n+i} \mid a_{n+i} = 2B \cdot a_i,\ a_i \in A \}$

with $2B = \sum_{a \in A} a$. It is clear that $f$ can be computed in deterministic polynomial time. We have to show that $A$ is a YES-instance of PARTITION if and only if $f(A)$ is a YES-instance of EQUIPARTITION.

• Let $A$ be a YES-instance of PARTITION, that is, there is $I \subset [n]$ with $\sum_{i \in I} a_i = B$. Then

$\qquad \displaystyle I' = I \cup \{n+i \mid i \in \overline{I} = [n] \setminus I \}$

is a solution of $f(A)$; note that $|I'| = n$ and $\sum_{i \in I'} a_i = B + 2B^2 = \sum_{i \in \overline{I'}} a_i$. Thus, $f(A)$ is a YES-instance of EQUIPARTITION.
• Let $f(A)$ be a YES-instance of EQUIPARTITION. Let $I' = I \cup J$ be a solution of $f(A)$ with $I \subseteq [n]$ and $J \subseteq [2n] \setminus [n]$ ($I$ contains the indices chosen from $A$, $J$ the new ones), and let $\overline{I'} = \overline{I} \cup \overline{J}$ be its complement, partitioned in the same way. That is, $\sum_{i \in I'} a_i = \sum_{i \in \overline{I'}} a_i$.

Assume towards a contradiction that $\sum_{i \in I} a_i \neq \sum_{i \in \overline{I}} a_i$; without loss of generality assume $\sum_{i \in I} a_i > \sum_{i \in \overline{I}} a_i$. Now we have that

$\qquad \displaystyle 0 < \sum_{i \in \overline{J}} a_i - \sum_{i \in J} a_i = \sum_{i \in I} a_i - \sum_{i \in \overline{I}} a_i < 2B,$

which contradicts the construction of $f(A)$: elements with indices in $J \cup \overline{J}$ are all (positive) multiples of $2B$, so the assumed difference is impossible. Therefore, $I$ is a solution of $A$, which is thus a YES-instance of PARTITION.

This concludes the proof; that is, PARTITION $\leq_p$ EQUIPARTITION by virtue of $f$.
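The reduction is easy to sanity-check by brute force on small instances. A minimal Python sketch (the function names are mine, not from the post):

```python
from itertools import combinations

def partition_yes(a):
    """Brute-force PARTITION: some subset of a sums to half the total."""
    total = sum(a)
    if total % 2:
        return False
    return any(sum(c) == total // 2
               for r in range(len(a) + 1)
               for c in combinations(a, r))

def equipartition_yes(xs):
    """Brute-force EQUIPARTITION: a size-n subset of the 2n numbers sums to half."""
    total = sum(xs)
    if total % 2 or len(xs) % 2:
        return False
    return any(sum(c) == total // 2 for c in combinations(xs, len(xs) // 2))

def reduce_to_equipartition(a):
    """The map f from the answer: f(A) = A plus {2B * a_i : a_i in A}."""
    two_b = sum(a)
    return list(a) + [two_b * ai for ai in a]

# YES-instance {1,2,3}: split as {1,2} | {3}.
# NO-instance {1,2,5}: even sum 8, but no subset sums to 4.
for inst in ([1, 2, 3], [1, 2, 5]):
    assert partition_yes(inst) == equipartition_yes(reduce_to_equipartition(inst))
```

On these instances the brute-force answers agree, as the correctness proof above guarantees.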
http://conceptmap.cfapps.io/wikipage?lang=en&name=Neighbourhood_system
# Neighbourhood system

In topology and related areas of mathematics, the neighbourhood system, complete system of neighbourhoods,[1] or neighbourhood filter $\mathcal{N}(x)$ for a point x is the collection of all neighbourhoods of the point x.

## Definitions

An open neighbourhood of a subset S of X is any open set V such that S ⊆ V. A neighbourhood of S in X is any subset T ⊆ X such that T contains some open neighbourhood of S. Explicitly, this means that T ⊆ X is a neighbourhood of S in X if and only if there is some open set V such that S ⊆ V ⊆ T. The neighbourhood system for any non-empty set S is a filter, called the neighbourhood filter for S. The neighbourhood filter for a point x ∈ X is the same as the neighbourhood filter of the singleton set { x }.

A "neighbourhood" does not have to be an open set; those neighbourhoods that also happen to be open sets are known as "open neighbourhoods", and those that also happen to be closed sets are known as closed neighbourhoods. There are many other types of neighbourhood used in topology and related fields like functional analysis. The family of all neighbourhoods having a certain "useful" property often forms a neighbourhood basis, although many times these neighbourhoods are not necessarily open.

### Basis

A neighbourhood basis or local basis (or neighbourhood base or local base) for a point x is a filter base of the neighbourhood filter; this means that it is a subset $\mathcal{B}(x) \subseteq \mathcal{N}(x)$ such that for all $V \in \mathcal{N}(x)$ there exists some $B \in \mathcal{B}(x)$ such that $B \subseteq V$. That is, for any neighbourhood $V$ we can find a neighbourhood $B$ in the neighbourhood basis that is contained in $V$.
Equivalently, $\mathcal{B}(x)$ is a local basis at x if and only if the neighbourhood filter $\mathcal{N}(x)$ can be recovered from $\mathcal{B}(x)$, in the sense that the following equality holds:[2]

$$\mathcal{N}(x) = \left\{ V \subseteq X ~:~ B \subseteq V \text{ for some } B \in \mathcal{B}(x) \right\}.$$

### Subbasis

A neighbourhood subbasis at x is a family $\mathcal{S}$ of subsets of X, each of which contains x, such that the collection of all possible finite intersections of elements of $\mathcal{S}$ forms a neighbourhood basis at x.

## Examples

• In any topological space, the neighbourhood system for a point is also a neighbourhood basis for the point.
• The set of all open neighbourhoods at a point forms a neighbourhood basis at that point.
• Given a space X with the indiscrete topology, the neighbourhood system for any point x contains only the whole space: $\mathcal{N}(x) = \{X\}$.
• In a metric space, for any point x, the open balls around x with radius 1/n form a countable neighbourhood basis $\mathcal{B}(x) = \{ B_{1/n}(x) : n \in \mathbb{N}^{*} \}$. This means every metric space is first-countable.
• In the weak topology on the space of measures on a space E, a neighbourhood base about $\nu$ is given by
$$\left\{ \mu \in \mathcal{M}(E) : \left| \mu f_{i} - \nu f_{i} \right| < \varepsilon_{i},\ i = 1, \ldots, n \right\},$$
where the $f_{i}$ are continuous bounded functions from E to the real numbers.

## Properties

In a seminormed space, that is, a vector space with the topology induced by a seminorm, all neighbourhood systems can be constructed by translation of the neighbourhood system for the origin:
$$\mathcal{N}(x) = \mathcal{N}(0) + x.$$
This is because, by assumption, vector addition is separately continuous in the induced topology.
Therefore, the topology is determined by its neighbourhood system at the origin. More generally, this remains true whenever the space is a topological group or the topology is defined by a pseudometric.
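As a worked instance of the translation property, take $X = \mathbb{R}$ with the absolute-value norm (a sketch of mine, not from the article):

$$\mathcal{B}(0) = \{(-\varepsilon, \varepsilon) : \varepsilon > 0\}, \qquad \mathcal{B}(x) = x + \mathcal{B}(0) = \{(x - \varepsilon,\, x + \varepsilon) : \varepsilon > 0\},$$

so every neighbourhood of $x$ contains a translate of a basic neighbourhood of the origin, and $\mathcal{N}(x) = \mathcal{N}(0) + x$ holds as claimed; the topology of $\mathbb{R}$ is indeed determined by the neighbourhood filter at the origin.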
http://crypto.stackexchange.com/users/9096/bush?tab=activity
# Bush

reputation 6 · member for 10 months · seen Aug 14 at 14:11 · profile views 30

# 79 Actions

- Jul 2: awarded Curious
- Jun 6: comment on Hamiltonicity proof of knowledge: @RickeyDemer 1. Why did you assign 0 to $\kappa(|x|)$? 2. Why do you need a minimum-length witness? 3. $K_w$ cannot output a witness with certainty; if $P^*$ is lying then $K_w$ wouldn't be able to output a witness. 4. I don't understand your conclusion (where does it stem from?).
- Jun 5: comment on Hamiltonicity proof of knowledge: $L_R$ is the set of all $x$ such that there exists $y$ with $(x,y)\in R$. The construction of the knowledge extractor is trivial, but what can you say about the running-time requirement, which is related to the knowledge error? (As I wrote in the 'question' part.)
- Jun 4: comment on Hamiltonicity proof of knowledge: @RickyDemer, it's a typo; fixed
- Jun 4: revised Hamiltonicity proof of knowledge (deleted 4 characters in body)
- Jun 3: revised Hamiltonicity proof of knowledge (added 51 characters in body)
- Jun 3: asked Hamiltonicity proof of knowledge
- May 13: comment on What are Cryptographic Multi-linear Maps?: No, this is another problem that I encounter, even to find out what topics I should study beforehand.
- May 13: asked What are Cryptographic Multi-linear Maps?
- Feb 7: accepted Can you prove the existence of a PRG $G$ s.t. for each even $k$: $G(k)=G(k+1)$?
- Feb 1: comment on Can you prove the existence of a PRG $G$ s.t. for each even $k$: $G(k)=G(k+1)$?: Well, that's my question: why is the clone considered a PRG? How can you prove that the clone is a PRG as well?
- Feb 1: comment on Can you prove the existence of a PRG $G$ s.t. for each even $k$: $G(k)=G(k+1)$?: And since the attacker doesn't control the input to a PRG, he cannot distinguish between its output and a random string. This is my intuition; I just don't know how to prove it.
- Feb 1: comment on Can you prove the existence of a PRG $G$ s.t. for each even $k$: $G(k)=G(k+1)$?:
@PaŭloEbermann, note that it holds for even inputs $k$; so if it's true, it means that any two consecutive inputs to $G$ yield the same output. (If $k$ is odd then it's not mandatory that $G(k)=G(k+1)$, only that $G(k)=G(k-1)$.)

- Feb 1: asked Can you prove the existence of a PRG $G$ s.t. for each even $k$: $G(k)=G(k+1)$?
- Jan 31: comment on Why can perfect secrecy be achieved when decryption correctness is not totally required?: Can you please give a more concrete answer? (with a lower bound..)
- Jan 31: asked Why can perfect secrecy be achieved when decryption correctness is not totally required?
- Jan 12: comment on Can you show how that RSA does/doesn't provide anonymity?: @fgrieu, so the expected maximum is $N-1$ (in the case of a quadratic residue group) and the mean is $\sim e^{n/2} \bmod N$? How can I analyze the probability of success here?
- Jan 9: comment on Can you show how that RSA does/doesn't provide anonymity?: Also, I changed the adversary to be CPA; it doesn't matter though, because in the public-key setting an eavesdropper is also a CPA adversary.
- Jan 9: revised Can you show how that RSA does/doesn't provide anonymity? (added 326 characters in body)
- Jan 9: comment on Can you show how that RSA does/doesn't provide anonymity?:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8059638738632202, "perplexity": 1359.3683587117382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921872.11/warc/CC-MAIN-20140909040133-00391-ip-10-180-136-8.ec2.internal.warc.gz"}
http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/goldenmean.htm
## The 'Golden Mean' in number theory

[abstract:] "Beginning with the most general fractal strings/sprays construction recently expounded in the book by Lapidus and Frankenhuysen, it is shown how the complexified extension of El Naschie's Cantorian-Fractal spacetime model belongs to a very special class of families of fractal strings/sprays whose scaling ratios are given by suitable pinary ($p$ prime) powers of the Golden Mean. We then proceed to show why the logarithmic periodicity laws in Nature are direct physical consequences of the complex dimensions associated with these fractal strings/sprays. We proceed with a discussion on quasi-crystals with p-adic internal symmetries, von Neumann's Continuous Geometry, the role of wild topology in fractal strings/sprays, the Banach-Tarski paradox, tessellations of the hyperbolic plane, quark confinement and the Mersenne-prime hierarchy of bit-string physics in determining the fundamental physical constants in Nature."

Castro's observation possibly linking the 'Golden String' to a function central to the behaviour of certain eigenvalues in random matrix theory (which in turn appears to be deeply linked to the behaviour of the nontrivial zeros of the Riemann zeta function).

P. Cvitanovic, "Circle Maps: Irrationally Winding" from Number Theory and Physics, eds. C. Itzykson, et al. (Springer, 1992)

"We shall start by briefly summarizing the results of the 'local' renormalization theory for transitions from quasiperiodicity to chaos. In experimental tests of this theory one adjusts the external frequency to make the frequency ratio as far as possible from being mode-locked. This is most readily attained by tuning the ratio to the 'golden mean' $(\sqrt{5}-1)/2$. The choice of the golden mean is dictated by number theory: the golden mean is the irrational number for which it is hardest to give good rational approximants.
As experimental measurements have limited accuracy, physicists usually do not expect that such number-theoretic subtleties as how irrational a number is should be of any physical interest. However, in the dynamical systems theory of transitions to chaos the starting point is the enumeration of asymptotic motions of a dynamical system, and through this enumeration number theory enters and comes to play a central role."

B.W. Ninham and S. Lidin, "Some remarks on quasi-crystal structure", Acta Crystallographica A 48 (1992) 640-650

[abstract:] "The Fourier transform of the skeleton delta function that characterizes the most striking features of experimental quasi-crystal diffraction patterns is evaluated. The result plays a role analogous to the Poisson summation formula for periodic delta functions that underlie classical crystallography. The real-space distribution can be interpreted in terms of a backbone comprising a system of intersecting equiangular spirals into which are inscribed (self-similar) gnomons of isosceles triangles with length-to-base ratio the golden mean... In addition to the vertices of these triangles, there is an infinite number of other points that may tile space in two or three dimensions. Other mathematical formulae of relevance are briefly discussed."

[from concluding remarks:] "Perhaps the most interesting feature is that our Fourier-transform sum seems to have much in common with the distribution of the zeros of the Riemann zeta function...! That indicates something of the depth of the problem. That the zeta function ought to come into the scheme of things somehow is not surprising: the Poisson and related summation formulae are special cases of the Jacobi theta function. [Indeed the Bravais lattices can be enumerated systematically through an integral over all possible products and sums of products of any three of the four theta functions in different combinations that automatically preserve translational and rotational symmetries.]
The theta-function transformations are themselves just another way of writing the [functional equation of the zeta function]. Additionally, the properties of the zeta function are automatically connected to the theory of prime numbers. So one expects that the Rogers-Ramanujan relations must play a central role in the scheme of things for quasi-crystals."

V. Dimitrov, T. Cooklev and B. Donevsky, "Number theoretic transforms over the golden section quadratic field", IEEE Trans. Sig. Proc. 43 (1995) 1790-1797

V. Dimitrov, G. Jullien, and W. Miller, "A residue number system implementation of real orthogonal transforms", IEEE Trans. Sig. Proc. 46 (1998) 563-570

M.L. Lapidus and M. van Frankenhuysen, "A prime orbit theorem for self-similar flows and Diophantine approximation", Contemporary Mathematics volume 290 (AMS 2001) 113-138.

"EXAMPLE 2.23 (The Golden flow). We consider the nonlattice flow GF with weights $w_1 = \log 2$ and $w_2 = \phi \log 2$, where $\phi = (1 + \sqrt{5})/2$ is the golden ratio. We call this flow the golden flow. Its dynamical zeta function is $\zeta_{GF}(s) = 1/(1 - 2^{-s} - 2^{-\phi s})$."

C. Bonanno and M.S. Mega, "Toward a dynamical model for prime numbers", Chaos, Solitons and Fractals 20 (2004) 107-118

[abstract:] "We show one possible dynamical approach to the study of the distribution of prime numbers. Our approach is based on two complexity methods, the Computable Information Content and the Entropy Information Gain, looking for analogies between the prime numbers and intermittency."

The main idea here is that the Manneville map $T_z$ exhibits a phase transition at $z = 2$, at which point the mean Algorithmic Information Content of the associated symbolic dynamics is $n/\log n$; here $n$ is a kind of iteration number. For this to work, the domain $[0,1]$ of $T_z$ must be partitioned as $[0, 0.618\ldots] \cup [0.618\ldots, 1]$, where $0.618\ldots = 1/\phi$ is the reciprocal of the golden mean $\phi = 1.618\ldots$.
The authors attempt to exploit the resemblance to the approximating function in the Prime Number Theorem, and in some sense model the distribution of primes in dynamical terms, i.e. relate the prime number series (as a binary string) to the orbits of the Manneville map $T_2$. Certain refinements of this are then explored.

The Phyllotaxis project's notes on the Farey Tree and the Golden Mean

Selvam's attempts to link the Riemann zeta function to fluid flow, atmospheric turbulence, etc. (the Golden Mean appearing as a winding number)

I have discovered a particularly simple Beurling generalised-prime configuration wherein the associated zeta function has a 'fixed point' at the Golden Ratio (i.e. $\zeta(1.618\ldots) = 1.618\ldots$). Notes will be added here in due course.

J. Dudon, "The golden scale", Pitch I/2 (1987) 1-7.

"The Golden scale is a unique unequal temperament based on the Golden number. The equal temperaments most used, 5, 7, 12, 19, 31, 50, etc., are crystallizations, through the numbers of the Fibonacci series, of the same universal Golden scale, based on a geometry of intervals related in Golden proportion. The author provides the ratios and dimensions of its intervals and explains the specific intonation interest of such a cycle of Golden fifths, unfolding into microtonal coincidences with the first five significant prime-number ratio intervals (3:5:7:11:13)." [Note that here the Fibonacci sequence mentioned differs slightly from, but is closely related to, the usual one.]

[abstract:] "The present paper is a review, a thesis, of some very important contributions of E. Witten, C. Beasley, R. Ricci, B. Basso et al. regarding various applications concerning the Jones polynomials, the Wilson loops and the cusp anomaly and integrability from string theory.
In this work, in Section 1, we have described some equations concerning the knot polynomials, the Chern–Simons from four dimensions, the D3-NS5 system with a theta-angle, the Wick rotation, the comparison to topological field theory, the Wilson loops, the localization and the boundary formula. We have described also some equations concerning electric-magnetic duality to $N = 4$ super Yang-Mills theory, the gravitational coupling and the framing anomaly for knots. Furthermore, we have described some equations concerning the gauge theory description, relation to Morse theory and the action. In Section 2, we have described some equations concerning the applications of non-abelian localization to analyze the Chern–Simons path integral including Wilson loop insertions. In Section 3, we have described some equations concerning the cusp anomaly and integrability from string theory and some equations concerning the cusp anomalous dimension in the transition regime from strong to weak coupling. In Section 4, we have described also some equations concerning the "fractal" behaviour of the partition function. Also here, we have described some mathematical connections between various equations described in the paper and (i) the Ramanujan's modular equations regarding the physical vibrations of the bosonic strings and the superstrings, thence the relationship with the Palumbo-Nardelli model, (ii) the mathematical connections with the Ramanujan's equations concerning $\pi$ and, in conclusion, (iii) the mathematical connections with the golden ratio $\phi$ and with $1.375$ that is the mean real value for the number of partitions $p(n)$."

In their paper "The golden mean as clock cycle of brain waves" (Chaos, Solitons and Fractals 18 No. 4 (2003) 643-652), Harald and Volkmar Weiss acknowledge this website as one of several "...without which our work would be impossible", and in a subsequent email, Volkmar Weiss wrote "Your site was very helpful to us in an extraordinary way."
Although the article has no explicit number-theoretical content, it relates closely to quite a few different areas of research which are relevant to this archive.

[abstract:] "The principle of information coding by the brain seems to be based on the golden mean. For decades psychologists have claimed memory span to be the missing link between psychometric intelligence and cognition. By applying Bose-Einstein statistics to learning experiments, Pascual-Leone obtained a fit between predicted and tested span. Multiplying span by mental speed (bits processed per unit time) and using the entropy formula for bosons, we obtain the same result. If we understand span as the quantum number n of a harmonic oscillator, we obtain this result from the EEG. The metric of brain waves can always be understood as a superposition of n harmonics times $2\Phi$, where half of the fundamental is the golden mean $\Phi$ (= 1.618) as the point of resonance. Such wave packets scaled in powers of the golden mean have to be understood as numbers with directions, where bifurcations occur at the edge of chaos, i.e. $2\Phi = 3 + \phi^3$. Similarities with El Naschie's theory for high-energy particle physics are also discussed."
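Cvitanovic's point above, that the golden mean is the irrational number for which it is hardest to give good rational approximants, can be made concrete: its continued fraction is all 1s, so its convergents are ratios of consecutive Fibonacci numbers, and the scaled error $q^2\,|\phi - p/q|$ only approaches the Hurwitz bound $1/\sqrt{5}$. A small Python sketch (my own illustration, not taken from the works quoted above):

```python
from fractions import Fraction

PHI = (5 ** 0.5 - 1) / 2   # the golden mean (sqrt(5) - 1)/2 from the quote

def fibonacci_convergents(count):
    """Continued-fraction convergents of PHI: the ratios F_k / F_{k+1}."""
    a, b = 1, 1
    for _ in range(count):
        yield Fraction(a, b)
        a, b = b, a + b

for conv in fibonacci_convergents(12):
    q = conv.denominator
    # q^2 * |PHI - p/q| tends to 1/sqrt(5), about 0.447: by Hurwitz's
    # theorem, no irrational number is approximated more poorly than this.
    print(conv, float(q * q * abs(PHI - conv)))
```

The printed values settle near $0.447$, whereas for a typical irrational the same quantity dips far lower infinitely often.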
http://math.stackexchange.com/questions/278485/projection-on-geodesic-lines-in-mathbbhn
Projection on geodesic lines in $\mathbb{H}^n$ Good morning everyone, I was wondering whether or not the projection on a geodesic line in $\mathbb{H}^n$ is $1$-Lipschitz for the hyperbolic distance. I asked myself this question because I ran into the following result: if $a,b\in \mathbb{H}^n$ are at distance $s$ from a given geodesic line $\alpha$, and if we denote by $p$ the (orthogonal, of course) projection on $\alpha$, then the following formula holds: $$d(a,b) \geq \cosh(s)\, d(p(a),p(b))$$ Could we say anything relevant if we don't know how far $a$ and $b$ are from $\alpha$? - Yes, the projection is $1$-Lipschitz. In the half-space model, choose $\alpha$ to be the vertical half-line $\{(0,0,\dots,t):t>0\}$. For each $t$, the set of points that project to the point $P_t=(0,\dots,t)$ is the union of all geodesics leaving $P_t$ orthogonally to $\alpha$. These geodesics lie on the Euclidean sphere $S(t)=\left\{x:\sum_{k=1}^n x_k^2 = t^2\right\}$. Now the problem boils down to showing that for any $0<t<s$ the distance between $S(t)$ and $S(s)$ is attained by the pair of points $P_t$ and $P_s$. Consider any path $\gamma:[0,1]\to \mathbb H^n$ that begins on $S(t)$ and ends on $S(s)$. Write it in coordinates as $\gamma=(\gamma_1,\dots,\gamma_n)$, so that the length of $\gamma$ is $\int \frac{|d\gamma|}{\gamma_n}$. For comparison, consider the path $$\tilde \gamma = \left(0,0,\dots,\sqrt{\sum_{k=1}^n \gamma_k^2}\right)$$ which connects $P_t$ to $P_s$. Clearly, $|\tilde \gamma'|\le |\gamma'|$ (Euclidean norms), and $\tilde \gamma_n\ge \gamma_n$. Hence, $$d(P_t,P_s)\le \int \frac{|d\tilde \gamma|}{\tilde \gamma_n}\le \int \frac{|d\gamma|}{\gamma_n}$$ and therefore $d(P_t,P_s)\le d(a,b)$ whenever $a\in S(t)$ and $b\in S(s)$.
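The $1$-Lipschitz claim is easy to sanity-check numerically in the upper half-plane model, using the standard distance formula $d(x,y) = \operatorname{arccosh}\bigl(1 + |x-y|^2/(2 x_n y_n)\bigr)$ and the projection rule from the answer (a point on $S(t)$ projects to $P_t$). A quick sketch; the sample ranges are arbitrary:

```python
import math
import random

def dist(x, y):
    """Hyperbolic distance in the upper half-space model (last coordinate > 0):
    d(x, y) = arccosh(1 + |x - y|^2 / (2 x_n y_n))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.acosh(1.0 + sq / (2.0 * x[-1] * y[-1]))

def project(x):
    """Projection onto the vertical geodesic alpha = {(0, ..., 0, t) : t > 0}.
    As in the answer, every point of the sphere S(t) projects to P_t,
    so a point projects to P_t with t equal to its Euclidean norm."""
    t = math.sqrt(sum(a * a for a in x))
    return (0.0,) * (len(x) - 1) + (t,)

random.seed(0)
for _ in range(10_000):  # random pairs of points in H^2
    a = (random.uniform(-5, 5), random.uniform(0.1, 5.0))
    b = (random.uniform(-5, 5), random.uniform(0.1, 5.0))
    assert dist(project(a), project(b)) <= dist(a, b) + 1e-9
print("d(p(a), p(b)) <= d(a, b) held for all sampled pairs")
```

The small tolerance only absorbs floating-point noise; the inequality itself is strict away from the axis, as the $\cosh(s)$ factor predicts.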
http://physics.stackexchange.com/questions/64012/difference-between-electromagnetic-radiation-emr-and-electromagnetic-field
# Difference between electromagnetic radiation (EMR) and Electromagnetic Field? I'm a freshly graduated electrical engineer. One course that I really struggled with was Field Theory, because it was a lovely assortment of vector calculus and things that were explained to me well above my level. As a result, I can design a really awesome circuit board, but I don't really understand the fundamental rules of the universe that really let me do that. So I understand the electromagnetic spectrum -- electromagnetic radiation (EMR) is mediated by photons with energy $E = h\nu$, and so on. These photons are absorbed and emitted by atomic reactions. What I was told, or at least the impression I am under, is that both electric and magnetic fields are also mediated by photons. This is where I get lost; I can't visualize how it works. When I hold up two magnets together, are they exchanging photons? When I close a switch, and current flows due to the existence of an electric field, where do these photons come in? I am perfectly fine understanding that electrons flow under the potential gradient, I just don't see where photons come in. I took the physics classes, which describe what goes on at the lowest level, and electrical classes that described things at a much more abstract level, but neither side really put things together, so I've got this frustrating knowledge gap. As a corollary question: If $E = h\nu$ for photons, then how can antennas that emit a constant frequency have varying power? You see warnings not to get too close to some high-power antennas because you'll get roasted. Is it because of the quantity of photons being emitted? - Yes to both yes/no questions, for one. –  Emilio Pisanty May 10 '13 at 1:42 Hi @Bamako, and welcome to Physics Stackexchange! Your last paragraph is sufficiently different that I'd suggest breaking that off into a new question.
(Your hunch is right by the way, but a full-fledged answer would probably prove more insightful than this lowly comment.) –  Chris White May 10 '13 at 1:42 There is a great book of popular lectures, Feynman: QED, The Strange Theory of Light and Matter. –  firtree May 10 '13 at 7:00 Possible duplicate: physics.stackexchange.com/q/3580/2451 –  Qmechanic May 10 '13 at 9:49 So I understand the electromagnetic spectrum -- electromagnetic radiation is mediated by photons Briefly, electromagnetic radiation is due to real (observable) photons; electric and magnetic forces are due to virtual photon exchange. The macroscopic electromagnetic wave phenomena we observe are due to an almost unimaginable number of photons, electromagnetic "quanta", coherently adding together. This is where I get lost; I can't visualize how it works. This topic is not something that one "groks" overnight or, if you're like me (an EE), over some period of years. It's a continuous process. Just today, while driving to Lowe's, something I had been thinking a long time about in quantum field theory suddenly became very clear. The fact is, no matter how many classes you take or books you read, much of the material must, like a great chili, "stew" for a while before it's ready. - $h\nu$ is the energy per photon; the power radiated is essentially this times the number of photons emitted per second, so a constant-frequency antenna with variable power is emitting a variable number of photons per unit time. -
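The last answer's point — constant frequency, varying photon count — comes down to a one-line calculation, $N = P/(h\nu)$. A sketch; the 50 kW / 100 MHz transmitter figures here are made up for illustration:

```python
h = 6.62607015e-34  # Planck constant, J*s (exact, 2019 SI definition)

def photons_per_second(power_watts, freq_hz):
    """Constant-frequency emission: power = (photons/s) * h * nu,
    so the photon emission rate is power / (h * nu)."""
    return power_watts / (h * freq_hz)

# Hypothetical 50 kW broadcast antenna at 100 MHz:
n = photons_per_second(50e3, 100e6)
print(f"{n:.2e} photons per second")  # roughly 7.5e29
```

The enormous rate is why the classical field picture works so well at radio frequencies: individual quanta are far too numerous (and too low-energy) to notice.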
https://helptest.net/english/39534/
What is a descriptive word, phrase, or clause that is not essential to the meaning of a sentence? It is called a nonrestrictive element, or nonessential element. I hope this helps!
https://www.ias.ac.in/listing/bibliography/pram/R._K._Jain
• R K Jain Articles written in Pramana – Journal of Physics • WHEPP-X: Report of the working group on cosmology This is a summary of the activities of the working group on cosmology at WHEPP-X. The three main problems that were discussed at some length by the group during the course of the workshop were (i) canceling a `large' cosmological constant, (ii) non-Gaussianities in inflationary models and (iii) stability of interacting models of dark energy and dark matter. We have briefly outlined these problems and have indicated the progress made. • Development of underwater laser cutting technique for steel and zircaloy for nuclear applications In the nuclear field, underwater cutting and welding techniques are required for post-irradiation examination, maintenance, decommissioning and to reduce the storage space of irradiated materials such as used zircaloy pressure tubes of nuclear power plants. We have developed an underwater cutting technique for 4.2 mm thick zircaloy pressure tubes and up to 6 mm thick steel using a fibre-coupled 250 W average power pulsed Nd:YAG laser. This underwater cutting technique will be highly useful in various nuclear applications as well as in the dismantling/repair of ships and pipelines in water. • Measurement of peak fluence of neutron beams using Bi-fission detectors Fission fragments and other charged particles leave tracks of permanent damage in most insulating solids. Damage track detectors are useful for personal dosimeters and for flux/dose determination of high-energy particles from accelerators or cosmic rays. A detector that has its principal response at nucleon energies above 50 MeV is provided by the fission of Bi-209. Neutrons produce the largest percentage of hadron dose in most high-energy radiation fields. In these fields, the neutron spectrum is typically formed by low-energy neutrons (evaporation spectrum) and high-energy neutrons (knock-on spectrum).
We used Bi-fission detectors to measure the neutron peak fluence and compared the result with the calculated value. For the exposure at 100 MeV we used the iThemba Facility in South Africa. • Efficient delivery of 60 J pulse energy of long pulse Nd:YAG laser through 200 𝜇m core diameter optical fibre Most of today’s industrial Nd:YAG lasers use fibre-optic beam delivery. In such lasers, fibre core diameter is an important consideration in deploying a beam delivery system. Using a smaller core diameter fibre allows higher irradiances at the focus position, less degradation of beam quality, and a larger stand-off distance. In this work, we have made an effort to efficiently deliver the output of a ‘ceramic reflector’-based long pulse Nd:YAG laser through a 200 𝜇m core diameter optical fibre and successfully delivered up to 60 J of pulse energy with 90% transmission efficiency, using a GRADIUM (axial gradient) plano-convex lens to sharply focus the beam down onto the end face of the optical fibre; the fibre end faces have been cleaved to achieve higher surface damage thresholds. • Characteristics of disintegration of different emulsion nuclei by relativistic 28Si nuclei at 3.7 A GeV An analysis of the data based on 924 inelastic interaction events induced by 28Si nuclei in a nuclear emulsion is presented. The nuclear fragmentation process is studied by analysing the total charge (𝑄) distribution of the projectile spectators for different emulsion target groups, along with a comparison with Monte Carlo Glauber model results. Probability distributions for totally disintegrated events as a function of different projectile masses are shown and compared with cascade evaporation model results at the same energy per nucleon. Further, mean multiplicities of different charged secondaries for different classes of events are presented and, for each event, the variation of mean multiplicities as a function of total charge (𝑄) is also presented.
The pseudorapidity distributions and normalized pseudorapidity distributions of the produced charged particles in nucleus–nucleus collisions at 3.7 A GeV are analysed for total disintegration (TD) as well as minimum-bias events.
https://jmlr.org/papers/v24/21-0505.html
## Learning Mean-Field Games with Discounted and Average Costs Berkay Anahtarci, Can Deha Kariksiz, Naci Saldi; 24(17):1−59, 2023. ### Abstract We consider learning approximate Nash equilibria for discrete-time mean-field games with stochastic nonlinear state dynamics subject to both average and discounted costs. To this end, we introduce a mean-field equilibrium (MFE) operator, whose fixed point is a mean-field equilibrium, i.e., equilibrium in the infinite population limit. We first prove that this operator is a contraction, and propose a learning algorithm to compute an approximate mean-field equilibrium by approximating the MFE operator with a random one. Moreover, using the contraction property of the MFE operator, we establish the error analysis of the proposed learning algorithm. We then show that the learned mean-field equilibrium constitutes an approximate Nash equilibrium for finite-agent games.
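The contraction property driving the abstract's error analysis is the standard Banach fixed-point mechanism. A toy sketch of that iteration; the scalar map below is an arbitrary stand-in for illustration, not the paper's MFE operator (which acts on policy/mean-field pairs):

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Banach iteration: for a contraction T with modulus q < 1,
    x_{k+1} = T(x_k) converges geometrically to the unique fixed point,
    with error after k steps bounded by q^k / (1 - q) * |x1 - x0|."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# Toy contraction on R: T(x) = 0.5*cos(x) has |T'| <= 1/2
x_star = fixed_point(lambda x: 0.5 * math.cos(x), 1.0)
assert abs(x_star - 0.5 * math.cos(x_star)) < 1e-10
print(f"fixed point: {x_star:.6f}")
```

Replacing `T` by a noisy estimate of itself — as the paper does with its random MFE operator — perturbs each step, and the contraction modulus is exactly what keeps those per-step errors from accumulating.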
https://txcorp.com/images/docs/vsim/latest/VSimReferenceManual/analyzers_reference.html
# Introduction to Analyzers Analyzers are executables provided with VSim for post processing simulation-produced data. They can produce one or a few numbers, such as mode frequencies, or they can produce large data files, like a density field from a particle file. In the latter case, the data is written into VizSchema compliant HDF5 files, which can then be visualized in the Visualization pane. # Using an Analyzer The analyzer executables are located in the same directory as the Vorpal executable. They may be used either within VSimComposer’s analysis tab Python environment or invoked on the command line. For command-line usage, one must have correctly set the environment as described in Setting Up Vorpal Command Line Environment of the User Guide.
http://www.aimsciences.org/search/author?author=Edoardo%20Mainini
# American Institute of Mathematical Sciences ## Journals NHM We prove that the signed porous medium equation can be regarded as a limit of an optimal transport variational scheme, therefore extending the classical result for positive solutions of [13] and showing that an optimal transport approach is suited even for treating signed densities. keywords: signed transport, porous media equation, changing-sign solutions, gradient flow, optimal transport DCDS-S We investigate carbon nanotubes from the perspective of geometry optimization. Nanotube geometries are assumed to correspond to atomic configurations which locally minimize Tersoff-type interaction energies. In the specific cases of so-called zigzag and armchair topologies, candidate optimal configurations are analytically identified and their local minimality is numerically checked. In particular, these optimal configurations correspond neither to the classical Rolled-up model [5] nor to the more recent polyhedral model [3]. Eventually, the elastic response of the structure under uniaxial testing is numerically investigated and the validity of the Cauchy-Born rule is confirmed. keywords: carbon nanotubes, Tersoff energy, variational perspective, new geometrical model, stability, Cauchy-Born rule DCDS We prove uniqueness in the class of integrable and bounded nonnegative solutions in the energy sense to the Keller-Segel (KS) chemotaxis system. Our proof works for the fully parabolic KS model, it includes the classical parabolic-elliptic KS equation as a particular case, and it can be generalized to nonlinear diffusions in the particle density equation as long as the diffusion satisfies the classical McCann displacement convexity condition. The strategy uses Quasi-Lipschitz estimates for the chemoattractant equation and the above-the-tangent characterizations of displacement convexity. As a consequence, the displacement convexity of the free energy functional associated to the KS system is obtained from its evolution for bounded integrable initial data. keywords: chemotaxis, displacement convexity, Wasserstein distance, gradient flows, Keller-Segel model
http://mathhelpforum.com/math-challenge-problems/81198-missing-intercept-print.html
# The Missing Intercept • March 29th 2009, 03:26 AM Soroban The Missing Intercept The Missing Intercept I posted this puzzle some time ago, but no one provided a satisfactory answer. Given: . $\begin{Bmatrix}x \:=\:\dfrac{1-t^2}{1+t^2} \\ \\[-3mm] y \:=\:\dfrac{2t}{1+t^2} \end{Bmatrix}$ We have the parametric equations of a unit circle. Verification Square: . $\begin{Bmatrix}x^2 \:=\:\dfrac{(1-t^2)^2}{(1+t^2)^2} & [1]\\ \\[-3mm] y^2 \:=\:\dfrac{(2t)^2}{(1+t^2)^2} & [2]\end{Bmatrix}$ Add [1] and [2]: . $x^2+y^2\:=\:\frac{1 - 2t^2 + t^4}{(1+t^2)^2} + \frac{4t^2}{(1+t^2)^2}$ $= \;\frac{1 + 2t^2+t^4}{(1+t^2)^2} \:=\:\frac{(1+t^2)^2}{(1+t^2)^2} \;=\;1$ Hence, a unit circle: . $x^2+y^2\:=\:1$ To find the $y$-intercepts, let $x = 0.$ $x = 0\!:\;\;\frac{1-t^2}{1+t^2}\:=\:0 \quad\Rightarrow\quad t \:=\:\pm1$ Hence, the $y$-intercepts are: . $(0,1),\;(0,\text{-}1)$ To find the $x$-intercepts, let $y = 0.$ $y = 0\!:\;\;\frac{2t}{1+t^2} \:=\:0 \quad\Rightarrow\quad t \:=\:0$ Hence, the $x$-intercept is: . $(1,0)$ ? Where is the other $x$-intercept? • March 29th 2009, 04:16 AM running-gag Hi I don't know which answers have been given previously, so I am trying (Happy) Let E be the set defined by $\begin{Bmatrix}x \:=\:\dfrac{1-t^2}{1+t^2} \\ \\[-3mm] y \:=\:\dfrac{2t}{1+t^2} \end{Bmatrix}$ for t real. By showing that $x^2+y^2=1$ you are proving that E is included in the unit circle, but not that it is equal to the unit circle. And it is not equal, since the point (-1,0) is not in E (there is no value of t such that x=-1 and y=0). Only if you are allowed to consider infinite values of t can you find this point. • March 29th 2009, 06:15 AM CaptainBlack Quote: Originally Posted by Soroban The Missing Intercept I posted this puzzle some time ago, but no one provided a satisfactory answer. Given: . $\begin{Bmatrix}x \:=\:\dfrac{1-t^2}{1+t^2} \\ \\[-3mm] y \:=\:\dfrac{2t}{1+t^2} \end{Bmatrix}$ We have the parametric equations of a unit circle. Verification Square: .
$\begin{Bmatrix}x^2 \:=\:\dfrac{(1-t^2)^2}{(1+t^2)^2} & [1]\\ \\[-3mm] y^2 \:=\:\dfrac{(2t)^2}{(1+t^2)^2} & [2]\end{Bmatrix}$ Add [1] and [2]: . $x^2+y^2\:=\:\frac{1 - 2t^2 + t^4}{(1+t^2)^2} + \frac{4t^2}{(1+t^2)^2}$ $= \;\frac{1 + 2t^2+t^4}{(1+t^2)^2} \:=\:\frac{(1+t^2)^2}{(1+t^2)^2} \;=\;1$ Hence, a unit circle: . $x^2+y^2\:=\:1$ To find the $y$-intercepts, let $x = 0.$ $x = 0\!:\;\;\frac{1-t^2}{1+t^2}\:=\:0 \quad\Rightarrow\quad t \:=\:\pm1$ Hence, the $y$-intercepts are: . $(0,1),\;(0,\text{-}1)$ To find the $x$-intercepts, let $y = 0.$ $y = 0\!:\;\;\frac{2t}{1+t^2} \:=\:0 \quad\Rightarrow\quad t \:=\:0$ Hence, the $x$-intercept is: . $(1,0)$ ? Where is the other $x$-intercept? Not much of a puzzle: it's at $t=\pm\infty$ (or rather, it's the limit point as $t \to \infty$), and there the point is $(-1,0)$. So the curve never reaches the second intercept, and the curve is the unit circle with a hole at $(-1,0).$ CB • March 29th 2009, 11:19 AM Soroban Thank you, running-gag and The Cap'n! Those are the answers I was hoping for.
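The resolution above is easy to see numerically: the rational (tangent half-angle) parametrization sweeps out every point of the circle except $(-1,0)$, which it only approaches as $t \to \infty$. A quick check:

```python
def point(t):
    """Rational parametrization of the unit circle:
    x = (1 - t^2)/(1 + t^2), y = 2t/(1 + t^2)."""
    d = 1 + t * t
    return ((1 - t * t) / d, 2 * t / d)

# As t grows, the point creeps toward (-1, 0) along the circle...
for t in [1.0, 10.0, 100.0, 1e6]:
    x, y = point(t)
    print(f"t = {t:>9}: ({x:+.8f}, {y:+.8f})")

# ...but never arrives: x > -1 for every finite t
assert all(point(t)[0] > -1 for t in [1.0, 1e3, 1e6])
```

(Beyond $t \approx 10^8$ the printed coordinates round to $(-1, 0)$ in double precision, but that is floating-point roundoff, not the parametrization reaching the hole.)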
http://mathhelpforum.com/calculus/213128-why-implicit-differentiation-print.html
# Why implicit differentiation? • February 14th 2013, 06:26 PM Paze Why implicit differentiation? Hi MHF. I have a short question with hopefully a simple answer. Why do we use implicit differentiation? What is the difference between a problem where we don't need implicit differentiation and a problem where we do need it? It seems to me that you can solve every problem explicitly, but maybe I just haven't gone far enough, or perhaps later the problems become unbearable when computed explicitly? Thanks! • February 14th 2013, 07:02 PM jakncoke Re: Why implicit differentiation? You cannot solve every problem explicitly. For example, compute $\frac{dy}{dx}$ (at points where it is defined and the criteria for the implicit function theorem hold) of $y^2 + y = x$; you cannot isolate this in the form y = f(x). • February 14th 2013, 07:55 PM Paze Re: Why implicit differentiation? Quote: Originally Posted by jakncoke You cannot solve every problem explicitly. For example, compute $\frac{dy}{dx}$ (at points where it is defined and the criteria for the implicit function theorem hold) of $y^2 + y = x$; you cannot isolate this in the form y = f(x). Hmm. I see. So in your problem we have $y'=\frac {1}{1+2y}$. So the slope at any point on the function is $\frac{1}{1+2y}$, correct? Because I know the derivative of your function. But if you ask me to differentiate it at $x=3$, can I just plug in $y^2+y=3$ and differentiate to find the slope at x=3? Thanks! • February 14th 2013, 08:05 PM Prove It Re: Why implicit differentiation? Quote: Originally Posted by jakncoke You cannot solve every problem explicitly. For example, compute $\frac{dy}{dx}$ (at points where it is defined and the criteria for the implicit function theorem hold) of $y^2 + y = x$; you cannot isolate this in the form y = f(x). Actually, in this case you can.
\displaystyle \begin{align*} y^2 + y &= x \\ y^2 + y + \left( \frac{1}{2} \right)^2 &= x + \left( \frac{1}{2} \right)^2 \\ \left( y + \frac{1}{2} \right)^2 &= x + \frac{1}{4} \\ \left( y + \frac{1}{2} \right)^2 &= \frac{4x + 1}{4} \\ y + \frac{1}{2} &= \pm \frac{\sqrt{ 4x + 1 }}{2} \\ y &= \frac{ -1 \pm \sqrt{ 4x + 1 } }{2} \end{align*} But this demonstrates the point that while it may be possible to get an explicit form of y = f(x), it would be much more DIFFICULT to try to differentiate this explicit form, than it would be to differentiate the simple polynomial that it started with. Another good example is to find the derivative of the circle \displaystyle \begin{align*} x^2 + y^2 &= 1 \end{align*}. To do this explicitly, we would need to write \displaystyle \begin{align*} y = \pm \sqrt{ 1 - x^2 } \end{align*}, which is a much more difficult expression to differentiate than the simple polynomials that we have started with. • February 14th 2013, 08:24 PM jakncoke Re: Why implicit differentiation? Prove It showed that my example was flawed, but in doing so he also illustrated why sometimes it's better to use implicit differentiation even if you can write the equation explicitly. Also, in order to quell your doubts as to whether any equations exist which cannot be made explicit y = f(x), the example given on Wikipedia is $y^5 - y = x$ (I'm a bit tired to verify if it is correct so take it at face value or verify it thy self) • February 14th 2013, 08:53 PM Paze Re: Why implicit differentiation? Quote: Originally Posted by jakncoke Prove It showed that my example was flawed, but in doing so he also illustrated why sometimes it's better to use implicit differentiation even if you can write the equation explicitly.
Also, in order to quell your doubts as to whether any equations exist which cannot be made explicit y = f(x), the example given on Wikipedia is $y^5 - y = x$ (I'm a bit tired to verify if it is correct so take it at face value or verify it thy self) Yea, I noticed that too when I looked deeper. Can you answer my question about simply plugging in x=3 and solving... Would that give me the slope at x=3? • February 14th 2013, 09:23 PM hollywood Re: Why implicit differentiation? Quote: Originally Posted by Paze Can you answer my question about simply plugging in x=3 and solving... Would that give me the slope at x=3? No, when you set x to 3, you have an equation with y only, so you no longer have a function to take the derivative of. - Hollywood • February 14th 2013, 10:31 PM Paze Re: Why implicit differentiation? Quote: Originally Posted by hollywood No, when you set x to 3, you have an equation with y only, so you no longer have a function to take the derivative of. - Hollywood Ow... How would I go about finding the slope at x=3 then (or any valid x value)? Do I need to know the y value... That sounds counter-productive since I suppose I need an x value to get a y value. I'm confused (Doh) • February 15th 2013, 06:38 AM hollywood Re: Why implicit differentiation? You would need to find the derivative function first, then assign x=3. And if you do implicit differentiation and end up with the derivative as a function of x and y, you'll need to figure out y and plug that in, too. - Hollywood
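hollywood's recipe — differentiate implicitly first, then plug in the point — can be checked against the explicit branch Prove It derived. A sketch using the upper branch $y = \frac{-1+\sqrt{4x+1}}{2}$ of $y^2 + y = x$:

```python
import math

# Explicit upper branch of y^2 + y = x (from the completed square)
def y(x):
    return (-1.0 + math.sqrt(4.0 * x + 1.0)) / 2.0

# Slope from implicit differentiation: dy/dx = 1 / (1 + 2y)
def slope_implicit(x):
    return 1.0 / (1.0 + 2.0 * y(x))

# Slope from differentiating the explicit form: dy/dx = 1 / sqrt(4x + 1)
def slope_explicit(x):
    return 1.0 / math.sqrt(4.0 * x + 1.0)

for x in [0.0, 1.0, 3.0, 10.0]:
    assert math.isclose(slope_implicit(x), slope_explicit(x), rel_tol=1e-12)
print("implicit and explicit derivatives agree on the upper branch")
```

This also answers the x=3 question concretely: first find y(3) = 1.302..., then the slope is 1/(1 + 2*y(3)) = 1/sqrt(13) — the y value has to come before the slope, exactly as hollywood says.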
#### A Hybrid Sampling Scheme for Triangle Counting ##### John Kallaugher, Eric Price We study the problem of estimating the number of triangles in a graph stream. No streaming algorithm can get sublinear space on all graphs, so methods in this area bound the space in terms of parameters of the input graph such as the maximum number of triangles sharing a single edge. We give a sampling algorithm that is additionally parameterized by the maximum number of triangles sharing a single vertex. Our bound matches the best known turnstile results in all graphs, and gets better performance on simple graphs like $G(n, p)$ or a set of independent triangles. We complement the upper bound with a lower bound showing that no sampling algorithm can do better on those graphs by more than a log factor. In particular, any insertion stream algorithm must use $\sqrt{T}$ space when all the triangles share a common vertex, and any sampling algorithm must take $T^\frac{1}{3}$ samples when all the triangles are independent. We add another lower bound, also matching our algorithm's performance, which applies to all graph classes. This lower bound covers "triangle-dependent" sampling algorithms, a subclass that includes our algorithm and all previous sampling algorithms for the problem. Finally, we show how to generalize our algorithm to count arbitrary subgraphs of constant size.
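The hybrid scheme itself is not reproduced here, but the problem setup is easy to sketch. Below is a stdlib-only Python toy showing an exact triangle count and the naive uniform edge-sampling estimator that streaming algorithms of this kind improve upon (function names are my own; this is not the Kallaugher-Price algorithm):

```python
import itertools
import random

def triangles_exact(edges):
    """Count triangles: for each edge (u, v), count common neighbours;
    every triangle is then counted once per edge, i.e. three times."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(len(adj[u] & adj[v]) for u, v in edges) // 3

def triangles_sampled(edges, p, seed=0):
    """Naive estimator: keep each edge independently with probability p,
    count surviving triangles, rescale by 1/p**3 (a triangle survives
    with probability p**3). Unbiased, but high variance on hard graphs."""
    rng = random.Random(seed)
    kept = [e for e in edges if rng.random() < p]
    return triangles_exact(kept) / p ** 3

# K4 (complete graph on 4 vertices) contains exactly 4 triangles
k4 = list(itertools.combinations(range(4), 2))
print(triangles_exact(k4))  # 4
```

The lower bounds in the abstract quantify exactly when such naive sampling breaks down, e.g. when all triangles share one vertex or are fully independent.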
# Is this variant of ATM decidable? Ok so I understand how $\mathrm{ATM} = \{\langle M,w \rangle \mid \text{$M$ is a TM and $M$ accepts $w$}\}$ is undecidable. Is this because $w$ is a variable? What if the parameter is fixed? Consider $\mathrm{BTM} = \{\langle M,w \rangle \mid \text{$M$ is a TM and $M$ accepts the string 101}\}$. BTM is decidable, right? The diagonalization problem here doesn't seem to apply, because it would seem trivial to build a Turing machine that is 100% capable of accepting only the input "101" and rejecting every other possible input, correct? And our machine would always reject itself as an input, since it only accepts "101", right? • Note our reference questions. – Raphael Sep 29 '14 at 7:47 • Just quote Rice's theorem! – Ryan Sep 29 '14 at 13:54 • What's the use of $w$ in BTM? – xskxzr May 25 '18 at 12:08 ## 2 Answers You can show that $\mathrm{BTM}$ is undecidable with a simple reduction from $\mathrm{ATM}$. Let $\langle M,w\rangle \in ATM$. We define the reduction function $f$ witnessing $\mathrm{ATM} \leq \mathrm{BTM}$ as follows: $f(\langle M,w\rangle) = M_w$, where $M_w$, on every input $x$, runs $M$ with input $w$; if $M$ accepts, then $M_w$ accepts $x$; if $M$ rejects, then $M_w$ rejects. Clearly, if $\langle M,w\rangle \in ATM$ then $L(M_w) = \Sigma^*$, and in particular $101 \in L(M_w)$. If $\langle M,w\rangle \notin ATM$ then $L(M_w) = \emptyset$, and $101 \notin L(M_w)$. Because $\mathrm{ATM}$ is undecidable, so is $\mathrm{BTM}$. It should be intuitive that $\mathrm{BTM}$ is undecidable, because given a Turing machine $M$, you can't tell whether $M$ will accept the input 101. • Hmm, thank you. I am getting the concept of the behaviour of one Turing machine versus a language describing how a set of Turing machines behave - still a bit confused. – Daniel Baughman Sep 29 '14 at 6:44 BTM is also undecidable, with a similar diagonalization proof. Suppose the Turing machine $M$ decided BTM.
Define a Turing machine $T$ that, on input $x$ (an encoding of a Turing machine), computes the encoding $y_x$ of a Turing machine which, on every input, runs the Turing machine encoded by $x$ on input $x$ and accepts if that computation halts; if $M(y_x)=1$ then $T$ gets into an infinite loop, and otherwise it halts. Then $T$ halts on input $\#T$ (where $\#T$ is the encoding of $T$) iff $M(y_{\# T})=0$ iff $y_{\# T}$ doesn't accept 101 iff $T$ doesn't halt on input $\#T$, a contradiction.
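The first answer's reduction can be sketched as a toy Python program, with ordinary functions standing in for Turing-machine encodings (an assumption of this model: a "machine" here is just a callable from strings to booleans, and a machine that loops on w would make the call never return, which is exactly why no decider for BTM can exist):

```python
def reduce_atm_to_btm(M, w):
    """The map f(<M, w>) = M_w from the answer above: M_w ignores its
    own input and simply simulates M on w, so M_w accepts every string
    (in particular "101") iff M accepts w, and loops if M loops on w."""
    def M_w(x):
        return M(w)
    return M_w

# Sanity check with machines that do halt:
accepts_evens = lambda s: len(s) % 2 == 0
M_yes = reduce_atm_to_btm(accepts_evens, "0101")  # M accepts w
M_no = reduce_atm_to_btm(accepts_evens, "101")    # M rejects w
print(M_yes("101"), M_no("101"))  # True False
```

A hypothetical decider B for BTM would make B(reduce_atm_to_btm(M, w)) a decider for ATM, which is the contradiction the answer relies on.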
# AP Statistics Curriculum 2007 NonParam ANOVA

## General Advance-Placement (AP) Statistics Curriculum - Means of Several Independent Samples

In this section we extend the multi-sample inference which we discussed in the ANOVA section to the situation where the ANOVA assumptions are invalid. Hence we use a non-parametric analysis to study differences in centrality between two or more populations.

### Motivational Example

Suppose four groups of students are randomly assigned to be taught with four different techniques, and their achievement test scores are recorded. Are the distributions of test scores the same, or do they differ in location? The data is presented in the table below.
|       | Method 1 | Method 2 | Method 3 | Method 4 |
|-------|----------|----------|----------|----------|
| Index | 65       | 75       | 59       | 94       |
|       | 87       | 69       | 78       | 89       |
|       | 73       | 83       | 67       | 80       |
|       | 79       | 81       | 62       | 88       |

The small sample sizes and the lack of information about the distribution of each sample suggest that ANOVA may not be appropriate for analyzing these data.

## The Kruskal-Wallis Test

Kruskal-Wallis One-Way Analysis of Variance by ranks is a non-parametric method for testing equality of two or more population medians. Intuitively, it is identical to a One-Way Analysis of Variance with the raw data (observed measurements) replaced by their ranks.

Since it is a non-parametric method, the Kruskal-Wallis Test does not assume a normal population, unlike the analogous one-way ANOVA. However, the test does assume identically-shaped distributions for all groups, except for any difference in their centers (e.g., medians).

## Calculations

Let $N$ be the total number of observations; then $N = \sum_{i=1}^k {n_i}$.

Let $R(X_{ij})$ denote the rank assigned to $X_{ij}$ and let $R_i$ be the sum of ranks assigned to the $i^{th}$ sample:

$R_i = \sum_{j=1}^{n_i} {R(X_{ij})}, \quad i = 1, 2, \ldots, k$.

The SOCR program computes $R_i$ for each sample. The test statistic is defined for the following formulation of hypotheses:

$H_o$: All of the $k$ population distribution functions are identical.

$H_1$: At least one of the populations tends to yield larger observations than at least one of the other populations.

Suppose $\{X_{i,1}, X_{i,2}, \cdots, X_{i,n_i}\}$ represents the values of the $i^{th}$ sample, where $1\leq i\leq k$.

Test statistic:

$T = \frac{1}{S^2} \left( \sum_{i=1}^{k} \frac{R_i^2}{n_i} - \frac{N(N+1)^2}{4} \right)$,

where

$S^2 = \frac{1}{N-1} \left( \sum_{i,j} R(X_{ij})^2 - \frac{N(N+1)^2}{4} \right)$.

* Note: If there are no ties, the test statistic reduces to:

$T = \frac{12}{N(N+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} - 3(N+1)$.
However, the SOCR implementation allows for the possibility of ties, so it uses the non-simplified, exact method of computation.

Multiple comparisons have to be done here. For each pair of groups, the following is computed and printed in the Result Panel:

$\left| \frac{R_i}{n_i} - \frac{R_j}{n_j} \right| > t_{1-\alpha/2} \left( \frac{S^2 (N-1-T)}{N-k} \right)^{1/2} \left( \frac{1}{n_i} + \frac{1}{n_j} \right)^{1/2}$.

The SOCR computation employs the exact method instead of the approximate one (Conover 1980), since the computation is easy and fast to implement and the exact method is somewhat more accurate.

### The Kruskal-Wallis Test Using SOCR Analyses

It is much quicker to use SOCR Analyses to compute the statistical significance of this test. This SOCR KruskalWallis Test Activity may also be helpful in understanding how to use this test in SOCR. For the teaching-methods example above, we can easily compute the statistical significance of the differences between the group medians (centers). After the multiple-testing correction, two group differences are significant, Method1 vs. Method4 and Method3 vs. Method4 (see below):

Group Method1 vs. Group Method2: 1.0 < 5.2056
Group Method1 vs. Group Method3: 4.0 < 5.2056
Group Method1 vs. Group Method4: 6.0 > 5.2056
Group Method2 vs. Group Method3: 5.0 < 5.2056
Group Method2 vs. Group Method4: 5.0 < 5.2056
Group Method3 vs. Group Method4: 10.0 > 5.2056

## Notes

* The Friedman Fr Test is the rank equivalent of the randomized block design alternative to the Two-Way Analysis of Variance F Test. The SOCR Friedman Test Activity demonstrates how to use SOCR Analyses to compute the Friedman Test statistic and p-value.

## References

Conover W (1980). Practical Nonparametric Statistics. John Wiley & Sons, New York, second edition.
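The statistic defined in the Calculations section can be checked with a short stdlib-only Python sketch (`kruskal_wallis_T` is a helper name introduced here, not part of SOCR). On the teaching-methods data it gives a statistic of about 8.96, which exceeds the 5% chi-square critical value of 7.815 with k - 1 = 3 degrees of freedom, consistent with the significant differences reported above:

```python
def kruskal_wallis_T(samples):
    """Tie-tolerant Kruskal-Wallis statistic:
    T  = (1/S^2) * (sum_i R_i^2 / n_i - N(N+1)^2 / 4), with
    S^2 = (1/(N-1)) * (sum_ij R(X_ij)^2 - N(N+1)^2 / 4),
    using average ranks for tied observations."""
    pooled = sorted(x for s in samples for x in s)
    N = len(pooled)
    # average rank of each distinct value (ties share their mean rank)
    rank = {v: sum(i + 1 for i, x in enumerate(pooled) if x == v) / pooled.count(v)
            for v in set(pooled)}
    c = N * (N + 1) ** 2 / 4                        # the N(N+1)^2/4 term
    S2 = (sum(rank[x] ** 2 for s in samples for x in s) - c) / (N - 1)
    R = [sum(rank[x] for x in s) for s in samples]  # rank sums R_i
    return (sum(r * r / len(s) for r, s in zip(R, samples)) - c) / S2

# teaching-methods data from the table above
methods = [[65, 87, 73, 79], [75, 69, 83, 81], [59, 78, 67, 62], [94, 89, 80, 88]]
T = kruskal_wallis_T(methods)
print(round(T, 4))  # 8.9559
```

Since this data set has no ties, the simplified no-tie formula yields exactly the same value.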
# American Institute of Mathematical Sciences September 2014, 19(7): 1987-2011. doi: 10.3934/dcdsb.2014.19.1987 ## Intrinsic decay rate estimates for the wave equation with competing viscoelastic and frictional dissipative effects 1 Department of Mathematics, State University of Maringá, 87020-900, Maringá, PR, Brazil, Brazil 2 University of Memphis, Department of Mathematical Sciences, 373 Dunn Hall, Memphis, TN 38152 3 Department of Mathematics, State University of Ceará- FAFIDAM, 62930-000 Limoeiro do Norte - CE Received April 2013 Revised September 2013 Published August 2014 The wave equation defined on a compact Riemannian manifold $(M, \mathfrak{g})$, subject to a combination of locally distributed viscoelastic and frictional dissipations, is discussed. The viscoelastic dissipation is active on the support of $a(x)$ while the frictional damping affects the portion of the manifold quantified by the support of $b(x)$, where both $a(x)$ and $b(x)$ are smooth functions. Assuming that $a(x) + b(x) \geq \delta >0$ for all $x\in M$ and that the relaxation function satisfies a certain nonlinear differential inequality, it is shown that the solutions decay according to the law dictated by the decay rates corresponding to the slowest damping. In the special case when the viscoelastic effect is active on the entire domain and the frictional dissipation is differentiable at the origin, the overall decay rates are dictated by the viscoelasticity. The obtained decay estimates are intrinsic, without any prior quantification of the decay rates of the viscoelastic and frictional dissipative effects. This particular topic has been motivated by the influential paper of Fabrizio and Polidoro [15], where it was shown that viscoelasticity with a poorly behaving relaxation kernel destroys exponential decay rates generated by linear frictional dissipation.
In this paper we extend these considerations to: (i) nonlinear dissipation with unquantified growth at the origin (frictional) and at infinity (viscoelastic), and (ii) more general geometric settings that accommodate the competing nature of frictional and viscoelastic damping. Citation: Marcelo M. Cavalcanti, Valéria N. Domingos Cavalcanti, Irena Lasiecka, Flávio A. Falcão Nascimento. Intrinsic decay rate estimates for the wave equation with competing viscoelastic and frictional dissipative effects. Discrete and Continuous Dynamical Systems - B, 2014, 19 (7) : 1987-2011. doi: 10.3934/dcdsb.2014.19.1987 ##### References: [1] F. Alabau-Boussouira, P. Cannarsa and D. Sforza, Decay estimates for second order evolution equations with memory, Journal of Functional Analysis, 254 (2008), 1342-1372. doi: 10.1016/j.jfa.2007.09.012. [2] F. Alabau-Boussouira, Convexity and weighted integral inequalities for energy decay rates of nonlinear dissipative hyperbolic systems, Applied Mathematics and Optimization, 51 (2005), 61-105. doi: 10.1007/s00245. [3] F. Alabau-Boussouira and P. Cannarsa, A general method for proving sharp energy decay rates for memory-dissipative evolution equations, C. R. Acad. Sci. Paris, Ser. I, 347 (2009), 867-872. doi: 10.1016/j.crma.2009.05.011. [4] C. Bardos, G. Lebeau and J. Rauch, Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary, SIAM J. Control Optim., 30 (1992), 1024-1065. doi: 10.1137/0330055. [5] M. Bellassoued, Decay of solutions of the elastic wave equation with a localized dissipation, Annales de la Faculté des Sciences de Toulouse, 12 (2003), 267-301. doi: 10.5802/afst.1049. [6] M. M. Cavalcanti, V. N. Domingos Cavalcanti and P. Martinez, General decay rate estimates for viscoelastic dissipative systems, Nonlinear Anal., 68 (2008), 177-193. doi: 10.1016/j.na.2006.10.040. [7] M. M. Cavalcanti and H. P. Oquendo, Frictional versus viscoelastic damping in a semilinear wave equation, SIAM J.
Control Optim., 42 (2003), 1310-1324. doi: 10.1137/S0363012902408010. [8] M. M. Cavalcanti, V. N. Domingos Cavalcanti, R. Fukuoka and J. A. Soriano, Uniform stabilization of the wave equation on compact surfaces and locally distributed damping, Methods Appl. Anal., 15 (2008), 405-426. doi: 10.4310/MAA.2008.v15.n4.a1. [9] M. M. Cavalcanti, V. N. Domingos Cavalcanti, R. Fukuoka and J. A. Soriano, Uniform Stabilization of the wave equation on compact surfaces and locally distributed damping, Transactions of AMS, 361 (2009), 4561-4580. doi: 10.1090/S0002-9947-09-04763-1. [10] M. M. Cavalcanti, V. N. Domingos Cavalcanti, R. Fukuoka and J. A. Soriano, Asymptotic stability of the wave equation on compact manifolds and locally distributed damping: A sharp result, Arch. Ration. Mech. Anal., 197 (2010), 925-964. doi: 10.1007/s00205-009-0284-z. [11] H. Christianson, Semiclassical non-concentration near hyperbolic orbits, J. Funct. Anal., 246 (2007), 145-195. doi: 10.1016/j.jfa.2006.09.012. [12] C. M. Dafermos, Asymptotic behavior of solutions of evolution equations, Nonlinear evolution equations, (Proc. Sympos., Univ. Wisconsin, Madison, Wis., Publ. Math. Res. Center Univ. Wisconsin, Academic Press, New York-London, 40 (1977), (1978), 103-123. [13] M. Daoulatli, I. Lasiecka and D. Toundykov, Uniform energy decay for a wave equation with partialy supported nonlinear boundary dissipation without growth restrictions, DCDS-S, 2 (2009), 67-94. doi: 10.3934/dcdss.2009.2.67. [14] B. Dehman, G. Lebeau and E. Zuazua, Stabilization and control for the subcritical semilinear wave equation, Anna. Sci. Ec. Norm. Super., 36 (2003), 525-551. doi: 10.1016/S0012-9593(03)00021-1. [15] M. Fabrizio and S. Polidoro, Asymptotic decay for some differential systems with fading memory, Appl. Anal., 81 (2002), 1245-1264. doi: 10.1080/0003681021000035588. [16] M. 
Hitrik, Expansions and eigenfrequencies for damped wave equations, Journées équations aux Dérivées Partielles" (Plestin-les-Grèves, 2001), Exp. No. VI, Univ. Nantes, Nantes, (2001), 10 pp. [17] I. Lasiecka and D. Tataru, Uniform boundary stabilization of semilinear wave equation with nonlinear boundary damping, Differential and integral Equations, 6 (1993), 507-533. [18] I. Lasiecka, S. Messaoudi and M. Mustafa, Note on intrinsic decay rates for abstract wave equations with memory, Journal Mathematical Physics, 54 (2013), 031504. doi: 10.1063/1.4793988. [19] I. Lasiecka and D. Toundykov, Regularity of higher energies of wave equation with nonlinear localized damping and a nonlinear source, Nonlinear Anal., 69 (2008), 898-910. doi: 10.1016/j.na.2008.02.069. [20] G. Lebeau, Equations des ondes amorties, Algebraic Geometric Methods in Maths. Physics, (1996), 73-109. [21] P. Martinez, A new method to obtain decay rate estimates for dissipative systems with localized damping, Rev. Mat. Complutense, 12 (1999), 251-283. [22] L. Miller, Escape function conditions for the observation, control, and stabilization of the wave equation, SIAM J. Control Optim., 41 (2002), 1554-1566. doi: 10.1137/S036301290139107X. [23] S. Messaoudi and M. Mustafa , General stability result for viscoelastic wave equations, Journal of Mathematical Physics, 53 (2012), 053702. doi: 10.1063/1.4711830. [24] J. E. Muñoz Rivera and A. Peres Salvatierra, Asymptotic behaviour of the energy in partially viscoelastic materials, Quart. Appl. Math., 59 (2001), 557-578. [25] M. Nakao, Decay and global existence for nonlinear wave equations with localized dissipations in general exterior domains, New trends in the theory of hyperbolic equations, Oper. Theory Adv. Appl., Birkhäuser, Basel, 159 (2005), 213-299. doi: 10.1007/3-7643-7386-5_3. [26] M. Nakao, Energy decay for the wave equation with boundary and localized dissipations in exterior domains, Math. Nachr., 278 (2005), 771-783. doi: 10.1002/mana.200310271. 
[27] J. Rauch and M. Taylor, Decay of solutions to nondissipative hyperbolic systems on compact manifolds, Comm. Pure Appl. Math., 28 (1975), 501-523. doi: 10.1002/cpa.3160280405. [28] T. Qin, Asymptotic behavior of a class of abstract semilinear integrodifferential equations and applications, J. Math. Anal. Appl., 233 (1999), 130-147. doi: 10.1006/jmaa.1999.6271. [29] D. Toundykov, Optimal decay rates for solutions of nonlinear wave equation with localized nonlinear dissipation of unrestricted growth and critical exponents source terms under mixed boundary, Nonlinear Analysis T. M. A., 67 (2007), 512-544. doi: 10.1016/j.na.2006.06.007. [30] R. Triggiani and P. F. Yao, Carleman estimates with no lower-order terms for general Riemannian wave equations. Global uniqueness and observability in one shot, Appl. Math. and Optim, 46 (2002), 331-375. Special issue dedicated to J. L. Lions. doi: 10.1007/s00245-002-0751-5. [31] E. Zuazua, Exponential decay for the semilinear wave equation with locally distributed damping, Comm. Partial Differential Equations, 15 (1990), 205-235. doi: 10.1080/03605309908820684.
Evolution Equations and Control Theory, 2016, 5 (1) : 37-59. doi: 10.3934/eect.2016.5.37 [3] Abdelaziz Soufyane, Belkacem Said-Houari. The effect of the wave speeds and the frictional damping terms on the decay rate of the Bresse system. Evolution Equations and Control Theory, 2014, 3 (4) : 713-738. doi: 10.3934/eect.2014.3.713 [4] Moez Daoulatli. Rates of decay for the wave systems with time dependent damping. Discrete and Continuous Dynamical Systems, 2011, 31 (2) : 407-443. doi: 10.3934/dcds.2011.31.407 [5] Ryo Ikehata, Shingo Kitazaki. Optimal energy decay rates for some wave equations with double damping terms. Evolution Equations and Control Theory, 2019, 8 (4) : 825-846. doi: 10.3934/eect.2019040 [6] Tae Gab Ha. On viscoelastic wave equation with nonlinear boundary damping and source term. Communications on Pure and Applied Analysis, 2010, 9 (6) : 1543-1576. doi: 10.3934/cpaa.2010.9.1543 [7] Donghao Li, Hongwei Zhang, Shuo Liu, Qingiyng Hu. Blow-up of solutions to a viscoelastic wave equation with nonlocal damping. Evolution Equations and Control Theory, 2022  doi: 10.3934/eect.2022009 [8] Menglan Liao. The lifespan of solutions for a viscoelastic wave equation with a strong damping and logarithmic nonlinearity. Evolution Equations and Control Theory, 2022, 11 (3) : 781-792. doi: 10.3934/eect.2021025 [9] Jing Zhang. The analyticity and exponential decay of a Stokes-wave coupling system with viscoelastic damping in the variational framework. Evolution Equations and Control Theory, 2017, 6 (1) : 135-154. doi: 10.3934/eect.2017008 [10] Claudianor O. Alves, M. M. Cavalcanti, Valeria N. Domingos Cavalcanti, Mohammad A. Rammaha, Daniel Toundykov. On existence, uniform decay rates and blow up for solutions of systems of nonlinear wave equations with damping and source terms. Discrete and Continuous Dynamical Systems - S, 2009, 2 (3) : 583-608. doi: 10.3934/dcdss.2009.2.583 [11] Kim Dang Phung. 
Decay of solutions of the wave equation with localized nonlinear damping and trapped rays. Mathematical Control and Related Fields, 2011, 1 (2) : 251-265. doi: 10.3934/mcrf.2011.1.251 [12] Mohammad A. Rammaha, Daniel Toundykov, Zahava Wilstein. Global existence and decay of energy for a nonlinear wave equation with $p$-Laplacian damping. Discrete and Continuous Dynamical Systems, 2012, 32 (12) : 4361-4390. doi: 10.3934/dcds.2012.32.4361 [13] Belkacem Said-Houari, Flávio A. Falcão Nascimento. Global existence and nonexistence for the viscoelastic wave equation with nonlinear boundary damping-source interaction. Communications on Pure and Applied Analysis, 2013, 12 (1) : 375-403. doi: 10.3934/cpaa.2013.12.375 [14] Belkacem Said-Houari, Salim A. Messaoudi. General decay estimates for a Cauchy viscoelastic wave problem. Communications on Pure and Applied Analysis, 2014, 13 (4) : 1541-1551. doi: 10.3934/cpaa.2014.13.1541 [15] Wenjun Liu, Biqing Zhu, Gang Li, Danhua Wang. General decay for a viscoelastic Kirchhoff equation with Balakrishnan-Taylor damping, dynamic boundary conditions and a time-varying delay term. Evolution Equations and Control Theory, 2017, 6 (2) : 239-260. doi: 10.3934/eect.2017013 [16] Jong Yeoul Park, Sun Hye Park. On uniform decay for the coupled Euler-Bernoulli viscoelastic system with boundary damping. Discrete and Continuous Dynamical Systems, 2005, 12 (3) : 425-436. doi: 10.3934/dcds.2005.12.425 [17] Huafei Di, Yadong Shang, Jiali Yu. Existence and uniform decay estimates for the fourth order wave equation with nonlinear boundary damping and interior source. Electronic Research Archive, 2020, 28 (1) : 221-261. doi: 10.3934/era.2020015 [18] César Augusto Bortot, Wellington José Corrêa, Ryuichi Fukuoka, Thales Maier Souza. Exponential stability for the locally damped defocusing Schrödinger equation on compact manifold. Communications on Pure and Applied Analysis, 2020, 19 (3) : 1367-1386. 
doi: 10.3934/cpaa.2020067 [19] Barbara Kaltenbacher, Irena Lasiecka. Global existence and exponential decay rates for the Westervelt equation. Discrete and Continuous Dynamical Systems - S, 2009, 2 (3) : 503-523. doi: 10.3934/dcdss.2009.2.503 [20] Adel M. Al-Mahdi, Mohammad M. Al-Gharabli, Salim A. Messaoudi. New general decay result for a system of viscoelastic wave equations with past history. Communications on Pure and Applied Analysis, 2021, 20 (1) : 389-404. doi: 10.3934/cpaa.2020273 2020 Impact Factor: 1.327
http://divisbyzero.com/2009/02/11/is-this-the-cayley-table-of-a-group-part-2/
Posted by: Dave Richeson | February 11, 2009 ## Is this the Cayley table of a group? (Part 2) In my previous post I posed the following question: suppose you are given a table for a binary operation such that 1. there is a two-sided identity, 2. every element has a two-sided inverse, and 3. the table is a Latin square (that is, each symbol occurs exactly once in each row and each column). Is it the Cayley table of a group? If so, prove it. If not, find a minimal counter-example. The answer, not surprisingly, is no. It need not be a group. A set satisfying properties (2) and (3) is called a quasigroup, and one satisfying (1), (2), and (3) is called a loop. Thus we would like to find the smallest loop that is not a group. We can create a simple example of a loop that is not a group by modifying the quaternions. Keep all the usual products, but assume that $i^2=j^2=k^2=1$. Then $(i*i)*j=j$ and $i*(i*j)=i*k=-j$. Thus the operation is not associative and we have an example of order 8. (I found this example here.) $\begin{array}{|c||c|c|c|c|c|c|c|c|}\hline * & 1 & -1 & i & -i & j & -j & k & -k \\\hline\hline 1 & 1 & -1 & i & -i & j & -j & k & -k \\\hline -1 & -1 & 1 & -i & i & -j & j & -k & k \\\hline i & i & -i & 1 & -1 & k & -k & -j & j \\\hline -i & -i & i & -1 & 1 & -k & k & j & -j \\\hline j & j & -j & -k & k & 1 & -1 & i & -i \\\hline -j & -j & j & k & -k & -1 & 1 & -i & i \\\hline k & k & -k & j & -j & -i & i & 1 & -1 \\\hline -k & -k & k & -j & j & i & -i & -1 & 1 \\\hline \end{array}$ What about a minimal example? It is trivial to check that any 1×1, 2×2, and 3×3 table that satisfies (1)-(3) is a group. The 4×4 case requires a few more cases to check, but likewise any 4×4 loop is a group. Thus we turn to the 5×5 case. This is our minimal case. The paper “Cayley Tables and Associativity” (pdf), by R. P. Burn, (The Mathematical Gazette, Vol. 62, No. 422, (Dec., 1978), pp. 278-281) contains the following example.
$\begin{array}{|c||c|c|c|c|c|}\hline * & 1 & 2 & 3 & 4 & 5 \\\hline\hline 1 & 1 & 2 & 3 & 4 & 5 \\\hline 2 & 2 & 1 & 4 & 5 & 3 \\\hline 3 & 3 & 5 & 1 & 2 & 4 \\\hline 4 & 4 & 3 & 5 & 1 & 2 \\\hline 5 & 5 & 4 & 2 & 3 & 1 \\\hline \end{array}$ Indeed it satisfies (1)-(3), but it fails to be associative. For example, $4*(3*2)=4*5=2$ and $(4*3)*2=5*2=4$. (Actually, we can easily see that this fails to be a group by Lagrange’s Theorem: every element other than the identity is its own inverse, and hence has order 2, but 2 does not divide 5.) This paper has an interesting discussion of tables that have certain types of associative products.
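For readers who want to verify the table mechanically, here is a short Python sketch (my addition, not part of the original post) checking that Burn's 5×5 table has a two-sided identity, two-sided inverses, and the Latin square property, yet fails associativity at the triple given above:

```python
# Burn's 5x5 table; op(a, b) looks up the product a * b.
rows = [[1, 2, 3, 4, 5],
        [2, 1, 4, 5, 3],
        [3, 5, 1, 2, 4],
        [4, 3, 5, 1, 2],
        [5, 4, 2, 3, 1]]
T = {(r, c): rows[r - 1][c - 1] for r in range(1, 6) for c in range(1, 6)}
op = lambda a, b: T[(a, b)]
elems = range(1, 6)

# (1) two-sided identity 1, (2) two-sided inverses, (3) Latin square:
assert all(op(1, a) == a == op(a, 1) for a in elems)
assert all(any(op(a, b) == 1 == op(b, a) for b in elems) for a in elems)
assert all(sorted(op(a, b) for b in elems) == list(elems) for a in elems)
assert all(sorted(op(b, a) for b in elems) == list(elems) for a in elems)

# ...yet associativity fails:
print(op(4, op(3, 2)), op(op(4, 3), 2))   # prints: 2 4
```

Replacing the table data with the order-8 modified-quaternion table gives the same kind of check for the first example.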
https://byjus.com/ncert-solutions-for-class-10-maths-chapter-10-circles-ex-10-1/
# NCERT Solutions for Class 10 Maths Exercise 10.1 Chapter 10 Circles NCERT Solutions for Class 10 Maths Chapter 10 (Circles), Exercise 10.1, gives you detailed answers to the questions provided in the NCERT Class 10 Maths Chapter 10. Students are advised to study this exercise solution to clear all their doubts on circles. This NCERT solution covers basic topics on circles and numerical problems on tangents to a circle. By solving these exercise questions, students can understand the theorem and solve questions on tangents to a circle easily. ## Topics covered in Exercise 10.1 1. Introduction 2. Tangent to a circle 3. Solved examples This exercise solution will: • Make you understand the theorem thoroughly. • Help you memorise the derivations and important formulas. • Assist you in solving different varieties of questions. ### Access other exercise solutions of Class 10 Maths Chapter 10: Circles Exercise 10.1: 4 questions (1 short answer question, 1 fill-in-the-blanks question and 2 long answer questions) Exercise 10.2: 13 questions (10 long answer questions, 4 descriptive type questions and 2 short answer questions)
http://mathhelpforum.com/calculus/227275-need-help-double-integral.html
# Math Help - need help with double integral 1. ## need help with double integral hey guys, I'm having a problem with a double integral. Can you guys tell me where I went wrong? The equation is $\int_0^1\int_0^y\frac{y^3}{x^2+y^2}dxdy$ let $U = x+y$ $\frac{dU}{dx} = 1$ $\int_0^1\int_0^y\frac{y^3}{U^2}dUdy$ $\int_0^1[\frac{-y^3}{x+y}]_0^y dy$ $\int_0^1 \frac{y^2}{2} dy$ $[\frac{y^3}{6}]_0^1 = \frac{1}{6}$ However, it seems like this answer is wrong, and the correct answer should be $\frac{\pi}{12}$ Can anyone point out what I did wrong here? Best Regards Junks 2. ## Re: need help with double integral $(x+y)^2 = x^2 + y^2$ Really? Try it again. 3. ## Re: need help with double integral ahh, I see. So I let $U=x^2+y^2$ $\int_0^1\int_0^y\frac{y^3}{2*x*U}$ $\int_0^1[\frac{y^3}{2*x}*log(x^2+y^2)]_0^y$ $\int_0^1\frac{y^2}{2}*log(2y^2)$ let $U=log(2*y^2)$ and $\frac{dV}{dy}=\frac{y^2}{2}$ $\frac{dU}{dy}=\frac{2}{y}$ and $V=\frac{y^3}{6}$ using this rule $UV-\int \frac{dU}{dy}*\frac{dV}{dy}dy$ $[\frac{y^3}{6}*log(2*y^2)-\int y dy]_0^1$ $\frac{1}{6}*log(2)-\frac{1}{2}$ the answer turns out to be -0.38, but the correct answer seems to be $\pi/12$ or 0.26. What did I do wrong here? 4. ## Re: need help with double integral $\int_0^1\int_0^y\frac{y^3}{x^2+y^2}dxdy=$ $\int_0^1 y^3 \int_0^y \dfrac{1}{x^2+y^2}~dx~dy=$ $\int_0^1 y^3 \left(\dfrac{\arctan(\frac{x}{y})}{y}|_0^y\right)~dy=$ $\int_0^1 y^2 \left(\arctan(1)-\arctan(0)\right)~dy=$ $\int_0^1 y^2 \dfrac{\pi}{4}~dy=$ $\dfrac{\pi}{4}\left( \dfrac{y^3}{3}~|_0^1\right)=$ $\dfrac{\pi}{12}$
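As a sanity check on the final answer (an addition of mine, not part of the thread), a crude midpoint-rule approximation of the double integral lands on $\pi/12 \approx 0.2618$ rather than the values obtained in the earlier attempts:

```python
import math

# Midpoint-rule approximation of I = ∫_0^1 ∫_0^y y^3/(x^2 + y^2) dx dy.
def integrand(x, y):
    return y ** 3 / (x ** 2 + y ** 2)

N, M = 400, 200          # outer (y) / inner (x) subdivisions
h = 1.0 / N
total = 0.0
for j in range(N):
    y = (j + 0.5) * h    # midpoint in y; avoids the origin
    hx = y / M
    inner = sum(integrand((i + 0.5) * hx, y) for i in range(M)) * hx
    total += inner * h

print(total)             # ≈ 0.2618, i.e. pi/12
```

The inner integral is exactly $y^2\arctan(1) = \tfrac{\pi}{4}y^2$, which is why the quadrature converges so cleanly.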
http://math.stackexchange.com/questions/39138/literature-request-twisted-dirichlet-series
Literature Request: Twisted Dirichlet Series Recently, I have been pushed toward studying the analytic continuation of Dirichlet series with twists that are additive. These are functions $D(s) = \sum e_k(hn)\frac{a_n}{n^s}$ where $a_n$ is some sequence (usually positive) and $e_k(n)=e^{2\pi i \frac{n}{k}}$ with $h\in \mathbb{N}.$ I would like to know if there is any literature available on this topic. - Are you asking about Dirichlet series, or twisted Dirichlet series? My guess is both should use the same tools, so you should start with usual Dirichlet series (which are covered in many analytic number theory books). If your coefficients are positive, the alternating signs might make the series converge on a bigger half-plane (that's the only difference I can see so far). A good place to start might be to study the case $a_n = 1$ (which gives you the $\zeta$ function, and $L$-series in the twisted case). – Joel Cohen May 15 '11 at 1:17 I know it is taboo to answer your own question, but the conclusion is that additive and multiplicative twists are the same in the sense that if $D(s)$ continues for every multiplicative twist $\chi(n)$, then the additive twist will continue as well. If $$\tau(\tilde{\chi})=\sum_{n\bmod k}\tilde{\chi}(n)e_k(n)$$ is a Gauss sum, then $$e_k(n) = \frac{1}{\phi(k)}\sum_{\chi \bmod k} \chi(\bar{n}) \tau(\chi)$$ where $\bar{n}n\equiv 1 \bmod k.$ Hence an additive twist can be written as a sum of multiplicatively twisted Dirichlet series.
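The displayed identity is easy to verify numerically. The following sketch (my addition) takes $k = 5$, where $(\mathbb{Z}/5\mathbb{Z})^*$ is cyclic and the Dirichlet characters mod $k$ are the powers of a single character, and checks $e_k(n) = \frac{1}{\phi(k)}\sum_{\chi} \chi(\bar{n})\tau(\chi)$ for every $n$ coprime to $k$:

```python
import cmath

k = 5                       # prime modulus; (Z/5Z)* is cyclic, generated by 2
g = 2
dlog, x = {}, 1             # discrete logarithms base g
for a in range(k - 1):
    dlog[x] = a
    x = (x * g) % k

omega = cmath.exp(2j * cmath.pi / (k - 1))

def chi(j, n):              # the j-th Dirichlet character mod k
    return omega ** (j * dlog[n % k])

def e_k(n):                 # additive character e_k(n) = e^{2 pi i n / k}
    return cmath.exp(2j * cmath.pi * n / k)

def tau(j):                 # Gauss sum of chi_j
    return sum(chi(j, n) * e_k(n) for n in range(1, k))

phi = k - 1
for n in range(1, k):
    nbar = pow(n, -1, k)    # n * nbar = 1 mod k (Python 3.8+)
    lhs = sum(chi(j, nbar) * tau(j) for j in range(phi)) / phi
    assert abs(lhs - e_k(n)) < 1e-9
print("identity verified for k = 5")
```

The proof behind the check is one line of character orthogonality: $\sum_\chi \chi(\bar{n})\tau(\chi) = \sum_m e_k(m)\sum_\chi \chi(\bar{n}m) = \phi(k)\,e_k(n)$.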
http://mathhelpforum.com/calculus/94702-find-all-functions.html
1. ## Find all functions Find all functions $f : \mathbb{R} \to \mathbb{R}$ such that $\forall (x,y) \in \mathbb{R}^2$: $f\left(\frac{x+y}{2}\right)\left(\frac{1}{f(x)}+\frac{1}{f(y)}\right)=2$ 2. If we replace $f(t)$ by $\frac{1}{g(t)}$ then it becomes $\frac{1}{g(\frac{x+y}{2})}[ g(y) + g(x) ] = 2$ $g(x) + g(y) = 2 g( \frac{x+y}{2})$ It implies that $g(t) = t$, so $f(t) = \frac{1}{t}$. Is there another function satisfying the requirement? 3. How about $f(t)=\frac1{at+b}$ for constants $a$, $b$ not both zero? 4. The equation $g\left(\frac{x+y}2\right)=\frac{g(x)+g(y)}2$ that occurs in simplependulum's reply is known as Jensen's equation. To solve it, put $y=0$ and then $g\left(\frac x2\right)=\frac{g(x)+b}2$ where $b=g(0)$. Substitute $x+y$ for $x$ in this latter equation: $g\left(\frac{x+y}2\right)=\frac{g(x+y)+b}2$. And now we see that $g(x+y)+b=g(x)+g(y)$. Finally, put $h(x)=g(x)-b$. Then $h(x+y)=h(x)+h(y)$. This is the well-known Cauchy's equation. The most general continuous solution of Cauchy's equation is $h(x)=ax$ where $a$ is constant. Therefore $g(x)=ax+b$ is the most general continuous solution of Jensen's equation. However, it must be said that there exist solutions of Cauchy's equation on $\mathbb R$ which are not continuous, and they turn out to be extremely pathological. For example, if $h$ is one of these solutions and $I=(a,b)$ is any open interval then $h(I)$ is dense in $\mathbb R$. So it makes sense in many cases to ask for conditions on the solution, such as continuity, or boundedness on a finite interval, or monotonicity, anything which would avoid these weird functions.
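A quick numerical check (my addition) that every $f(t)=\frac{1}{at+b}$ from reply 3 satisfies $g(x)+g(y)=2g\big(\frac{x+y}{2}\big)$ for $g=1/f$, and hence $f\big(\frac{x+y}{2}\big)\big(\frac{1}{f(x)}+\frac{1}{f(y)}\big)=2$ as in reply 2:

```python
import random

random.seed(0)
for _ in range(1000):
    # random positive a, b keep f defined and nonzero on [0, 5]
    a = random.uniform(0.5, 2.0)
    b = random.uniform(0.5, 2.0)
    f = lambda t: 1.0 / (a * t + b)
    x = random.uniform(0.0, 5.0)
    y = random.uniform(0.0, 5.0)
    lhs = f((x + y) / 2) * (1.0 / f(x) + 1.0 / f(y))
    assert abs(lhs - 2.0) < 1e-9
print("f(t) = 1/(a t + b) satisfies the functional equation")
```

This only confirms the family of solutions; reply 4's reduction to Cauchy's equation is what shows these are the only continuous ones.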
https://astarmathsandphysics.com/university-physics-notes/fluid-mechanics/1573-pathlines.html?tmpl=component&print=1&page=
## Pathlines

Pathlines in fluids are the paths taken by the individual particles of the fluid as they travel from point to point. We can find the pathlines taken by the particles of a fluid if we know the velocity of the particles of the fluid. In two dimensions, we write down the differential equations for the pathlines in vector form as $\frac{d\mathbf{r}}{dt}=\mathbf{v}(\mathbf{r},t)$, or in component form as $\frac{dx}{dt}=u(x,y,t), \quad \frac{dy}{dt}=v(x,y,t)$. We integrate these to obtain the parametric equations $x=x(t)$, $y=y(t)$. It may be possible to solve these to obtain a single solution relating $x$ and $y$ (or $r$ and $\theta$). A particular pathline might be found using a given initial point.

Example: in the first worked example (whose velocity function and labelled equations (1)-(2) have not survived in this copy), integrating the pathline equations shows that the pathlines are concentric circles around the origin; as the radius increases, the speed decreases in proportion. A second worked example, with a given initial condition, proceeds in the same way, but its velocity function and equations (3)-(4) are likewise missing.
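Since the original velocity functions did not survive, here is a sketch of computing a pathline numerically (my own illustration with an assumed velocity field, not the one from the lost example): the field $u=-y/(x^2+y^2)$, $v=x/(x^2+y^2)$ has circular pathlines with speed $1/r$, matching the behaviour described in the first example.

```python
import math

# Assumed velocity field: u = -y/(x^2+y^2), v = x/(x^2+y^2).
def velocity(x, y):
    r2 = x * x + y * y
    return -y / r2, x / r2

# One classical RK4 step for dx/dt = u, dy/dt = v.
def rk4_step(x, y, dt):
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = velocity(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

# Integrate the pathline from the initial point (2, 0).
x, y = 2.0, 0.0
for _ in range(1000):
    x, y = rk4_step(x, y, 0.01)
print(math.hypot(x, y))   # the radius stays ≈ 2: a circular pathline
```

The conserved radius is the numerical counterpart of eliminating $t$ to get the single relation $x^2+y^2=\text{const}$.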
http://export.arxiv.org/abs/2110.01011?context=cs
Title: A QLP Decomposition via Randomization Abstract: This paper is concerned with the full decomposition of matrices, primarily low-rank matrices. It develops a QLP-like decomposition algorithm such that, when operating on a matrix A, it gives A = QLP^T, where Q and P are orthonormal, and L is lower-triangular. The proposed algorithm, termed Rand-QLP, utilizes randomization and the unpivoted QR decomposition. This in turn enables Rand-QLP to leverage modern computational architectures, thus addressing a serious bottleneck associated with classical and most recent matrix decomposition algorithms. We derive several error bounds for Rand-QLP: bounds for the first k approximate singular values as well as the trailing block of the middle factor, which show that Rand-QLP is rank-revealing; and bounds for the distance between approximate subspaces and the exact ones for all four fundamental subspaces of a given matrix. We assess the speed and approximation quality of Rand-QLP on synthetic and real matrices with different dimensions and characteristics, and compare our results with those of multiple existing algorithms. Subjects: Numerical Analysis (math.NA); Signal Processing (eess.SP) Cite as: arXiv:2110.01011 [math.NA] (or arXiv:2110.01011v1 [math.NA] for this version) Submission history From: Maboud Kaloorazi [v1] Sun, 3 Oct 2021 14:29:52 GMT (5167kb)
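To make the shape of the factorization concrete, here is a small pure-Python sketch (my addition; it implements the classical unpivoted QLP step that such algorithms build on, with the paper's randomized preprocessing omitted): QR-factor A, then QR-factor R^T, which yields A = Q L P^T with L lower-triangular.

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def qr(M):
    # Classical Gram-Schmidt QR for a square, full-rank matrix M = Q R.
    n = len(M)
    cols = transpose(M)
    q = []                                  # orthonormal columns of Q
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(len(q)):
            R[i][j] = sum(q[i][t] * cols[j][t] for t in range(n))
            v = [v[t] - R[i][j] * q[i][t] for t in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        q.append([x / R[j][j] for x in v])
    return transpose(q), R                  # Q orthonormal, R upper-triangular

A = [[4.0, 1.0, 2.0],
     [2.0, 3.0, 0.0],
     [1.0, 0.0, 5.0]]

Q1, R1 = qr(A)                 # A = Q1 R1
Q2, R2 = qr(transpose(R1))     # R1^T = Q2 R2, so R1 = R2^T Q2^T
L = transpose(R2)              # lower-triangular middle factor
P = Q2
recon = matmul(matmul(Q1, L), transpose(P))   # should reproduce A
```

In Stewart's original QLP the diagonal of L tracks the singular values of A; the paper's contribution is replacing the pivoting normally needed for that property with a randomized sketch.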
http://www.contrib.andrew.cmu.edu/~ryanod/index.php?p=1005&replytocom=8295
# §6.1: Notions of pseudorandomness The most obvious spectral property of a truly random function $\boldsymbol{f} : \{-1,1\}^n \to \{-1,1\}$ is that all of its Fourier coefficients are very small (as we saw in Exercise 5.8). Let’s switch notation to $\boldsymbol{f} : \{-1,1\}^n \to \{0,1\}$; in this case $\boldsymbol{f}(\emptyset)$ will not be very small but rather very close to $1/2$. Generalizing: Proposition 1 Let $n > 1$ and let $\boldsymbol{f} : \{-1,1\}^n \to \{0,1\}$ be a $p$-biased random function; i.e., each $\boldsymbol{f}(x)$ is $1$ with probability $p$ and $0$ with probability $1-p$, independently for all $x \in \{-1,1\}^n$. Then except with probability at most $2^{-n}$, all of the following hold: $|\widehat{\boldsymbol{f}}(\emptyset) - p| \leq 2\sqrt{n} 2^{-n/2}, \qquad \forall S \neq \emptyset \quad |\widehat{\boldsymbol{f}}(S)| \leq 2\sqrt{n} 2^{-n/2}.$ Proof: We have $\widehat{\boldsymbol{f}}(S) = \sum_{x} \frac{1}{2^n} x^S \boldsymbol{f}(x)$, where the random variables $\boldsymbol{f}(x)$ are independent. If $S = \emptyset$ then the coefficients $\frac{1}{2^n} x^S$ sum to $1$ and the mean of $\widehat{\boldsymbol{f}}(S)$ is $p$; otherwise the coefficients sum to $0$ and the mean of $\widehat{\boldsymbol{f}}(S)$ is $0$. Either way we may apply the Hoeffding bound to conclude that $\mathop{\bf Pr}[|\widehat{\boldsymbol{f}}(S) - \mathop{\bf E}[\widehat{\boldsymbol{f}}(S)]| \geq t] \leq 2\exp(-t^2 \cdot 2^{n-1})$ for any $t > 0$. Selecting $t = 2\sqrt{n} 2^{-n/2}$, the above bound is $2\exp(-2n) \leq 4^{-n}$. The result follows by taking a union bound over all $S \subseteq [n]$. $\Box$ This proposition motivates the following basic notion of “pseudorandomness”: Definition 2 A function $f : \{-1,1\}^n \to {\mathbb R}$ is $\epsilon$-regular (sometimes called $\epsilon$-uniform) if $|\widehat{f}(S)| \leq \epsilon$ for all $S \neq \emptyset$. Remark 3 By Exercise 3.8, every function $f$ is $\epsilon$-regular for $\epsilon = \|f\|_1$. 
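To see Proposition 1 and Definition 2 in action, the sketch below (my addition, not part of the text) draws a random function for $n = 10$, $p = 1/2$ and computes all $2^n$ Fourier coefficients with a fast Walsh-Hadamard transform; the empty-set coefficient comes out near $1/2$ and every other coefficient is small, so the sample is $\epsilon$-regular for small $\epsilon$:

```python
import random

# A uniformly random f : {0,1}^n -> {0,1}, indexed by bitmask.
n = 10
N = 1 << n
random.seed(1)
f = [random.randint(0, 1) for _ in range(N)]

# In-place fast Walsh-Hadamard transform:
# afterwards wht[S] = sum_x (-1)^{S.x} f(x).
wht = f[:]
h = 1
while h < N:
    for i in range(0, N, 2 * h):
        for j in range(i, i + h):
            a, b = wht[j], wht[j + h]
            wht[j], wht[j + h] = a + b, a - b
    h *= 2

coeffs = [w / N for w in wht]           # f_hat(S), with S as a bitmask
print(coeffs[0])                        # close to p = 1/2
print(max(abs(c) for c in coeffs[1:]))  # small: the sample is regular
```

The bound of Proposition 1 says the second printed value should be at most $2\sqrt{n}\,2^{-n/2} \approx 0.2$ except with probability $2^{-n}$.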
We are often concerned with $f : \{-1,1\}^n \to [-1,1]$, in which case we focus on $\epsilon \leq 1$. Examples 4 Proposition 1 states that a random $p$-biased function is $(2\sqrt{n} 2^{-n/2})$-regular with very high probability. A function is $0$-regular if and only if it is constant (even though you might not think of a constant function as very “random”). If $A \subseteq {\mathbb F}_2^n$ is an affine subspace of codimension $k$ then $1_A$ is $2^{-k}$-regular (Proposition 3.12). For $n$ even the inner product mod $2$ function and the complete quadratic function, $\mathrm{IP}_{n}, \mathrm{CQ}_n : {\mathbb F}_2^{n} \to \{0,1\}$, are $2^{-n/2-1}$-regular (Exercise 1.1). On the other hand, the parity functions $\chi_S : \{-1,1\}^n \to \{-1,1\}$ are not $\epsilon$-regular for any $\epsilon < 1$ (except for $S = \emptyset$). By Exercise 5.22, $\mathrm{Maj}_n$ is $\tfrac{1}{\sqrt{n}}$-regular. The notion of regularity can be particularly useful for probability density functions; in this case it is traditional to use an alternate name: Definition 5 If $\varphi : {\mathbb F}_2^n \to {\mathbb R}^{\geq 0}$ is a probability density which is $\epsilon$-regular, we call it an $\epsilon$-biased density. Equivalently, $\varphi$ is $\epsilon$-biased if and only if $|\mathop{\bf E}_{{\boldsymbol{x}} \sim \varphi}[\chi_\gamma({\boldsymbol{x}})]| \leq \epsilon$ for all $\gamma \in {\widehat{{\mathbb F}_2^n}} \setminus \{0\}$; thus one can think of “$\epsilon$-biased” as meaning “at most $\epsilon$-biased on subspaces”. Note that the marginal of such a distribution on any set of coordinates $J \subseteq [n]$ is also $\epsilon$-biased. If $\varphi$ is $\varphi_A = 1_A/\mathop{\bf E}[1_A]$ for some $A \subseteq {\mathbb F}_2^n$ we call $A$ an $\epsilon$-biased set. Examples 6 For $\varphi$ a probability density we have $\|\varphi\|_1 = \mathop{\bf E}[\varphi] = 1$, so every density is $1$-biased. 
The density corresponding to the uniform distribution on ${\mathbb F}_2^n$, namely $\varphi \equiv 1$, is the only $0$-biased density. Densities corresponding to the uniform distribution on smaller affine subspaces are “maximally biased”: if $A \subseteq {\mathbb F}_2^n$ is an affine subspace of codimension less than $n$ then $\varphi_A$ is not $\epsilon$-biased for any $\epsilon < 1$ (Proposition 3.12 again). If $E = \{(0, \dots, 0), (1, \dots, 1)\}$ then $E$ is a $1/2$-biased set (an easy computation, see also Exercise 1.1(h)).

There is a “combinatorial” property of functions $f$ which is roughly equivalent to $\epsilon$-regularity. Recall from Exercise 1.28 that $\hat{\lVert} f \hat{\rVert}_4^4$ has an equivalent non-Fourier formula: $\mathop{\bf E}_{{\boldsymbol{x}},\boldsymbol{y},\boldsymbol{z}}[f({\boldsymbol{x}})f(\boldsymbol{y})f(\boldsymbol{z})f({\boldsymbol{x}}+\boldsymbol{y}+\boldsymbol{z})]$. We show (roughly speaking) that $f$ is regular if and only if this expectation is not much bigger than $\mathop{\bf E}[f]^4 = \mathop{\bf E}_{{\boldsymbol{x}},\boldsymbol{y},\boldsymbol{z},\boldsymbol{w}}[f({\boldsymbol{x}})f(\boldsymbol{y})f(\boldsymbol{z})f(\boldsymbol{w})]$:

Proposition 7 Let $f : {\mathbb F}_2^n \to {\mathbb R}$. Then

1. If $f$ is $\epsilon$-regular then $\hat{\lVert} f \hat{\rVert}_4^4 - \mathop{\bf E}[f]^4 \leq \epsilon^2 \cdot \mathop{\bf Var}[f]$.
2. If $f$ is not $\epsilon$-regular then $\hat{\lVert} f \hat{\rVert}_4^4 - \mathop{\bf E}[f]^4 \geq \epsilon^4$.
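Both the Exercise 1.28 identity and Proposition 7 can be sanity-checked by brute force. Here is an illustrative Python sketch using the inner product function $\mathrm{IP}_4$ from Examples 4, every nonempty-set coefficient of which has magnitude exactly $2^{-n/2-1}$:

```python
from itertools import product

n = 4
points = list(product([0, 1], repeat=n))                 # F_2^4
f = {x: (x[0] & x[1]) ^ (x[2] & x[3]) for x in points}   # IP_4, 0/1-valued

def fhat(S):
    # fhat(S) = E_x[f(x) * (-1)^{sum_{i in S} x_i}], x uniform on F_2^n
    return sum(f[x] * (-1) ** sum(x[i] for i in S) for x in points) / len(points)

subsets = [tuple(i for i in range(n) if m >> i & 1) for m in range(1 << n)]
coeffs = {S: fhat(S) for S in subsets}

# IP_4 is 2^{-n/2-1}-regular; in fact every nonempty-set coefficient is +-1/8:
eps = max(abs(c) for S, c in coeffs.items() if S)
assert abs(eps - 2 ** (-n / 2 - 1)) < 1e-12

# Exercise 1.28 identity: ||fhat||_4^4 = E[f(x) f(y) f(z) f(x+y+z)]
add = lambda x, y: tuple(a ^ b for a, b in zip(x, y))
four_norm = sum(c ** 4 for c in coeffs.values())
triple = sum(f[x] * f[y] * f[z] * f[add(add(x, y), z)]
             for x in points for y in points for z in points) / len(points) ** 3
assert abs(four_norm - triple) < 1e-12

# Proposition 7(1): ||fhat||_4^4 - E[f]^4 <= eps^2 * Var[f]
mean = coeffs[()]
var = sum(c ** 2 for S, c in coeffs.items() if S)
assert four_norm - mean ** 4 <= eps ** 2 * var + 1e-12
```

For $\mathrm{IP}_4$ the last inequality actually holds with equality, since all nonempty-set coefficients share the same magnitude.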
Proof: If $f$ is $\epsilon$-regular then $\hat{\lVert} f \hat{\rVert}_4^4 - \mathop{\bf E}[f]^4 = \sum_{S \neq \emptyset} \widehat{f}(S)^4 \leq \max_{S \neq \emptyset} \{\widehat{f}(S)^2\} \cdot \sum_{S \neq \emptyset} \widehat{f}(S)^2 \leq \epsilon^2 \cdot \mathop{\bf Var}[f].$ On the other hand, if $f$ is not $\epsilon$-regular then $|\widehat{f}(T)| \geq \epsilon$ for some $T \neq \emptyset$; hence $\hat{\lVert} f \hat{\rVert}_4^4$ is at least $\widehat{f}(\emptyset)^4 + \widehat{f}(T)^4 \geq \mathop{\bf E}[f]^4 + \epsilon^4$. $\Box$

The condition of $\epsilon$-regularity (that all nonempty-set coefficients are small) is quite strong. As we saw when investigating the $\frac{2}{\pi}$ Theorem in Chapter 5.4, it’s also interesting to consider $f$ that merely have $|\widehat{f}(i)| \leq \epsilon$ for all $i \in [n]$; for monotone $f$ this is the same as saying $\mathbf{Inf}_i[f] \leq \epsilon$ for all $i$. This suggests two weaker possible notions of pseudorandomness: having all low-degree Fourier coefficients small, and having all influences small. We will consider both possibilities, starting with the second.

Now a randomly chosen $\boldsymbol{f} : \{-1,1\}^n \to \{-1,1\}$ will not have all of its influences small; in fact, as we saw in Exercise 2.10, each $\mathbf{Inf}_i[\boldsymbol{f}]$ is $1/2$ in expectation. However for any $\delta > 0$ it will have all of its $(1-\delta)$-stable influences exponentially small (recall Definition 2.51). In the exercises you will show:

Fact 8 Fix $\delta \in [0,1]$ and let $\boldsymbol{f} : \{-1,1\}^n \to \{-1,1\}$ be a randomly chosen function. Then for any $i \in [n]$, $\mathop{\bf E}[\mathbf{Inf}_i^{(1-\delta)}[\boldsymbol{f}]] = \frac{(1-\delta/2)^n}{2-\delta}.$

This motivates a very important notion of pseudorandomness in the analysis of boolean functions: having all stable influences small. Recalling the discussion surrounding Proposition 2.53, we can also describe this as having no “notable” coordinates.
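For very small $n$, Fact 8 can even be verified exactly by averaging over all $2^{2^n}$ boolean functions. The Python sketch below does this for $n = 2$, computing each stable influence via the Fourier formula $\mathbf{Inf}_i^{(1-\delta)}[f] = \sum_{S \ni i} (1-\delta)^{|S|-1} \widehat{f}(S)^2$ from Definition 2.51:

```python
import math
from itertools import product

n, delta, i = 2, 0.3, 0
points = list(product([1, -1], repeat=n))
subsets = [tuple(j for j in range(n) if m >> j & 1) for m in range(1 << n)]

def stable_inf(f):
    # Inf_i^{(1-delta)}[f] = sum over S containing i of (1-delta)^{|S|-1} fhat(S)^2
    total = 0.0
    for S in subsets:
        if i in S:
            c = sum(f[x] * math.prod(x[j] for j in S) for x in points) / len(points)
            total += (1 - delta) ** (len(S) - 1) * c * c
    return total

# Average over all 2^(2^n) = 16 functions f : {-1,1}^2 -> {-1,1}
funcs = [dict(zip(points, vals)) for vals in product([1, -1], repeat=len(points))]
avg = sum(stable_inf(f) for f in funcs) / len(funcs)
assert abs(avg - (1 - delta / 2) ** n / (2 - delta)) < 1e-9
```

Both sides equal $(2-\delta)/4$ here, since each $\widehat{\boldsymbol{f}}(S)^2$ is $2^{-n} = 1/4$ in expectation.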
Definition 9 We say that $f : \{-1,1\}^n \to {\mathbb R}$ has $(\epsilon,\delta)$-small stable influences, or no $(\epsilon,\delta)$-notable coordinates, if $\mathbf{Inf}_{i}^{(1-\delta)}[f] \leq \epsilon$ for each $i \in [n]$. This condition gets stronger as $\epsilon$ and $\delta$ decrease: when $\delta = 0$, meaning $\mathbf{Inf}_{i}[f] \leq \epsilon$ for all $i$, we simply say $f$ has $\epsilon$-small influences.

Examples 10 Besides random functions, important examples of boolean-valued functions with no notable coordinates are constants, majority, and large parities. Constant functions are the ultimate in this regard: they have $(0,0)$-small stable influences. (Indeed, constant functions are the only ones with $0$-small influences.) The $\mathrm{Maj}_n$ function has $\frac{1}{\sqrt{n}}$-small influences. To see the distinction between influences and stable influences, consider the parity functions $\chi_S$. Any parity function $\chi_S$ (with $S \neq \emptyset$) has at least one coordinate with maximal influence, $1$. But if $|S|$ is “large” then all of its stable influences will be small: we have $\mathbf{Inf}^{(1-\delta)}_i[\chi_S]$ equal to $(1-\delta)^{|S|-1}$ when $i \in S$ and equal to $0$ otherwise. I.e., $\chi_S$ has $((1-\delta)^{|S|-1}, \delta)$-small stable influences. In particular, $\chi_S$ has $(\epsilon, \delta)$-small stable influences whenever $|S| \geq \frac{\ln(e/\epsilon)}{\delta}$.

The prototypical example of a function $f : \{-1,1\}^n \to \{-1,1\}$ which does not have small stable influences is an unbiased $k$-junta. Such a function has $\mathop{\bf Var}[f] = 1$ and hence from Fact 2.52 the sum of its $(1-\delta)$-stable influences is at least $(1-\delta)^{k-1}$. Thus $\mathbf{Inf}^{(1-\delta)}_i[f] \geq (1-\delta)^{k-1}/k$ for at least one $i$; hence $f$ does not have $((1-\delta)^k/k, \delta)$-small stable influences for any $\delta \in (0,1)$.
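The junta lower bound just derived is easy to check numerically. The illustrative Python sketch below takes $\mathrm{Maj}_3$ embedded in $n = 5$ coordinates (an unbiased $3$-junta) and confirms that some coordinate has $(1-\delta)$-stable influence at least $(1-\delta)^{k-1}/k$:

```python
import math
from itertools import product

n, k, delta = 5, 3, 0.2
points = list(product([1, -1], repeat=n))
subsets = [tuple(j for j in range(n) if m >> j & 1) for m in range(1 << n)]

# Maj_3 on the first three of n = 5 coordinates: an unbiased 3-junta
f = {x: (1 if x[0] + x[1] + x[2] > 0 else -1) for x in points}

def stable_inf(i):
    # Inf_i^{(1-delta)}[f] = sum over S containing i of (1-delta)^{|S|-1} fhat(S)^2
    total = 0.0
    for S in subsets:
        if i in S:
            c = sum(f[x] * math.prod(x[j] for j in S) for x in points) / len(points)
            total += (1 - delta) ** (len(S) - 1) * c * c
    return total

# Some coordinate must have (1-delta)-stable influence >= (1-delta)^{k-1}/k
assert max(stable_inf(i) for i in range(n)) >= (1 - delta) ** (k - 1) / k
```

Here the bound is quite loose: each of the first three coordinates has stable influence $\tfrac14 + \tfrac14(1-\delta)^2$, well above $(1-\delta)^{2}/3$.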
A somewhat different example is the function $f(x) = x_0 \mathrm{Maj}_n(x_1, \dots, x_n)$, which has $\mathbf{Inf}_0^{(1-\delta)}[f] \geq 1 - \sqrt{\delta}$; see the exercises.

Let’s return to considering the interesting condition that $|\widehat{f}(i)| \leq \epsilon$ for all $i \in [n]$. We will call this condition $(\epsilon,1)$-regularity. It is equivalent to saying that $f^{\leq 1}$ is $\epsilon$-regular, or that $f$ has at most $\epsilon$ “correlation” with every dictator: $|\langle f, \pm \chi_i \rangle| \leq \epsilon$ for all $i$. Our third notion of pseudorandomness extends this condition to higher degrees:

Definition 11 A function $f : \{-1,1\}^n \to {\mathbb R}$ is $(\epsilon,k)$-regular if $|\widehat{f}(S)| \leq \epsilon$ for all $0 < |S| \leq k$; equivalently, if $f^{\leq k}$ is $\epsilon$-regular. For $k = n$ (or $k = \infty$), this condition coincides with $\epsilon$-regularity. When $\varphi : {\mathbb F}_2^n \to {\mathbb R}^{\geq 0}$ is an $(\epsilon,k)$-regular probability density, it is more usual to call $\varphi$ (and the associated probability distribution) $(\epsilon,k)$-wise independent.

Below we give two alternate characterizations of $(\epsilon, k)$-regularity; however, they are fairly “rough” in the sense that they have exponential losses on $k$. This can be acceptable if $k$ is thought of as a constant. The first characterization is that $f$ is $(\epsilon, k)$-regular if and only if fixing $k$ input coordinates changes $f$’s mean by at most $O(\epsilon)$. The second characterization is the condition that $f$ has $O(\epsilon)$ covariance with every $k$-junta.

Proposition 12 Let $f : \{-1,1\}^n \to {\mathbb R}$ and let $\epsilon \geq 0$, $k \in {\mathbb N}$.

1. If $f$ is $(\epsilon,k)$-regular then any restriction of at most $k$ coordinates changes $f$’s mean by at most $2^{k} \epsilon$.
2. If $f$ is not $(\epsilon,k)$-regular then some restriction to at most $k$ coordinates changes $f$’s mean by more than $\epsilon$.
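The first characterization can be illustrated concretely (an illustrative Python sketch, with $f = \mathrm{Maj}_5$ and $k = 2$ chosen for the example): compute $\epsilon = \max_{0 < |S| \leq k} |\widehat{f}(S)|$ and confirm that no restriction of at most $k$ coordinates moves the mean by more than $2^k \epsilon$.

```python
import math
from itertools import combinations, product

n, k = 5, 2
points = list(product([1, -1], repeat=n))
f = {x: (1 if sum(x) > 0 else -1) for x in points}        # Maj_5

def fhat(S):
    return sum(f[x] * math.prod((x[j] for j in S), start=1) for x in points) / len(points)

# f is (eps, k)-regular for eps = max over 0 < |S| <= k of |fhat(S)|
eps = max(abs(fhat(S)) for r in range(1, k + 1) for S in combinations(range(n), r))

mean = fhat(())
shifts = []
for r in range(1, k + 1):
    for J in combinations(range(n), r):                   # coordinates to fix
        for z in product([1, -1], repeat=r):
            rest = [x for x in points if all(x[j] == z[t] for t, j in enumerate(J))]
            shifts.append(abs(sum(f[x] for x in rest) / len(rest) - mean))

assert max(shifts) <= 2 ** k * eps                        # Proposition 12(1)
```

For $\mathrm{Maj}_5$ we get $\epsilon = 3/8$ (the degree-$1$ coefficients; the degree-$2$ coefficients vanish by oddness), and the largest mean shift from fixing two coordinates is $3/4 \leq 2^2 \epsilon$.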
Proposition 13 Let $f : \{-1,1\}^n \to {\mathbb R}$ and let $\epsilon \geq 0$, $k \in {\mathbb N}$.

1. If $f$ is $(\epsilon,k)$-regular then $\mathop{\bf Cov}[f,h] \leq \hat{\lVert} h \hat{\rVert}_1 \epsilon$ for any $h : \{-1,1\}^n \to {\mathbb R}$ with $\deg(h) \leq k$. In particular, $\mathop{\bf Cov}[f,h] \leq 2^{k/2} \epsilon$ for any $k$-junta $h : \{-1,1\}^n \to \{-1,1\}$.
2. If $f$ is not $(\epsilon,k)$-regular then $\mathop{\bf Cov}[f,h] > \epsilon$ for some $k$-junta $h : \{-1,1\}^n \to \{-1,1\}$.

We will prove Proposition 12, leaving the proof of Proposition 13 to the exercises.

Proof: For the first statement, suppose $f$ is $(\epsilon,k)$-regular and let $J \subseteq [n]$, $z \in \{-1,1\}^{J}$, where $|J| \leq k$. Then the statement holds because $\mathop{\bf E}[f_{\overline{J} \mid z}] = \widehat{f}(\emptyset) + \sum_{\emptyset \neq T \subseteq J} \widehat{f}(T)\,z^T$ (Exercise 1.16) and each of the at most $2^{k}$ terms $|\widehat{f}(T)\,z^T| = |\widehat{f}(T)|$ is at most $\epsilon$.

For the second statement, by subtracting a constant from $f$ we may assume without loss of generality that its mean is $0$. Suppose now that $|\widehat{f}(U)| > \epsilon$ where $0 < |U| \leq k$. Let $\boldsymbol{z} \sim \{-1,1\}^U$ be a random restriction to the coordinates $U$. By Corollary 3.22 we have $\mathop{\bf E}[\widehat{f_{\mid \boldsymbol{z}}}(\emptyset)^2] = \sum_{T \subseteq U} \widehat{f}(T)^2 \geq \widehat{f}(U)^2 > \epsilon^2.$ Thus there must exist a particular $z$ such that $\widehat{f_{\mid z}}(\emptyset)^2 > \epsilon^2$; i.e., $|\widehat{f_{\mid z}}(\emptyset)| > \epsilon$. This restriction changes $f$’s mean by more than $\epsilon$. $\Box$

Taking $\epsilon = 0$ in the above two propositions we obtain:

Corollary 14 For $f : \{-1,1\}^n \to {\mathbb R}$, the following are equivalent:

1. $f$ is $(0, k)$-regular.
2. Every restriction of at most $k$ coordinates leaves $f$’s mean unchanged.
3. $\mathop{\bf Cov}[f,h] = 0$ for every $k$-junta $h : \{-1,1\}^n \to \{-1,1\}$.
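Corollary 14 can be checked directly on a small example (a brute-force Python sketch): the parity of three coordinates on $n = 4$ inputs is $(0,2)$-regular, so every restriction of at most $2$ coordinates leaves its mean unchanged, and it has zero covariance with every boolean $2$-junta.

```python
from itertools import combinations, product

n, k = 4, 2
points = list(product([1, -1], repeat=n))
f = {x: x[0] * x[1] * x[2] for x in points}   # parity of 3 coordinates: (0,2)-regular
N = len(points)
mean = sum(f.values()) / N

# Condition (2): restrictions of at most k coordinates leave the mean unchanged
shifts = []
for r in range(1, k + 1):
    for J in combinations(range(n), r):
        for z in product([1, -1], repeat=r):
            rest = [x for x in points if all(x[j] == z[t] for t, j in enumerate(J))]
            shifts.append(abs(sum(f[x] for x in rest) / len(rest) - mean))

# Condition (3): zero covariance with every boolean k-junta h
covs = []
for J in combinations(range(n), k):
    for table in product([1, -1], repeat=2 ** k):         # any truth table on J
        h = {x: table[sum((x[j] == -1) << t for t, j in enumerate(J))] for x in points}
        covs.append(abs(sum(f[x] * h[x] for x in points) / N - mean * sum(h.values()) / N))

assert max(shifts) == 0 and max(covs) == 0
```

This is a special case of the $k$-resilient parities of Examples 16 below: $\chi_S$ with $|S| = k+1 = 3$.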
If $f$ is a probability density, condition (3) is equivalent to $\mathop{\bf E}_{{\boldsymbol{x}} \sim f}[h({\boldsymbol{x}})] = \mathop{\bf E}[h]$ for every $k$-junta $h : \{-1,1\}^n \to \{-1,1\}$. For such functions, additional terminology is used:

Definition 15 If $f : \{-1,1\}^n \to \{-1,1\}$ is $(0,k)$-regular, it is also called $k$th-order correlation immune. If $f$ is in addition unbiased then it is called $k$-resilient. Finally, if $\varphi : {\mathbb F}_2^n \to {\mathbb R}^{\geq 0}$ is a $(0,k)$-regular probability density then we call $\varphi$ (and the associated probability distribution) $k$-wise independent.

Examples 16 Any parity function $\chi_S : \{-1,1\}^n \to \{-1,1\}$ with $|S| = k+1$ is $k$-resilient. More generally, so is $\chi_S \cdot g$ for any $g : \{-1,1\}^n \to \{-1,1\}$ which does not depend on the coordinates in $S$. For a good example of a correlation immune function which is not resilient, consider $h : \{-1,1\}^{3m} \to \{-1,1\}$ defined by $h = \chi_{\{1, \dots, 2m\}} \wedge \chi_{\{m+1, \dots, 3m\}}$. This $h$ is not unbiased, being $\mathsf{True}$ on only a $1/4$-fraction of inputs. However, its bias does not change unless at least $2m$ input bits are fixed; hence $h$ is $(2m-1)$th-order correlation immune.

We conclude this section with a diagram indicating how our various notions of pseudorandomness compare:

Comparing notions of pseudorandomness: arrows go from stronger notions to (strictly) weaker ones

For precise quantitative statements, counterexamples showing that no other relationships are possible, and explanations for why these notions essentially coincide for monotone functions, see the exercises.

### 17 comments to §6.1: Notions of pseudorandomness

• Chin Ho Lee In the proof of Proposition 1, the R.H.S. of the first inequality should be 2\exp(-t^2 2^{n+1}) (without /), and the choice of t should be \sqrt{n}2^{-n/2} (without the constant 2).
• Thanks, definitely a bug in there. Hopefully it’s accurate now.
• Chin Ho Lee Yes, it is correct now. I think the bound can be improved by a bit, since for each x in {-1,1}^n the random variable x^S f(x) is either from {0,-1} or {0,1}, and so the R.H.S. can be 2\exp(-t^2 2^n)?
• Chin Ho Lee Sorry, I meant to say 2\exp(-t^2 2^{n+1}).
• AA A few small things:
- In Examples 6, “uniform distribution on $F_2$” should be on “$F_2^n$” (exponent ^n is missing).
- In the proof of Proposition 12, the restricted function in the displayed formula $f_{J | z}$ I guess should be $f_{\overline{J} | z}$. Why not use $J$ instead of $\overline{J}$ everywhere?
- There is a lost curly bracket right after the beginning of that proof.
- In the statement of Corollary 14, what is “condition (3)”?
• Thanks a lot AA, I think I fixed everything!
• Noam Lifshitz In the proof of the second statement of Prop. 12: I am probably missing something, but I think $\hat{g}(j) = \sum_{j \in S \subseteq U} \pm\hat{f}(S)$ rather than $\hat{f}(U)$.
• Hmm, you’re right, this is a bad proof. The simplest correct (?) proof I can think of right now is the following: Assume WLOG that the mean of f is 0. Let U be a Fourier coefficient of cardinality between 0 and k and magnitude at least $\epsilon$. Consider restricting the coordinates in U at random, forming the function g. Now $\mathbf{E}[\widehat{g}(\emptyset)^2] = \sum_{S \subseteq U} \widehat{f}(S)^2$ (this is Corollary 3.22), which is at least $\epsilon^2$ because of the $S = U$ term in the sum. Thus there exists a restriction z such that $\widehat{f|_z}(\emptyset)^2 \geq \epsilon^2$, meaning $|\widehat{f|_z}(\emptyset)| \geq \epsilon$, so this restriction does the trick. Do you see a simpler proof? I think you can also get the required restriction by a deterministic process (set the bits one at a time to make sure the coefficient exceeding $\epsilon$ always stays at least $\epsilon$ in magnitude). But maybe that’s too annoying to explain. Thanks for finding this mistake!
• Noam Lifshitz I don’t know if it’s simpler, but here is another proof (with $U$ and $f$ as in your proof): Assume $|\widehat{f}(U)| > \epsilon$ and restrict $U$’s coordinates to $z$ to get a function $g$. Then $h(z) := \widehat{g}(\varnothing) - \widehat{f}(\varnothing) = \sum_{\varnothing \neq T \subseteq U} \widehat{f}(T)\,z^{T}.$ It suffices to show that $\|h\|_{\infty} > \epsilon$, which follows from $\|h\|_{\infty} \geq |\mathbf{E}[h \chi_{U}]| = |\widehat{h}(U)| = |\widehat{f}(U)| > \epsilon.$
• Great, thanks — I’ll use this proof in the final version of the book (with acknowledgment)!
• Matt Franklin There may be a small typo at the start of the proof of the second statement on Prop. 6.12 (middle of p. 135 of book): change “… where $0 < |U| \leq k$” to “… where $0 < |J| \leq k$”.
• Great catch, thanks!