http://mathoverflow.net/questions/48798/non-finitely-generated-subalgebra-of-a-finitely-generated-algebra/48831
# Non-finitely-generated subalgebra of a finitely-generated algebra

Ok, I feel a little bit ashamed by my question. This afternoon in the train, I looked for a counter-example:

- $k$ a field
- $A$ a finitely generated $k$-algebra
- $B$ a $k$-subalgebra of $A$ that is not finitely generated

Finally, I have found this:

- $k$ any field
- $A = k[x,y]$
- $B = k[xy, xy^2, xy^3, \dots]$ (proof: exercise)

My questions are:

1) What is your usual counter-example?
2) Under which conditions can we conclude that $B$ is f.g.?
3) How would you interpret this counter-example geometrically?

- 2) For example, if $B$ is the invariant ring of $A$ under the action of a group $G$, and $A$ is a completely reducible $G$-module, then $B$ is f.g. This is Hilbert's theorem. It is far from being an if-and-only-if, however, and it seems hard to construct non-f.g. invariant rings even without complete reducibility. – darij grinberg Dec 9 '10 at 18:44
- @nicojo, I see no need to start your question by "Ok, I feel a little bit ashamed by my question", there are a lot worse MO questions out there. – J.C. Ottem Dec 9 '10 at 19:21
- Although I'm sure this is no surprise, it might be worth adding that subalgebras of an algebra with a single generator are finitely generated (ams.org/journals/proc/1957-008-05/S0002-9939-1957-0091273-0/…). So the case you have with 2 generators is best-possible in this sense. – George Lowther Dec 9 '10 at 22:50

It is easy to make examples of such subrings. For example, take $A=k[x,y]$ and consider the subring $$B=k[x^a y^b : 0\le \frac{b}{a}<\sqrt{2}].$$ Geometrically, $B$ is spanned by monomials whose exponent vectors lie below the line $y=\sqrt{2}x$. I think your question is quite interesting in the setting where $B=A^G\subset A$ is the invariant ring of some group action on $A$ (or equivalently, on the space $X=\mbox{Spec }A$).
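A sketch of the "exercise" for the questioner's example (my addition, not from the thread):

```latex
% Why B = k[xy, xy^2, xy^3, \dots] is not finitely generated:
% any finite set of generators would lie in B_N = k[xy, xy^2, \dots, xy^N]
% for some N.  Every generator has x-degree >= 1, so a product of two or
% more generators has x-degree >= 2.  Hence the x-degree-1 part of B_N is
% spanned by xy, xy^2, \dots, xy^N alone, and
\[
  x y^{N+1} \in B \setminus B_N ,
\]
% so no finite set of generators can suffice.
```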
In many cases this subalgebra is finitely generated, which allows one to define a quotient space $X/G$ by $Y=\mbox{Spec }A^G$ with many good properties. This happens for example if $G$ is finite or reductive. However, as shown by Nagata's famous counterexample to Hilbert's 14th problem, $A^G$ may be infinitely generated, so the problem of defining such quotients in general is subtle. (Nagata's construction is indeed very geometrical, but a bit too complicated to restate here.)

Dear Nicojo, since you now have many counter-examples, let me give you a situation where $B$ is finitely generated, in line with your question 2). I am going to adopt your notations with the important caveat that $k$ is a ring which needn't be a field.

Theorem of Artin–Tate. Consider the inclusions of rings $k \subset B \subset A$. Suppose that $k$ is Noetherian, that $A$ is a finitely generated algebra over $k$ and that $A$ is a finitely generated module over $B$. Then $B$ is a finitely generated algebra over $k$.

You might interpret this as saying that when $B$ is sufficiently close to $A$, finite generation is preserved. You can find the proof in Atiyah–Macdonald, Proposition 7.8, page 81. From this theorem you can then prove Zariski's result that an extension of fields that is finitely generated as an algebra is actually a finite-dimensional extension (Proposition 7.9, page 82, loc. cit.), and then Hilbert's Nullstellensatz is literally an exercise: exercise 14, page 85. So this result of Artin–Tate is really basic in commutative algebra and algebraic geometry, not surprisingly if you consider the authors (the Artin here is Emil, Mike's father).

1) Here's another example: $k[y, xy, y/x, y/x^2, y/x^3, \dots]$. The localization of this at the origin is a valuation ring (and this idea can be used to construct many other examples).

2+3) If you are constructing examples of this type, many are constructed by gluing.
In other words, as pushouts of diagrams of affine schemes $$\{ X \leftarrow Z \rightarrow W \},$$ where $Z \to X$ is a closed immersion and $Z \rightarrow W$ is arbitrary. The condition you then want in (2) is for $Z \rightarrow W$ to be a finite map. Some relevant references include Ferrand, "Conducteur, descente et pincement", MR2044495 (2005a:13016) and Artin, "Algebraization of Formal Moduli II: Existence of Modifications", MR0260747 (41 #5370).

For example, the ring $k[x, xy, xy^2, \dots]$ is the pushout of $$\{ \mathbb{A}^2 \leftarrow \text{coordinate axis} \rightarrow \text{point} \}.$$ This gives a nice geometric interpretation: you just contracted a coordinate axis to a point, and you can contract other schemes and get new examples. Note the $Z \to W$ in this example is not finite.

My example in 1) is the pushout of $$\{ \mathbb{A}^2 \setminus V(x) \leftarrow \text{Spec } k[x,y,x^{-1}]/(y) \rightarrow \text{Spec } k[x] \},$$ where the maps are the obvious ones. The $Z \rightarrow W$ map is not finite in this example either.

- I think that your description corresponds actually to B=k[x, xy, x.y^2, x.y^3, ...]. – user2330 Dec 9 '10 at 19:53
- nicojo, you are right, I misread it. I'll fix it now. – Karl Schwede Dec 9 '10 at 21:26
- One might also want to look at Karl's paper on glueing: ams.org/mathscinet/search/… – Sándor Kovács Dec 9 '10 at 21:44
https://www.ipht.fr/en/Phocea/Vie_des_labos/Seminaires/index.php?id=994117
The structure of the representations of the affine Temperley-Lieb algebras on the periodic XXZ chain

Theo Pinet

The affine Temperley-Lieb algebras aTLN($\beta$) are a family of infinite-dimensional algebras generalizing the well-known Temperley-Lieb algebras TLN($\beta$). They play, for the periodic XXZ chain, the role played by the original Temperley-Lieb algebra for the open XXZ chain. Their representation theory is much richer than that of the original TL family and admits a lot of similarities with the representation theory of the Virasoro algebra Vir. In particular, we will show in this talk that the representations of aTLN($\beta$) on the periodic XXZ chain admit a structure akin to that of the so-called Feigin-Fuchs Vir-modules. To do this, we will highlight the link between these representations and other canonical modules over aTLN($\beta$) (the standard modules) while building on the well-known quantum Schur-Weyl duality between TLN($\beta$) and Uqsl2.

The seminar is online only. Internet link to be collected from the Organizer: [email protected]
http://mathhelpforum.com/calculus/212463-piece-wise-function-help-print.html
# Piece wise function help

• Feb 2nd 2013, 08:24 PM Oldspice1212

Piece wise function help

f(x) = (x^2 − 4)/(x − 2) if x < 2; ax^2 − bx + 3 if 2 ≤ x < 3; 4x − a + b if x ≥ 3

OK guys so I'm having trouble with this, it includes a lot of algebra steps but I'll put some of them to show that I DID attempt it several times, not sure if I'm just tired or I messed up at some point.... I factored out at the start of course...

(x-2)(x+2)/(x-2) = ax^2 - bx + 3 -----> (2+2) = a(2)^2 - b(2) + 3
4 = 4a - 2b + 3
4 - 3 = 4a - 2b
1 = 4a - 2b
(1 + 2b)/4 = 4a/4
1/4 + (1/2)b = a

ax^2 - bx + 3 = 4x - a + b
a(3)^2 - b(3) + 3 = 4(3) - a + b
9a + a - 3b - b = 12 - 3
10a - 4b = 9
10(1/4 + (1/2)b) - 4b = 9
5/2 + 5b - 4b = 9
5/2 - 1b = 9
-1b = 13/2 = -13/2
1/4 + 1/2(-13/2) = a
1/4 + (-13/4) = a
-3 = a
a = -3, b = -13/2

• Feb 2nd 2013, 08:51 PM Prove It

Re: Piece wise function help

This is extremely hard to read. Is your function \displaystyle \begin{align*} f(x) = \begin{cases} \frac{x^2 - 4}{x - 2} \textrm{ if } x < 2 \\ a x^2 - b x + 3 \textrm{ if } 2 \leq x < 3 \\ 4x - a + b \textrm{ if } x \geq 3 \end{cases} \end{align*}?

• Feb 2nd 2013, 09:43 PM Oldspice1212

Re: Piece wise function help

Yes exactly right, sorry about that, I wasn't sure how to make it look like that so I copied the question exactly :S

• Feb 2nd 2013, 09:46 PM Prove It

Re: Piece wise function help

What are you actually trying to do with this question? Are you trying to find the values of a and b which make this function continuous?

• Feb 2nd 2013, 10:05 PM Oldspice1212

Re: Piece wise function help

Quote: Originally Posted by Prove It: "What are you actually trying to do with this question? Are you trying to find the values of a and b which make this function continuous?"

Yes sir, sorry for not mentioning that, been up for longer than I should.

• Feb 2nd 2013, 11:01 PM Prove It

Re: Piece wise function help

It should be clear that the function needs to approach 4 if you make x approach 2 from the left (why?).
For the function to be continuous at x = 2, it needs to approach the same value from the right. So that means \displaystyle \begin{align*} a(2)^2 - b(2) + 3 = 4 \end{align*}.

For the function to be continuous at x = 3, we require the function to approach the same value from the left as from the right. So \displaystyle \begin{align*} a(3)^2 - b(3) + 3 = 4(3) - a + b \end{align*}.

Simplify both of these equations and solve them simultaneously for a and b.

• Feb 3rd 2013, 12:48 AM Oldspice1212

Re: Piece wise function help

Ah I think I missed a step, I got a = 7/2 and b = 13/2 this time, how's that look?

9 = 10a - 4b
2 = 8a - 4b
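The simultaneous solution can be checked quickly; a small sketch (my addition, not from the thread) using exact arithmetic:

```python
from fractions import Fraction

# Continuity at x = 2:  4a - 2b + 3 = 4   ->  4a - 2b = 1
# Continuity at x = 3:  9a - 3b + 3 = 12 - a + b  ->  10a - 4b = 9
# Doubling the first gives 8a - 4b = 2; subtracting it from the
# second leaves 2a = 7, so a = 7/2 and then b follows.
a = Fraction(7, 2)
b = (4 * a - 1) / 2  # from 4a - 2b = 1

assert 4 * a - 2 * b == 1
assert 10 * a - 4 * b == 9
print(a, b)  # 7/2 13/2
```

This confirms the values a = 7/2, b = 13/2 reached at the end of the thread.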
https://reference.opcfoundation.org/Core/Part3/5.6.3/index.html
## 5.6.3 Properties

Properties are used to define the characteristics of Nodes. Properties are defined using the Variable NodeClass, specified in Table 13; however, Properties restrict its use. Properties are the leaf of any hierarchy; therefore they shall not be the SourceNode of any hierarchical References. This includes the HasComponent or HasProperty Reference; that is, Properties do not contain Properties and cannot expose their complex structure. However, they may be the SourceNode of any NonHierarchical References. The HasTypeDefinition Reference points to the VariableType of the Property. Since Properties are uniquely identified by their BrowseName, all Properties shall point to the PropertyType defined in OPC 10000-5. Properties shall always be defined in the context of another Node and shall be the TargetNode of at least one HasProperty Reference. To distinguish them from DataVariables, they shall not be the TargetNode of any HasComponent Reference. Thus, a HasProperty Reference pointing to a Variable Node defines this Node as a Property. The BrowseName of a Property is always unique in the context of a Node; it is not permitted for a Node to refer to two Variables using HasProperty References having the same BrowseName.
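The rules above can be sketched as a toy validity check. This is my own illustrative model, not code from the OPC UA specification or any SDK; the node names and tuple encoding of references are invented for the example:

```python
# Toy model of the 5.6.3 rules: a Variable node counts as a Property when it
# is the TargetNode of a HasProperty Reference; a Property must not be the
# SourceNode of hierarchical References (it is a leaf) and must not also be
# the TargetNode of a HasComponent Reference.
HIERARCHICAL = {"HasComponent", "HasProperty"}  # subset relevant to the text

def is_property(node, references):
    """references: iterable of (source_id, reference_type, target_id)."""
    return any(t == node and r == "HasProperty" for (_, r, t) in references)

def violates_property_rules(node, references):
    if not is_property(node, references):
        return False
    sources_hierarchical = any(
        s == node and r in HIERARCHICAL for (s, r, _) in references)
    target_of_component = any(
        t == node and r == "HasComponent" for (_, r, t) in references)
    return sources_hierarchical or target_of_component

refs = [("Motor", "HasProperty", "SerialNumber")]
print(violates_property_rules("SerialNumber", refs))  # False: a valid leaf

refs.append(("SerialNumber", "HasProperty", "Nested"))
print(violates_property_rules("SerialNumber", refs))  # True: Properties do not contain Properties
```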
https://meangreenmath.com/2017/03/24/my-favorite-one-liners-part-52/
# My Favorite One-Liners: Part 52

In this series, I'm compiling some of the quips and one-liners that I'll use with my students to hopefully make my lessons more memorable for them. Today's story is a continuation of yesterday's post.

When I teach regression, I typically use this example to illustrate the regression effect:

Suppose that the heights of fathers and their adult sons both have mean 69 inches and standard deviation 3 inches. Suppose also that the correlation between the heights of the fathers and sons is 0.5. Predict the height of a son whose father is 63 inches tall. Repeat if the father is 78 inches tall.

Using the formula for the regression line $y = \overline{y} + r \displaystyle \frac{s_y}{s_x} (x - \overline{x})$, we obtain the equation $y = 69 + 0.5(x-69) = 0.5x + 34.5$, so that the predicted height of the son is 66 inches if the father is 63 inches tall. However, the prediction would be 73.5 inches if the father is 78 inches tall. As expected, tall fathers tend to have tall sons, and short fathers tend to have short sons. Then, I'll tell my class:

However, to the psychological comfort of us short people, tall fathers tend to have sons who are not quite as tall, and short fathers tend to have sons who are not quite as short.

This was first observed by Francis Galton (see the Wikipedia article for more details), a particularly brilliant but aristocratic (read: snobbish) mathematician who had high hopes for breeding a race of super-tall people with the proper use of genetics, only to discover that the laws of statistics naturally prevented this from occurring. Defeated, he called this phenomenon "regression toward the mean," and so we're stuck with calling fitting data to a straight line "regression" to this day.
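The arithmetic in the example can be checked directly; a quick sketch (my addition, using only the numbers quoted in the post):

```python
# Regression line y = ybar + r*(sy/sx)*(x - xbar) with the post's numbers:
# xbar = ybar = 69, sx = sy = 3, r = 0.5, which gives y = 0.5*x + 34.5.
def predict_son_height(father_height):
    xbar = ybar = 69.0   # mean height, inches
    sx = sy = 3.0        # standard deviation, inches
    r = 0.5              # father-son correlation
    return ybar + r * (sy / sx) * (father_height - xbar)

print(predict_son_height(63))  # 66.0: taller than his short father
print(predict_son_height(78))  # 73.5: shorter than his tall father
```

Both predictions land closer to the mean of 69 inches than the fathers' heights, which is exactly the regression effect the post describes.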
https://tex.stackexchange.com/questions/479776/interaction-between-hyperref-and-marginpar
Interaction between hyperref and marginpar?

I have a document with margin notes in which \cite commands are used. I have the problem that the first line of the margin note does not match the line in the text if the margin note starts with \cite and the option backref is used in hyperref. MWE:

\documentclass{article}
\setlength{\textwidth}{12cm} % to have enough space for margin notes
\setlength{\marginparwidth}{8cm} % ditto
\usepackage{natbib}
\usepackage[backref]{hyperref}
\bibliographystyle{chicago}
\begin{document}
Note \marginpar{See \cite{Other1980}} that this margin comment should start on this line and it does.
\vspace{\baselineskip}
Note \marginpar{\cite{Other1980}} that this margin comment should start on this line but it does not.
\bibliography{refs}
\end{document}

with refs.bib containing

@article{Other1980,
  Author = {A. N. Other},
  Title = {1 + 1 = 3},
  Journal = {J. Irreproducible Results},
  Year = 1980}

The output is: note how 'Other (1980)' is below the corresponding line of text if the margin note starts with \cite, whereas it is correctly aligned if the margin comment starts with text. On the other hand, if I comment out the option backref, I get both margin notes correctly aligned: now 'Other (1980)' is correctly aligned with the second line of text.

How can I get proper alignment with backref even if the margin note does start with \cite? Am I doing something wrong or is this a bug?

• \marginpar{\leavevmode\cite{moore}} gives the expected results with both backref settings. I assume with backref hyperref adds code before the actual \cite and confuses the boxing... – moewe Mar 16 at 12:12
https://www.physicsforums.com/threads/find-the-tangent-line-between-two-surfaces.280093/
# Find the tangent line between two surfaces

1. Dec 16, 2008 ### jheld

1. The problem statement, all variables and given/known data
Let C be the intersection of the two surfaces:
S1: x^2 + 4y^2 + z^2 = 6
S2: z = x^2 + 2y
Show that the point (1, -1, -1) is on the curve C and find the tangent line to the curve C at the point (1, -1, -1).

2. Relevant equations
Partial derivatives, maybe the gradient vector and directional derivatives; maybe symmetric equations like (x - x_0)/(partial derivative with respect to x) = ... etc.

3. The attempt at a solution
I'm just kind of wondering where to start. I think I should be making these into vectors, but I'm not quite sure how to do so, and of course thinking about partial derivatives.

2. Dec 16, 2008 ### NoMoreExams

Well find the intersection, you know that S1 can be written as $$x^{2} = 6 - 4y^{2} - z^{2}$$ and S2 can be written as $$x^{2} = z - 2y$$ so set them equal to each other to find their intersection. Are you sure your 2nd equation is correct?

3. Dec 16, 2008 ### Dick

You know that the tangent direction is tangent to both surfaces, and the gradient of each surface is normal to that tangent direction. Use the two gradient directions to deduce the tangent direction.

4. Dec 16, 2008 ### jheld

Both equations are written correctly. I'm trying to find their point of intersection and I thought to complete the square, but it doesn't seem to be working. I'm unsure of how to find the gradient vector between two surfaces.

5. Dec 16, 2008 ### Dick

Find the gradient of each surface separately. That gives you two vectors which are orthogonal to the tangent direction. How can you find a vector that's orthogonal to two given vectors?

6. Dec 16, 2008 ### jheld

Okay I found the gradients as:
S1: <2x, 8y, 2z>
S2: <2x, 2, -1>
I went on to find their symmetric equations. But, I'm not sure how to relate them.

7. Dec 16, 2008 ### Dick

You are interested in the point x=1, y=(-1) and z=(-1).
The vector you want is perpendicular to both those vectors.

8. Dec 16, 2008 ### jheld

I'm not quite sure how to show a vector like that. Is that supposed to be the gradient vector? Should I use the dot product between the S1 and S2 directional derivatives to get that?

9. Dec 16, 2008 ### Dick

You were supposed to say, "Ah ha! I can use the cross product!".

10. Dec 16, 2008 ### jheld

I use the cross-product? Oh, well I suppose that could work, haha. Would I calculate the cross-product before plugging in the values? That leaves me with a bunch of x, y and z's.

11. Dec 16, 2008 ### Dick

Yes, the cross product of two vectors is perpendicular to both. It doesn't matter whether you plug in the numbers before or after, does it? Whatever you find easier. When you are done you will have a vector that points in the direction of the tangent line, right?

12. Dec 16, 2008 ### jheld

I think I understand it now. I think it would be easier to plug them in before, though, less writing, you know? Yes, it does point in the direction of the tangent line. Thanks for all your help :)
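The cross-product route suggested in the thread can be sketched numerically (my addition, not a post from the thread; S2 is rewritten in implicit form F = x^2 + 2y - z = 0 so both gradients make sense):

```python
# Tangent direction to C at (1, -1, -1) as the cross product of the two
# surface gradients.  S1: x^2 + 4y^2 + z^2 = 6; S2 as F = x^2 + 2y - z = 0.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

x, y, z = 1, -1, -1
assert x**2 + 4 * y**2 + z**2 == 6 and z == x**2 + 2 * y  # point lies on C

grad_s1 = (2 * x, 8 * y, 2 * z)  # (2, -8, -2)
grad_s2 = (2 * x, 2, -1)         # (2,  2, -1)
direction = cross(grad_s1, grad_s2)
print(direction)  # (12, -2, 20), proportional to (6, -1, 10)
```

Any nonzero multiple works, so the tangent line can be written as (1, -1, -1) + t(6, -1, 10).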
https://math.stackexchange.com/questions/3228550/probability-of-population-parameter-to-be-contained-in-confidence-interval
# Probability of population parameter to be contained in confidence interval

First of all, I have found this question: Interpretation of confidence interval, which might seem like a duplicate, as well as this explanation http://onlinestatbook.com/2/estimation/confidence.html within it -- but I find it hard to understand. In the second link it is stated that:

> Confidence intervals for means are intervals constructed using a procedure that will contain the population mean a specified proportion of the time, typically either 95% or 99% of the time. An example of a 95% confidence interval is shown below: 72.85 < μ < 107.15. There is good reason to believe that the population mean lies between these two bounds of 72.85 and 107.15 since 95% of the time confidence intervals contain the true mean.

But also:

> It is natural to interpret a 95% confidence interval as an interval with a 0.95 probability of containing the population mean. However, the proper interpretation is not that simple. One problem is that the computation of a confidence interval does not take into account any other information you might have about the value of the population mean. For example, if numerous prior studies had all found sample means above 110, it would not make sense to conclude that there is a 0.95 probability that the population mean is between 72.85 and 107.15. What about situations in which there is no prior information about the value of the population mean? Even here the interpretation is complex. The problem is that there can be more than one procedure that produces intervals that contain the population parameter 95% of the time. Which procedure produces the "true" 95% confidence interval?

For the sake of the discussion I'd rather refer to a situation in which one can have a huge amount of data, and the confidence level can be extremely high.
My questions are: • If one can't say it is as likely for the population parameter to be inside the interval as the confidence level, why does the author write "There is good reason to believe that the population mean lies between these two bounds (...)" in the first paragraph? • In the second quoted paragraph the author refers to a situation in which there was a prior information. I don't see how that is relevant. If your confidence level is, say $$1-2^{-30}$$, it is indeed very unlikely you get an interval which contradicts previous studies. If it indeed happens, one must conclude that one of you had a mistake. You, or the previous studies. Where am I wrong? • Also in the second paragraph the author writes: ... there can be more than one procedure that produces intervals that contain the population parameter 95% of the time. Which procedure produces the "true" 95% confidence interval? I didn't understand this line, what is he trying to say? To summarize, I'll try to compare it to null hypothesis rejecting, which I understand better: If I randomly pick a confidence interval from a set of confidence intervals, of which $$1-2^{-30}$$ contain the true population parameter, why can't I say it is as likely I have picked a good interval, as it is likely the null hypothesis should be rejected when $$p\leq2^{-30}$$? Note: I am a beginning math student. I have taken with passion some basic math classes such as Linear Algebra 101, and Calculus, but nothing more. In statistics I have a reasonable understanding of basic hypothesis testing (null hypothesis, statistical significance, p-values).
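The "procedure" reading quoted above can be illustrated by simulation (a sketch I am adding, not part of the original question): repeatedly draw samples from a population with a known mean and count how often the nominal 95% interval actually covers it.

```python
import random
import statistics

random.seed(0)
mu, sigma, n, z = 10.0, 2.0, 50, 1.96  # known population; z for ~95% coverage
trials = 2000
covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = statistics.fmean(sample)
    half = z * sigma / n ** 0.5  # known-sigma interval, for simplicity
    if m - half <= mu <= m + half:
        covered += 1
print(covered / trials)  # close to 0.95
```

The long-run coverage frequency is a property of the interval-generating procedure; whether a *particular* realized interval contains μ is exactly the point the quoted text says is subtle.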
https://www.physicsforums.com/threads/gre-relativity-problem.708297/
# GRE Relativity Problem

1. Sep 2, 2013 ### PsychonautQQ

1. The problem statement, all variables and given/known data
http://grephysics.net/ans/8677/20

So to do this, I solved for the total rest energy of both particles.
Rest energy of Kaon + K = Rest energy of Proton
Rest energy of Proton - Rest energy of Kaon = K
K = P^2/2m
((Rest energy of Proton - Rest energy of Kaon)*(2*Mass of Kaon))^(1/2) = P
P = mv/sqrt(1 - v^2/c^2)

If I do this and do all the algebra correctly and solve for v, will this method give me the correct answer? I got the wrong answer but I suck at numbers X_x. I realize after looking at the answers on the website this is a poor way to do this problem with GRE time constraints, I just want to know if this thought process is flawed or not.

2. Sep 2, 2013 ### Staff: Mentor

If the rest energy of a particle is 494 MeV, and its total energy is 938 MeV, what is γ? Since $γ=\frac{1}{\sqrt{1-\beta^2}}$, square both sides and solve for β^2 in terms of γ^2. What is the value of β?

3. Sep 2, 2013 ### PsychonautQQ

Yeah, I see that this is the best way to do it. I was just wondering if my way works (even though it would take way longer).
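The mentor's route can be sketched numerically (my addition, using only the 494 MeV and 938 MeV figures quoted in the thread):

```python
import math

rest_energy = 494.0   # MeV, rest energy quoted in the thread
total_energy = 938.0  # MeV, total energy quoted in the thread

gamma = total_energy / rest_energy   # from E = gamma * m * c^2
beta = math.sqrt(1 - 1 / gamma**2)   # from gamma = 1 / sqrt(1 - beta^2)
print(round(beta, 2))  # 0.85
```

Note the OP's approach mixes the non-relativistic kinetic energy K = p^2/2m with the relativistic momentum formula, so even with correct algebra it would only approximate this value.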
https://arxiv.org/abs/1001.0987
# Two-body hadronic charmed meson decays

Abstract: We study in this work the two-body hadronic charmed meson decays, including both the PP and VP modes. The latest experimental data are first analyzed in the diagrammatic approach. The magnitudes and strong phases of the flavor amplitudes are extracted from the Cabibbo-favored (CF) decay modes using $\chi^2$ minimization. The best-fitted values are then used to predict the branching fractions of the singly-Cabibbo-suppressed (SCS) and doubly-Cabibbo-suppressed decay modes in the flavor SU(3) symmetry limit. We observe significant SU(3) breaking effects in some of the SCS channels. In the case of VP modes, we point out that the $A_P$ and $A_V$ amplitudes cannot be completely determined based on currently available data. We conjecture that the quoted experimental results for both $D_s^+\to\bar K^0K^{*+}$ and $D_s^+\to \rho^+\eta'$ are overestimated. We compare the sizes of color-allowed and color-suppressed tree amplitudes extracted from the diagrammatic approach with the effective parameters $a_1$ and $a_2$ defined in the factorization approach. The ratio $|a_2/a_1|$ is more or less universal among the $D \to {\bar K} \pi$, ${\bar K}^* \pi$ and ${\bar K} \rho$ modes. This feature allows us to discriminate between different solutions of topological amplitudes. For the long-standing puzzle about the ratio $\Gamma(D^0\to K^+K^-)/\Gamma(D^0\to\pi^+\pi^-)$, we argue that, in addition to the SU(3) breaking effect in the spectator amplitudes, the long-distance resonant contribution through the nearby resonance $f_0(1710)$ can naturally explain why $D^0$ decays more copiously to $K^+ K^-$ than to $\pi^+ \pi^-$ through the $W$-exchange topology.

Comments: 32 pages, 5 figures. An alternative method for error bar extraction is used; last columns of Tables I to VI, and all entries in Tables VII, VIII and X are modified. To appear in PRD.
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Experiment (hep-ex)
Journal reference: Phys.Rev.D81:074021,2010
DOI: 10.1103/PhysRevD.81.074021
Cite as: arXiv:1001.0987 [hep-ph] (or arXiv:1001.0987v2 [hep-ph] for this version)

## Submission history

From: Hai-Yang Cheng
[v1] Wed, 6 Jan 2010 21:37:37 UTC (189 KB)
[v2] Wed, 10 Mar 2010 02:20:26 UTC (190 KB)
https://www2.physics.ox.ac.uk/contacts/people/march-russell/publications/6728
# Publications by John March-Russell

## WIMPonium and Boost Factors for Indirect Dark Matter Detection

Phys.Lett.B676:133-139,2009 (2008)

J March-Russell, SM West

We argue that WIMP dark matter can annihilate via long-lived "WIMPonium" bound states in reasonable particle physics models of dark matter (DM). WIMPonium bound states can occur at or near threshold, leading to substantial enhancements in the DM annihilation rate, closely related to the Sommerfeld effect. Large "boost factor" amplifications in the annihilation rate can thus occur without large density enhancements, possibly preferring colder, less dense objects such as dwarf galaxies as locations for indirect DM searches. The radiative capture to and transitions among the WIMPonium states generically lead to a rich energy spectrum of annihilation products, with many distinct lines possible in the case of 2-body decays to $\gamma\gamma$ or $\gamma Z$ final states. The existence of multiple radiative capture modes further enhances the total annihilation rate, and the detection of the lines would give direct, over-determined information on the nature and self-interactions of the DM particles.
http://www.opuscula.agh.edu.pl/om-vol31iss3art3
Opuscula Math. 31, no. 3 (2011), 327-339
http://dx.doi.org/10.7494/OpMath.2011.31.3.327

Monotone iterative technique for finite systems of nonlinear Riemann-Liouville fractional differential equations

Z. Denton, A. S. Vatsala

Abstract. Comparison results for the nonlinear scalar Riemann-Liouville fractional differential equation of order $q$, $0 < q \leq 1$, are presented without requiring the Hölder continuity assumption. A monotone method is developed for finite systems of fractional differential equations of order $q$, using coupled upper and lower solutions. Existence of minimal and maximal solutions of the nonlinear fractional differential system is proved.

Keywords: fractional differential systems, coupled lower and upper solutions, mixed quasimonotone property.

Mathematics Subject Classification: 34A08, 24A34.
https://www.goodfruit.com/calibration-steps/
1) Mark the spot where the sprayer is parked when filling the tank with water. Know the total volume of the tank and the effective boom width.

2) Spray a measured distance on a paved surface, driving at the tractor speed you will use in the field. Note the spray tank pressure.

3) Return to the same spot where the tank was originally filled and measure the amount required to fill the tank back to the original level.

4) Spray the measured paved surface again and repeat one more time, if necessary.

Example: A spray rig boom is ten feet wide. An acre is equivalent to 43,560 square feet. Because you don't want to take the time to spray an entire acre, reduce the dimensions by one-tenth. Hence, spray a strip 436 feet long. Multiply the number of gallons needed to fill up the tank by ten to find the amount of carrier that would be used per acre. For example, if it took four gallons to fill up the tank, that would be the equivalent of 40 gallons per acre. Check your figures against the recommended rate for the material you want to use. To each 40 gallons of carrier in the supply tank, add the number of ounces, pints, quarts, gallons, pounds, or grams needed to obtain the recommended application rate.

Follow similar steps to calibrate an orchard or vineyard fan sprayer: Make a trial application with both sides of the sprayer down a test tree or vine row. Return to the same place where the tank was filled with water and refill the tank, measuring the amount needed to refill it. Multiply the number of gallons needed to refill the tank by 43,560 square feet, the equivalent of an acre. Multiply the length of the test row by the distance between rows, and divide the first product by this test-row area to get the actual spray volume per acre.
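The arithmetic above boils down to scaling a test-strip refill volume up to a full acre. Here is a minimal sketch; the function and variable names are illustrative, the boom example uses the numbers from the text, and the fan-sprayer row numbers are made up for demonstration:

```python
# Sketch of the calibration arithmetic described above. Function names
# and the fan-sprayer test-row numbers are illustrative, not from any tool.

ACRE_SQFT = 43_560  # square feet in one acre

def gallons_per_acre(refill_gal, sprayed_sqft):
    """Scale the carrier used on a test area up to a full acre."""
    return refill_gal * ACRE_SQFT / sprayed_sqft

# Boom sprayer example from the text: a 10 ft boom driven 436 ft
# covers 4,360 sq ft, one tenth of an acre.
boom_rate = gallons_per_acre(refill_gal=4, sprayed_sqft=10 * 436)
print(round(boom_rate))  # about 40 gallons of carrier per acre

# Fan sprayer: sprayed area is test-row length times row spacing,
# e.g. a 300 ft row with 12 ft between rows (hypothetical numbers).
fan_rate = gallons_per_acre(refill_gal=5, sprayed_sqft=300 * 12)
print(fan_rate)  # gallons of carrier per acre for the fan sprayer
```

Once the per-acre carrier volume is known, the chemical dose is added to the tank in proportion, exactly as the text describes.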
https://proceedings.neurips.cc/paper/2017/hash/3cfbdf468f0a03187f6cee51a25e5e9a-Abstract.html
#### Authors

Chuang Wang, Yue Lu

#### Abstract

We analyze the dynamics of an online algorithm for independent component analysis in the high-dimensional scaling limit. As the ambient dimension tends to infinity, and with proper time scaling, we show that the time-varying joint empirical measure of the target feature vector and the estimates provided by the algorithm will converge weakly to a deterministic measure-valued process that can be characterized as the unique solution of a nonlinear PDE. Numerical solutions of this PDE, which involves two spatial variables and one time variable, can be efficiently obtained. These solutions provide detailed information about the performance of the ICA algorithm, as many practical performance metrics are functionals of the joint empirical measures. Numerical simulations show that our asymptotic analysis is accurate even for moderate dimensions. In addition to providing a tool for understanding the performance of the algorithm, our PDE analysis also provides useful insight. In particular, in the high-dimensional limit, the original coupled dynamics associated with the algorithm will be asymptotically "decoupled", with each coordinate independently solving a 1-D effective minimization problem via stochastic gradient descent. Exploiting this insight to design new algorithms for achieving optimal trade-offs between computational and statistical efficiency may prove an interesting line of future research.
https://www.physicsforums.com/threads/polar-axis.133766/
# Polar axis?

1. Sep 26, 2006

### pivoxa15

What is the significance of the polar axis? What would it mean if someone said "Take the z axis (in 3D Cartesian coords) as the polar axis"?

2. Sep 27, 2006

### pivoxa15

I found out what it meant. The polar axis is the axis with respect to which angles are measured.
http://sibconf.igm.nsc.ru/niknik-90/en/reportview/44387
Novosibirsk, Russia, May 30 – June 4, 2011

International Conference "Modern Problems of Applied Mathematics and Mechanics: Theory, Experiment and Applications", devoted to the 90th anniversary of Professor Nikolai N. Yanenko

Canonical domains for almost orthogonal quasi-isometric grids

Reporter: G. A. Chumakov

A special class of canonical domains is discussed for the generation of quasi-isometric grids. The basic computational strategy of our approach is that the physical domain is decomposed into five non-overlapping blocks, which are automatically generated by solving a variational problem. Four of these blocks, the ones that contain the corners, are conformally equivalent to geodesic quadrangles on surfaces of constant curvature, while the fifth block is a conformal image of a non-convex polygon composed of five planar rectangles (or a large rectangle with four small rectangles cut out of its corners). To ensure that the angles of the physical and canonical domains coincide and the conformal modules are the same, the four corner blocks are taken to be geodesic quadrangles on surfaces of constant curvature, namely the sphere, the Euclidean plane, or the Lobachevsky plane, depending on the angles of the physical domain. Within each of these blocks a quasi-isometric grid is generated. Orthogonality of coordinate lines holds in the fifth, central block.

We present an algorithm for automated construction of a one-parameter family of such canonical domains. The parameter $\delta$ is defined in such a way that, according to a theorem that we have proved, for any physical domain there exists a unique value of $\delta$ for which the mapping from the canonical domain onto the physical region is conformal and its derivative is bounded. Application of such a mapping results in a grid inside the physical region that is orthogonal far from the corners.
This strategy ensures the existence of such a canonical domain (the possibility to generate the grid) and the uniqueness of the mapping, i.e., our algorithm cannot converge to two different solutions. Note that the grid lines are the images of the geodesics in the corresponding metrics.

Abstracts file: chumakov_Nik-Nik-90.tex
https://deepai.org/publication/alternation-diameter-of-a-product-object
# Alternation diameter of a product object

We prove that every permutation of a Cartesian product of two finite sets can be written as a composition of three permutations, the first of which only modifies the left projection, the second only the right projection, and the third again only the left projection, and three alternations is indeed the optimal number. We show that for two countably infinite sets, the corresponding optimal number of alternations, called the alternation diameter, is four. The notion of alternation diameter can be defined in any category. In the category of finite-dimensional vector spaces, the diameter is also three. For the category of topological spaces, we exhibit a single self-homeomorphism of the plane which is not generated by finitely many alternations of homeomorphisms that only change one coordinate.
## 1 Introduction

We prove in this paper that every permutation of a Cartesian product of two finite sets can be performed by first permuting on the left, then on the right, then on the left. This is Theorem 2, and its proof is a reduction to Hall's marriage theorem. For two countably infinite sets, every permutation of the Cartesian product can be performed either by permuting left-right-left-right or right-left-right-left (sometimes only one of these orders works). This is Theorem 5. Its proof is elementary set theory, a Hilbert's Hotel type argument. We also prove that for the direct product of two finite-dimensional vector spaces, every linear automorphism can be written as a composition of three linear automorphisms, the first of which only modifies the left coordinate, the second the right coordinate, and the third the left coordinate. This is Theorem 6. In Theorem 8, we show that a similar result does not hold for general self-homeomorphisms of the plane.

It turns out that the results on finite sets and vector spaces have been independently proved several times. See Section 2 for this and other related results.

The study of such alternations arises from the unpublished draft [13] where this optimal number of alternations was studied for automorphism groups of subshifts. Here, we note that this notion applies to the automorphism group of a product object in any category, and study it in some categories of interest. We call the optimal length of a left-right alternation the alternation diameter (see below for more detailed definitions).
The alternation diameter is interesting as a general concept since the automorphism group of a product object always contains the automorphism groups of the left and right components (in a natural way), and more generally what we call the left and right groups (defined below), but in addition it can contain many "entirely new" automorphisms that can only be understood globally, and do not easily reduce to the study of the left or right component of the product separately. In the case that the alternation diameter is bounded for a particular product object and the left and right groups are easy to describe, we get a handle on the elements of the automorphism group, at least as a set. Of course, in complicated categories we cannot expect this to happen very generally, but it can be a helpful technique in the study of automorphism groups of individual objects when it succeeds. In this note we consider some simple categories where the alternation diameter is actually globally bounded over the whole category (though in these simple cases it does not really help in understanding the automorphism group of a product), and show by examples that the left and right groups, not surprisingly, do not in general generate the automorphism group of a product object in more complex categories.

In terms of the alternation diameter, our results are the following:

###### Theorem 1.

The category

• FinSet of finite sets and functions has alternation diameter 3,

• CountSet of countable sets and functions has alternation diameter 4,

• FinVect of finite-dimensional vector spaces and linear maps has alternation diameter 3.

Our examples of categories where the alternation diameter is undefined (meaning that left and right alternations do not generate all automorphisms) are the following: The plane has, for slightly non-trivial reasons, undefined alternation diameter in the category Top of topological spaces and continuous functions.
The square has undefined alternation diameter in Top for trivial reasons. In Pos, the product posets and where is the diamond, i.e. the poset with Hasse diagram , have undefined alternation diameter for (different) trivial reasons. (By a more Pos-specific proof, Maximilien Gadouleau has shown that in fact has undefined alternation diameter for every finite poset .)

We now give some more detailed definitions (see [10] for basic examples and notions of category theory; other than Section 4 and Section 5.2, we are not concerned with the "theory", mainly terminology and concepts). In the category of sets and functions a product object is just the Cartesian product of and , and the automorphism group of an object is just the full permutation group on that set. In concrete categories where products behave this way, for a product object and its automorphism group , we define the Left group , namely the subgroup of containing those automorphisms of that modify only the left coordinate, i.e. satisfy . Define the Right group analogously. Our precise statements about sets are that in the category FinSet of finite sets, , and in the category CountSet of countable sets, (but generally ).

In less Set-like categories one can define and in terms of commutative diagrams (see Section 4 for the diagrams): let be the category-theoretic product of and , and and the projections defining . Define as the group of automorphisms satisfying , and as those satisfying . The groups obtained, as well as the subgroups and , do not depend on the choice of the product (up to isomorphism of the triples ).

In categories of sets, it is convenient to think of as indices into a (possibly infinite) matrix, with indexing the rows and the columns. This is the suggested convention for mental pictures and is the one used in the proofs. Then is the coLumn group that performs a permutation in each column separately (independently of each other) and is the Row group that performs independent permutations on rows.
Then in FinSet states precisely that any -by- matrix containing each element of exactly once can be turned into the matrix by first permuting each row, then each column, then each row. In group-theoretic terms, since and are subgroups, we can state the weaker fact equivalently as follows: the group is generated by , and its diameter with respect to the generating set is at most three (independently of and ). This diameter is in general what we call the alternation diameter of , or of the object having as automorphism group. The alternation diameter of a category is the least upper bound of alternation diameters of products. This diameter can infinite for a single object , when is generated by and , but is not equal to a finite number of alternations (this is what happens in Top and Pos). In principle a category may also have infinite alternation diameter due to objects having arbitrarily large finite alternation diameters. If is not generated by and at all, we say the alternation diameter is undefined, and a category has undefined alternation diameter if some product in it does. ## 2 Existing related work The case of finite sets, which started this paper, is inspired by [16], where it is shown that any permutation of , for three sets with , can be written as a composition of finitely many permutations where alternately only or is permuted. Our notion of alternation diameter in the category of finite sets is related to this definition, as it also means refers to “alternately permuting a product on the left and right”. The difference is that there is no communication coordinate (making it harder), but we allow the permutation to depend on the value on the right when permuting the value on the left, and vice versa (making it easier). It turns out that the results about finite sets and vector spaces have been proved before in the context of memoryless computation: [3, Theorem 3] and [6, Theorem 2] are essentially the same result as Theorem 3. 
More related results on permutation groups can be found in [12]. The motivation and framework is ostensibly different, but the case of finite sets is proved in [2, Theorem 3.1] using a version of Hall’s theorem. Theorem 6 is also known previously: the number of alternations needed for a product of length (which can be obtained from Theorem 6) can be found in [2, 3] (for a larger class of modules). It turns out that is not optimal at least for finite fields: in [11, Theorem 2.1] it is proved that for FinVect over a finite field the optimal number of alternations for a product of length is . It should be possible to extract, from the results of [1], a natural category where alternation diameter is defined but infinite for some objects. We do not know of previous work on alternation diameter in categories that are less obviously computationally relevant, in particular the results on countably infinite sets and homeomorphisms groups are new to the best of our knowledge. In Section 3.4 we discuss a known related result in graph theory. ## 3 Bounded alternation diameter ### 3.1 Finite sets In this section we look at symmetric groups for finite sets , i.e. automorphism groups of product objects in the category FinSet of finite sets and functions. Permutations act from the left and compose right-to-left, and we also write . The following theorem is a well-known corollary of Hall’s marriage theorem (see Lemma 4 for a proof): ###### Lemma 1. Let be an -by- matrix over where all rows and columns sum to . Then cellwise, for some permutation matrix . This naturally implies that every such matrix is a sum of permutation matrices, but we prefer to use the lemma directly. ###### Definition 1. Let and be finite sets, and let . Define the coLumn group of as GL={g∈G|∀(a,b)∈A×B:∃c∈B:g(a,b)=(c,b)} and define the Row group symmetrically. ###### Theorem 2. Let and be finite sets, and let . Then . ###### Proof. We prove . The equality follows by symmetry. 
A permutation of can be seen as an -matrix containing each entry of exactly once. (Formula: .) We show that by showing that contains the identity map. In terms of the matrix , precomposing by , i.e. , corresponds to permuting the entries of the matrix by (in the obvious “forward” direction). Thus, our task is to turn into the matrix by first permuting the rows, then the columns, then the rows again. Now, ignoring the -component of every matrix entry, and supposing without loss of generality that and , we obtain an -by--matrix over where every element of occurs exactly times. To such a matrix , we associate an -by--matrix defined as follows: Na,b=|{j|Ma,j=b}|. Then every row of sums to because is an -by- matrix, and every column of sums to because every appears times in . It follows from Lemma 1 that where is a permutation matrix. We want to permute so that every column contains every symbol of exactly once. To do this, consider a row of , and let be the unique element such that . Then row of contains at least one copy of . On each row, move such an to the first column. Since is a permutation matrix, the elements moved to the first column are distinct, so the net effect of this is that the first column of contains each element of exactly once. Considering now the -by- matrix obtained from by deleting the first column, we observe that every element of appears exactly times. By induction, we obtain that can be permuted by an element of into a matrix where all columns contain each element of exactly once. Now, apply an element of to sort each column so that the th row of contains only s. Now, consider the action of this transformation on the original -by- matrix with entries in . After the transformation (by an element of ), the row contains only values of the form where . Since every entry appears exactly once in , the set of values on row is then precisely . 
We can now apply a final permutation in to permute all elements into their correct position, obtaining the matrix representing the identity permutation on . ∎ Let us make some additional observations. Consider an arbitrary product . Write for the group of permutations of that only modify the th component of their input. By induction on , Theorem 2 shows the following: ###### Theorem 3. Let where are finite sets and let be as above. Then Sym(X)=GkGk−1⋯G2G1G2⋯Gk−1Gk. If each has at least two elements, then no sequence of less than groups suffices. ###### Proof. We first prove the formula for . The case is trivial, and is Theorem 2. Now, let and . Then any permutation of is in where and are defined with respect to the decomposition . Let be the corresponding decomposition. Then and are in because they do not modify the -component of their input. We can write as where each is a permutation of that modifies the -component only if the -component is equal to . Each permutation is in by induction, where are the groups corresponding to components of the product . Thus π2=∏b∈Bπ′b∈(Gk−1⋯G2G1G2⋯Gk−1)ℓ, and we can reorder the product to get since the permutations corresponding to distinct commute (as they have disjoint supports). For the second claim, we show a stronger result: We cannot have Sym(X)=Giℓ⋯Gi2⋅Gi1 for any sequence where there are at least two indices that occur at most once. Suppose the contrary, and let , with , and and each occur only once. Suppose that . For write for the set of elements of where the -coordinate is equal to and the -coordinate is equal to . We claim that if is in and fixes , then it maps no elements of into , which clearly proves the claim. To see this, observe that all of the groups except and leave the sets invariant. Thus, when gets its turn, elements of and have not yet been moved. Since fixes , the -permutation cannot move any elements away from (since after this step, their -coordinate will no longer change). 
But then elements of cannot be moved into by since , so after applying , elements of still have nonzero -coordinate, which will no longer change. Thus cannot move them into . ∎ ###### Corollary 1. The category FinSet has alternation diameter . One can extract a full characterization of from the proof of Theorem 2. ###### Lemma 2. Let and be sets, and let . The set contains precisely those permutations such that for the natural projection map, for all , the map is bijective. ###### Proof. Note that the set contains precisely the permutations that can be mapped to the identity by precomposing first with an element of , and then an element of . In matrix form, they are the -by- matrices over that can be turned into the matrix by first applying a permutation of columns (from ) and then a permutation of rows (from ).222To readers experiencing chiral confusion, we write some formulas: if we “first” apply and “then” , to a matrix , then formulaically we obtain , which contains the identity matrix if and only if . For sufficiency, observe that the above property is precisely the one that holds after the first application of in the proof of Theorem 2 – in terms of the matrix , it states that every column of contains every symbol of exactly once. The two following steps in that proof perform any permutation of using only this property, and do not use the finiteness of or . For necessity, write in matrix form, , and suppose that for some , we can turn into the matrix by first applying and then . First, must be injective: otherwise, some column of contains both and for some . This still holds after applying , so at the time of the final application of , and are on the same column, thus cannot both be on the correct row . Second, must be surjective: if there exists and such that the th column of does not contain any , , then the same is true after applying . In particular the th column will necessarily contain some value , , on the th row, after the application of . 
∎

One can give precise formulas for the sizes of each of the sets obtained by applying $G_L$ and $G_R$ in various orders.

###### Definition 2.

When $G = \mathrm{Aut}(X)$ is an automorphism group, for a word $w = w_1 w_2 \cdots w_j \in \{L, R\}^*$, write $G_w = G_{w_1} G_{w_2} \cdots G_{w_j}$.

###### Theorem 4.

Let $A, B$ be finite sets, $m = |A|$, $n = |B|$, $X = A \times B$. For any $w \in \{L, R\}^*$, $G_w$ is equal to one of the sets in $\{1, G_L, G_R, G_{LR}, G_{RL}, G_{LRL} = \mathrm{Sym}(X)\}$ and
$$|G_{LRL}| = (mn)!, \qquad |G_{LR}| = |G_{RL}| = m!^n \, n!^m.$$

###### Proof.

Since $G_{LRL} = \mathrm{Sym}(X)$ by Theorem 2 (and $G_{RLR} = \mathrm{Sym}(X)$ by symmetry), and because $G_L$ and $G_R$ are groups, the claim about the possible values of $G_w$ is true. The first formula comes from the fact that $|\mathrm{Sym}(X)| = (mn)!$. By definition, $G_{LR} = G_L G_R$, so since $G_L$ and $G_R$ are subgroups,
$$|G_{LR}| = |G_L G_R| = \frac{|G_L| |G_R|}{|G_L \cap G_R|} = m!^n \, n!^m,$$
where $|G_L| = m!^n$ since we choose an independent permutation of each of the $n$ columns of size $m$ (symmetrically $|G_R| = n!^m$, while $G_L \cap G_R$ is trivial). The formula for $|G_{RL}|$ follows by symmetry. ∎

The above theorem shows that at least three alternations are needed also in the stronger sense that $G_{LR} \cup G_{RL} \neq \mathrm{Sym}(X)$ for large enough $m$ and $n$. This is because $|G_{LR} \cup G_{RL}| \leq 2 \, m!^n \, n!^m$ is dwarfed by $(mn)!$ by a straightforward application of Stirling’s formula. We note that Lemma 2 gives a characterization of permutations in $G_{RL}$. We have not investigated whether there is a simple formula for the cardinality of $G_{LR} \cap G_{RL}$ in terms of $m$ and $n$.

From Theorem 3 we immediately obtain an alternation diameter result for finite-support permutations on infinite sets. Write $\mathrm{Sym}_0(Y)$ for the group of finite-support permutations of a set $Y$. Define the groups $G_i \leq \mathrm{Sym}_0(X)$ as before, requiring that only the $i$th coordinates of inputs are modified by elements of $G_i$.

###### Corollary 2.

Let $X = A_1 \times \cdots \times A_k$ where the $A_i$ are arbitrary sets and let $G_1, \dots, G_k$ be as above. Then
$$\mathrm{Sym}_0(X) = G_k G_{k-1} \cdots G_2 G_1 G_2 \cdots G_{k-1} G_k.$$
If each $A_i$ has at least two elements, then no sequence of less than $2k-1$ groups suffices.

###### Proof.

Every permutation $\pi \in \mathrm{Sym}_0(X)$ has finite support, thus finite projection of the support on the sets $A_i$. Pick suitable finite subsets of the sets, and for sufficiency apply Theorem 3, and for necessity its proof. ∎

### 3.2 Countably infinite sets

We now look at permutations with countably infinite support. We recall a version of Hall’s theorem for infinite sets and include a proof sketch (see e.g. [5, 4] for details). Here, graphs are undirected without self-loops, but may have multiple edges between two vertices.
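Before moving on to infinite sets, the finite counting claims above can be checked by brute force in the smallest interesting case $m = n = 2$. The following is an illustrative sketch (assuming, as in the proof of Theorem 4, that $G_L$ consists of independent permutations within the columns and $G_R$ within the rows); the expected sizes are $|G_{LR}| = m!^n \, n!^m = 16$ and $|G_{LRL}| = (mn)! = 24$.

```python
from itertools import permutations, product

A = (0, 1)
B = (0, 1)
X = tuple(product(A, B))  # the four cells of the 2-by-2 grid

def side_group(side):
    """Permutations of X that modify only the given coordinate (0 = A, 1 = B)."""
    vals = A if side == 0 else B   # the coordinate being permuted
    keys = B if side == 0 else A   # one independent permutation per key
    group = set()
    for choice in product(permutations(vals), repeat=len(keys)):
        images = []
        for (a, b) in X:
            k = b if side == 0 else a
            s = choice[keys.index(k)]  # the permutation used on this fiber
            if side == 0:
                images.append((s[vals.index(a)], b))
            else:
                images.append((a, s[vals.index(b)]))
        group.add(tuple(images))
    return group

def compose(g, h):
    """g∘h, with permutations stored as tuples of images of X in order."""
    idx = {x: i for i, x in enumerate(X)}
    return tuple(g[idx[y]] for y in h)

GL = side_group(0)  # |GL| = 2!^2 = 4
GR = side_group(1)  # |GR| = 2!^2 = 4
GLR = {compose(g, h) for g in GL for h in GR}
GLRL = {compose(f, k) for f in GLR for k in GL}
print(len(GL), len(GR), len(GLR), len(GLRL))  # → 4 4 16 24
```

The last number equals $4! = 24$, i.e. three alternations already produce all of $\mathrm{Sym}(X)$ in this case, matching Theorem 2.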
A graph is bipartite if its vertices can be partitioned into two nonempty sets $V_1$ and $V_2$ in such a way that no edge goes between two vertices in $V_1$ or between two vertices in $V_2$. Write $N(V)$ for the open neighborhood of a subset $V$ of a graph, i.e. the set of vertices connected to a vertex in $V$ by an edge. A graph is locally finite if every vertex has finite degree, i.e. $|N(\{v\})| < \infty$ for all vertices $v$. Write $V' \Subset V$ for a finite subset $V'$ of $V$. A matching in a bipartite graph, with a fixed bipartition $(V_1, V_2)$ of the vertices, is a $1$-to-$1$ correspondence that matches a subset of the elements of $V_1$ injectively to a subset of $V_2$. We write matchings as partial functions $\mu : V_1 \to V_2$.

###### Lemma 3.

Let $G$ be a locally finite bipartite graph where for each finite set of vertices $V'$ on one side, $|N(V')| \geq |V'|$. Then $G$ admits a perfect matching.

###### Proof.

Let us call the vertices “left” or “right” depending on which side they are on, and write $L$ and $R$ for the two sides as sets. The set of subsets of the edge set has a natural compact topology, namely the product topology on $\{0, 1\}^E$ where $E$ is the set of edges and $1$ means the edge is included in the set. Matchings form a closed subset of this space. Since the graph is locally finite, for each vertex the set of matchings where that vertex is matched is clopen, thus compact. Thus, there exists a matching $\mu$ where a maximal set of left vertices is (injectively) matched with some vertex on the right, and for this maximal set, a maximal set of right vertices is matched.

If some left vertex $v$ is not matched in $\mu$, let $T$ be the set of those vertices (left or right) and edges which are reachable by a path starting from $v$ where every second edge is part of the maximal matching $\mu$. If $T$ is infinite, by König’s lemma we can find an infinite path, and swapping the edges that are in $\mu$ with those not in $\mu$ on this path, we add $v$ to the matched vertices but remove no vertex from them, so $\mu$ was not maximal. If $T$ is finite, and some right vertex $u \in T$ is not matched, then we can take a path from $v$ to $u$ and again swap the matching edges with non-matching edges to add $v$ without removing any matched left vertices.
If $T$ is finite and all right vertices in $T$ are matched, then $|N(L \cap T)| < |L \cap T|$ (because $\mu$ gives an injection from $R \cap T$ into $(L \cap T) \setminus \{v\}$), a contradiction since all edges from $L \cap T$ are included in $T$ and thus $N(L \cap T) \subseteq R \cap T$. We conclude that $\mu$ matches every left vertex with a right vertex. Suppose then that $\mu$ is not surjective. Then perform the above argument with the roles of left and right reversed, and observe that we also never unmatched a matched right vertex when modifying our matching. Alternatively, one can construct a left-surjective and a right-surjective matching separately and apply the Cantor–Schröder–Bernstein argument. ∎

In the following, we use the matrix terminology, though indexing by infinite sets. The meaning should be clear.

###### Lemma 4.

Let $A$ be any set and $N$ any $A$-by-$A$ matrix over $\mathbb{N}$. If every row and column sums to $n$ (in particular, cofinitely many entries have value $0$ on every row and column), then $N \geq P$ for some permutation matrix $P$.

###### Proof.

Construct the bipartite graph with a copy of $A$ “on the left” and another copy of $A$ “on the right”. Include an edge from left-$a$ to right-$b$ if $N_{a,b} \geq 1$. Then this graph satisfies the assumptions of the previous lemma: Clearly it is locally finite and bipartite. Consider any finite set of left vertices $L'$. We have
$$|L'| = \frac{1}{n} \sum_{a \in L', \, b \in N(L')} N_{a,b} \leq \frac{1}{n} \sum_{b \in N(L')} \sum_{a \in L} N_{a,b} = |N(L')|.$$
Similarly, for any finite set of right vertices $R'$ we have $|R'| \leq |N(R')|$. The previous lemma gives a perfect matching, i.e. a permutation matrix $P \leq N$. ∎

###### Theorem 5.

Let $A, B$ be sets. If $A \times B$ is countable, then
$$\mathrm{Sym}(A \times B) = G_{LRLR} \cup G_{RLRL}.$$
If both $A$ and $B$ are infinite, then $G_{LRLR} \neq \mathrm{Sym}(A \times B)$ and $G_{RLRL} \neq \mathrm{Sym}(A \times B)$. If $A$ is infinite and $2 \leq |B| < \infty$, then $G_{RLR} \neq \mathrm{Sym}(A \times B)$. If $B$ is finite, then $G_{LRL} = \mathrm{Sym}(A \times B)$.

###### Proof.

In the first claim, if $A$ and $B$ are finite, then this is Theorem 2, and if only one of them is finite, this follows from the last claim (which we prove last). We thus consider the case that both are infinite. As usual, we consider the $A$-by-$B$ matrix $M$ over $A \times B$ and the cellwise projection $M_A$ with values in $A$, representing a permutation $\pi$ of $A \times B$. We will compose $\pi$ with elements of $G_L$, $G_R$, $G_L$, $G_R$ in that order.
Again we use the standard left action, which formulaically is precomposition with inverse, and in terms of matrices corresponds to directly permuting the entries. We prove the following:

• if any cofinite set of columns of $M_A$ contains infinitely many distinct values of $A$, i.e.
$$\forall A' \Subset A, B' \Subset B : \exists a' \notin A', a \in A, b \notin B' : (M_A)_{a,b} = a',$$
then $\pi \in G_{RLRL}$,

• if the dual claim holds, then $\pi \in G_{LRLR}$,

• either the claim or its dual holds.

We first prove the third item. Suppose that $M_A$ does not satisfy the first item. Then a finite set of columns of $M_A$ contains all values from a cofinite subset of $A$. Then those columns of $M$ must contain all pairs $(a', b)$ with $a'$ in this cofinite subset, so $M$ takes every such value in these finitely many columns. This clearly cannot happen in finitely many rows and finitely many columns, so the analogous failure cannot occur for the rows as well. Thus the dual claim of the first item holds.

Thus we only need to prove the first item, as the second is symmetric, and by the third claim, these together give $\mathrm{Sym}(A \times B) = G_{LRLR} \cup G_{RLRL}$.

Assume then that any cofinite set of columns of $M_A$ contains infinitely many distinct values of $A$. We describe what happens to the matrices $M$ and $M_A$ after each step (we do not rename them after each step). Our plan is the following:

1. After applying $G_L$, every row of $M_A$ has infinitely many distinct values, and every $a \in A$ that appears on infinitely many columns also appears on infinitely many rows.

2. After an application of $G_R$, all columns of $M_A$ have exactly one copy of each $a \in A$.

3. By Lemma 2, another application of $G_L$ and then $G_R$ finishes the proof.

Assume $A = B = \mathbb{N}$. In the first step, we modify each column at most once, and then freeze it and never modify it again. We go through $\mathbb{N}$, and on the $i$th turn, we modify a finite set of columns and then freeze them. What we ensure on the $i$th turn is that the first $i$ rows of $M_A$ all contain at least $i$ distinct values that appear on frozen columns, and that each $a \leq i$ which appears on infinitely many columns also appears on at least $i$ distinct rows on frozen columns.
No trick is needed, we just do it: After a finite number of steps, we have only frozen finitely many columns, and $M_A$ contains infinitely many values in any cofinite set of columns, so we never run out of fresh values to move to rows needing them; thus we can indeed make sure each row contains more and more distinct values, and if $a$ appears on infinitely many columns, then we can make this choice infinitely many times. In the limit, in the compact topology of cellwise convergence of the matrix entries, clearly the resulting matrix still describes a permutation (we performed a permutation at most once on each column, so clearly the transformation is columnwise bijective, thus bijective). Every row contains infinitely many distinct values since for any row $j$ and any $i \geq j$, on the $i$th turn we made sure the $j$th row contains at least $i$ values.

In the second step, we again construct the permutation column by column, but now we are permuting rows instead of columns. We modify each row infinitely many times, but with smaller and smaller supports, and take the limit of the process. Note that the set of matrices representing injective maps is closed with respect to this topology, so the limit is automatically injective (if well-defined). Matrices representing surjective maps are not closed, so our matrix may fail to be surjective in the limit if we are careless. To ensure surjectivity, we fix an enumeration $x_1, x_2, \dots$ of $A \times B$, and say the index of $x \in A \times B$ is the $i$ such that $x = x_i$. For surjectivity, it is enough that each $x_i$ appears in the limit, and for this it is enough that from some point on, we no longer move pairs with indices up to $i$. To ensure that we get a limit in the second step, we modify each column at most once. We go through $\mathbb{N}$, and on the $i$th turn, we permute the rows in such a way that the $j$th column is not modified for $j < i$, the $i$th column contains exactly one copy of each $a \in A$ after the permutations, and the $i$th pair $x_i$ is moved to column $i$ unless it is already in some column $j < i$.
Call/color an element of the $i$th column red if the row containing it has not yet been permuted on the $i$th turn, and green otherwise. We begin the turn by moving the $i$th pair $x_i$ to column $i$ if it is among the columns $j > i$ (and color it green, so that at this point we have at most one green entry). We now perform a back-and-forth argument where we modify each row at most once by alternating the following steps:

• Pick the next $a \in A$ that has not yet been moved into the column (i.e. all occurrences of $a$ in it are red), move an occurrence of $a$ into the column (possibly it was already there), color it green and freeze the row.

• Let $j$ be the first row with a red symbol and move any fresh $a \in A$ (that does not yet appear as a green symbol in the $i$th column) into it.

The first type of move in the back-and-forth is always possible: If $a$ appeared on only finitely many columns initially (thus also after the first step), then it appeared on infinitely many distinct rows, and since exactly $i - 1$ copies of $a$ are on the frozen columns $j < i$, there are still infinitely many unfrozen copies of $a$ on infinitely many distinct rows. If $a$ appeared on infinitely many columns before the first step, then after the first step $a$ appears on infinitely many rows, thus on the $i$th turn there are again still unfrozen copies of $a$ on infinitely many distinct rows. The second type of move in the back-and-forth is always possible: All rows contain infinitely many distinct symbols, and at any point of the process we have only finitely many green symbols on the column.

This concludes the construction, as Lemma 2 applies to the resulting matrix.

For the second claim, we observe that no matrix with $A$-projection
$$\begin{pmatrix} * & 0 & 0 & 0 & 0 & \cdots \\ * & 0 & 0 & 0 & 0 & \cdots \\ * & 0 & 0 & 0 & 0 & \cdots \\ * & 0 & 0 & 0 & 0 & \cdots \\ * & 0 & 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
where the $*$-symbols are elements of $A$, is in $G_{RLRL}$. Namely, the first application of $G_L$ is useless, as the set of matrices of this form is invariant under $G_L$. The following application of $G_R$ will move at most one nonzero into each row.
Then already the top left $3$-by-$3$ block necessarily contains at least two zeroes on some column, so Lemma 2 does not apply. Note that matrices of this form do exist since $|A| = |B| = \aleph_0$.

For the third claim, let $A$ be infinite and $B$ finite with $|B| \geq 2$. Pick distinct $a_0, a_1 \in A$ and a permutation $\pi$ such that, writing $\pi$ again as an $A$-by-$B$ matrix $M$, the $a_0$-row contains exactly one element which should be in the $a_1$-row in the end, and the $a_1$-row contains only elements that belong to it. We have $\pi \notin G_{RLR}$: After the first application of $G_R$ to $M$, we still have exactly one element of the form $(a_1, b)$ (namely the same one) in the $a_0$-row. If it is in the $b$-column, then the $b$-column contains two elements of this form, and thus after the application of $G_L$, the $b$-column contains such an element in some row other than the $a_1$-row. Therefore after applying $G_R$ we still have at least one element of the form $(a_1, b)$ outside the $a_1$-row, so we have not turned $M$ to the identity.

For the fourth claim, the proof is that of Theorem 2, but using Lemma 4 in place of Lemma 1. ∎

###### Corollary 3.

The category CountSet has alternation diameter $4$.

Since in FinSet we had alternation diameter $3$ and $\mathrm{Sym}(X) = G_{LRL} = G_{RLR}$, while in CountSet we have alternation diameter $4$ and $\mathrm{Sym}(X) = G_{LRLR} \cup G_{RLRL}$ but not $\mathrm{Sym}(X) = G_{LRLR}$ or $\mathrm{Sym}(X) = G_{RLRL}$, one notes that “alternation diameter” indeed loses some information. One could instead mimic quantifier hierarchies and define classes $\Sigma_k$, $\Pi_k$ and $\Delta_k$ of words in $G_L$ and $G_R$ inductively. Then in FinSet, the alternation hierarchy collapses on the level $3$, while the one for CountSet collapses at the join on level $4$.

### 3.3 Finite-dimensional vector spaces

Besides Set, an obvious place to look for category-wide diameter bounds is the category of finite-dimensional vector spaces. The first reason is that it is a category where all objects and morphisms behave nicely. The second is the intuition that dimension can often replace cardinality. We find that the alternation diameter is indeed $3$, as in FinSet. Fix a field $K$ and let $\mathrm{Vect}_K$ be the category of finite-dimensional vector spaces over $K$.

###### Theorem 6.

The category $\mathrm{Vect}_K$ has alternation diameter $3$. More precisely, let $A$ and $B$ be in $\mathrm{Vect}_K$. Then $\mathrm{Aut}(A \oplus B)$ satisfies $\mathrm{Aut}(A \oplus B) = G_{LRL} = G_{RLR}$.

###### Proof.

Let $m = \dim A$ and $n = \dim B$.
The claims $\mathrm{Aut}(A \oplus B) = G_{LRL}$ and $\mathrm{Aut}(A \oplus B) = G_{RLR}$ are symmetric, so we only prove the first. Consider an arbitrary matrix $M \in \mathrm{GL}_{m+n}(K)$ in block representation with four blocks, of widths and heights $m$ and $n$, respectively (the “$A$-by-$A$ block” of size $m$-by-$m$ on the top left). Now applying automorphisms in $G_L$ (from the left) amounts to row operations that do not modify the bottom rows, and applying automorphisms in $G_R$ modifies the bottom rows only. We need to turn $M$ into the identity matrix with an element of $G_{LRL}$.

If $v_1, \dots, v_{m+n}$ are the rows of $M$, then their restriction to their first $m$ coordinates has full rank $m$. It is easy to see that then there is a finite sequence of row operations that do not affect the bottom rows – that is, an element of $G_L$ – left multiplication by which turns the top left block into the identity matrix. Next, apply an element of $G_R$ to turn the bottom left block into zeroes, and then the bottom right block into the identity matrix. Finally, apply an element of $G_L$ to turn the top right block into the all-zero matrix. We have shown that $M$ can be turned into the identity matrix by multiplying it by an element of $G_{LRL}$, thus $\mathrm{Aut}(A \oplus B) = G_{LRL}$.

To see that this is optimal, it is enough to consider $A = B = K$ and show that $\mathrm{GL}_2(K)$ does not satisfy $\mathrm{GL}_2(K) = G_{LR}$ (the case of $G_{RL}$ being symmetric). Algebraically, it is easy to see that $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \notin G_{LR}$. Namely, after applying a row operation that modifies only the second row (an element of $G_R$) to the identity matrix, the first row is still $(1 \; 0)$, so the second row cannot be $(1 \; 0)$, and thus an application of $G_L$ cannot turn the resulting matrix into $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. ∎

In the case $K = \mathbb{R}$, one can also verify this geometrically by staring intently at a square. Special linear groups also have alternation diameter $3$ in an obvious sense, by the same proof as for the general linear groups. (This fact does not directly fit the framework of this paper, in that the author does not know whether $\mathrm{SL}_n(K)$ can be seen (in a natural way) as the automorphism group of a product object in a category.)

### 3.4 Graphs

We mention a related result from graph theory.
The box product $G \mathbin{\square} H$, sometimes called the Cartesian product of $G$ and $H$ and defined by
$$((g,h),(g',h')) \in E(G \mathbin{\square} H) \iff ((g,g') \in E(G) \wedge h = h') \vee (g = g' \wedge (h,h') \in E(H))$$
(though it is not the category-theoretic product in the usual category of simple graphs) admits unique prime decompositions for finite connected graphs, and automorphisms of $G \mathbin{\square} H$ are essentially entirely determined by $\mathrm{Aut}(G)$ and $\mathrm{Aut}(H)$ (and a bit of counting) in the sense that if we decompose $G$ and $H$ into their prime factors, every automorphism consists of a permutation of the factors (with respect to a fixed identification of isomorphic factors), followed by separate automorphisms of the factors. In this sense, connected graphs with respect to the box product have “bounded alternation diameter up to reordering of prime factors”. See [8] for details. For some related observations see Remark 1 and Section 6.

## 4 Left and right groups

In this section we perform the (rather trivial) diagram chasing and algebra required to show that $G_L$ and $G_R$ “make sense”, i.e. are actually subgroups, and are independent of the choice of the product object. We give the diagrammatic definition of these subgroups. For a product object $C = A \times B$ with defining projections $\pi_A : C \to A$ and $\pi_B : C \to B$, write $G = \mathrm{Aut}(C)$ and write $G_L$ for the set of elements $g \in G$ such that the leftmost diagram below commutes in Figure 1 (resp. $G_R$ for the set of elements such that the rightmost diagram commutes). It is easy to see (by gluing diagrams, or by algebra) that $G_L$ and $G_R$ are submonoids of $G$ under composition. To see that they are groups, note that if $g \in \mathrm{Aut}(C)$ and $\pi_B \circ g = \pi_B$, then $\pi_B \circ g^{-1} = \pi_B$. From this, it follows that $G_L$ (symmetrically $G_R$) is indeed a subgroup of $\mathrm{Aut}(C)$.
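In Set these diagrammatic conditions are concrete: $g \in G_L$ if and only if $\pi_B \circ g = \pi_B$. A small brute-force check (a sketch with hypothetical sizes $|A| = 3$, $|B| = 2$) confirms that this condition indeed carves out a subgroup:

```python
from itertools import permutations, product

A, B = (0, 1, 2), (0, 1)
X = list(product(A, B))

def pi_B(x):
    return x[1]  # the defining projection to B

# G_L: automorphisms g with pi_B ∘ g = pi_B (the commuting-diagram condition)
all_perms = [dict(zip(X, p)) for p in permutations(X)]
GL = [g for g in all_perms if all(pi_B(g[x]) == pi_B(x) for x in X)]

def compose(g, h):
    return {x: g[h[x]] for x in X}

def inverse(g):
    return {v: k for k, v in g.items()}

# closure under composition and inverses: G_L is a subgroup of Sym(X)
closed = all(compose(g, h) in GL for g in GL for h in GL)
has_inverses = all(inverse(g) in GL for g in GL)
print(len(GL), closed, has_inverses)  # → 36 True True
```

The same check with $\pi_A$ in place of $\pi_B$ produces $G_R$; here $|G_L| = (3!)^2 = 36$, matching the counting in Theorem 4.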
For an object $B$ in a category $\mathcal{C}$, the over category above $B$, denoted $\mathcal{C}/B$, is the category whose objects are morphisms $f : C \to B$ in $\mathcal{C}$ (or simply the morphisms themselves), and morphisms from $f : C \to B$ to $g : D \to B$ are morphisms $h : C \to D$ in $\mathcal{C}$ such that the triangle
$$C \overset{h}{\longrightarrow} D, \qquad f = g \circ h \quad \text{over } B$$
commutes. Applying some geometric transformations to this diagram reveals a similarity with Figure 1, and we can make the observation that the above proof that $G_L$ is a group actually shows that $G_L$ is the automorphism group of the morphism $\pi_B$ as an object of the over category above $B$. Similarly $G_R$ is the automorphism group of $\pi_A$ in the over category above $A$.

To see that the choice of the product object does not matter, suppose $C'$ is another product of $A$ and $B$, and $\pi'_A, \pi'_B$ the defining projections. By the universal property, there is a unique isomorphism $\phi : C' \to A \times B$ such that $\pi_A \circ \phi = \pi'_A$ and $\pi_B \circ \phi = \pi'_B$. Then $\phi$ is an isomorphism between the objects $\pi'_B$ and $\pi_B$ of the over category above $B$, and thus gives an isomorphism of their automorphism groups in the over category, which as discussed are the groups $G_L$ (corresponding to the two different choices of the product object). The same applies to $G_R$.

We summarize the discussion into a theorem. Define a group triple to be a triple $(G, H, K)$ of groups such that $H, K \leq G$. We say two group triples $(G, H, K)$ and $(G', H', K')$ are isomorphic if there is an isomorphism $\phi : G \to G'$ such that $\phi(H) = H'$ and $\phi(K) = K'$.

###### Theorem 7.

Let $\mathcal{C}$ be a category and let $C$ be a product of objects $A$ and $B$ with defining projections $\pi_A$ and $\pi_B$. Then the sets $G_L$ and $G_R$ defined by the diagrams in Figure 1 are subgroups of $\mathrm{Aut}(C)$ under composition. The resulting triple $(\mathrm{Aut}(C), G_L, G_R)$ does not depend on the choice of the product $C$, up to isomorphism of group triples.

## 5 Undefined alternation diameter

### 5.1 Topological spaces

It seems that, not surprisingly, in typical categories with a lot of structure, the alternation diameter is undefined for the whole category, that is, left and right automorphisms do not generate all others.
We give in Top a non-trivial example (the plane) of undefined alternation diameter, and also a trivial example (the square $[0,1]^2$), and in Pos we exhibit an object which has trivial left and right groups, but a non-trivial (though not far from trivial) automorphism group.

In the category Top of topological spaces and continuous functions, the automorphism group of $[0,1] \times [0,1]$ has undefined alternation diameter, since the left border and the top border can be exchanged by an automorphism, but this obviously cannot be done by compositions of elements of $G_L$ and $G_R$. We state this as a metalemma. (Footnote 3: We add “meta” to distinguish this from the lemma which would be obtained by replacing “nice” by the best possible list of necessary properties, which can be deduced from the proof.)

###### Metalemma 1.

Let $\mathcal{C}$ be a nice enough concrete category, and $C = A \times A$ a product object. If $C$ has a definable subset of the form $D \times E$ with $D \times E \neq E \times D$, then $\mathrm{Aut}(C)$ is not equal to $G_w$ for any word $w \in \{L, R\}^*$.

###### Proof.

We have $\pi_B \circ g = \pi_B$ for $g \in G_L$ by the definition of $G_L$, and $g(D \times E)$ is again a set of the same form since $D \times E$ is definable. Since this is a group action and $g$ does not modify the second coordinate, we must have $g(D \times E) = D \times E$. Similarly, $h(D \times E) = D \times E$ for $h \in G_R$, and thus every element of $\langle G_L \cup G_R \rangle$ stabilizes $D \times E$. The flip is in $\mathrm{Aut}(C)$ by the universal property of the product (see Lemma 5 for a diagrammatic deduction). Since it does not (setwise) stabilize $D \times E$, the subgroups do not generate it, thus they do not generate $\mathrm{Aut}(C)$. ∎

###### Corollary 4.

The category Top has undefined alternation diameter.

For homogenous spaces the question is more interesting. Let us show that also $\mathbb{R} \times \mathbb{R}$ has undefined alternation diameter, by showing that rowwise and columnwise homeomorphisms cannot untangle sufficiently wild homeomorphisms in finite time.

###### Theorem 8.

The automorphism group $\mathrm{Aut}(\mathbb{R} \times \mathbb{R})$ has an element that is not in $G_w$ for any word $w \in \{L, R\}^*$.

###### Proof.

To agree with our matrix convention ($G_R$ permutes the Rows), draw the axes of the plane so that the second is the horizontal axis (left-to-right) and the first axis is vertical (top-down). Consider a homeomorphism $\phi : \mathbb{R}^2 \to \mathbb{R}^2$. Let $p$ be the unit speed path from a point $x$ to a point $y$, and consider the $\phi$-image of this path, which is a path from $\phi(x)$ to $\phi(y)$. Then $\psi(\phi(p))$ is a path from $\psi(\phi(x))$ to $\psi(\phi(y))$ for any homeomorphism $\psi$.
Now, cut out a small compact neighborhood $N$ of $\phi(y)$ and consider the sequence in which $\phi(p)$ traverses the rays in cardinal directions emanating from $\phi(y)$, before it first enters $N$ (ignoring repeated crossings of the same ray). Let $w \in \{\mathrm{N}, \mathrm{E}, \mathrm{S}, \mathrm{W}\}^*$ be the (finite) word thus obtained. By the intermediate value theorem, between occurrences of $\mathrm{N}$ and $\mathrm{S}$ there is an occurrence of $\mathrm{E}$ or $\mathrm{W}$. (Footnote 4: The word may depend on the choice of $N$, and there need not be a best possible choice for which this word is the longest. What is important is that some choice gives a long word. Formally, one can consider the set of all words that correspond to some choice of $N$ to obtain a more canonical invariant.)

Now consider $\psi \circ \phi$ for some $\psi \in G_R$, and consider the corresponding word $w'$, computed up to the neighborhood $\psi(N)$. Observe that $\psi$ changes either the orientation of all rows, or none of the rows. Then in $w'$, we have at least as many alternations between $\mathrm{N}$ and $\mathrm{S}$ as in $w$. Similarly, an application of an element of $G_L$ cannot decrease the number of alternations between $\mathrm{E}$ and $\mathrm{W}$. It follows that if the path from $x$ to $y$ is mapped by $\phi$ to a path having many alternations as a subsequence of the word corresponding to some choice of a small neighborhood of $\phi(y)$, then the corresponding spiral in each homeomorphism obtained by composing with elements of the $G_w$ has a corresponding word with many alternations, thus is not the identity map, as the identity map preserves $p$, and does not spiral with respect to any choice of neighborhood.

Of course, the points $x$ and $y$ are not in any way special. By including such spirals of all finite diameters in our homeomorphism by twisting horizontal paths, we obtain a homeomorphism with undefined alternation diameter. One can also have infinitely many twists around the same point: through the usual identification $\mathbb{R}^2 \cong \mathbb{C}$ the homeomorphism $\alpha$ defined by $\alpha(0) = 0$ and
$$\alpha(r e^{2 \pi i t}) = r e^{2 \pi i (t + \frac{1}{r})}$$
is not in $G_w$ for any word $w$, as arbitrarily long alternation words appear as subwords of the word corresponding to a small enough choice of $N$ around the origin. See Figure 2 for a visualization of this homeomorphism.
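As a numerical aside (a sketch in Python, using the identification of the plane with $\mathbb{C}$): the twist map above preserves each circle of radius $r$ and rotates it by $1/r$ turns, so it is the identity on the unit circle and a half turn on the circle of radius $2$.

```python
import cmath
import math

def alpha(z: complex) -> complex:
    """Twist map: rotate the circle of radius r = |z| by 1/r turns."""
    if z == 0:
        return 0j
    r = abs(z)
    # r * e^{2*pi*i*(t + 1/r)} = z * e^{2*pi*i/r}
    return z * cmath.exp(2j * math.pi / r)

print(alpha(1 + 0j))  # a full turn: the unit circle is fixed (up to rounding)
print(alpha(2 + 0j))  # a half turn: 2 maps (approximately) to -2
```

The winding accumulates without bound as $r \to 0$, which is the source of the arbitrarily long alternation words in the proof.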
∎

By a more careful analysis, one can construct homeomorphisms with arbitrarily large word norm from the identity map in the homeomorphism group of $\mathbb{R}^2$ w.r.t. the generators $G_L \cup G_R$. It follows that $\langle G_L \cup G_R \rangle$ is not equal to $G_w$ for any word $w$. We do not have a global understanding of this group $\langle G_L \cup G_R \rangle$.

### 5.2 Posets

In many categories, there are even easier ways to find undefined alternation diameter than Metalemma 1. We now prove that under rather general assumptions, the flip automorphism used in Metalemma 1 is not generated by $G_L$ and $G_R$. We show how to apply this to some (finite) examples in Pos.

###### Lemma 5.

Let $\mathcal{C}$ be a category and $C = A \times A$. If $\langle G_L \cup G_R \rangle$ is trivial and the first and second canonical projections are distinct, then $C$ has undefined alternation diameter.

###### Proof.

Let $\pi_1$ be the defining left projection and $\pi_2$ be the defining right projection. Define the flip automorphism as follows: let $C = A \times A$ and define a left and right projection, respectively $\pi'_1$ and $\pi'_2$, by $\pi'_1 = \pi_2$ and $\pi'_2 = \pi_1$. The universal property yields a morphism $\sigma$ satisfying $\pi_1 \circ \sigma = \pi_2$, $\pi_2 \circ \sigma = \pi_1$. By the assumption, $\pi_1 \neq \pi_2$, so $\sigma \neq \mathrm{id}$. From the existence of $\sigma$ we see that if i
https://www.neetprep.com/question/45373-unit-positive-charge-taken-one-point-anequipotential-surface-Work-done-charge-Work-done-charge-Work-done-constant-No-work-done?courseId=8
If a unit positive charge is taken from one point to another over an equipotential surface, then (1) Work is done on the charge (2) Work is done by the charge (3) Work done is constant (4) No work is done
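The intended answer is (4): the work done on a charge moved between two points is $W = q\,\Delta V$, and on an equipotential surface $\Delta V = 0$. A one-line check with hypothetical numbers:

```python
q = 1.0            # unit positive charge, in coulombs
V1 = V2 = 50.0     # potential is the same everywhere on an equipotential surface
W = q * (V2 - V1)  # work done moving the charge between the two points
print(W)  # → 0.0
```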
http://aas.org/archives/BAAS/v31n3/aas194/632.htm
AAS Meeting #194 - Chicago, Illinois, May/June 1999 Session 49. Observations of Nearby AGN (Seyferts and LINERs) Display, Tuesday, June 1, 1999, 10:00am-7:00pm, Southwest Exhibit Hall

## [49.13] The Effect of Intrinsic UV Absorbers on the Ionizing Continuum and Narrow Emission-Line Ratios in Seyfert Galaxies

S.B. Kraemer (CUA/GSFC), T.J. Turner (UMBC/GSFC), D.M. Crenshaw (CUA/GSFC), I.M. George (USRA/GSFC)

We explore the effects of UV absorbing material on the shape of the EUV continuum radiation emitted by the active galactic nucleus, and on the relative strengths of emission lines, formed in the narrow line regions of Seyfert galaxies, excited by this continuum. Within a sample of Seyfert 1.5 galaxies, objects with flatter soft X-ray slopes tend to have lower values of He II λ4686/Hβ, which implies a correlation between the observed spectral energy distribution of the ionizing continuum and the narrow emission line strengths. Objects with the flattest soft X-ray continua tend to possess high column density UV absorption, and it is plausible that the differences in narrow emission line ratios among these galaxies are an indication of the effects of absorbing material internal to the narrow line region, rather than intrinsic differences in continuum shape. We have generated a set of photoionization models to examine the effect of a range of UV absorbers on the ionizing continuum and, hence, the resulting conditions in a typical narrow line cloud. Our results indicate that a low ionization UV absorber with a large covering factor will indeed produce the combination of narrow line ratios and soft X-ray spectral characteristics observed in several Seyfert 1.5 galaxies. Our results also suggest that low ionization UV absorption may be more common than currently believed.
https://www.physicsforums.com/threads/lottery-probability-to-win.696878/
Lottery, probability to win

1. Jun 13, 2013 trenekas

Hello, I have a problem with one probability theory task. Hope that someone of you will be able to help me. So the task is: Suppose that you are playing a lottery. The computer generates a lottery ticket which is made from 25 numbers. In total there are 75 numbers, and 49 are extracted during the game. You win if all 25 of your numbers are among the 49 extracted. We need to calculate the probability of winning. So my solution is very simple: (50;24)/(75;49). The 50 is the amount of numbers which are not included in our ticket. 24 is 49-25. Is this solution good?

2. Jun 14, 2013 Simon Bridge

3. Jun 14, 2013 mathman

Also explain your notation. 50;24 means what? Game description: 75 numbers total possible. Lottery picks 49 from 75. You pick 25 from 75. You win if all 25 of yours are among the 49 chosen by the lottery. Is this correct?

4. Jun 15, 2013 lurflurf

(50;24)/(75;49) means $${50 \choose 24}/{75 \choose 49}$$ which is quite correct, but I prefer $${50 \choose 26}/{75 \choose 26}$$ The reasoning is that if we consider the 50 numbers that are not ours and the 75 total numbers, we win if and only if the 26 drawn from each set are the same.

5. Jun 15, 2013 Simon Bridge

What are the sets? By context, the sets are "the 50 numbers that are not ours" and "the 75 total numbers". But 26 drawn from each set are only the same if none of them are the numbers we picked ... and where do you get the number 26 from anyway: the numbers the lottery didn't pick? Did Mathman get the game description correct? I think that, in order for you to understand your result better, you need to describe it more carefully.

6. Jun 15, 2013 lurflurf

^ Yes, the 26 numbers are those not drawn. Suppose we win. The picking process splits the 50 numbers that are not ours into 24 picked and 26 not. The 75 numbers split into 49 picked and 26 not. Every possible split is equally likely.
I don't think the OP's description is bad; Mathman's and the OP's descriptions seem in agreement. You pick 25 numbers, the lottery picks 49, and you win if yours are a subset of the lottery's. I think it is easier to visualize if we imagine the lottery picking the 26 numbers it leaves out, and you win if the lottery picks none of yours. The first number drawn causes us to lose with probability 1/3, which increases with each number drawn until the final draw, which causes loss with probability 1/2. The total probability of winning is
(50/75)(49/74)(48/73)...((50-k)/(75-k))...(27/52)(26/51)(25/50), or
$$\prod_{k=0}^{25} \frac{50-k}{75-k} = \frac{50!\,49!}{75!\,24!} = {50 \choose 26}/{75 \choose 26}$$

Last edited: Jun 15, 2013
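The two binomial forms and the draw-by-draw product discussed in this thread can be checked to agree exactly using integer arithmetic (a quick Python check):

```python
from fractions import Fraction
from math import comb, prod

p_op = Fraction(comb(50, 24), comb(75, 49))    # trenekas' count
p_alt = Fraction(comb(50, 26), comb(75, 26))   # lurflurf's count
p_seq = prod(Fraction(50 - k, 75 - k) for k in range(26))  # draw by draw
print(p_op == p_alt == p_seq, float(p_op))
```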
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-6-inverse-functions-6-8-indeterminate-forms-and-i-hospital-s-rule-6-8-exercises-page-499/6
## Calculus 8th Edition

$-1.5$

From the given graph, we have $\lim\limits_{x \to 2} \dfrac{f(x)}{g(x)} = \lim\limits_{x \to 2} \dfrac{1.5(x-2)}{(2-x)} = \dfrac{0}{0}$. This shows it is an indeterminate form. Thus, applying L'Hospital's Rule (or simply writing $2-x = -(x-2)$ and cancelling), we get $\lim\limits_{x \to 2} \dfrac{1.5(x-2)}{(-1)(x-2)} = -1.5$
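Since $2-x = -(x-2)$, the ratio is in fact constant away from $x = 2$; a tiny numerical check (an illustrative Python snippet, not part of the textbook solution) confirms the limit:

```python
# f(x)/g(x) = 1.5(x - 2)/(2 - x) equals -1.5 for every x != 2,
# so the limit as x -> 2 is -1.5, matching the L'Hospital computation.
def ratio(x):
    return 1.5 * (x - 2) / (2 - x)

for h in (1e-2, 1e-4, 1e-6):
    assert abs(ratio(2 + h) - (-1.5)) < 1e-9  # approach from the right
    assert abs(ratio(2 - h) - (-1.5)) < 1e-9  # approach from the left
```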
http://human-web.org/Tennessee/error-estimate-formula.html
# error estimate formula

In fact, data organizations often set reliability standards that their data must reach before publication. There are various formulas for it, but the one that is most intuitive is expressed in terms of the standardized values of the variables. The standard error estimated using the sample standard deviation is 2.56. All of these standard errors are proportional to the standard error of the regression divided by the square root of the sample size. Formulas for a sample comparable to the ones for a population are shown below. n is the size (number of observations) of the sample. The ages in that sample were 23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, and 55. So, for models fitted to the same sample of the same dependent variable, adjusted R-squared always goes up when the standard error of the regression goes down. The standard error of the model (denoted again by s) is usually referred to as the standard error of the regression (or sometimes the "standard error of the estimate") in this context. Therefore, the predictions in Graph A are more accurate than in Graph B. 
Using a sample to estimate the standard error

In the examples so far, the population standard deviation σ was assumed to be known. The sample mean $\bar{x} = 37.25$ is greater than the true population mean $\mu = 33.88$ years. The margin of error and the confidence interval are based on a quantitative measure of uncertainty: the standard error. The correlation between Y and X is positive if they tend to move in the same direction relative to their respective means and negative if they tend to move in opposite directions. Some regression software will not even display a negative value for adjusted R-squared and will just report it to be zero in that case. Assume the data in Table 1 are the data from a population of five X, Y pairs. The standard error of the estimate is closely related to this quantity and is defined below: $\sigma_{est} = \sqrt{\sum(Y-Y')^2/N}$, where $\sigma_{est}$ is the standard error of the estimate, Y is an actual score, and Y' is a predicted score.

Standard error of the mean

Further information: Variance §Sum of uncorrelated variables (Bienaymé formula)

The standard error of the mean (SEM) is the standard deviation of the sample-mean's estimate of a population mean. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or SE. The standardized version of X will be denoted here by X*, and its value in period t is defined in Excel notation as: ...

Example data.

Relative standard error

See also: Relative standard deviation

The relative standard error of a sample mean is the standard error divided by the mean and expressed as a percentage. For the age at first marriage, the population mean age is 23.44, and the population standard deviation is 4.72. The standard deviation of the age for the 16 runners is 10.23. 
So, attention usually focuses mainly on the slope coefficient in the model, which measures the change in Y to be expected per unit of change in X as both variables move. Notice that $s_{\bar{x}} = \frac{s}{\sqrt{n}}$ is only an estimate of the true standard error, $\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$. The researchers report that candidate A is expected to receive 52% of the final vote, with a margin of error of 2%. The sample standard deviation s = 10.23 is greater than the true population standard deviation σ = 9.27 years. Often X is a variable which logically can never go to zero, or even close to it, given the way it is defined. First we need to compute the coefficient of correlation between Y and X, commonly denoted by $r_{XY}$, which measures the strength of their linear relation on a relative scale of -1 to +1. The mean of these 20,000 samples from the age at first marriage population is 23.44, and the standard deviation of the 20,000 sample means is 1.18. As an example of the use of the relative standard error, consider two surveys of household income that both result in a sample mean of $50,000. A variable is standardized by converting it to units of standard deviations from the mean. To estimate the standard error of a student t-distribution it is sufficient to use the sample standard deviation "s" instead of σ, and we could use this value to calculate confidence intervals. It is rare that the true population standard deviation is known. The standard error of the forecast gets smaller as the sample size is increased, but only up to a point. Figure 1. In other words, it is the standard deviation of the sampling distribution of the sample statistic. That is, R-squared = $r_{XY}^2$, and that's why it's called R-squared. 
Because these 16 runners are a sample from the population of 9,732 runners, 37.25 is the sample mean, and 10.23 is the sample standard deviation, s. This term reflects the additional uncertainty about the value of the intercept that exists in situations where the center of mass of the independent variable is far from zero (in relative terms). A simple regression model includes a single independent variable, denoted here by X, and its forecasting equation in real units differs from the mean model merely by the addition of a term involving X.
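The running numbers quoted above for the sample of 16 runners (sample mean 37.25, sample standard deviation 10.23, standard error 2.56) can be reproduced with a few lines of standard-library Python (an illustrative sketch):

```python
import math

# The 16 ages listed above.
ages = [23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, 55]
n = len(ages)

mean = sum(ages) / n                                         # sample mean
s = math.sqrt(sum((x - mean) ** 2 for x in ages) / (n - 1))  # sample std dev (n-1 denominator)
sem = s / math.sqrt(n)                                       # standard error of the mean, s/sqrt(n)

print(round(mean, 2), round(s, 2), round(sem, 2))  # 37.25 10.23 2.56
```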
http://www.computer.org/csdl/trans/tc/1995/12/t1462-abs.html
The Community for Technology Leaders

Issue No. 12 - December (1995 vol. 44), pp. 1462-1468

ABSTRACT

Abstract—Dense, symmetric graphs are useful interconnection models for multicomputer systems. Borel Cayley graphs, the densest degree-4 graphs for a range of diameters [1], are attractive candidates. However, the group-theoretic representation of these graphs makes the development of efficient routing algorithms difficult. In earlier reports, we showed that all degree-4 Borel Cayley graphs have generalized chordal ring (GCR) and chordal ring (CR) representations [2], [3]. In this paper, we present the class-congruence property and use this property to develop the two-phase routing algorithm for Borel Cayley graphs in a special GCR representation. The algorithm requires a small space complexity of O(p+k) for n = p×k nodes. Although suboptimal, the algorithm finds paths with length bounded by 2D, where D is the diameter. Furthermore, our computer implementation of the algorithm on networks with 1,081 and 15,657 nodes shows that the average path length is on the order of the diameter. The performance of the algorithm is compared with that of existing optimal and suboptimal algorithms.

INDEX TERMS
Generalized chordal ring, interconnection network, parallel computer.

CITATION
Bruce W. Arden, K. Wendy Tang, "Class-Congruence Property and Two-Phase Routing of Borel Cayley Graphs", IEEE Transactions on Computers, vol. 44, no. 12, pp. 1462-1468, December 1995, doi:10.1109/12.477252
http://ctms.engin.umich.edu/CTMS/index.php?example=BallBeam&section=ControlFrequency
# Ball & Beam: Frequency Domain Methods for Controller Design

Key MATLAB commands used in this tutorial are: tf , bode , feedback , step

## Contents

The open-loop transfer function of the plant for the ball and beam experiment is given below (reconstructed from the MATLAB code further down):

(1) $$P(s) = \frac{R(s)}{\Theta(s)} = \frac{-mgd}{L\left(\frac{J}{R^2}+m\right)}\cdot\frac{1}{s^2}$$

The design criteria for this problem are:

• Settling time less than 3 seconds
• Overshoot less than 5%

To see the derivation of the equations for this problem refer to the Ball & Beam: System Modeling page.

## Open-loop bode plot

The main idea of frequency based design is to use the Bode plot of the open-loop transfer function to estimate the closed-loop response. Adding a controller to the system changes the open-loop Bode plot, therefore changing the closed-loop response. Let's first draw the bode plot for the original open-loop transfer function. Create a new m-file with the following code and then run it in the MATLAB command window. You should get the following Bode plot:

m = 0.111;
R = 0.015;
g = -9.8;
L = 1.0;
d = 0.03;
J = 9.99e-6;
s = tf('s');
P_ball = -m*g*d/L/(J/R^2+m)/s^2;
bode(P_ball)

From this plot we see that the phase margin is zero. Since the phase margin is defined as the change in open-loop phase shift necessary to make a closed-loop system unstable, this zero phase margin indicates our system is unstable. We want to increase the phase margin and we can use a lead compensator controller to do this. For more information on Phase and Gain margins please refer to the Introduction: Frequency Domain Methods for Controller Design page.

A first order phase-lead compensator has the form given below:

(2) $$C(s) = K\,\frac{1+Ts}{1+aTs}$$

The phase-lead compensator will add positive phase to our system over the frequency range between the corner frequencies $1/T$ and $1/(aT)$. The maximum added phase for one lead compensator is 90 degrees. For our controller design we need a percent overshoot of less than 5%, which corresponds to a damping ratio $\zeta$ of 0.7. 
Generally, $100\zeta$ (100 times the damping ratio) will give you the minimum phase margin needed to obtain your desired overshoot. Therefore, we require a phase margin greater than 70 degrees. To obtain $a$ and $T$, the following steps can be applied.

1. Determine the positive phase needed: We need at least 70 degrees from our controller.

2. Determine the frequency where the phase should be added (center frequency): In our case this is difficult to determine because the phase vs. frequency graph in the bode plot is a flat line. However, we have a relation between bandwidth frequency ($\omega_{BW}$) and settling time which tells us that $\omega_{BW}$ is approximately 1.92 rad/s. Therefore, we want a center frequency just before this. For now we will choose 1 rad/sec.

3. Determine the constant $a$ from the equation below; this determines the required space between the zero and the pole for the maximum phase added.

(3) $$a = \frac{1-\sin\phi}{1+\sin\phi}$$

where $\phi$ refers to the desired phase margin. For 70 degrees, $a$ = 0.0311.

4. Determine $aT$ and $T$ from the following equations:

(4) $$aT = \frac{\sqrt{a}}{\omega}$$

(5) $$T = \frac{1}{\omega\sqrt{a}}$$

For 70 degrees and center frequency $\omega$ = 1, $aT$ = 0.176 and $T$ = 5.67.

Now, we can add our lead controller to the system and view the bode plot. Remove the bode command from your m-file and add the following. You should get the following bode plot:

phi=70*pi/180;
a=(1-sin(phi))/(1+sin(phi));
w=1;
T=1/(w*sqrt(a));
K = 1;
C = K*(1+T*s)/(1+a*T*s);
bode(C*P_ball)

You can see that our phase margin is now 70 degrees. Let's check the closed-loop response to a step input of 0.25 m. Add the following to your m-file. You should get the following plot:

sys_cl = feedback(C*P_ball,1);
t = 0:0.01:5;
step(0.25*sys_cl,t)

Although the system is now stable and the overshoot is only slightly over 5%, the settling time is not satisfactory. Increasing the gain will increase the crossover frequency and make the response faster. With $K$ = 5, your response should look like:

K = 5;
C = K*(1+T*s)/(1+a*T*s);
sys_cl = feedback(C*P_ball,1);
bode(C*P_ball)
step(0.25*sys_cl,t)

The response is faster, however, the overshoot is much too high. 
Increasing the gain further will just make the overshoot worse. We can increase our phase-lead compensator to decrease the overshoot. Create an m-file and copy the following code from your web-browser into it: pm = 80; w = 1; K = 1; %view compensated system bode plot pmr = pm*pi/180; a = (1 - sin(pmr))/(1+sin(pmr)); T = sqrt(a)/w; aT = 1/(w*sqrt(a)); C = K*(1+aT*s)/(1+T*s); figure bode(C*P_ball) %view step response sys_cl = feedback(C*P_ball,1); t = 0:0.01:5; figure step(0.25*sys_cl,t) The overshoot is fine but the settling time is just a bit long. Try different numbers and see what happens. Using the following values the design criteria was met. pm = 85; w = 1.9; K = 2; %view compensated system bode plot pmr = pm*pi/180; a = (1 - sin(pmr))/(1+sin(pmr)); T = sqrt(a)/w; aT = 1/(w*sqrt(a)); C = K*(1+aT*s)/(1+T*s); figure bode(C*P_ball) %view step response sys_cl = feedback(C*P_ball,1); t = 0:0.01:5; figure step(0.25*sys_cl,t) Note: A design problem does not necessarily have a unique answer. Using this method (or any other) may result in many different compensators. For practice you may want to go back and change the added phase, gain, or center frequency.
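The lead-compensator arithmetic in steps 3 and 4 earlier on this page does not require MATLAB; the following plain-Python sketch (assumed equivalent to the m-file lines, no control toolbox needed) reproduces a = 0.0311, aT = 0.176 and T = 5.67 for a 70-degree phase margin at a 1 rad/s center frequency:

```python
import math

def lead_params(phase_deg, w):
    """Parameters of a first-order phase-lead compensator
    C(s) = K*(1 + T*s)/(1 + a*T*s), with maximum added phase
    phase_deg occurring at center frequency w (rad/s)."""
    phi = math.radians(phase_deg)
    a = (1 - math.sin(phi)) / (1 + math.sin(phi))  # zero/pole spacing
    T = 1 / (w * math.sqrt(a))                     # zero time constant
    return a, a * T, T

a, aT, T = lead_params(70, 1)
print(round(a, 4), round(aT, 3), round(T, 2))  # 0.0311 0.176 5.67
```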
http://fourier.eng.hmc.edu/e176/lectures/algebra/node19.html
# The Fundamental Theorem of Linear Algebra

When solving a linear equation system $A\mathbf{x}=\mathbf{b}$, with an $M\times N$ coefficient matrix $A$, we need to answer some questions such as the following:

• Does the solution exist, i.e., can we find an $\mathbf{x}$ so that $A\mathbf{x}=\mathbf{b}$ holds?
• If a solution exists, is it unique? If it is not unique, how can we find all of the solutions?
• If no solution exists, can we still find the optimal approximate solution $\hat{\mathbf{x}}$ so that the error $\|A\hat{\mathbf{x}}-\mathbf{b}\|$ is minimized?
• If the system has fewer equations than unknowns ($M<N$), are there infinite solutions?
• If the system has more equations than unknowns ($M>N$), is there no solution?

The fundamental theorem of linear algebra can reveal the structure of the solutions of a given linear system $A\mathbf{x}=\mathbf{b}$, and thereby answer all such questions. The coefficient matrix $A$ can be expressed in terms of either its M-D column vectors $\mathbf{a}_j$ or its N-D row vectors $\mathbf{r}_i$:

(303) $A = [\mathbf{a}_1, \dots, \mathbf{a}_N]$

(304) $A = [\mathbf{r}_1, \dots, \mathbf{r}_M]^T$

where $\mathbf{r}_i$ and $\mathbf{a}_j$ are respectively the $i$th row vector and $j$th column vector (all vectors are assumed to be vertical):

(305) $\mathbf{r}_i = [a_{i1}, \dots, a_{iN}]^T, \qquad \mathbf{a}_j = [a_{1j}, \dots, a_{Mj}]^T$

In general a function can be represented by $f: X \to Y$, where

• $X$ is the domain of the function, the set of all input or argument values;
• $Y$ is the codomain of the function, the set into which all outputs of the function are constrained to fall;
• the function value $f(x)$ of any $x \in X$ is called the image of $x$;
• the set of $f(x)$ for all $x \in X$ is the image of the function, a subset of the codomain.

The matrix $A$ can be considered as a function, a linear transformation $\mathbf{x} \mapsto A\mathbf{x}$, which maps an N-D vector $\mathbf{x}$ in the domain $\mathbb{R}^N$ of the function into an M-D vector $A\mathbf{x}$ in the codomain $\mathbb{R}^M$ of the function. The fundamental theorem of linear algebra concerns the following four subspaces associated with any $M\times N$ matrix $A$ with rank $R$ (i.e., $A$ has $R$ independent columns and rows).

• The column space of $A$ is a space spanned by its M-D column vectors (of which $R$ are independent):

(306) $C(A) = \operatorname{span}(\mathbf{a}_1, \dots, \mathbf{a}_N)$

which is an R-D subspace of $\mathbb{R}^M$ composed of all possible linear combinations of its column vectors:

(307) $C(A) = \{c_1\mathbf{a}_1 + \cdots + c_N\mathbf{a}_N\} = \{A\mathbf{x} \mid \mathbf{x} \in \mathbb{R}^N\}$

The column space is the image of the linear transformation $\mathbf{x} \mapsto A\mathbf{x}$, and the equation $A\mathbf{x}=\mathbf{b}$ is solvable if and only if $\mathbf{b} \in C(A)$. 
The dimension of the column space is the rank of $A$: $\dim C(A) = R$.

• The row space of $A$ is a space spanned by its N-D row vectors (of which $R$ are independent):

(308) $R(A) = \operatorname{span}(\mathbf{r}_1, \dots, \mathbf{r}_M)$

which is an R-D subspace of $\mathbb{R}^N$ composed of all possible linear combinations of its row vectors:

(309) $R(A) = \{c_1\mathbf{r}_1 + \cdots + c_M\mathbf{r}_M\} = \{A^T\mathbf{y} \mid \mathbf{y} \in \mathbb{R}^M\}$

The row space is the image of the linear transformation $\mathbf{y} \mapsto A^T\mathbf{y}$, and the equation $A^T\mathbf{y}=\mathbf{c}$ is solvable if and only if $\mathbf{c} \in R(A)$. As the rows and columns in $A$ are respectively the columns and rows in $A^T$, the row space of $A$ is the column space of $A^T$, and the column space of $A$ is the row space of $A^T$:

(310) $R(A) = C(A^T), \qquad C(A) = R(A^T)$

The rank $R$ is the number of linearly independent rows and columns of $A$, i.e., the row space and the column space have the same dimension, both equal to the rank of $A$:

(311) $\dim R(A) = \dim C(A) = R$

• The null space (kernel) of $A$, denoted by $N(A)$, is the set of all N-D vectors $\mathbf{x}$ that satisfy the homogeneous equation

(312) $A\mathbf{x} = \mathbf{0}$

i.e.,

(313) $N(A) = \{\mathbf{x} \in \mathbb{R}^N \mid A\mathbf{x} = \mathbf{0}\}$

In particular, when $\mathbf{x}=\mathbf{0}$, we get $A\mathbf{0}=\mathbf{0}$, i.e., the origin is in the null space. As $\mathbf{r}_i^T\mathbf{x}=0$ for any row $\mathbf{r}_i$ and any $\mathbf{x} \in N(A)$, we see that the null space and the row space are orthogonal to each other, $N(A) \perp R(A)$. The dimension of the null space is called the nullity of $A$: $\dim N(A) = N-R$. The rank-nullity theorem states the sum of the rank and the nullity of an $M\times N$ matrix is equal to $N$:

(314) $\dim R(A) + \dim N(A) = R + (N-R) = N$

We therefore see that $R(A)$ and $N(A)$ are two mutually exclusive and complementary subspaces of $\mathbb{R}^N$:

(315) $R(A) \oplus N(A) = \mathbb{R}^N$

i.e., they are orthogonal complements of each other, denoted by

(316) $R(A) = N(A)^\perp, \qquad N(A) = R(A)^\perp$

Any N-D vector $\mathbf{x} \in \mathbb{R}^N$ can be decomposed into a component in $R(A)$ and a component in $N(A)$.

• The null space of $A^T$ (the left null space of $A$), denoted by $N(A^T)$, is the set of all M-D vectors $\mathbf{y}$ that satisfy the homogeneous equation

(317) $A^T\mathbf{y} = \mathbf{0}$

i.e.,

(318) $N(A^T) = \{\mathbf{y} \in \mathbb{R}^M \mid A^T\mathbf{y} = \mathbf{0}\}$

As all $\mathbf{y} \in N(A^T)$ are orthogonal to the columns of $A$, $N(A^T)$ is orthogonal to the column space $C(A)$:

(319) $N(A^T) \perp C(A)$

We see that $C(A)$ and $N(A^T)$ are two mutually exclusive and complementary subspaces of $\mathbb{R}^M$:

(320) $C(A) \oplus N(A^T) = \mathbb{R}^M$

Any M-D vector $\mathbf{y} \in \mathbb{R}^M$ can be decomposed into a component in $C(A)$ and a component in $N(A^T)$.

The four subspaces are summarized in the figure below, showing the domain (left) and the codomain (right) of the linear mapping $\mathbf{x} \mapsto A\mathbf{x}$, where

• $\mathbf{x}_r \in R(A)$ is the particular solution that is mapped to $A\mathbf{x}_r = \mathbf{b}$, the image of $\mathbf{x}_r$;
• $\mathbf{x}_n \in N(A)$ is a homogeneous solution that is mapped to $A\mathbf{x}_n = \mathbf{0}$;
• $\mathbf{x} = \mathbf{x}_r + \mathbf{x}_n$ is the complete solution that is mapped to $A\mathbf{x} = \mathbf{b}$. 
On the other hand, , , and are respectively the particular, homogeneous and complete solutions of . Here we have assumed and , i.e., both and are solvable. We will also consider the case where later. We now consider specifically how to find the solutions of the system in light of the four subspaces of defined above, through the examples below. Example 1: Solve the homogeneous equation system: (321) We first convert into the rref: (322) The columns in the rref containing a single 1, called a pivot, are called the pivot columns, and the rows containing a pivot are called the pivot rows. Here, , i.e., is a singular matrix. The two pivot rows and can be used as the basis vectors that span the row space : (323) Note that the pivot columns of the rref do not span the column space , as the row reduction operations do not reserve the columns of . But they indicate the corresponding columns and in the original matrix can be used as the basis that spans . In general the bases of the row and column spaces so obtained are not orthogonal. The pivot rows are the independent equations in the system of equations, and the variables corresponding to the pivot columns (here and ) are the pivot variables. The remaining non-pivot rows containing all zeros are not independent, and the variables corresponding to the non-pivot rows are free variables (here ), which can take any values. From the rref form of the equation, we get (324) If we let the free variable take the value 1, then we can get the two pivot variables and , and a special homogeneous solution as a basis vector that spans the 1-D null space . 
However, as the free variable can take any value , the complete solution is the entire 1-D null space: (325) Example 2: Solve the non-homogeneous equation with the same coefficient matrix used in the previous example: (326) We use Gauss-Jordan elimination to solve this system: (327) The pivot rows correspond to the independent equations in the system, i.e., , while the remaining non-pivot row does not play any role as they map any to . As is singular, does not exist. However, we can find the solution based on the rref of the system, which can also be expressed in block matrix form: (328) where (329) Solving the matrix equation above for , we get (330) If we let , we get a particular solution , which can be expressed as a linear combination of and that span , and that span : (331) We see that this solution is not entirely in the row space . In general, this is the case for all particular solutions so obtained. Having found both the particular solution and the homogeneous solution , we can further find the complete solution as the sum of and the entire null space spanned by : (332) Based on different constant , we get a set of equally valid solutions. For example, if , then we get (333) These solutions have the same projection onto the row space , i.e., they have the same projections onto the two basis vectors and that span : (334) The figure below shows how the complete solution can be obtained as the sum of a particular solution in and the entire null space . Here and space is composed of and , respectively 2-D and 1-D on the left, but 1-D and 2-D on the right. In either case, the complete solution is any particular solution plus the entire null space, the vertical dashed line on the left, the top dashed plane on the right. All points on the vertical line or top satisfy the equation system, as they are have the same projection onto the row space . 
If the right hand side is , then the rref of the equation becomes: (335) The non-pivot row is an impossible equation , indicating that no solution exists, as is not in the column space spanned by and . Example 3: Find the complete solution of the following linear equation system: (336) This equation can be solved in the following steps: • Construct the augmented matrix and then convert it to the rref form: The two pivot rows and in the rref span , and the two columns in the original matrix corresponding to the pivot columns in the rref, and could be used as the basis vectors that span . The equation system can be represented in block matrix form: (337) where (338) Multiplying out we get (339) • Find the homogeneous solution for equation by setting : (340) We let be either of the two standard basis vectors and of the null space , and get (341) and the two corresponding homogeneous solutions: (342) • Find the particular solution of the non-homogeneous equation by setting (343) • Find the complete solution: (344) If the right-hand side is , then the row reduction of the augmented matrix yields: (345) The equation corresponding to the last non-pivot row is , indicating the system is not solvable (even though the coefficient matrix does not have full rank), because is not in the column space. Example 4: Consider the linear equation system with a coefficient matrix , the transpose of used in the previous example: (346) • Convert the augmented matrix into the rref form: (347) The two pivot rows and are the basis vectors that span . The two vectors that span found in Example 3 can be expressed as linear combinations of and , and , i.e., either of the two pairs can be used as the basis that span . Based on the rref above, the equation system can now be written as (348) where (349) Multiplying out we get (350) • Find the homogeneous solution for by setting and thereby . 
We let the free variable and get (351) • Find the particular solution of the non-homogeneous equation by setting : (352) • Find the complete solution: (353) If the right-hand side is , (354) indicating the system is not solvable, as this , i.e., it is not in the column space of or row space of . In the two examples above, we have obtained all four subspaces associated with this matrix with , , and , in terms of the bases that span the subspaces: 1. The row space is an R-D subspace of , spanned by the pivot rows of the rref of : (355) 2. The null space is an (N-R)-D subspace of spanned by the independent homogeneous solutions: (356) Note that the basis vectors of are indeed orthogonal to those of . 3. The column space of is the same as the row space of , which is the R-D subspace of spanned by the two pivot rows of the rref of . (357) 4. The left null space is a (M-R)-D subspace of spanned by the homogeneous solutions (here one solution): (358) Again note that the basis vectors of are orthogonal to those of . In general, here are the ways to find the bases of the four subspaces: • The basis vectors of are the pivot rows of the rref of . • The basis vectors of are the pivot rows of the rref of . • The basis vectors of are the independent homogeneous solutions of . To find them, reduce to the rref, identify all free variables corresponding to non-pivot columns, set one of them to 1 and the rest to 0, solve homogeneous system to find the pivot variables to get one basis vector. Repeat the process for each of the free variables to get all basis vectors. • The basis vectors of can be obtained by doing the same as above for . Note that while the basis of are the pivot rows of the rref of , as its rows are equivalent to those of , the pivot columns of the rref basis of are not the basis of , as the columns of have been changed by the row deduction operations and are therefore not equivalent to the columns of the resulting rref. 
The columns in corresponding to the pivot columns in the rref could be used as the basis of . Alternatively, the basis of can be obtained from the rref of , as its rows are equivalent to those of , which are the columns of . We further make the following observations: • The basis vectors of each of the four subspaces are independent, the basis vectors of and are orthogonal, and . Similarly, the basis vectors of and are orthogonal, and . In other words, the four subspaces indeed satisfy the following orthogonal and complementary properties: (359) i.e., they are orthogonal complements: . • For to be solvable, the constant vector on the right-hand side must be in the column space, . Otherwise the equation is not solvable, even if . Similarly, for to be solvable, must be in the row space . In the examples above, both and are indeed in their corresponding column spaces: (360) But as and , the corresponding systems have no solutions. • All homogeneous solutions of are in the null space , but in general the particular solutions are not necessarily in the row space . In the example above, is a linear combination of the basis vectors of and basis vectors of : (361) where (362) are the projections of onto and , respectively, and is another particular solution without any homogeneous component that satisfies , while is a homogeneous solution satisfying . • All homogeneous solutions of are in the left null space , but in general the particular solutions are not necessarily in the column space. In the example above, is a linear combination of the basis vectors of and basis vector of : (363) where is a particular solution (without any homogeneous component) that satisfies . Here is a summary of the four subspaces associated with an by matrix of rank . 
| Case | $\dim R(A)$ | $\dim N(A)$ | $\dim C(A)$ | $\dim N(A^T)$ | solvability of $A\mathbf{x}=\mathbf{b}$ |
|---|---|---|---|---|---|
| $R=M=N$ | $R$ | $0$ | $R$ | $0$ | solvable, $\mathbf{x}=A^{-1}\mathbf{b}$ is the unique solution |
| $R=N<M$ | $R$ | $0$ | $R$ | $M-R$ | over-constrained, solvable only if $\mathbf{b}\in C(A)$ |
| $R=M<N$ | $R$ | $N-R$ | $R$ | $0$ | under-constrained, solvable, infinite solutions |
| $R<M,\ R<N$ | $R$ | $N-R$ | $R$ | $M-R$ | solvable only if $\mathbf{b}\in C(A)$, infinite solutions |

The figure below illustrates a specific case with $M>N$ and $\mathbf{b}\notin C(A)$. As $\mathbf{b}\notin C(A)$, the system can only be approximately solved to find $\hat{\mathbf{x}}$, for which $A\hat{\mathbf{x}}$ is the projection of $\mathbf{b}$ onto the column space $C(A)$. The error $\|A\hat{\mathbf{x}}-\mathbf{b}\|$ is minimized, and $\hat{\mathbf{x}}$ is the optimal approximation. We will consider ways to obtain this optimal approximation in the following sections.
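The dimension and orthogonality relations of the four subspaces are easy to check numerically; here is a short NumPy sketch (illustrative, using an arbitrary rank-2 matrix rather than any example from these notes):

```python
import numpy as np

# A 3x3 matrix of rank 2 (its rows satisfy row1 - 2*row2 + row3 = 0).
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
M, N = A.shape
R = np.linalg.matrix_rank(A)

# Bases for N(A) and N(A^T) from the SVD: the right-singular vectors
# belonging to the (numerically) zero singular values.
null_basis = np.linalg.svd(A)[2][R:]         # spans N(A), shape (N-R, N)
left_null_basis = np.linalg.svd(A.T)[2][R:]  # spans N(A^T), shape (M-R, M)

assert R + null_basis.shape[0] == N          # rank-nullity: R + dim N(A) = N
assert np.allclose(A @ null_basis.T, 0)      # rows of A are orthogonal to N(A)
assert np.allclose(left_null_basis @ A, 0)   # columns of A are orthogonal to N(A^T)
print(R, N - R, M - R)  # 2 1 1
```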
https://stacks.math.columbia.edu/tag/03OJ
Theorem 58.17.4 (Meta theorem on quasi-coherent sheaves). Let $S$ be a scheme. Let $\mathcal{C}$ be a site. Assume that 1. the underlying category $\mathcal{C}$ is a full subcategory of $\mathit{Sch}/S$, 2. any Zariski covering of $T \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ can be refined by a covering of $\mathcal{C}$, 3. $S/S$ is an object of $\mathcal{C}$, 4. every covering of $\mathcal{C}$ is an fpqc covering of schemes. Then the presheaf $\mathcal{O}$ is a sheaf on $\mathcal{C}$ and any quasi-coherent $\mathcal{O}$-module on $(\mathcal{C}, \mathcal{O})$ is of the form $\mathcal{F}^ a$ for some quasi-coherent sheaf $\mathcal{F}$ on $S$. Proof. After some formal arguments this is exactly Theorem 58.16.2. Details omitted. In Descent, Proposition 35.8.11 we prove a more precise version of the theorem for the big Zariski, fppf, étale, smooth, and syntomic sites of $S$, as well as the small Zariski and étale sites of $S$. $\square$
http://mathhelpforum.com/algebra/181498-fin-zeros-polynomial-function.html
# Math Help - Find the zeros of the polynomial function 1. ## Find the zeros of the polynomial function f(x) = x^2 + 11x + 18 Please write the steps to solve this problem. Thank you. 2. (x+9)(x+2)=0 so x= -9, x=-2 3. Originally Posted by GhitaBelghiti f(x) = x^2 + 11x + 18 Please write the steps to solve this problem. Thank you. see post #2 by duke. if you can't guess how he got that then visit this and then try the problem Quadratic equation - Wikipedia, the free encyclopedia
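For completeness, the factoring in post #2 can be checked with the quadratic formula (a quick Python sketch):

```python
import math

# Roots of f(x) = x^2 + 11x + 18 via x = (-b ± sqrt(b^2 - 4ac)) / (2a)
a, b, c = 1, 11, 18
disc = b * b - 4 * a * c              # 121 - 72 = 49
x1 = (-b + math.sqrt(disc)) / (2 * a)
x2 = (-b - math.sqrt(disc)) / (2 * a)
print(x1, x2)  # -2.0 -9.0
```

These are the same zeros, x = -9 and x = -2, that factoring (x+9)(x+2) gives.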
http://lambda-the-ultimate.org/node/2565
## Weak normalisation theorem for typed lambda-calculus Hi, I read the proof of the weak normalisation theorem in the book Proofs and Types, chapter 4. There is something that seems strange to me. It says that: The degree &(T) of a type is defined by: - &(Ti) = 1 if Ti is atomic. - &(U x V) = &(U -> V) = max(&(U), &(V)) + 1 The degree &(r) of a redex is defined by: - &(fst <u,v>) = &(snd <u,v>) = &(U x V) where U x V is the type of <u,v> - &((\x.v)u) = &(U -> V) where U -> V is the type of (\x.v) Reading this it seems to me that a redex has the degree of its type. Am I wrong? Anyway, after it says: Note that, if r is a redex of type T, then &(r) > &(T). Why should &(r) be greater than &(T)? thanks, bye ### Explanation Here are some examples. Suppose T is atomic and X is a normal form of type T. • &(T) = 1 • &(T x T) = max(&(T), &(T)) + 1 = 2 • &(X) = 0 • &(fst <X, X>) = &(T x T) = 2 So the redex degree in the last line is equal to the degree of the tuple type and not equal to the degree of the type of the first element. ### By the way, does anyone have By the way, does anyone have any suggestions as to how I might legally obtain a reasonably priced copy of this book? The cheapest listing on amazon.co.uk at the moment is £211.44. It's only a little book.
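The type-degree function from the question is easy to play with directly. A small sketch (the encoding of types as tuples is my own, purely for illustration):

```python
# Types are either an atomic name (a string) or a tuple (op, U, V)
# where op is 'x' for products or '->' for arrows.
def degree(t):
    """Degree &(T) of a type, as defined in Proofs and Types ch. 4."""
    if isinstance(t, str):           # atomic type: degree 1
        return 1
    op, u, v = t                     # ('x', U, V) or ('->', U, V)
    return max(degree(u), degree(v)) + 1

T = 'T'                              # atomic: degree 1
TxT = ('x', T, T)                    # degree 2, as in the reply's example
print(degree(T), degree(TxT))        # 1 2
```

This reproduces the reply's point: the redex fst <X, X> has the degree of the tuple type T x T (namely 2), which is strictly greater than the degree of the result type T (namely 1).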
https://casper.astro.berkeley.edu/astrobaki/index.php?title=Lorentz_transformations&oldid=5161
# Lorentz transformations ## Basics of Special Relativity ### Postulates 1. The laws of physics are the same in all inertial reference frames. 2. The speed of light is constant in all reference frames. ### Thought experiments #### Time dilation Imagine you are on a train and sending a pulse of light vertically from the floor to the ceiling. The pulse bounces off the ceiling and returns to you. The pulse travels at the speed of light, ${\displaystyle c}$, and the time it takes to return to you is ${\displaystyle 2t_{0}}$. So the light pulse travels a total distance of ${\displaystyle 2ct_{0}}$. Meanwhile, there is another observer on the train platform who sees the train traveling by him with a speed ${\displaystyle v}$. From his perspective, the light pulse does not travel vertically. Instead, it moves horizontally as well. Call the time that this observer measures for the pulse to travel to the ceiling and back ${\displaystyle 2t}$. Then, when the light pulse has returned to the floor of the train, the observer on the platform measures a horizontal displacement of ${\displaystyle 2vt}$, since this is the distance the train has moved. The path of the light pulse will be diagonal from his perspective – diagonally up and then diagonally down. Since the speed of light is the same for all observers, he will measure that the pulse has traveled a distance of ${\displaystyle 2ct}$. The two diagonal paths together with the horizontal displacement form a triangle. Let’s split this triangle into two right triangles by drawing a vertical line through the center. The length of the vertical line is ${\displaystyle ct_{0}}$. Each of the two right triangles will have a horizontal side of length ${\displaystyle vt}$ and hypotenuse ${\displaystyle ct}$. Use the Pythagorean theorem to relate the sides, i.e., ${\displaystyle (ct)^{2}=(vt)^{2}+(ct_{0})^{2}}$. 
We can rearrange this to get ${\displaystyle t=t_{0}\gamma ,\,\!}$ where ${\displaystyle \gamma \equiv {\frac {1}{\sqrt {1-(v/c)^{2}}}}.\,\!}$ We call ${\displaystyle t_{0}}$ the proper time. It is the time between two events in the reference frame in which the events occur at the same spatial location. In any other reference frame, the elapsed time will be longer. In our case, the two events are the sending and receiving of the light pulse. The take-away point is that time appears to be progressing slower for moving objects. #### Length contraction Suppose the length of the train car is ${\displaystyle \ell _{0}}$ at rest. This is the length that the observer on the train would measure. Suppose there is an observer on the platform. He is standing still on the platform letting the train pass by him, and he records the time at which the front of the train car passes him and the time at which the back of the train car passes him. Since these two events occur at the same spatial location in the reference frame of the platform, we will use ${\displaystyle t_{0}}$ to denote the time between the passing of the front and back of the train car. So ${\displaystyle t_{0}}$ is not the same as in the section on time dilation. Then the observer on the platform would infer that the length of the train car is ${\displaystyle \ell =vt_{0}}$. In the meantime, the observer on the train would measure a time difference of ${\displaystyle t=\ell _{0}/v}$. Now we can use the time-dilation formula to relate ${\displaystyle t}$ and ${\displaystyle t_{0}}$. Then we can relate ${\displaystyle \ell }$ and ${\displaystyle \ell _{0}}$ as ${\displaystyle \ell ={\frac {\ell _{0}}{\gamma }}.\,\!}$ It’s easy to be confused by this logic. The proper length, ${\displaystyle \ell _{0}}$, is measured in the reference frame of the train, but the proper time, ${\displaystyle t_{0}}$, is measured in the reference frame of the platform. 
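A quick numerical illustration of both effects (speeds in units of c; the numbers are just examples):

```python
import math

def gamma(v):
    """Lorentz factor for speed v in units of c (requires |v| < 1)."""
    return 1.0 / math.sqrt(1.0 - v * v)

v = 0.8                    # an illustrative speed, 0.8c
g = gamma(v)               # 5/3 ≈ 1.667
t0, l0 = 1.0, 1.0          # proper time (s) and proper length (m)
print(g * t0)              # dilated time    t = γ t0 ≈ 1.667 s
print(l0 / g)              # contracted length ℓ = ℓ0 / γ = 0.6 m
```

At 0.8c a moving clock runs slow by the factor γ = 5/3, and the moving train car is shorter by the same factor.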
The proper length is the length measured in the frame in which the object is at rest. The proper time is the time between two events in the frame in which those events occur at the same spatial location. The take-away point is that moving objects contract along the direction of motion. ### Units (${\displaystyle c=1}$) When dealing with relativistic systems on a theoretical level, it is often useful to set ${\displaystyle c=1}$. Most people find this totally ridiculous at first, but gradually they understand the legitimacy and the usefulness of this choice. This is standard practice in the general-relativity and particle-physics communities. It is such a common choice that virtually all papers in these fields do not even bother to state it. It is mostly only in textbooks that the choice is clearly stated, and usually this is done very early on. For instance, Sean Carroll sets ${\displaystyle c=1}$ on page 8 of Spacetime and Geometry. Steven Weinberg sets ${\displaystyle c=1}$ in the preface of Gravitation and Cosmology. In An Introduction to Quantum Field Theory, Peskin and Schroeder set ${\displaystyle c=1}$ between the preface and the editor’s foreword. And so on... When we set ${\displaystyle c=1}$, we are just saying that ${\displaystyle 3\times 10^{10}\,\mathrm {cm} =1\,\mathrm {s} }$. Distance and time can be measured in the same units. You can measure distance in seconds, and you can measure time in centimeters. If you don’t want your distances in seconds, then you can always convert to centimeters by multiplying by ${\displaystyle 3\times 10^{10}}$. If you are ever given an expression in which ${\displaystyle c}$ has been set to ${\displaystyle 1}$, you can always restore the factors of ${\displaystyle c}$ by requiring the units to come out the way you want. For example, we might write ${\displaystyle E=m}$, where ${\displaystyle E}$ is an energy and ${\displaystyle m}$ is a mass.
If you want energy to be measured in ergs and mass to be measured in grams, then there is only one way to restore the factors of ${\displaystyle c}$, i.e., ${\displaystyle E=mc^{2}}$. On the other hand, if you’re content to measure both energy and mass in, for example, ${\displaystyle \mathrm {MeV} }$, then you don’t need to restore any factors of ${\displaystyle c}$ at all. This is why people often quote the masses of fundamental particles in ${\displaystyle \mathrm {MeV} }$ or ${\displaystyle \mathrm {GeV} }$. Sometimes they go halfway toward restoring the factors of ${\displaystyle c}$ by quoting masses in ${\displaystyle \mathrm {MeV} /c^{2}}$. The theoretical advantage – besides simplifying expressions – is that time and space are put on the same footing. The Lorentz transformation is seen as a rotation of time and space into each other. Since we now have ${\displaystyle E=m}$ instead of ${\displaystyle E=mc^{2}}$, mass is now viewed as a form of energy. Momentum will also now have the same units as energy, so we see that momentum is yet another form of energy. Instead of ${\displaystyle E^{2}=m^{2}c^{4}+p^{2}c^{2}}$, we now have ${\displaystyle E^{2}=m^{2}+p^{2}}$. This is simpler, easier to remember and clearly shows that mass and momentum are the contributions to the energy of a free particle. The factors of ${\displaystyle c}$ in the first expression only serve to get the units right. They don’t contain any theoretical significance. Additionally, velocities will now be dimensionless and always less than or equal to ${\displaystyle 1}$. This gives us a dimensionless parameter to measure how relativistic a particle is or to Taylor-expand in – higher powers of ${\displaystyle v}$ will be less and less significant. This takes the place of the parameter ${\displaystyle \beta =v/c}$ that is sometimes used in special relativity. 
If you ever want to take the Newtonian limit of a relativistic expression, then it is a good idea to restore all factors of ${\displaystyle c}$. In the Newtonian limit, ${\displaystyle c\to \infty }$. This makes no sense at all if ${\displaystyle c=1}$. Often, however, the ${\displaystyle c\to \infty }$ limit is equivalent to the ${\displaystyle v\ll 1}$ limit, so it’s not always necessary to restore the factors of ${\displaystyle c}$. Just be careful. In particle physics, it is typical to also set ${\displaystyle \hbar =1}$. In the classical limit, ${\displaystyle \hbar \to 0}$, so you will also sometimes see particle physicists restore and Taylor-expand in factors of ${\displaystyle \hbar }$ when they are interested in classical or semi-classical limits. You cannot set every constant equal to unity. If your system of units has ${\displaystyle n}$ units, then you can set ${\displaystyle n}$ constants to unity as long as they are linearly independent in their dimensions. This ensures that there is a unique way of restoring all of these constants. In cgs units, it is typical to take ${\displaystyle c=\hbar =1}$. Depending on the problem, you have the freedom to set some other prominent constant to unity. If Kelvin is included, it is common to set ${\displaystyle k=1}$. ## Lorentz transformations We can use the time-dilation and length-contraction thought experiments to derive the coordinate transformations between two frames traveling at a constant velocity with respect to each other. This is not the only type of Lorentz transformation. Typically, spatial rotations are also considered to be Lorentz transformations. Since spatial rotations are exactly the same in both the Einsteinian and Newtonian formulations, we will focus on Lorentz boosts, i.e., transformations to frames traveling at some constant velocity with respect to the original frame. 
Let ${\displaystyle S}$ be the stationary reference frame and ${\displaystyle S^{\prime }}$ be the frame moving with velocity ${\displaystyle v}$ which, without loss of generality, we can take to be along the ${\displaystyle x}$-axis. Unprimed coordinates refer to ${\displaystyle S}$; primed coordinates refer to ${\displaystyle S^{\prime }}$. Without loss of generality, assume the origins of the two coordinate systems coincide at ${\displaystyle t=t^{\prime }=0}$. When this is not the case, it will only introduce an overall offset. So we can always translate our coordinate system to make it the case that the origins coincide at ${\displaystyle t=0}$. Suppose there is a train car of length ${\displaystyle \ell }$ traveling with the ${\displaystyle S^{\prime }}$ frame. The back of the car is at ${\displaystyle x=0}$ at ${\displaystyle t=0}$ and is always at ${\displaystyle x^{\prime }=0}$. The trajectory of the front of the car is ${\displaystyle x=\ell +vt}$. This is just one-dimensional inertial motion with the initial condition ${\displaystyle x(0)=\ell }$. But in ${\displaystyle S^{\prime }}$, the car is at rest and, therefore, its length is larger, i.e., ${\displaystyle \ell ^{\prime }=\ell \gamma }$ using the length-contraction formula. In ${\displaystyle S^{\prime }}$, the front of the car is not moving, so its trajectory is given by ${\displaystyle x^{\prime }=\ell ^{\prime }=\ell \gamma }$. Then we can substitute ${\displaystyle \ell =x^{\prime }/\gamma }$ in the expression for ${\displaystyle x(t)}$, the trajectory in ${\displaystyle S}$. We can rearrange the terms to get ${\displaystyle x^{\prime }=\gamma (x-vt).\,\!}$ We can use the same logic to get a transformation from ${\displaystyle S^{\prime }}$ to ${\displaystyle S}$, but this time the velocity will be pointing in the opposite direction, i.e., ${\displaystyle v\to -v}$. So we have ${\displaystyle x=\gamma (x^{\prime }+vt^{\prime })}$.
We can combine the two results to get an expression for ${\displaystyle t^{\prime }}$ in terms of ${\displaystyle x}$ and ${\displaystyle t}$. Using the definition of ${\displaystyle \gamma }$, this turns out to be ${\displaystyle t^{\prime }=\gamma (t-vx).\,\!}$ The expressions for ${\displaystyle x^{\prime }}$ and ${\displaystyle t^{\prime }}$ are a good example of the usefulness of setting ${\displaystyle c=1}$. Notice that the expressions are symmetric under an exchange of space and time, i.e., ${\displaystyle x\leftrightarrow t}$ and ${\displaystyle x^{\prime }\leftrightarrow t^{\prime }}$. Not only is this easier to remember, but it also emphasizes the idea that time and space are part of the same geometry in relativity. The Lorentz boost can be thought of as a rotation of time and space into each other. The ${\displaystyle y}$- and ${\displaystyle z}$-coordinates are not affected by this transformation, so ${\displaystyle y^{\prime }=y\,\!}$ and ${\displaystyle z^{\prime }=z.\,\!}$ The train car was only used to give us a spacetime point to follow. These transformations are general coordinate transformations. In particular, if a particle trajectory is given by ${\displaystyle x}$, ${\displaystyle y}$, ${\displaystyle z}$ and ${\displaystyle t}$ in ${\displaystyle S}$, then the trajectory is given by ${\displaystyle x^{\prime }}$, ${\displaystyle y^{\prime }}$, ${\displaystyle z^{\prime }}$ and ${\displaystyle t^{\prime }}$ in the ${\displaystyle S^{\prime }}$ frame. What if the velocity boost is not along the ${\displaystyle x}$-axis? Don’t make your life more difficult than it has to be. Just rotate your spatial coordinates so that the boost is along the ${\displaystyle x}$-axis. You can use similar tricks to always reduce the Lorentz boost to the form above. If the origins of the two coordinate systems do not coincide, then translate one or both of the coordinate systems to make it so.
Remember that you can translate the coordinate system in both space and time. ### Velocity transformations We can use the differential form of the Lorentz transformations to see how velocities transform. In differential form, the Lorentz transformations are ${\displaystyle dx^{\prime }=\gamma (dx-vdt),\,\!}$ ${\displaystyle dt^{\prime }=\gamma (dt-vdx),\,\!}$ ${\displaystyle dy^{\prime }=dy,\,\!}$ and ${\displaystyle dz^{\prime }=dz.\,\!}$ We can define the components of the velocity vector of a particle in ${\displaystyle S}$ by ${\displaystyle v_{x}\equiv {\frac {dx}{dt}},\,\!}$ ${\displaystyle v_{y}\equiv {\frac {dy}{dt}},\,\!}$ ${\displaystyle v_{z}\equiv {\frac {dz}{dt}}\,\!}$ and likewise for the primed coordinates in ${\displaystyle S^{\prime }}$. With these definitions and a few algebraic manipulations we find ${\displaystyle v_{x}^{\prime }={\frac {v_{x}-v}{1-v_{x}v}},\,\!}$ ${\displaystyle v_{y}^{\prime }={\frac {v_{y}}{\gamma (1-v_{x}v)}}\,\!}$ and ${\displaystyle v_{z}^{\prime }={\frac {v_{z}}{\gamma (1-v_{x}v)}}.\,\!}$ These are the velocity-transformation formulae. ### Four-vectors It is often convenient to set relativistic phenomena in a 4-dimensional spacetime. We will number the dimensions of this spacetime ${\displaystyle 0}$, ${\displaystyle 1}$, ${\displaystyle 2}$ and ${\displaystyle 3}$. The ${\displaystyle 0^{\mathrm {th} }}$ component is time, and the rest are the spatial components. Then the spacetime location of a particle can be described by a 4-component object called a four-vector. We will use ${\displaystyle x}$ to represent the four-vector of a particle. Note that ${\displaystyle x}$ no longer represents position along the ${\displaystyle x}$-axis. The components of ${\displaystyle x}$ are denoted by ${\displaystyle x^{\mu }}$, where ${\displaystyle \mu }$ is meant to be an index, not an exponent. Typically, Greek indices are used when the index can take on any value from ${\displaystyle 0}$ to ${\displaystyle 3}$. 
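As a sanity check of the velocity-transformation formulae (a sketch, with all speeds in units of c = 1): boosting into a frame moving at −0.5c past a particle moving at 0.5c composes the two speeds to 0.8c rather than the Galilean 1.0c.

```python
def boost_velocity(vx, vy, vz, v):
    """Transform a 3-velocity (units of c) into a frame moving at v along x."""
    g = (1.0 - v * v) ** -0.5     # Lorentz factor gamma
    d = 1.0 - vx * v              # common denominator 1 - vx*v
    return ((vx - v) / d, vy / (g * d), vz / (g * d))

# Relativistic "addition" of two collinear speeds of 0.5c:
vxp, vyp, vzp = boost_velocity(0.5, 0.0, 0.0, -0.5)
print(vxp)   # 0.8
```

No matter how the sub-luminal speeds are composed, the result stays below 1, consistent with the second postulate.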
Latin indices are used when the index can only take on values from ${\displaystyle 1}$ to ${\displaystyle 3}$. This is a very standard convention. When the index is raised, ${\displaystyle x^{\mu }}$ is said to be contravariant. We can also define the covariant four-vector ${\displaystyle x_{\mu }}$ for which ${\displaystyle x_{0}=-x^{0}}$ and ${\displaystyle x_{i}=x^{i}}$. Usually we define a four-vector to be any object ${\displaystyle x}$ that satisfies the condition that ${\displaystyle \sum \limits _{\mu =0}^{3}x^{\mu }x_{\mu }}$ is invariant under Lorentz transformations. Note that a four-vector need not represent location in spacetime. #### Einstein summation convention We will often want to sum over the indices of four-vectors. Albert found it tiresome to have to write down so many capital sigmas, so he invented a convention for summing over indices. According to this convention ${\displaystyle x^{\mu }y_{\mu }=\sum \limits _{\mu =0}^{3}x^{\mu }y_{\mu }.\,\!}$ The essence of the convention is that repeated indices are summed over. This is also a very standard convention. #### Minkowski metric It is useful to define a 2-index tensor ${\displaystyle \eta _{\mu \nu }}$ called the Minkowski metric. This tensor can be thought of as a matrix with ${\displaystyle \eta _{00}=-1,\,\!}$ ${\displaystyle \eta _{ii}=1\,\!}$ and ${\displaystyle \eta _{\mu \nu }=0\,\!}$ when ${\displaystyle \mu \not =\nu }$. So, as a matrix, ${\displaystyle \eta _{\mu \nu }}$ is diagonal. We can use the Minkowski metric to write covariant four-vectors in terms of contravariant four-vectors, i.e., ${\displaystyle x_{\mu }=\eta _{\mu \nu }x^{\nu }.\,\!}$ We can also raise and lower indices on tensors with multiple indices, e.g., ${\displaystyle T_{\,\,\,\,\nu }^{\mu }=\eta ^{\mu \sigma }T_{\sigma \nu }}$. Different authors use different conventions for the Minkowski metric. Our definition is common among general relativists.
Among particle physicists, it is common to take ${\displaystyle \eta _{00}=+1}$ and ${\displaystyle \eta _{ii}=-1}$. This only amounts to flipping the sign on various expressions. Be sure you know which convention you are using. #### Lorentz scalar We can define the product of two four-vectors to be ${\displaystyle x^{\mu }y_{\mu }}$. This quantity is sometimes called an inner product (although it can be negative when ${\displaystyle x=y}$ which violates the conventional definition among mathematicians of an inner product). We are extremely interested in these kinds of products, because they are conserved under Lorentz transformations. Using the formulae for the Lorentz transformations, you can compute the components of ${\displaystyle (x^{\prime })^{\mu }}$ and ${\displaystyle (y^{\prime })^{\mu }}$ in terms of the components of ${\displaystyle x^{\mu }}$ and ${\displaystyle y^{\mu }}$. Then you can evaluate ${\displaystyle (x^{\prime })^{\mu }(y^{\prime })_{\mu }}$. You will find ${\displaystyle x^{\mu }y_{\mu }=(x^{\prime })^{\mu }(y^{\prime })_{\mu }.\,\!}$ Of particular interest is the case for which ${\displaystyle x^{\mu }=y^{\mu }}$. Then we have ${\displaystyle s^{2}\equiv x^{\mu }x_{\mu }=x^{2}+y^{2}+z^{2}-t^{2},\,\!}$ where ${\displaystyle s}$ is sometimes called the invariant spacetime interval and we have briefly gone back to using ${\displaystyle x}$ to mean the ${\displaystyle x}$-component of the spatial vector. The spacetime interval is conserved under all Lorentz transformations including both boosts and rotations. It is not conserved under translations. For that reason, we often write this equality in a differential form, i.e., ${\displaystyle ds^{2}=dx^{2}+dy^{2}+dz^{2}-dt^{2}.\,\!}$ Differentials are conserved under translations, so this expression is now fully invariant under all coordinate transformations. The invariant interval, ${\displaystyle s}$, should be thought of as the length of the four-vector.
In ordinary 3-dimensional space, the length of a vector is conserved under rotations. That is why we are so interested in dot products and norms; they do not depend on the orientation of our coordinate system. A Lorentz scalar is an even more useful quantity, because it is invariant under rotations as well as velocity boosts. We can also think of ${\displaystyle ds^{2}=dx^{2}+dy^{2}+dz^{2}-dt^{2}}$ as a generalization of the Pythagorean theorem. The distance we travel in 3-dimensional space is given by ${\displaystyle dr^{2}=dx^{2}+dy^{2}+dz^{2}}$. This distance is independent of the coordinate system. The distance is a physical quantity; the coordinate system is just a set of labels. When we move in spacetime, we can also define a 4-dimensional spacetime triangle whose hypotenuse is the spacetime interval. The Pythagorean theorem in our 4-dimensional spacetime is not a straightforward generalization from 3 dimensions, but it has the property that the length of the hypotenuse is completely independent of the coordinate system. In the rest frame of the particle, ${\displaystyle dx_{i}dx^{i}=0}$ and ${\displaystyle ds^{2}=-dt^{2}}$. We can define the proper time by ${\displaystyle d\tau ^{2}\equiv -ds^{2}.\,\!}$ So we could also use ${\displaystyle d\tau }$ as an invariant interval. It will only differ from ${\displaystyle ds}$ by a minus sign. #### Lorentz transformation as a matrix operation With our new formalism, we can write the Lorentz transformation as a matrix acting on a vector. The Lorentz transformation will be denoted by the 2-index object ${\displaystyle \Lambda _{\,\,\,\,\nu }^{\mu }}$.
The transformed four-vector is given by ${\displaystyle (x^{\prime })^{\mu }=\Lambda _{\,\,\,\,\nu }^{\mu }x^{\nu }.\,\!}$ This is just matrix multiplication where ${\displaystyle x^{\mu }=\left({\begin{array}{c}t\\x\\y\\z\end{array}}\right)\,\!}$ and, for example, ${\displaystyle \Lambda _{\,\,\,\,\nu }^{\mu }=\left({\begin{array}{cccc}\gamma &-v\gamma &0&0\\-v\gamma &\gamma &0&0\\0&0&1&0\\0&0&0&1\end{array}}\right).\,\!}$ for a boost along the ${\displaystyle x}$-axis. Rotations can be implemented by using the lower-right ${\displaystyle 3\times 3}$ block as a rotation matrix for a 3-dimensional vector. The covariant transformation is given by ${\displaystyle x_{\mu }^{\prime }=\Lambda _{\mu }^{\,\,\,\,\nu }x_{\nu },\,\!}$ where ${\displaystyle \Lambda _{\mu }^{\,\,\,\,\nu }=\eta _{\mu \sigma }\Lambda _{\,\,\,\,\kappa }^{\sigma }\eta ^{\kappa \nu }}$. We can also write the Minkowski metric as a matrix, i.e., ${\displaystyle \eta ^{\mu \nu }=\eta _{\mu \nu }=\left({\begin{array}{cccc}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}}\right).\,\!}$ So, when in doubt, all of these tensor manipulations can be done by simple matrix multiplication. Just make sure you’ve got the right matrix representation and that you’re multiplying the matrices in the right order. You can either check explicitly or infer from the Lorentz invariance of ${\displaystyle x^{\mu }x_{\mu }}$ that ${\displaystyle \Lambda _{\,\,\,\,\nu }^{\mu }\Lambda _{\mu }^{\,\,\,\,\sigma }=\delta _{\nu }^{\sigma },\,\!}$ where ${\displaystyle \delta _{\nu }^{\sigma }}$ is called the Kronecker delta. It is basically the identity matrix in that ${\displaystyle x^{\mu }=\delta _{\nu }^{\mu }x^{\nu }.\,\!}$ Raising and lowering indices on the Kronecker delta has no real significance. The order of the indices also doesn’t matter. 
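These matrix identities are easy to verify numerically. A sketch with an illustrative boost at v = 0.6 (units of c = 1, metric signature (−,+,+,+)):

```python
import numpy as np

v = 0.6
g = 1.0 / np.sqrt(1.0 - v * v)         # gamma = 1.25
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric
L = np.array([[ g,   -v*g, 0.0, 0.0],  # boost along the x-axis
              [-v*g,  g,   0.0, 0.0],
              [ 0.0,  0.0, 1.0, 0.0],
              [ 0.0,  0.0, 0.0, 1.0]])

# Defining property of a Lorentz transformation: it preserves eta.
assert np.allclose(L.T @ eta @ L, eta)

# The interval x^mu x_mu is invariant under x -> L x:
x = np.array([2.0, 1.0, 0.5, -0.3])    # an arbitrary four-vector (t, x, y, z)
xp = L @ x
assert np.isclose(x @ eta @ x, xp @ eta @ xp)
```

The condition L.T @ eta @ L == eta is the matrix form of the identity ${\displaystyle \Lambda _{\,\,\,\,\nu }^{\mu }\Lambda _{\mu }^{\,\,\,\,\sigma }=\delta _{\nu }^{\sigma }}$ used above.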
#### Four-velocity We define the four-velocity as ${\displaystyle U^{\mu }\equiv {\frac {dx^{\mu }}{d\tau }}.\,\!}$ Since ${\displaystyle d\tau }$ and ${\displaystyle dx^{\mu }dx_{\mu }}$ are Lorentz scalars, the four-velocity is also a four-vector, i.e., ${\displaystyle U^{\mu }U_{\mu }}$ is Lorentz invariant. The ${\displaystyle 0^{\mathrm {th} }}$ component of ${\displaystyle U^{\mu }}$ does not have an intuitive interpretation. The spatial components of ${\displaystyle U^{\mu }}$ are not quite the same as the real velocity which would be ${\displaystyle dx^{i}/dt}$. Only in the non-relativistic limit, when ${\displaystyle d\tau \simeq dt}$, do the spatial components of ${\displaystyle U^{\mu }}$ begin to approximate the real velocity. Using ${\displaystyle d\tau =dt/\gamma (v)}$, where ${\displaystyle v}$ is the real velocity of the particle, we can express the four-velocity as ${\displaystyle U^{\mu }=\gamma (v)(1,\mathbf {v} ),\,\!}$ where ${\displaystyle \mathbf {v} }$ is the 3-dimensional real velocity vector. In particular, ${\displaystyle U^{\mu }=(1,0,0,0)}$ in the particle’s rest frame which means that ${\displaystyle U^{\mu }U_{\mu }=-1.\,\!}$ #### Four-acceleration The four-acceleration is defined as ${\displaystyle a^{\mu }\equiv {\frac {dU^{\mu }}{d\tau }}.\,\!}$ By the same arguments given in the section on four-velocity, ${\displaystyle a^{\mu }}$ is also a four-vector. Again, the spatial components approximate the real acceleration, ${\displaystyle d^{2}x^{i}/dt^{2}}$, only in the non-relativistic limit. Interestingly, the four-acceleration is always orthogonal to the four-velocity, i.e., ${\displaystyle a^{\mu }U_{\mu }={\frac {dU^{\mu }}{d\tau }}U_{\mu }={\frac {1}{2}}{\frac {d}{d\tau }}(U^{\mu }U_{\mu })={\frac {1}{2}}{\frac {d(-1)}{d\tau }}=0}$. ## Energy and momentum We will have to define what we mean by energy and momentum in special relativity.
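The normalization ${\displaystyle U^{\mu }U_{\mu }=-1}$ holds for any subluminal 3-velocity, not just in the rest frame, and is easy to check numerically (an illustrative sketch, c = 1, (−,+,+,+) signature):

```python
import math

def four_velocity(vx, vy, vz):
    """U^mu = gamma * (1, v) for a particle with 3-velocity v (units of c)."""
    g = 1.0 / math.sqrt(1.0 - (vx*vx + vy*vy + vz*vz))
    return (g, g * vx, g * vy, g * vz)

U = four_velocity(0.3, 0.4, 0.0)       # |v| = 0.5, chosen arbitrarily
# Contract with the Minkowski metric: U^mu U_mu = -(U^0)^2 + |U^i|^2
norm = -U[0]**2 + U[1]**2 + U[2]**2 + U[3]**2
print(norm)   # ≈ -1.0
```

Since the norm is a constant, its τ-derivative vanishes, which is exactly the step used above to show ${\displaystyle a^{\mu }U_{\mu }=0}$.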
We will try to choose definitions that reduce to the well-known Newtonian expressions in the non-relativistic limit. ### Four-momentum We define the four-momentum as ${\displaystyle p^{\mu }\equiv mU^{\mu },\,\!}$ where ${\displaystyle m}$ is the mass of the particle. Sometimes people define mass so that it actually changes from one reference frame to another. That is where the term “rest mass” comes from, i.e., the mass measured in the rest frame of the particle. We are not going to take that approach. For us, the mass is a Lorentz scalar. We will define the energy to be ${\displaystyle E\equiv p^{0}\,\!}$ and the momentum to be the spatial part of ${\displaystyle p^{\mu }}$. Using ${\displaystyle U^{\mu }=\gamma (v)(1,\mathbf {v} )}$, we find that ${\displaystyle E=m\gamma \,\!}$ and ${\displaystyle p^{i}=mv^{i}\gamma .\,\!}$ In the particle’s rest frame, we have ${\displaystyle E=m}$, which is just ${\displaystyle E=mc^{2}}$ with ${\displaystyle c=1}$. So we have discovered that mass is the rest energy of a particle. Since ${\displaystyle U^{\mu }}$ is a four-vector and ${\displaystyle p^{\mu }}$ is just proportional to ${\displaystyle U^{\mu }}$, we can conclude that ${\displaystyle p^{\mu }}$ is also a four-vector. In particular, we have ${\displaystyle p^{\mu }p_{\mu }=|\mathbf {p} |^{2}-E^{2}}$. In the rest frame, we have ${\displaystyle p^{\mu }p_{\mu }=-m^{2}}$. So we have just found that ${\displaystyle |\mathbf {p} |^{2}-E^{2}=-m^{2}}$ which can be rearranged to read ${\displaystyle E^{2}=|\mathbf {p} |^{2}+m^{2}.\,\!}$ #### Non-relativistic limit We were free to define energy and momentum however we liked, but it would be nice if those definitions were reasonable in the sense that they reduce to Newtonian energy and momentum in the non-relativistic limit. Our expression for energy was ${\displaystyle E=m\gamma =m/{\sqrt {1-v^{2}}}}$. The non-relativistic limit is the ${\displaystyle v\ll 1}$ limit.
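A quick numerical check of the energy–momentum relation ${\displaystyle E^{2}=|\mathbf {p} |^{2}+m^{2}}$ (illustrative numbers, c = 1):

```python
import math

m, v = 1.0, 0.6                        # arbitrary mass and speed (units of c)
g = 1.0 / math.sqrt(1.0 - v * v)       # gamma = 1.25
E = m * g                              # E   = m * gamma       = 1.25
p = m * v * g                          # |p| = m * v * gamma   = 0.75
assert math.isclose(E * E, p * p + m * m)   # E^2 = |p|^2 + m^2
```

Here 1.25² = 0.75² + 1², a (3, 4, 5)-style triple scaled by 1/4, which makes the invariant mass relation easy to verify by hand.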
We can expand our expression for ${\displaystyle E}$ to second order in ${\displaystyle v}$ to get ${\displaystyle E\simeq mc^{2}+{\frac {1}{2}}mv^{2},\,\!}$ where the factors of ${\displaystyle c}$ have been restored since we are now in a pseudo-Newtonian regime. The energy looks like the usual kinetic energy for a non-relativistic particle but with some extra constant offset. This offset is not physical in Newtonian physics; only energy differences are relevant. In the fully Newtonian limit, ${\displaystyle c\to \infty }$ and the energy offset is infinite. That’s why it’s good to pause before taking the ${\displaystyle c\to \infty }$ limit and redefine the zero of our energy scale so that ${\displaystyle E=mv^{2}/2}$. Expanding ${\displaystyle p^{i}=mv^{i}\gamma }$ to second order in ${\displaystyle v}$ gives ${\displaystyle p^{i}\simeq mv^{i},\,\!}$ which is the usual non-relativistic expression for momentum. There are no factors of ${\displaystyle c}$ to restore in this expression. So it looks like our definitions of relativistic energy and momentum were reasonable after all. This section is based on Rybicki and Lightman’s treatment in Section 4.8. We want to generalize the Larmor formula for dipole radiation to particles moving at relativistic speeds. Assuming we’ve already derived the dipole formula for non-relativistic motion, a good starting point is a frame in which the particle is moving, at least momentarily, at speeds which are small compared to the speed of light. So let’s start in the instantaneous rest frame of the particle. We can form a four-momentum representing the sum of all the four-momenta of all the photons emitted. In some small time interval ${\displaystyle dt}$, the particle emits an energy ${\displaystyle dE}$. This radiation is not emitted isotropically, but there is no net flux of momentum. For any given direction, the same amount of radiation is emitted in the opposite direction.
The spatial components of the four-momentum of the radiation vanish, i.e., ${\displaystyle dp^{i}=0}$. So if we transform to another frame, we will have ${\displaystyle dE^{\prime }=\gamma dE}$. At the same time, ${\displaystyle dt^{\prime }=\gamma dt}$, since the unprimed frame is the one in which the particle is instantaneously at rest. Then the emitted power is ${\displaystyle P={\frac {dE}{dt}}={\frac {dE^{\prime }}{dt^{\prime }}}=P^{\prime }.\,\!}$ The factors of ${\displaystyle \gamma }$ cancel, and we find that the power is the same in both reference frames. But we could have transformed to any reference frame. So we have just proved that the radiated power is a Lorentz scalar so long as there is no net flux of momentum in the particle’s rest frame. In particular, we can apply this result to the Larmor formula for dipole radiation: ${\displaystyle P={\frac {2}{3}}q^{2}|\mathbf {a} |^{2},\,\!}$ where ${\displaystyle |\mathbf {a} |}$ is the real 3-dimensional acceleration. This is definitely valid in the instantaneous rest frame of the particle where the speeds are very small compared with the speed of light. But this expression is not Lorentz invariant, since it depends on ${\displaystyle a^{i}a_{i}}$ instead of ${\displaystyle a^{\mu }a_{\mu }}$. Because we are in the instantaneous rest frame of the particle, ${\displaystyle U^{i}=0}$ and ${\displaystyle U^{0}=1}$. At the same time, recall that ${\displaystyle a^{\mu }U_{\mu }=0}$ always. Since ${\displaystyle U^{0}\not =0}$, we must have ${\displaystyle a^{0}=0}$. But then ${\displaystyle a^{\mu }a_{\mu }=a^{i}a_{i}}$. Then we can replace ${\displaystyle |\mathbf {a} |^{2}}$ in the Larmor formula with ${\displaystyle a^{\mu }a_{\mu }}$ to get ${\displaystyle P={\frac {2}{3}}q^{2}a^{\mu }a_{\mu }.\,\!}$ All of the factors in this expression are Lorentz invariant, so this is a Lorentz invariant formula for the total power emitted by a radiating dipole.
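We can check this invariance numerically: boost a rest-frame four-acceleration (with ${\displaystyle a^{0}=0}$) to another frame and confirm that ${\displaystyle a^{\mu }a_{\mu }}$, and hence the power formula, is unchanged. A Python sketch with $c=1$ and an arbitrary charge value (the helper names are ours):

```python
import math

def boost_x(vec, v):
    """Lorentz boost of a (contravariant) four-vector along x, with c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    t, x, y, z = vec
    return [g * (t - v * x), g * (x - v * t), y, z]

def minkowski_sq(a):
    """a^mu a_mu with signature (-, +, +, +)."""
    return -a[0] ** 2 + a[1] ** 2 + a[2] ** 2 + a[3] ** 2

# Rest-frame four-acceleration: a^0 = 0, spatial part is the real acceleration.
a_rest = [0.0, 2.0, 1.0, 0.0]
a_lab = boost_x(a_rest, 0.9)

q = 1.0  # arbitrary charge, Gaussian units
P_rest = (2.0 / 3.0) * q**2 * minkowski_sq(a_rest)
P_lab = (2.0 / 3.0) * q**2 * minkowski_sq(a_lab)
print(P_rest, P_lab)  # equal: a^mu a_mu is Lorentz invariant
```

Even for a boost at $v=0.9$, where the individual components of $a^\mu$ change substantially, the invariant $a^\mu a_\mu$ and therefore $P$ come out the same.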
## Relativistic electrodynamics The Maxwell Equations are Lorentz invariant. Unfortunately, the most familiar form of Maxwell’s equations (${\displaystyle \nabla \cdot \mathbf {E} =4\pi \rho }$, etc.) does not make the Lorentz invariance manifest. But we can define a few tensors and rewrite Maxwell’s equations in a manifestly Lorentz invariant form. First we define the four-current as ${\displaystyle j^{\mu }=(\rho ,\mathbf {j} ),\,\!}$ where ${\displaystyle \rho }$ is the charge density and ${\displaystyle \mathbf {j} }$ is the 3-dimensional current. Then the continuity equation can be written as ${\displaystyle \partial _{\mu }j^{\mu }={\dot {\rho }}+\nabla \cdot \mathbf {j} =0,\,\!}$ where ${\displaystyle \partial _{\mu }={\frac {\partial }{\partial x^{\mu }}}}$. So already we’ve been able to write the continuity equation in a manifestly Lorentz invariant form. Now let’s define the four-potential as ${\displaystyle A^{\mu }=(\phi ,\mathbf {A} ),\,\!}$ where ${\displaystyle \phi }$ is the scalar potential and ${\displaystyle \mathbf {A} }$ is the vector potential. We want to work in the Lorentz gauge for which the condition is ${\displaystyle \partial _{\mu }A^{\mu }=0.\,\!}$ This is a good gauge for us, because it will allow us to write the equations of motion for ${\displaystyle A^{\mu }}$ in a manifestly Lorentz invariant form. The Lorentz gauge actually originated with a physicist whose last name was Lorenz (that’s not a typo). Unfortunately for Lorenz, Hendrik Lorentz became much more famous and the Lorenz gauge turns out to be associated with Lorentz invariance. So the gauge seems to have gone down in history as the Lorentz gauge and not the Lorenz gauge. In this gauge, the equations of motion are ${\displaystyle \partial ^{\nu }\partial _{\nu }A^{\mu }=-4\pi j^{\mu }.\,\!}$ Note that ${\displaystyle \partial ^{\nu }\partial _{\nu }=\Box }$ is the d’Alembertian operator.
Now we can define the field-strength tensor as ${\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }.\,\!}$ Notice that ${\displaystyle F_{\mu \nu }}$ is antisymmetric in its indices, i.e., ${\displaystyle F_{\mu \nu }=-F_{\nu \mu }}$. Using this definition of the field-strength tensor we can write ${\displaystyle \partial _{\sigma }F_{\mu \nu }+\partial _{\mu }F_{\nu \sigma }+\partial _{\nu }F_{\sigma \mu }=0.\,\!}$ Using the gauge condition, the equations of motion can be written in terms of ${\displaystyle F_{\mu \nu }}$ as ${\displaystyle \partial _{\nu }F^{\mu \nu }=4\pi j^{\mu }.\,\!}$ The previous two equations are Lorentz invariant and equivalent to the conventional form of Maxwell’s equations. Now let’s try to recover Maxwell’s equations for the electric and magnetic fields. This will be a backwards argument, since we defined the potentials through the electric and magnetic fields and the field-strength tensor through the four-potential. That is, we shouldn’t be at all surprised to see the familiar form of Maxwell’s equations emerge from this formalism. Recall that ${\displaystyle \mathbf {E} =-\nabla \phi -{\dot {\mathbf {A} }}}$ and ${\displaystyle \mathbf {B} =\nabla \times \mathbf {A} }$. Then ${\displaystyle E_{i}=\partial _{i}A_{0}-\partial _{0}A_{i}=F_{i0}}$ and ${\displaystyle B_{i}=\epsilon _{ijk}\partial _{j}A_{k}}$, where ${\displaystyle \epsilon _{ijk}}$ is the Levi-Civita symbol. So ${\displaystyle B_{i}=F_{jk}}$ when ${\displaystyle ijk}$ is an even permutation of ${\displaystyle 123}$, and ${\displaystyle B_{i}=-F_{jk}}$ when ${\displaystyle ijk}$ is an odd permutation.
As a matrix, ${\displaystyle F_{\mu \nu }}$ can be written as ${\displaystyle F_{\mu \nu }=\left({\begin{array}{cccc}0&-E_{x}&-E_{y}&-E_{z}\\E_{x}&0&B_{z}&-B_{y}\\E_{y}&-B_{z}&0&B_{x}\\E_{z}&B_{y}&-B_{x}&0\end{array}}\right).\,\!}$ Now that we’ve related the field-strength tensor to the electric and magnetic fields, we can rewrite our equation of motion (${\displaystyle \partial _{\nu }F^{\mu \nu }=4\pi j^{\mu }}$) in terms of ${\displaystyle E}$ and ${\displaystyle B}$. We find that this equation of motion is equivalent to the two inhomogeneous Maxwell’s equations: ${\displaystyle \nabla \cdot \mathbf {E} =4\pi \rho \,\!}$ and ${\displaystyle \nabla \times \mathbf {B} =4\pi \mathbf {j} +{\dot {\mathbf {E} }}.\,\!}$ We can use ${\displaystyle \partial _{\sigma }F_{\mu \nu }+\partial _{\mu }F_{\nu \sigma }+\partial _{\nu }F_{\sigma \mu }=0}$ to recover the two homogeneous Maxwell’s equations: ${\displaystyle \nabla \cdot \mathbf {B} =0\,\!}$ and ${\displaystyle \nabla \times \mathbf {E} =-{\dot {\mathbf {B} }}.\,\!}$ So we have shown that Maxwell’s equations need only be written in terms of the field-strength tensor in order to make their Lorentz invariance manifest. ### Lorentz transformation of the electric and magnetic fields The field-strength tensor ${\displaystyle F_{\mu \nu }}$ has two covariant indices. We saw in the section on Lorentz transformations how to perform a covariant transformation. Since ${\displaystyle F_{\mu \nu }}$ has two indices, we will need to perform a transformation on both. The transformation looks like ${\displaystyle F_{\mu \nu }^{\prime }=\Lambda _{\mu }^{\,\,\,\,\sigma }\Lambda _{\nu }^{\,\,\,\,\kappa }F_{\sigma \kappa }.\,\!}$ This can also be evaluated using ordinary matrix multiplication. You want to be a little careful, though. You should take the transpose of ${\displaystyle \Lambda _{\nu }^{\,\,\,\,\kappa }}$ and put it all the way on the right. Otherwise, you’re not performing matrix multiplication. 
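This multiplication is easy to carry out numerically. The sketch below (NumPy; the helper names are ours) builds ${\displaystyle F_{\mu \nu }}$ from a pure electric field along $y$, applies a boost along $x$ to both covariant indices, and reads off the transformed fields:

```python
import numpy as np

def field_tensor(E, B):
    """F_{mu nu} with the component layout of the matrix above."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [ Ex, 0.0,  Bz, -By],
                     [ Ey, -Bz, 0.0,  Bx],
                     [ Ez,  By, -Bx, 0.0]])

def boost_down(v):
    """Boost matrix Lambda_mu^sigma acting on covariant indices (c = 1).
    Note the + signs off the diagonal, opposite to the boost of
    contravariant components."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = g * v
    return L

# A pure electric field along y in the unprimed frame, boosted along x.
v = 0.5
F = field_tensor((0.0, 1.0, 0.0), (0.0, 0.0, 0.0))
L = boost_down(v)
Fp = L @ F @ L.T   # F'_{mu nu} = Lambda_mu^sigma Lambda_nu^kappa F_{sigma kappa}

Ey_p = -Fp[0, 2]   # read E'_y from the F_{0i} = -E_i slot
Bz_p = Fp[1, 2]    # read B'_z from the F_{12} = B_z slot
print(Ey_p, Bz_p)  # gamma*E_y and -gamma*v*E_y: a B field appears
```

The transposed boost matrix sits on the right, exactly as described above, and the boosted tensor remains antisymmetric.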
Once you’ve done the multiplication, you can just read off the components of ${\displaystyle F_{\mu \nu }^{\prime }}$ to see how the fields transformed. For a boost along the ${\displaystyle x}$-axis, {\displaystyle {\begin{aligned}E_{x}^{\prime }&=E_{x},&B_{x}^{\prime }&=B_{x},\end{aligned}}\,\!} {\displaystyle {\begin{aligned}E_{y}^{\prime }&=\gamma (E_{y}-vB_{z}),&B_{y}^{\prime }&=\gamma (B_{y}+vE_{z}),\end{aligned}}\,\!} {\displaystyle {\begin{aligned}E_{z}^{\prime }&=\gamma (E_{z}+vB_{y})&\mathrm {and} &&B_{z}^{\prime }&=\gamma (B_{z}-vE_{y}).\end{aligned}}\,\!} Whereas a rotation would rotate the components of ${\displaystyle E}$ into each other and the components of ${\displaystyle B}$ into each other, the Lorentz boost actually rotates ${\displaystyle E}$ into ${\displaystyle B}$. This also means that a Lorentz boost can create magnetic fields. Suppose we only have ${\displaystyle E_{y}}$ in one frame and all other field components vanish. In a boosted frame we would pick up a non-zero ${\displaystyle B_{z}}$ even though the original frame had no magnetic field at all. For this reason, people sometimes say that magnetism is merely a relativistic effect. Notice also that the fields parallel to the boost are not affected. ## External references Rybicki and Lightman, Radiative Processes in Astrophysics, Ch. 4 Griffiths, Introduction to Electrodynamics, 3rd Ed., Chs. 10, 12 Carroll, Spacetime and Geometry, Ch. 1 Weinberg, Gravitation and Cosmology, Ch. 2 Jackson, Classical Electrodynamics, 3rd Ed., Ch. 11
https://arxiv.org/abs/hep-th/0411112
hep-th # Title: Dirac Sigma Models Abstract: We introduce a new topological sigma model, whose fields are bundle maps from the tangent bundle of a 2-dimensional world-sheet to a Dirac subbundle of an exact Courant algebroid over a target manifold. It generalizes simultaneously the (twisted) Poisson sigma model as well as the G/G-WZW model. The equations of motion are satisfied, iff the corresponding classical field is a Lie algebroid morphism. The Dirac Sigma Model has an inherently topological part as well as a kinetic term which uses a metric on worldsheet and target. The latter contribution serves as a kind of regulator for the theory, while at least classically the gauge invariant content turns out to be independent of any additional structure. In the (twisted) Poisson case one may drop the kinetic term altogether, obtaining the WZ-Poisson sigma model; in general, however, it is compulsory for establishing the morphism property. Comments: 28 pages, Latex Subjects: High Energy Physics - Theory (hep-th); Differential Geometry (math.DG) Journal reference: Commun.Math.Phys. 260 (2005) 455-480 DOI: 10.1007/s00220-005-1416-4 Report number: FSU-TPI-08/04 Cite as: arXiv:hep-th/0411112 (or arXiv:hep-th/0411112v1 for this version) ## Submission history From: Alexei Kotov [view email] [v1] Thu, 11 Nov 2004 17:35:04 GMT (40kb)
https://www.physicsforums.com/threads/thick-nonconducting-sheets.182891/
# Thick nonconducting sheets 1. Sep 3, 2007 ### quantum_bit 1. The problem statement, all variables and given/known data Two very large, nonconducting plastic sheets, each 10.0 cm thick, carry uniform charge densities a, b, c, d on their surfaces. These surface charge densities have the values a = -6.00 nC, b = +5.00 nC, c = +2.00 nC, and d = +4.00 nC. Find the magnitude of the electric field at the point C, in the middle of the right-hand sheet. looks like: a----10 cm------b------12cm-----c------10cm------d Point C is here-----------------------------> 2. Relevant equations Infinite sheet of charge field: (surface charge density)/(2*epsilon_0) N/C 3. The attempt at a solution Well, because the sheets are "two very large, nonconducting plastic sheets," I treated them as thin infinite sheets with the distance of the thickness between them. Adding the electric field vectors does not yield the correct answer, not sure where to go from there. I get the answer 169.491. I am not off by an order of magnitude even though it looks so. I have other problems of the same type that also give incorrect results. I know the answer is 1.69×10^6 N/C but I am not sure how to get there. Last edited: Sep 4, 2007 2. Sep 4, 2007 ### quantum_bit Close this I figured it out, the answer I was given as "correct" was the wrong one; there was another answer in the paragraph talking about a correction. This was actually noted in the paragraph, I just passed over it. I also used nanocoulombs rather than microcoulombs; the symbols looked similar in the problem and I thought it said n, not the mu symbol. Last edited: Sep 4, 2007 3. Jun 28, 2009 ### mgreene can you explain how you did this calculation? when I use the formula for the electric field due to an infinite sheet of charge that you have entered in your first post, i get 1.129x10^5 N/C instead of 1.69x10^5 I have simply ((4x10^-6)-(2x10^-6))/(2*E_o) = 1.129x10^5 thanks 4.
Jun 28, 2009 ### mgreene ok i realized you must add the charges from all 4 surfaces so 2+5-4-6 = -3 microcoulombs
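For later readers, the superposition argument in this thread can be sketched in a few lines of Python. This assumes (per the correction above) that the densities are in microcoulombs per square meter, and that surfaces a, b, c lie to the left of point C while d lies to its right:

```python
EPS0 = 8.854e-12  # vacuum permittivity in F/m

def field_at_point(sigmas_left, sigmas_right):
    """Net field (rightward positive) at a point between infinite sheets.
    Each sheet contributes sigma/(2*eps0), pointing away from the sheet
    when its surface density is positive."""
    E_right = sum(s / (2 * EPS0) for s in sigmas_left)   # left sheets push right
    E_left = sum(s / (2 * EPS0) for s in sigmas_right)   # right sheets push left
    return E_right - E_left

# Surfaces a, b, c are left of point C; surface d is to its right.
E_C = field_at_point([-6e-6, 5e-6, 2e-6], [4e-6])
print(abs(E_C))  # about 1.69e5 N/C, matching the -3 microcoulomb sum above
```

The signed sum is the same -3 uC/m^2 found in the last post, giving a magnitude of about 1.69x10^5 N/C.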
http://math.stackexchange.com/questions/21100/importance-of-rank-of-a-matrix/219879
# Importance of rank of a matrix What is the importance of the rank of a matrix? I know that the rank of a matrix is the number of linearly independent rows/columns (whichever is smaller). Why is it a problem if a matrix is rank deficient? Also, why is the smaller value between row and column the rank? An intuitive or a descriptive answer (also in terms of geometry) would help a lot. - Do you know what the column and row space of a matrix is? Do you know about vector spaces? The immediate geometric interpretation is that RankA is the dimension of the vector space spanned by the column vectors. –  AnonymousCoward Feb 9 '11 at 1:55 I know that vector spaces are a collection of vectors that satisfies the axioms that are stated for vector spaces. I do not understand what you mean by "vector spaces spanned by column vectors". –  Sunil Feb 9 '11 at 1:58 There are nice applications in graph theory when dealing with adjacency matrices. The RankA will tell you things about the corresponding graph like the number of connected components. I don't know enough about this to make it an answer though. –  AnonymousCoward Feb 9 '11 at 2:01 ## 5 Answers The rank of a matrix is probably the most important concept you learn in Matrix Algebra. There are two ways to look at the rank of a matrix. One from a theoretical setting and the other from an applied setting. From a theoretical setting, if we say that a linear operator has a rank $p$, it means that the range of the linear operator is a $p$ dimensional space. From a matrix algebra point of view, column rank denotes the number of independent columns of a matrix while row rank denotes the number of independent rows of a matrix. An interesting, and I think a non-obvious (though the proof is not hard) fact is that the row rank is the same as the column rank. When we say a matrix $A \in \mathbb{R}^{n \times n}$ has rank $p$, what it means is that if we take all vectors $x \in \mathbb{R}^{n \times 1}$, then $Ax$ spans a $p$ dimensional sub-space.
Let us see this in a 2D setting. For instance, if $A = \left( \begin{array}{cc} 1 & 2 \\ 2 & 4 \end{array} \right) \in \mathbb{R}^{2 \times 2}$ and let $x = \left( \begin{array}{c} x_1 \\ x_2 \end{array} \right) \in \mathbb{R}^{2 \times 1}$, then $\left( \begin{array}{c} y_1 \\ y_2 \end{array} \right) = y = Ax = \left( \begin{array}{c} x_1 + 2x_2 \\ 2x_1 + 4x_2 \end{array} \right)$. The rank of matrix $A$ is $1$ and we find that $y_2 = 2y_1$ which is nothing but a line passing through the origin in the plane. What has happened is the points $(x_1,x_2)$ on the $x_1 - x_2$ plane have all been mapped on to a line $y_2 = 2y_1$. Looking closely, the points in the $x_1 - x_2$ plane along the line $x_1 + 2x_2 = c = \text{const}$, have all been mapped onto a single point $(c,2c)$ in the $y_1 - y_2$ plane. So the single point $(c,2c)$ on the $y_1 - y_2$ plane represents a straight line $x_1 + 2x_2 = c$ in the $x_1 - x_2$ plane. This is the reason why you cannot solve a linear system when it is rank deficient. The rank deficient matrix $A$ maps $x$ to $y$ and this transformation is neither onto (points in the $y_1 - y_2$ plane not on the line $y_2 = 2y_1$ e.g. $(2,3)$ are not mapped onto, which results in no solutions) nor one-to-one (every point $(c,2c)$ on the line $y_2 = 2y_1$ corresponds to the line $x_1 + 2x_2 =c$ in the $x_1 - x_2$ plane, which results in infinite solutions). An observation you can make here is that the product of the slopes of the line $x_1 + 2x_2 = c$ and $y_2 = 2y_1$ is $-1$. This is true in general for higher dimensions as well. From an applied setting, rank of a matrix denotes the information content of the matrix. The lower the rank, the lower is the "information content". For instance, when we say a rank $1$ matrix, the matrix can be written as a product of a column vector times a row vector i.e. if $u$ and $v$ are column vectors, the matrix $uv^T$ is a rank one matrix. So all we need to represent the matrix is $2n-1$ elements. 
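The 2D example above is easy to check numerically. A NumPy sketch confirms that $A$ has rank $1$ and that every image point $y=Ax$ lands on the line $y_2=2y_1$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.matrix_rank(A))  # 1: the columns are parallel

# Every image point y = A x lies on the line y2 = 2*y1.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))
Y = X @ A.T                      # each row of Y is A @ x for a sample x
print(np.allclose(Y[:, 1], 2 * Y[:, 0]))  # True for every sample
```

So the whole plane collapses onto a one-dimensional image, exactly as described.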
In general, if we know that a matrix $A \in \mathbb{R}^{m \times n}$ is of rank $p$, then we can write $A$ as $U V^T$ where $U \in \mathbb{R}^{m \times p}$ and is of rank $p$ and $V \in \mathbb{R}^{n \times p}$ and is of rank $p$. So if we know that a matrix $A$ is of rank $p$, all we need is only $(m+n)p-p^2$ of its entries. So if we know that a matrix is of low rank, then we can compress and store the matrix and can do efficient matrix operations using it. The above ideas can be extended for any linear operator and these in fact form the basis for various compression techniques. You might also want to look up Singular Value Decomposition which gives us a nice (though expensive) way to make low rank approximations of a matrix which allows for compression. From solving a linear system point of view, when the square matrix is rank deficient, it means that we do not have complete information about the system, ergo we cannot solve the system. - A beautiful explication, Sivaram. –  Uticensis Mar 5 '11 at 0:46 "we cannot solve the system." - we can, but not in the usual sense... hence least squares, Tikhonov regularization, and a bunch of other fancy tricks. –  Guess who it is. Apr 29 '12 at 5:04 As for an application of SVD and low-rank approximations, see here. –  Guess who it is. Apr 29 '12 at 5:05 If you are interested in learning the why's of linear algebra I highly recommend viewing Gilbert Strang's Linear Algebra Course and purchase his book. - was helpful. Thanks –  Sunil Feb 9 '11 at 4:03 The rank of a matrix is of major importance. It is closely connected to the nullity of the matrix (which is the dimension of the solution space of the equation $A\mathbf{x}=\mathbf{0}$), via the Dimension Theorem: Dimension Theorem. Let $A$ be an $m\times n$ matrix. Then $\mathrm{rank}(A)+\mathrm{nullity}(A) = n$.
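The Dimension Theorem can be checked on a concrete example. A SymPy sketch (the matrix is arbitrary):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

rank = A.rank()                  # 2: the second row is twice the first
nullity = len(A.nullspace())     # 1: a one-dimensional solution space of Ax = 0
print(rank, nullity, A.cols)     # rank + nullity = 3 = number of columns
```

Here the nullspace is spanned by $(-1,-1,1)^T$, and $2+1=3$ as the theorem requires.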
Even if all you know about matrices is that they can be used to solve systems of linear equations, this tells you that the rank is very important, because it tells you whether $A\mathbf{x}=\mathbf{0}$ has a single solution or multiple solutions. When you think of matrices as being linear transformations (there is a correspondence between $m\times n$ matrices with coefficients in a field $\mathbf{F}$, and the linear transformations between a given vector space over $\mathbf{F}$ of dimension $n$ with a given basis, and a vector space of dimension $m$ with a given basis), then the rank of the matrix is the dimension of the image of that linear transformation. The simplest way of computing the Jordan Canonical Form of a matrix (an important way of representing a matrix) is to use the ranks of certain matrices associated to $A$; the same is true for the Rational Canonical Form. Really, the rank just shows all over the place, it is usually relatively easy to compute, and has a lot of applications and important properties. They will likely not be completely apparent until you start seeing the myriad applications of matrices to things like vector calculus, linear algebra, and the like, but trust me, they're there. - The rank of the matrix $A$ is equal to the dimension of the Image of $A$ (which is spanned by columns of $A$), if that's a sufficient enough geometrical explanation. You can read about vector spaces here and about the image of a matrix here. - Can you please explain more in detail ? What do you mean by image of A ? –  Sunil Feb 9 '11 at 2:04 I posted a link (goo.gl/hKyvN) that explains what it is. Basically, it is the span of the columns, which means you take columns of matrix A, multiply them by all possible scalars, and get some space (which is called image of that matrix). 
–  InterestedGuest Feb 9 '11 at 2:05 The rank of a matrix is a building stone to understanding Matrix Completion, which tackles such problems as the Netflix Prize and related issues in recommender systems. The topic of "rank" in higher dimensional space ($>2$) is an interesting topic of research. -
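As a concrete illustration of the low-rank compression and SVD ideas mentioned in the first answer, here is a NumPy sketch (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 50, 40, 3
U = rng.standard_normal((m, p))
V = rng.standard_normal((n, p))
A = U @ V.T                          # a rank-p matrix by construction

print(np.linalg.matrix_rank(A))      # 3
print(A.size, U.size + V.size)       # 2000 entries vs. 270 stored numbers

# A truncated SVD recovers such a factorization from A alone.
u, s, vt = np.linalg.svd(A, full_matrices=False)
A_p = u[:, :p] @ np.diag(s[:p]) @ vt[:p, :]
print(np.allclose(A, A_p))           # True: rank-3 truncation is exact here
```

Storing the two factors takes far fewer numbers than the full matrix, which is the compression payoff of knowing the rank is low.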
https://arxiv.org/abs/1401.2942
hep-ex # Title: Observation of the associated production of a single top quark and a W boson in pp collisions at sqrt(s) = 8 TeV Abstract: The first observation of the associated production of a single top quark and a W boson is presented. The analysis is based on a data set corresponding to an integrated luminosity of 12.2 inverse femtobarns of proton-proton collisions at sqrt(s) = 8 TeV recorded by the CMS experiment at the LHC. Events with two leptons and a jet originating from a b quark are selected. A multivariate analysis based on kinematic and topological properties is used to separate the signal from the dominant t t-bar background. An excess consistent with the signal hypothesis is observed, with a significance which corresponds to 6.1 standard deviations above a background-only hypothesis. The measured production cross section is 23.4 +- 5.4 pb, in agreement with the standard model prediction. Comments: Replaced with published version. Added journal reference and DOI Subjects: High Energy Physics - Experiment (hep-ex) Journal reference: Phys. Rev. Lett. 112 (2014) 231802 DOI: 10.1103/PhysRevLett.112.231802 Report number: CMS-TOP-12-040, CERN-PH-EP-2013-237 Cite as: arXiv:1401.2942 [hep-ex] (or arXiv:1401.2942v2 [hep-ex] for this version) ## Submission history From: Cms Collaboration [view email] [v1] Mon, 13 Jan 2014 18:24:58 UTC (270 KB) [v2] Tue, 10 Jun 2014 16:40:46 UTC (390 KB)
https://ncalculators.com/geometry/point-slope-form-calculator.htm
# Point Slope Form of a Line Calculator ## Point Slope Form - work with steps Input Data : Coordinates (x_A, y_A) = (5, 4) Slope (m) = 8 Objective : Find the equation of the line through the given point with the given slope. Formula : y - y_A = m(x - x_A) Solution : y - 4 = 8 (x - 5) y - 4 = 8x - 40 y = 8x - 40 + 4 y = 8x - 36 8x - y - 36 = 0 The point slope form calculator uses the coordinates of a point A(x_A,y_A) and the slope m in the two-dimensional Cartesian coordinate plane and finds the equation of the line that passes through A. This tool allows us to find the equation of a line in the general form Ax + By + C = 0. It’s an online geometry tool that requires one point in the two-dimensional Cartesian coordinate plane and the coefficient m. It is necessary to follow the next steps: 1. Enter the coordinates (x_A,y_A) of the point A and the coefficient for the slope m in the box. These values must be real numbers or parameters. 2. Press the "GENERATE WORK" button to make the computation. 3. The point slope form calculator will give the equation of the line in the general form. This line passes through the point A and has the slope m. Input: An ordered pair of real numbers or variables as coordinates of a point and a real number or variable as the coefficient of the slope. Output: An equation of a line in the general form, Ax + By + C = 0. Slope Formula: The equation of the line through point A(x_A,y_A) with the slope m is determined by the formula y−y_A =m(x−x_A) ## What is the Slope of a Line? The slope of a line in the two-dimensional Cartesian coordinate plane is usually represented by the letter m, and it is sometimes called the rate of change between two points. This is because it is the change in the y-coordinates divided by the corresponding change in the x-coordinates between two distinct points on the line.
If we have the coordinates of two points A(x_A,y_A) and B(x_B,y_B) in the two-dimensional Cartesian coordinate plane, then the slope m of the line through A(x_A,y_A) and B(x_B,y_B) is fully determined by the following formula m=\frac{y_B-y_A}{x_B-x_A} In other words, the formula for the slope can be written as $$m=\frac{\Delta y}{\Delta x}=\frac{{\rm vertical \; change}}{{\rm horizontal \; change}}=\frac{{\rm rise}}{{\rm run}}$$ As we know, the Greek letter ∆ means difference or change. The slope m of a line y = mx + b can also be defined as the rise divided by the run. Rise means how high or low we have to move to arrive from the point on the left to the point on the right, so we change the value of y. Therefore, the rise is the change in y, ∆y. Run means how far left or right we have to move to arrive from the point on the left to the point on the right, so we change the value of x. The run is the change in x, ∆x. The slope m of a line y = mx + b describes its steepness. For instance, a greater slope value indicates a steeper incline. There are four different types of slope: 1. Positive slope m > 0, if a line y = mx + b is increasing, i.e. if it goes up from left to right 2. Negative slope m < 0, if a line y = mx + b is decreasing, i.e. if it goes down from left to right 3. Zero slope, m = 0, if a line y = mx + b is horizontal. In this case, the equation of the line is y = b 4. Undefined slope, if a line is vertical. This is because the slope formula would require division by zero. In this case, the equation of the line is x = a. All vertical lines x = a have an infinite or undefined slope. ### How to Find the Point Slope Form of a Line? If we plug the coordinates x_A and y_A and the given value of the slope m into the equation y − y_A = m(x − x_A), we will obtain the equation of the line that passes through the point A(x_A,y_A) and has the slope m. In many cases, we can find the equation of the line by hand, especially for integers.
But if the input values are large real numbers or numbers with many decimal places, then we should use the point slope form calculator to get an accurate result. The point slope form work with steps shows the complete step-by-step calculation for finding the general equation of the line through the point at coordinates (5, 4) with the slope m = 8. For any other combination of point and slope, just supply the coordinates of the point and the slope coefficient and click on the "GENERATE WORK" button. Grade school students may use this point slope form calculator to generate the work, verify the results or do their homework problems efficiently.

### Real World Problems Using Point Slope Form of a Line

As we mentioned, the fundamental applications of slope, or the rate of change, are in geometry, especially in analytic geometry. But the rate of change is also fundamental to the study of calculus. For non-linear functions, the rate of change varies along the function: the first derivative of the function at a point is the slope of the tangent line to the function at that point, so the first derivative is the rate of change of the function at the point. In physics, the rate of change plays an important role in the definitions of magnitudes such as displacement, velocity and acceleration; for instance, the rate of change of position is connected to the average velocity. The rate of change also appears in many other areas, for instance in population growth and in birth and death rates.

### Slope Practice Problems

Practice Problem 1: A tree grows at a steady rate of 15 inches per day and achieves its full height of 500 inches in 30 days. Write the general equation of this linear model.

Practice Problem 2: The slope of a line is −5 and the line passes through the point (4, 0). Find the general equation of the line.
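One possible way to check answers to the practice problems is to apply the same point-slope recipe as the worked example (these are our own solutions, not output from the calculator):

```python
# Problem 1: steady growth of 15 inches/day, height 500 inches on day 30,
# i.e. the line with slope m = 15 through the point (30, 500).
# General form coefficients: (A, B, C) = (m, -1, y_a - m * x_a).
m, x_a, y_a = 15, 30, 500
print((m, -1, y_a - m * x_a))  # (15, -1, 50), i.e. 15x - y + 50 = 0

# Problem 2: slope m = -5 through the point (4, 0).
m, x_a, y_a = -5, 4, 0
print((m, -1, y_a - m * x_a))  # (-5, -1, 20); multiplying through by -1
                               # gives 5x + y - 20 = 0
```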
The point slope form calculator, formula, example calculation (work with steps) and practice problems would be very useful for grade school students (K-12 education) to learn about the different equations of a line in geometry and how to find the general equation of a line. They will also be able to solve real-world problems using linear models in point slope form.
http://physics.stackexchange.com/questions/16181/partition-function-as-characteristic-function-of-energy?answertab=oldest
# Partition Function as characteristic function of energy?

I'm going through a book on statistical mechanics and there it says that the partition function $$Z = \sum_{\mu_S} e^{-\beta H(\mu_S)}$$ where $\mu_S$ denotes a microstate of the system and $H(\mu_S)$ the Hamiltonian, is proportional to the characteristic function $\hat p(\beta)$ of the energy probability distribution function. This then allows us to make the next step and conclude that $\ln Z$ is the cumulant generating function, with the nice results that $$\langle H \rangle = -\frac{\partial \ln Z}{\partial \beta}$$ and $$\langle (H - \langle H \rangle)^2 \rangle = \frac{\partial^2 \ln Z}{\partial \beta^2},$$ but I fail to see why $Z$ is proportional to the characteristic function. Also, if I imagine that $Z$ is the characteristic function of the energy, then wouldn't I have to evaluate the derivative at $\beta = 0$? I know that the two formulas above can also be obtained by explicitly doing the calculation using the definition of $Z$ in the first line, but I'd like to generalize this result to the moments and cumulants of all orders.

- I see the confusion. You can just do a shift of variables to separate the temperature 1/beta from the parameter in your characteristic function, which is taken to zero after the derivatives. – user1631 Oct 13 '12 at 1:13

In case it is still relevant for you: the probability density $$e^{-\beta H\left( \mu\right) }$$ is the so-called canonical (Gibbs) distribution. There are plenty of ways to derive it; I can reproduce the simplest one. Let's imagine that your system has the Hamiltonian $H\left( \mu\right)$ and you would like to study it at a certain temperature. In order to set the temperature you put your system inside a thermostat, so that your system exchanges only energy with the thermostat while the volume and the number of particles remain constant.
Let's suppose that the thermostat is a big tank filled with an ideal gas, so that its energy is $$h=\sum_{i=1}^{N}\frac{P_{i}^{2}}{2m}.$$ The total system (your system+thermostat) is isolated, thus the total energy is fixed. Therefore, the distribution with respect to the total energy is a delta-function: $$\rho\left( E\right) =\Lambda\delta\left( h+H-E\right) ,$$ where $\Lambda$ is some normalization factor such that $$\int \rho\left( E\right)\,d\Gamma =1,\qquad\left( 1\right)$$ where $d\Gamma$ is an element of the full phase space: $$d\Gamma=d\mu\prod_{i=1}^{N}d^{3}P_{i}\,d^{3}Q_{i}.$$ Let's integrate out all degrees of freedom of the thermostat: $$\rho\left( H\right) =\int\prod_{i=1}^{N}d^{3}P_{i}\,d^{3}Q_{i} \,\Lambda\delta\left( H+\sum_{i=1}^{N}\frac{P_{i}^{2}}{2m}-E\right) =\Lambda V^{N}\int\prod_{i=1}^{N}d^{3}P_{i}\,\,\delta\left( H+\sum_{i=1}^{N} \frac{P_{i}^{2}}{2m}-E\right) ,$$ where $V$ is the volume of the thermostat. The integration measure can be simplified as follows: $$\int\left[\prod_{i=1}^{N}d^{3}P_{i}\right] f(\epsilon)=\frac{2\pi^{3N/2}}{\Gamma\left( 3N/2\right) }\int\left[\epsilon^{3N-1}d\epsilon\right] f(\epsilon),$$ where $$\epsilon^{2}=\sum_{i=1}^{N}P_{i}^{2}.$$ Hence the integration can be performed as follows: $$\rho\left( H\right) =\Lambda V^{N}\frac{2\pi^{3N/2}}{\Gamma\left( 3N/2\right) }\int\,d\epsilon\,\epsilon^{3N-1}\,\delta\left( H+\frac {\epsilon^{2}}{2m}-E\right) =\Lambda V^{N}\frac{2m\pi^{3N/2}}{\Gamma\left( 3N/2\right) }\,\left( E-H\right) ^{\frac{3N}{2}-1}.$$ Let's now consider the $N\rightarrow\infty$ limit, so that $$\frac{E}{N}\approx\frac{h}{N}=\frac{3T}{2}.$$ The distribution takes the form: $$\rho\left( H\right) \sim\left( 1-\frac{H}{E}\right) ^{\frac{3N}{2}-1}\approx\left( 1-\frac{H}{\frac{3N}{2}T}\right) ^{\frac{3N}{2}-1}\approx\exp\left( -\frac{H}{T}\right) .$$ The normalization factor can be found from the normalization condition (1).
Finally, the probability density for the energy of your system takes the form: $$\rho\left( H\right) =\frac{e^{-\beta H\left( \mu\right) }}{Z},\quad Z=\int d\mu\,e^{-\beta H\left( \mu\right) }.$$ In fact, the result is independent of the nature of the thermostat; see e.g. L.D. Landau, E.M. Lifshitz, Volume 5 of Course of Theoretical Physics, Statistical Physics Part 1, Ch. III, The Gibbs distribution.

-

The statement is a purely mathematical one. Let $p(E)$ be the probability distribution function for energy. The characteristic function of this distribution will be $\hat{p}(\beta) = \mathbb{E}[e^{i \beta E}] = \sum_E e^{i \beta E} p(E)$. So if the distribution is the Gibbs one, $p(E) \propto e^{-\beta E}$, then we see that $Z$ is proportional to $\hat{p}$. The rest then follows from standard probability theory.

-

I think the confusion is simple. Let $\gamma$ be the transform parameter of the generating function. The generating function is $G(\gamma) = \langle e^{-\gamma H} \rangle \propto \sum_{\mu_S} e^{-(\beta+\gamma) H(\mu_S)}$ for the Gibbs distribution. If we take $\gamma \to 0$ we get the partition function. Taking the derivative w.r.t. $\gamma$ is equivalent to taking the derivative w.r.t. $\beta$ for this particular distribution, since they appear only in their sum.

-
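The cumulant relations from the question are easy to check numerically for a small system. The sketch below uses a three-level toy spectrum and finite differences (both the spectrum and the step size are arbitrary choices of ours):

```python
from math import exp, log

energies = [0.0, 1.0, 2.5]   # toy microstate energies (arbitrary)
beta = 0.7

def log_Z(b):
    """ln Z(beta) for the discrete spectrum above."""
    return log(sum(exp(-b * E) for E in energies))

# Direct ensemble averages from the Gibbs weights p(E) = e^{-beta E} / Z.
Z = sum(exp(-beta * E) for E in energies)
mean_H = sum(E * exp(-beta * E) for E in energies) / Z
var_H = sum(E**2 * exp(-beta * E) for E in energies) / Z - mean_H**2

# The same quantities from derivatives of ln Z (central finite differences).
h = 1e-5
first = -(log_Z(beta + h) - log_Z(beta - h)) / (2 * h)
second = (log_Z(beta + h) - 2 * log_Z(beta) + log_Z(beta - h)) / h**2

print(abs(first - mean_H))   # tiny: <H> = -d(ln Z)/d(beta)
print(abs(second - var_H))   # tiny: Var(H) = d^2(ln Z)/d(beta)^2
```

Note that, as in the third answer above, the derivative is taken at the physical value of $\beta$, not at zero, because $\beta$ and the transform parameter enter only through their sum.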
http://www.nag.com/numeric/CL/nagdoc_cl24/html/C06/c06rfc.html
# NAG Library Function Document: nag_sum_fft_cosine (c06rfc)

## 1  Purpose

nag_sum_fft_cosine (c06rfc) computes the discrete Fourier cosine transforms of $m$ sequences of real data values. The elements of each sequence and its transform are stored contiguously.

## 2  Specification

#include <nag.h>
#include <nagc06.h>

void nag_sum_fft_cosine (Integer m, Integer n, double x[], NagError *fail)

## 3  Description

Given $m$ sequences of $n+1$ real data values ${x}_{\mathit{j}}^{\mathit{p}}$, for $\mathit{j}=0,1,\dots ,n$ and $\mathit{p}=1,2,\dots ,m$, nag_sum_fft_cosine (c06rfc) simultaneously calculates the Fourier cosine transforms of all the sequences, defined by $$\hat{x}_k^p = \sqrt{\frac{2}{n}} \left( \frac{1}{2} x_0^p + \sum_{j=1}^{n-1} x_j^p \cos\left(\frac{jk\pi}{n}\right) + \frac{1}{2}\,(-1)^k x_n^p \right), \quad k = 0, 1, \dots, n \text{ and } p = 1, 2, \dots, m.$$ (Note the scale factor $\sqrt{\frac{2}{n}}$ in this definition.) This transform is also known as the type-I DCT. Since the Fourier cosine transform defined above is its own inverse, two consecutive calls of nag_sum_fft_cosine (c06rfc) will restore the original data. The transform calculated by this function can be used to solve Poisson's equation when the derivative of the solution is specified at both left and right boundaries (see Swarztrauber (1977)). The function uses a variant of the fast Fourier transform (FFT) algorithm (see Brigham (1974)) known as the Stockham self-sorting algorithm, described in Temperton (1983), together with pre- and post-processing stages described in Swarztrauber (1982). Special coding is provided for the factors $2$, $3$, $4$ and $5$.

## 4  References

Brigham E O (1974) The Fast Fourier Transform Prentice–Hall

Swarztrauber P N (1977) The methods of cyclic reduction, Fourier analysis and the FACR algorithm for the discrete solution of Poisson's equation on a rectangle SIAM Rev.
19(3) 490–501

Swarztrauber P N (1982) Vectorizing the FFTs Parallel Computation (ed G Rodrique) 51–83 Academic Press

Temperton C (1983) Fast mixed-radix real Fourier transforms J. Comput. Phys. 52 340–350

## 5  Arguments

1: m – Integer – Input

On entry: $m$, the number of sequences to be transformed.

Constraint: ${\mathbf{m}}\ge 1$.

2: n – Integer – Input

On entry: one less than the number of real values in each sequence, i.e., the number of values in each sequence is $n+1$.

Constraint: ${\mathbf{n}}\ge 1$.

3: x[$\left({\mathbf{n}}+1\right)×{\mathbf{m}}$] – double – Input/Output

On entry: the $m$ data sequences to be transformed. The $\left(n+1\right)$ data values of the $\mathit{p}$th sequence to be transformed, denoted by ${x}_{\mathit{j}}^{\mathit{p}}$, for $\mathit{j}=0,1,\dots ,n$ and $\mathit{p}=1,2,\dots ,m$, must be stored in ${\mathbf{x}}\left[\left(p-1\right)×\left({\mathbf{n}}+1\right)+j\right]$.

On exit: the $m$ Fourier cosine transforms, overwriting the corresponding original sequences. The $\left(n+1\right)$ components of the $\mathit{p}$th Fourier cosine transform, denoted by ${\stackrel{^}{x}}_{\mathit{k}}^{\mathit{p}}$, for $\mathit{k}=0,1,\dots ,n$ and $\mathit{p}=1,2,\dots ,m$, are stored in ${\mathbf{x}}\left[\left(p-1\right)×\left({\mathbf{n}}+1\right)+k\right]$.

4: fail – NagError * – Input/Output

The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_ALLOC_FAIL

Dynamic memory allocation failed.

NE_BAD_PARAM

On entry, argument $⟨\mathit{\text{value}}⟩$ had an illegal value.

NE_INT

On entry, ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{m}}\ge 1$.

On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{n}}\ge 1$.

NE_INTERNAL_ERROR

An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
## 7  Accuracy

Some indication of accuracy can be obtained by performing a subsequent inverse transform and comparing the results with the original sequence (in exact arithmetic they would be identical).

## 8  Parallelism and Performance

nag_sum_fft_cosine (c06rfc) is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.

## 9  Further Comments

The time taken by nag_sum_fft_cosine (c06rfc) is approximately proportional to $nm\mathrm{log}\left(n\right)$, but also depends on the factors of $n$. nag_sum_fft_cosine (c06rfc) is fastest if the only prime factors of $n$ are $2$, $3$ and $5$, and is particularly slow if $n$ is a large prime, or has large prime factors. This function internally allocates a workspace of order $\mathit{O}\left(n\right)$ double values.

## 10  Example

This example reads in sequences of real data values and prints their Fourier cosine transforms (as computed by nag_sum_fft_cosine (c06rfc)). It then calls nag_sum_fft_cosine (c06rfc) again and prints the results, which may be compared with the original sequence.

### 10.1  Program Text

Program Text (c06rfce.c)

### 10.2  Program Data

Program Data (c06rfce.d)

### 10.3  Program Results

Program Results (c06rfce.r)
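The self-inverse property noted in Section 3 is easy to confirm outside the Library. The following NumPy sketch re-implements the defining formula directly (an illustration only, not the NAG routine itself) and checks that two applications restore the data:

```python
import numpy as np

def dct1_nag(x):
    """Type-I DCT as defined in Section 3: the input has n+1 values
    x_0..x_n, the two endpoints carry weight 1/2, and the result is
    scaled by sqrt(2/n)."""
    n = len(x) - 1
    k = np.arange(n + 1)
    j = np.arange(1, n)                     # interior indices 1..n-1
    C = np.cos(np.pi * np.outer(k, j) / n)  # cos(jk*pi/n), shape (n+1, n-1)
    xhat = 0.5 * x[0] + C @ x[1:n] + 0.5 * (-1.0) ** k * x[n]
    return np.sqrt(2.0 / n) * xhat

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # one sequence, n = 4
assert np.allclose(dct1_nag(dct1_nag(x)), x)  # two calls restore the data
```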
https://tex.stackexchange.com/questions/449344/how-do-i-get-a-tikz-picture-and-text-to-align-in-a-certain-way
# How do I get a tikz picture and text to align in a certain way?

So I want to have text next to a graph, and my solution was creating a two-celled table. \begin{center} \begin{tabular}{ c p{100pt} } \begin{tikzpicture} \draw[thin,gray!40] (0,0) grid (4,4); \draw[->](0,0)--(4,0) node[right]{$x$}; \draw[->](0,0)--(0,4) node[above]{$y$}; \draw[line width=2pt ,red,-stealth](1,0)--(1,4); \end{tikzpicture} & Blah blah blah blah $\boldsymbol{u}$. \\ \end{tabular} \end{center} When it comes out, the text lines up with the bottom of the picture. Is there a way in which I can "trick" the text into thinking that it should line up with the top of the picture?

• Welcome to TeX SX! Could you post a complete compilable code? – Bernard Sep 4 '18 at 21:44

• You can always set the baseline of the tikzpicture to something. Give e.g. the $y$ node a name and align it to it. \begin{tikzpicture}[baseline=(y.base)] \draw[thin,gray!40] (0,0) grid (4,4); \draw[->](0,0)--(4,0) node[right]{$x$}; \draw[->](0,0)--(0,4) node[above](y){$y$}; \draw[line width=2pt ,red,-stealth](1,0)--(1,4); \end{tikzpicture} – user121799 Sep 4 '18 at 21:46

Just spelling out my comment.

\documentclass{article} \usepackage{tikz,amsmath,amssymb} \begin{document} \begin{center} \begin{tabular}{ c p{100pt} } \begin{tikzpicture}[baseline=(y.base)] \draw[thin,gray!40] (0,0) grid (4,4); \draw[->](0,0)--(4,0) node[right]{$x$}; \draw[->](0,0)--(0,4) node[above](y){$y$}; \draw[line width=2pt ,red,-stealth](1,0)--(1,4); \end{tikzpicture} & Blah blah blah blah $\boldsymbol{u}$. \\ \end{tabular} \end{center} \end{document}

And yes, I also know one should not use the center environment, but I have no idea what the full document looks like so I kept it.

• I'll be happy to remove my answer in favor of someone who also explains why one should not use center and so on. Of course, I am aware of the fact that one can use arbitrary shifts e.g. by saying [baseline=([yshift=-5pt]y.base)], or one could use minipages and so on.
– user121799 Sep 4 '18 at 22:24 As an alternative to side by side minipages or tabular, you can use a tcolorbox with options sidebyside (places upper and lower boxes side by side) and empty (no tcolorbox is drawn). You decide the vertical alignment between parts with sidebyside align (center by default). \documentclass{article} \usepackage{tikz,amsmath,amssymb} \usepackage[most]{tcolorbox} \begin{document} \begin{tcolorbox}[sidebyside, empty] \begin{tikzpicture}[baseline=(y.base)] \draw[thin,gray!40] (0,0) grid (4,4); \draw[->](0,0)--(4,0) node[right]{$x$}; \draw[->](0,0)--(0,4) node[above](y){$y$}; \draw[line width=2pt ,red,-stealth](1,0)--(1,4); \end{tikzpicture} \tcblower Blah blah blah blah $\boldsymbol{u}$. \end{tcolorbox} \begin{tcolorbox}[sidebyside, empty, sidebyside align=top] \begin{tikzpicture}[baseline=(y.base)] \draw[thin,gray!40] (0,0) grid (4,4); \draw[->](0,0)--(4,0) node[right]{$x$}; \draw[->](0,0)--(0,4) node[above](y){$y$}; \draw[line width=2pt ,red,-stealth](1,0)--(1,4); \end{tikzpicture} \tcblower Blah blah blah blah $\boldsymbol{u}$. \end{tcolorbox} \end{document}
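For completeness, the minipage route mentioned in the comments can also give top alignment without tabular or tcolorbox: put both parts in `[t]`-aligned minipages and start each with `\vspace{0pt}` to anchor its top edge to the text line. A sketch (the widths 0.55/0.4 are arbitrary choices of ours):

```latex
\documentclass{article}
\usepackage{tikz,amsmath,amssymb}
\begin{document}
\noindent
\begin{minipage}[t]{0.55\textwidth}
  \vspace{0pt}% anchor the [t] minipage at its top edge
  \begin{tikzpicture}
    \draw[thin,gray!40] (0,0) grid (4,4);
    \draw[->](0,0)--(4,0) node[right]{$x$};
    \draw[->](0,0)--(0,4) node[above]{$y$};
    \draw[line width=2pt,red,-stealth](1,0)--(1,4);
  \end{tikzpicture}
\end{minipage}\hfill
\begin{minipage}[t]{0.4\textwidth}
  \vspace{0pt}
  Blah blah blah blah $\boldsymbol{u}$.
\end{minipage}
\end{document}
```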
https://terrytao.wordpress.com/2011/04/
You are currently browsing the monthly archive for April 2011.

Perhaps the most fundamental differential operator on Euclidean space ${{\bf R}^d}$ is the Laplacian

$\displaystyle \Delta := \sum_{j=1}^d \frac{\partial^2}{\partial x_j^2}.$

The Laplacian is a linear translation-invariant operator, and as such is necessarily diagonalised by the Fourier transform

$\displaystyle \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx.$

Indeed, we have

$\displaystyle \widehat{\Delta f}(\xi) = - 4 \pi^2 |\xi|^2 \hat f(\xi)$

for any suitably nice function ${f}$ (e.g. in the Schwartz class; alternatively, one can work in very rough classes, such as the space of tempered distributions, provided of course that one is willing to interpret all operators in a distributional or weak sense). Because of this explicit diagonalisation, it is a straightforward matter to define spectral multipliers ${m(-\Delta)}$ of the Laplacian for any (measurable, polynomial growth) function ${m: [0,+\infty) \rightarrow {\bf C}}$, by the formula

$\displaystyle \widehat{m(-\Delta) f}(\xi) := m( 4\pi^2 |\xi|^2 ) \hat f(\xi).$

(The presence of the minus sign in front of the Laplacian has some minor technical advantages, as it makes ${-\Delta}$ positive semi-definite. One can also define spectral multipliers more abstractly from general functional calculus, after establishing that the Laplacian is essentially self-adjoint.) Many of these multipliers are of importance in PDE and analysis, such as the fractional derivative operators ${(-\Delta)^{s/2}}$, the heat propagators ${e^{t\Delta}}$, the (free) Schrödinger propagators ${e^{it\Delta}}$, the wave propagators ${e^{\pm i t \sqrt{-\Delta}}}$ (or ${\cos(t \sqrt{-\Delta})}$ and ${\frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}}$, depending on one’s conventions), the spectral projections ${1_I(\sqrt{-\Delta})}$, the Bochner-Riesz summation operators ${(1 + \frac{\Delta}{4\pi^2 R^2})_+^\delta}$, or the resolvents ${R(z) := (-\Delta-z)^{-1}}$.
Each of these families of multipliers are related to the others, by means of various integral transforms (and also, in some cases, by analytic continuation). For instance: 1. Using the Laplace transform, one can express (sufficiently smooth) multipliers in terms of heat operators. For instance, using the identity $\displaystyle \lambda^{s/2} = \frac{1}{\Gamma(-s/2)} \int_0^\infty t^{-1-s/2} e^{-t\lambda}\ dt$ (using analytic continuation if necessary to make the right-hand side well-defined), with ${\Gamma}$ being the Gamma function, we can write the fractional derivative operators in terms of heat kernels: $\displaystyle (-\Delta)^{s/2} = \frac{1}{\Gamma(-s/2)} \int_0^\infty t^{-1-s/2} e^{t\Delta}\ dt. \ \ \ \ \ (1)$ 2. Using analytic continuation, one can connect heat operators ${e^{t\Delta}}$ to Schrödinger operators ${e^{it\Delta}}$, a process also known as Wick rotation. Analytic continuation is a notoriously unstable process, and so it is difficult to use analytic continuation to obtain any quantitative estimates on (say) Schrödinger operators from their heat counterparts; however, this procedure can be useful for propagating identities from one family to another. For instance, one can derive the fundamental solution for the Schrödinger equation from the fundamental solution for the heat equation by this method. 3. Using the Fourier inversion formula, one can write general multipliers as integral combinations of Schrödinger or wave propagators; for instance, if ${z}$ lies in the upper half plane ${{\bf H} := \{ z \in {\bf C}: \hbox{Im} z > 0 \}}$, one has $\displaystyle \frac{1}{x-z} = i\int_0^\infty e^{-itx} e^{itz}\ dt$ for any real number ${x}$, and thus we can write resolvents in terms of Schrödinger propagators: $\displaystyle R(z) = i\int_0^\infty e^{it\Delta} e^{itz}\ dt. 
\ \ \ \ \ (2)$ In a similar vein, if ${k \in {\bf H}}$, then $\displaystyle \frac{1}{x^2-k^2} = \frac{i}{k} \int_0^\infty \cos(tx) e^{ikt}\ dt$ for any ${x>0}$, so one can also write resolvents in terms of wave propagators: $\displaystyle R(k^2) = \frac{i}{k} \int_0^\infty \cos(t\sqrt{-\Delta}) e^{ikt}\ dt. \ \ \ \ \ (3)$ 4. Using the Cauchy integral formula, one can express (sufficiently holomorphic) multipliers in terms of resolvents (or limits of resolvents). For instance, if ${t > 0}$, then from the Cauchy integral formula (and Jordan’s lemma) one has $\displaystyle e^{itx} = \frac{1}{2\pi i} \lim_{\epsilon \rightarrow 0^+} \int_{\bf R} \frac{e^{ity}}{y-x+i\epsilon}\ dy$ for any ${x \in {\bf R}}$, and so one can (formally, at least) write Schrödinger propagators in terms of resolvents: $\displaystyle e^{-it\Delta} = - \frac{1}{2\pi i} \lim_{\epsilon \rightarrow 0^+} \int_{\bf R} e^{ity} R(y+i\epsilon)\ dy. \ \ \ \ \ (4)$ 5. The imaginary part of ${\frac{1}{\pi} \frac{1}{x-(y+i\epsilon)}}$ is the Poisson kernel ${\frac{\epsilon}{\pi} \frac{1}{(y-x)^2+\epsilon^2}}$, which is an approximation to the identity. As a consequence, for any reasonable function ${m(x)}$, one has (formally, at least) $\displaystyle m(x) = \lim_{\epsilon \rightarrow 0^+} \frac{1}{\pi} \int_{\bf R} (\hbox{Im} \frac{1}{x-(y+i\epsilon)}) m(y)\ dy$ which leads (again formally) to the ability to express arbitrary multipliers in terms of imaginary (or skew-adjoint) parts of resolvents: $\displaystyle m(-\Delta) = \lim_{\epsilon \rightarrow 0^+} \frac{1}{\pi} \int_{\bf R} (\hbox{Im} R(y+i\epsilon)) m(y)\ dy. \ \ \ \ \ (5)$ Among other things, this type of formula (with ${-\Delta}$ replaced by a more general self-adjoint operator) is used in the resolvent-based approach to the spectral theorem (by using the limiting imaginary part of resolvents to build spectral measure). Note that one can also express ${\hbox{Im} R(y+i\epsilon)}$ as ${\frac{1}{2i} (R(y+i\epsilon) - R(y-i\epsilon))}$. 
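The Laplace-transform identity (1) can be verified symbolically for a sample exponent. The following quick check (ours, not part of the post) takes $s = -1$, for which the integral converges absolutely:

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
s = sp.Integer(-1)  # sample exponent; -1 - s/2 = -1/2 > -1, so the
                    # integral converges at t = 0 (and decays at infinity)

# Right-hand side of (1), applied to the spectral value lam of -Delta:
integral = sp.integrate(t**(-1 - s/2) * sp.exp(-t * lam), (t, 0, sp.oo))
rhs = integral / sp.gamma(-s/2)

# It agrees with the spectral multiplier lam**(s/2) of (-Delta)^{s/2}:
assert sp.simplify(rhs - lam**(s/2)) == 0
```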
Remark 1 The ability of heat operators, Schrödinger propagators, wave propagators, or resolvents to generate other spectral multipliers can be viewed as a sort of manifestation of the Stone-Weierstrass theorem (though with the caveat that the spectrum of the Laplacian is non-compact and so the Stone-Weierstrass theorem does not directly apply). Indeed, observe the *-algebra type properties $\displaystyle e^{s\Delta} e^{t\Delta} = e^{(s+t)\Delta}; \quad (e^{s\Delta})^* = e^{s\Delta}$ $\displaystyle e^{is\Delta} e^{it\Delta} = e^{i(s+t)\Delta}; \quad (e^{is\Delta})^* = e^{-is\Delta}$ $\displaystyle e^{is\sqrt{-\Delta}} e^{it\sqrt{-\Delta}} = e^{i(s+t)\sqrt{-\Delta}}; \quad (e^{is\sqrt{-\Delta}})^* = e^{-is\sqrt{-\Delta}}$ $\displaystyle R(z) R(w) = \frac{R(w)-R(z)}{z-w}; \quad R(z)^* = R(\overline{z}).$ Because of these relationships, it is possible (in principle, at least) to leverage one’s understanding of one family of spectral multipliers to gain control on another family of multipliers. For instance, the fact that the heat operators ${e^{t\Delta}}$ have non-negative kernel (a fact which can be seen from the maximum principle, or from the Brownian motion interpretation of the heat kernels) implies (by (1)) that the fractional integral operators ${(-\Delta)^{-s/2}}$ for ${s>0}$ also have non-negative kernel. Or, the fact that the wave equation enjoys finite speed of propagation (and hence that the wave propagators ${\cos(t\sqrt{-\Delta})}$ have distributional convolution kernel localised to the ball of radius ${|t|}$ centred at the origin), can be used (by (3)) to show that the resolvents ${R(k^2)}$ have a convolution kernel that is essentially localised to the ball of radius ${O( 1 / |\hbox{Im}(k)| )}$ around the origin. In this post, I would like to continue this theme by using the resolvents ${R(z) = (-\Delta-z)^{-1}}$ to control other spectral multipliers.
These resolvents are well-defined whenever ${z}$ lies outside of the spectrum ${[0,+\infty)}$ of the operator ${-\Delta}$. In the model three-dimensional case ${d=3}$, they can be defined explicitly by the formula $\displaystyle R(k^2) f(x) = \int_{{\bf R}^3} \frac{e^{ik|x-y|}}{4\pi |x-y|} f(y)\ dy$ whenever ${k}$ lives in the upper half-plane ${\{ k \in {\bf C}: \hbox{Im}(k) > 0 \}}$, ensuring the absolute convergence of the integral for test functions ${f}$. (In general dimension, explicit formulas are still available, but involve Bessel functions. But asymptotically at least, and ignoring higher order terms, one simply replaces ${\frac{e^{ik|x-y|}}{4\pi |x-y|}}$ by ${\frac{e^{ik|x-y|}}{c_d |x-y|^{d-2}}}$ for some explicit constant ${c_d}$.) It is an instructive exercise to verify that this resolvent indeed inverts the operator ${-\Delta-k^2}$, either by using Fourier analysis or by Green’s theorem. Henceforth we restrict attention to three dimensions ${d=3}$ for simplicity. One consequence of the above explicit formula is that for positive real ${\lambda > 0}$, the resolvents ${R(\lambda+i\epsilon)}$ and ${R(\lambda-i\epsilon)}$ tend to different limits as ${\epsilon \rightarrow 0}$, reflecting the jump discontinuity in the resolvent function at the spectrum; as one can guess from formulae such as (4) or (5), such limits are of interest for understanding many other spectral multipliers. 
Indeed, for any test function ${f}$, we see that $\displaystyle \lim_{\epsilon \rightarrow 0^+} R(\lambda+i\epsilon) f(x) = \int_{{\bf R}^3} \frac{e^{i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy$ and $\displaystyle \lim_{\epsilon \rightarrow 0^+} R(\lambda-i\epsilon) f(x) = \int_{{\bf R}^3} \frac{e^{-i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy.$ Both of these functions $\displaystyle u_\pm(x) := \int_{{\bf R}^3} \frac{e^{\pm i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy$ solve the Helmholtz equation $\displaystyle (-\Delta-\lambda) u_\pm = f, \ \ \ \ \ (6)$ but have different asymptotics at infinity. Indeed, if ${\int_{{\bf R}^3} f(y)\ dy = A}$, then we have the asymptotic $\displaystyle u_\pm(x) = \frac{A e^{\pm i \sqrt{\lambda}|x|}}{4\pi|x|} + O( \frac{1}{|x|^2}) \ \ \ \ \ (7)$ as ${|x| \rightarrow \infty}$, leading also to the Sommerfeld radiation condition $\displaystyle u_\pm(x) = O(\frac{1}{|x|}); \quad (\partial_r \mp i\sqrt{\lambda}) u_\pm(x) = O( \frac{1}{|x|^2}) \ \ \ \ \ (8)$ where ${\partial_r := \frac{x}{|x|} \cdot \nabla_x}$ is the outgoing radial derivative. Indeed, one can show using an integration by parts argument that ${u_\pm}$ is the unique solution of the Helmholtz equation (6) obeying (8) (see below). ${u_+}$ is known as the outward radiating solution of the Helmholtz equation (6), and ${u_-}$ is known as the inward radiating solution. Indeed, if one views the function ${u_\pm(t,x) := e^{-i\lambda t} u_\pm(x)}$ as a solution to the inhomogeneous Schrödinger equation $\displaystyle (i\partial_t + \Delta) u_\pm = - e^{-i\lambda t} f$ and using the de Broglie law that a solution to such an equation with wave number ${k \in {\bf R}^3}$ (i.e. 
resembling ${A e^{i k \cdot x}}$ for some amplitude ${A}$) should propagate at (group) velocity ${2k}$, we see (heuristically, at least) that the outward radiating solution will indeed propagate radially away from the origin at speed ${2\sqrt{\lambda}}$, while the inward radiating solution propagates inward at the same speed. There is a useful quantitative version of the convergence $\displaystyle R(\lambda \pm i\epsilon) f \rightarrow u_\pm, \ \ \ \ \ (9)$ known as the limiting absorption principle: Theorem 1 (Limiting absorption principle) Let ${f}$ be a test function on ${{\bf R}^3}$, let ${\lambda > 0}$, and let ${\sigma > 0}$. Then one has $\displaystyle \| R(\lambda \pm i\epsilon) f \|_{H^{0,-1/2-\sigma}({\bf R}^3)} \leq C_\sigma \lambda^{-1/2} \|f\|_{H^{0,1/2+\sigma}({\bf R}^3)}$ for all ${\epsilon > 0}$, where ${C_\sigma > 0}$ depends only on ${\sigma}$, and ${H^{0,s}({\bf R}^3)}$ is the weighted norm $\displaystyle \|f\|_{H^{0,s}({\bf R}^3)} := \| \langle x \rangle^s f \|_{L^2_x({\bf R}^3)}$ and ${\langle x \rangle := (1+|x|^2)^{1/2}}$. This principle allows one to extend the convergence (9) from test functions ${f}$ to all functions in the weighted space ${H^{0,1/2+\sigma}}$ by a density argument (though the radiation condition (8) has to be adapted suitably for this scale of spaces when doing so). The weighted space ${H^{0,-1/2-\sigma}}$ on the left-hand side is optimal, as can be seen from the asymptotic (7); a duality argument similarly shows that the weighted space ${H^{0,1/2+\sigma}}$ on the right-hand side is also optimal. We prove this theorem below the fold. As observed long ago by Kato (and also reproduced below), this estimate is equivalent (via a Fourier transform in the spectral variable ${\lambda}$) to a useful estimate for the free Schrödinger equation known as the local smoothing estimate, which in particular implies the well-known RAGE theorem for that equation; it also has similar consequences for the free wave equation.
As we shall see, it also encodes some spectral information about the Laplacian; for instance, it can be used to show that the Laplacian has no eigenvalues, resonances, or singular continuous spectrum. These spectral facts are already obvious from the Fourier transform representation of the Laplacian, but the point is that the limiting absorption principle also applies to more general operators for which the explicit diagonalisation afforded by the Fourier transform is not available. (Igor Rodnianski and I are working on a paper regarding this topic, of which I hope to say more soon.) In order to illustrate the main ideas and suppress technical details, I will be a little loose with some of the rigorous details of the arguments, and in particular will be manipulating limits and integrals at a somewhat formal level. A few days ago, I found myself needing to use the Fredholm alternative in functional analysis: Theorem 1 (Fredholm alternative) Let ${X}$ be a Banach space, let ${T: X \rightarrow X}$ be a compact operator, and let ${\lambda \in {\bf C}}$ be non-zero. Then exactly one of the following statements holds: • (Eigenvalue) There is a non-trivial solution ${x \in X}$ to the equation ${Tx = \lambda x}$. • (Bounded resolvent) The operator ${T-\lambda}$ has a bounded inverse ${(T-\lambda)^{-1}}$ on ${X}$. Among other things, the Fredholm alternative can be used to establish the spectral theorem for compact operators. A hypothesis such as compactness is necessary; the shift operator ${U}$ on ${\ell^2({\bf Z})}$, for instance, has no eigenfunctions, but ${U-z}$ is not invertible for any unit complex number ${z}$. The claim is also false when ${\lambda=0}$; consider for instance the multiplication operator ${Tf(n) := \frac{1}{n} f(n)}$ on ${\ell^2({\bf N})}$, which is compact and has no eigenvalue at zero, but is not invertible. 
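In finite dimensions every operator is compact (indeed, finite rank), and the alternative reduces to standard linear algebra: a non-zero ${\lambda}$ is either an eigenvalue of ${T}$, or ${T - \lambda}$ is invertible, and never both. A toy numerical illustration (my own example, not from the post):

```python
import numpy as np

# A finite-rank (hence compact) operator on R^2, represented as a matrix.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # upper triangular: eigenvalues are 2 and 3

# Case 1: lam is an eigenvalue, so T x = lam x has a non-trivial solution.
lam = 3.0
vals, vecs = np.linalg.eig(T)
i = int(np.argmin(np.abs(vals - lam)))
x = vecs[:, i]                              # a non-trivial eigenvector
assert np.allclose(T @ x, lam * x)

# Case 2: lam is not an eigenvalue, so (T - lam I) has a bounded inverse.
lam = 5.0
resolvent = np.linalg.inv(T - lam * np.eye(2))
assert np.allclose(resolvent @ (T - lam * np.eye(2)), np.eye(2))
```

The content of the theorem, of course, is that this dichotomy survives in infinite dimensions precisely when ${T}$ is compact.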
It had been a while since I had studied the spectral theory of compact operators, and I found that I could not immediately reconstruct a proof of the Fredholm alternative from first principles. So I set myself the exercise of doing so. I thought that I had managed to establish the alternative in all cases, but as pointed out in comments, my argument is restricted to the case where the compact operator ${T}$ is approximable, which means that it is the limit of finite rank operators in the uniform topology. Many Banach spaces (and in particular, all Hilbert spaces) have the approximation property, which implies (by a result of Grothendieck) that all compact operators on such a space are approximable. For instance, if ${X}$ is a Hilbert space, then any compact operator is approximable, because any compact set can be approximated by a finite-dimensional subspace, and in a Hilbert space, the orthogonal projection operator to a subspace is always a contraction. (In more general Banach spaces, finite-dimensional subspaces are still complemented, but the operator norm of the projection can be large.) Unfortunately, there are examples of Banach spaces for which the approximation property fails; the first such examples were discovered by Enflo, and a subsequent paper of Alexander demonstrated the existence of compact operators in certain Banach spaces that are not approximable. I also found out that this argument was essentially also discovered independently by MacCluer-Hull and by Uuye. Nevertheless, I am recording this argument here, together with two more traditional proofs of the Fredholm alternative (based on the Riesz lemma and a continuity argument respectively). [This is a (lightly edited) repost of an old blog post of mine, which had attracted over 400 comments, and as such was becoming difficult to load; I request that people wishing to comment on that puzzle use this fresh post instead. 
-T] This is one of my favorite logic puzzles, because of the presence of two highly plausible, but contradictory, solutions to the puzzle. Resolving this apparent contradiction requires very clear thinking about the nature of knowledge; but I won't spoil the resolution here, and will simply describe the logic puzzle and its two putative solutions. (Readers, though, are welcome to discuss solutions in the comments.) — The logic puzzle — There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colours. Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces). If a tribesperson does discover his or her own eye color, then their religion compels them to commit ritual suicide at noon the following day in the village square for all to witness. All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth). Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople). One day, a blue-eyed foreigner visits the island and wins the complete trust of the tribe. One evening, he addresses the entire tribe to thank them for their hospitality. However, not knowing the customs, the foreigner makes the mistake of mentioning eye color in his address, remarking "how unusual it is to see another blue-eyed person like myself in this region of the world". What effect, if anything, does this faux pas have on the tribe? 
Note 1: For the purposes of this logic puzzle, "highly logical" means that any conclusion that can be logically deduced from the information and observations available to an islander will automatically be known to that islander. Note 2: Bear in mind that this is a logic puzzle, rather than a description of a real-world scenario. The puzzle is not to determine whether the scenario is plausible (indeed, it is extremely implausible) or whether one can find a legalistic loophole in the wording of the scenario that allows for some sort of degenerate solution; instead, the puzzle is to determine (holding to the spirit of the puzzle, and not just to the letter) which of the solutions given below (if any) are correct, and if one solution is valid, to correctly explain why the other solution is invalid. (One could also resolve the logic puzzle by showing that the assumptions of the puzzle are logically inconsistent or not well-defined. However, merely demonstrating that the assumptions of the puzzle are highly unlikely, as opposed to logically impossible to satisfy, is not sufficient to resolve the puzzle.) Note 3: An essentially equivalent version of the logic puzzle is also given at the xkcd web site. Many other versions of this puzzle can be found in many places; I myself heard of the puzzle as a child, though I don't recall the precise source. Below the fold are the two putative solutions to the logic puzzle. If you have not seen the puzzle before, I recommend you try to solve it first before reading either solution.
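For readers who want to experiment, the standard inductive solution (that with ${k}$ blue-eyed islanders, all of them deduce their eye color on day ${k}$ after the foreigner's announcement, no one having left before) can be encoded in a few lines. To be clear, this sketch of mine *assumes* that solution rather than proving it; whether it is the correct resolution is exactly what the puzzle asks you to decide:

```python
def departure_day(num_blue, num_brown):
    """Simulate the standard inductive solution: an islander who sees b
    blue-eyed people concludes their own eyes are blue on day b + 1,
    provided nobody has left before then."""
    # Each islander only knows the blue-eye count among the *others*.
    seen = [num_blue - 1] * num_blue + [num_blue] * num_brown
    day = 0
    while True:
        day += 1
        leavers = [i for i, b in enumerate(seen) if day == b + 1]
        if leavers:
            return day, len(leavers)

# With 100 blue-eyed and 900 brown-eyed islanders, under this solution all
# 100 blue-eyed islanders leave together on day 100.
```

Note how the brown-eyed islanders (who each see 100 blue-eyed people) never reach their trigger day, since the departures on day 100 end the process first.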
https://www.physicsforums.com/threads/magnetic-flux.215528/
# Magnetic flux 1. Feb 15, 2008 ### v_pino What is the definition of Magnetic Flux? My textbook tells me that it may be 'visualised as the total number of magnetic field lines rather than their concentration... be aware that this is NOT a definition.' thank you 2. Feb 15, 2008 ### Defennder Have you learnt what electric flux is yet? If so, magnetic flux is the same as electric flux with the electric field replaced by a magnetic one. But probably you haven't, since otherwise you would have understood this easily. Just think of it this way: suppose you have a magnetic field and you want to know how much of the magnetic field passes through a specified plane surface. Just visualise it as magnetic field lines passing through that plane surface. The magnetic flux would then be the dot product of the magnetic field B with the surface area represented as a vector A normal to the surface. The surface integral is used to represent the summation of all the magnetic flux measured at any area element dA on your surface. See: http://en.wikipedia.org/wiki/Magnetic_flux
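To make the surface-integral picture concrete, here is a small numerical sketch (my own example, not from the thread): for a nonuniform field B = (0, 0, x) through the unit square in the z = 0 plane, the flux is ∫∫ x dx dy = 1/2, and a Riemann sum of B · n dA over area elements reproduces it.

```python
import numpy as np

# Flux of B = (0, 0, x) through the unit square in the z = 0 plane,
# whose unit normal is (0, 0, 1): Phi = integral of B . n dA = ∫∫ x dx dy.
N = 400
xs = (np.arange(N) + 0.5) / N          # midpoints of grid cells in x
ys = (np.arange(N) + 0.5) / N          # midpoints of grid cells in y
X, _ = np.meshgrid(xs, ys)
dA = 1.0 / N**2                        # area of each grid cell

Bz = X                                 # B . n reduces to the z-component, x
flux = float(np.sum(Bz * dA))          # Riemann sum over all area elements

# Exact value is 0.5; the midpoint rule is exact for a linear integrand.
```

For a uniform field the sum collapses to the familiar Φ = B · A dot product, which is the special case the textbook "field lines through a surface" picture describes.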
http://www.ams.org/mathscinet-getitem?mr=377184
MathSciNet bibliographic data MR377184 34C10 Read, Thomas T. Exponential solutions of $y''+(r-q)y=0$ and the least eigenvalue of Hill's equation. Proc. Amer. Math. Soc. 50 (1975), 273–280.
https://patriciabarber.com/when-will-waqx/452e3d-doppler-frequency-formula
The Doppler effect is the perceived change in frequency of sound emitted by a source moving relative to the observer: as a plane flies overhead, the note of the engine becomes noticeably lower, as does the siren noise from a fast-moving emergency vehicle as it passes. As an ambulance approaches, the waves seem shorter and the frequency seems higher than when it moves away from you. In 1842 Christian Doppler, an astronomer who lived in the mid 19th century in Salzburg, Austria, hypothesized that sound frequencies change, relative to the observer, when emitted from a moving sound source; in 1845 Buys Ballot proved Doppler's hypothesis correct. The same principles underlie Doppler as we use it in echocardiography.

The Doppler shift can be described by the formula

f = f₀ (v + vr)/(v + vs)

where f is the observed frequency, expressed in Hz; f₀ is the frequency of the emitted wave, also in Hz; v is the velocity of the waves in the medium (the velocity of sound in air is 340 m/s); vr is the velocity of the receiver relative to the medium, positive if the receiver is moving towards the source; and vs is the velocity of the source relative to the medium, positive if the source is moving away from the receiver. Equivalent statements write the received frequency as fr and the source frequency as fs (e.g. for a Doppler log). When the observer is moving in the x-direction but the source is stationary, you can take this general frequency equation, set vs = 0, and solve for the observed frequency. The formula applies when the velocities of the source and receiver are lower than the velocity of the waves in the medium; the relevant velocity is the radial component (the part in a straight line from the observer), and the wave speed is that in a stationary medium (the speed of sound in this case).

Worked examples: a source with actual frequency n = 2000 Hz approaches a stationary observer (vL = 0) at vS = 72 km/h = 72 × 5/18 = 20 m/s, with v = 340 m/s in air. In another textbook example, with the observer moving toward the source at 18.0 m/s and the source moving toward the observer at 32.0 m/s, the frequency ratio is (340.0 m/s + 18.0 m/s)/(340.0 m/s − 32.0 m/s). Or: suppose you are standing on the corner of 5th Avenue and 34th Street waiting for the light to change so you can cross the street, and an approaching southbound ambulance is heading your way traveling at 35 miles per hour; if we know that the frequency of the ambulance siren is 700 Hz, we can … As the police car in a similar example gets farther away from a listener standing on the sidewalk, the frequency of the sound heard by the listener falls.

For radar, a positive Doppler shift indicates that the target is approaching the transmitter: the frequency of the received signal increases when the target moves towards the radar, and decreases when the target moves away. This change is called the Doppler shift frequency. Distance, or "range", to a radar echo is given by the formula R = cT/2, where R is the range (distance to the echo), c = 3 × 10⁸ m/s is the speed of electromagnetic radiation, and T is the time since the pulse was emitted. The shift obeys

Doppler frequency shift = 2 × (velocity difference)/wavelength.

Examples at a wavelength of 0.0039 m: an oncoming auto at 50 km/h with the radar auto at 80 km/h gives a closing rate of 130 km/h, or 36.1 m/s, so the Doppler frequency shift = 2 (36.1 m/s)/(0.0039 m) = 18.5 kHz; a stationary object with the radar auto at 80 km/h gives a closing rate of 80 km/h, or 22.2 m/s, so the shift = 2 (22.2 m/s)/(0.0039 m) = 11.4 kHz; an auto ahead at 100 km/h with the radar auto at 80 km/h gives an opening rate … Taking geometry into account, with sin φ = altitude/slant range, we get φ = sin⁻¹(6,096 m / 20,000 m) = 17.7°, and F_D = 2 (800 ft/sec · cos θ · cos 17.7°)(7×10⁹ Hz / 9.8357×10⁸ ft/sec) = 10,845 Hz. For θ = 0°, the maximum Doppler frequency is achieved, while for θ = 90° the Doppler frequency is minimum, i.e. 0. Doppler resolution is directly connected to the velocity-measurement resolution of the target, because the Doppler frequency and target velocity are related through this equation. (NEXRAD assumes that any echo it is "seeing" is generated by …)

In ultrasound, the Doppler equation usually written in textbooks is Δf = 2 v cos(θ) f₀/c, where v is the velocity of the red blood cell targets, f₀ is the transmitted ultrasound beam frequency, and θ is the angle between the beam and the flow; here the Doppler shift is the change in frequency of a sound wave due to a reflector moving towards or away from an object, which in the case of ultrasound is the transducer. This allows us to measure the velocity of blood through a vessel. The pulse repetition frequency (PRF) must be twice as high as the expected maximum Doppler shift; if sampling is too slow, then velocities will alias to negative. Using the Doppler equation in one such example, we calculate the Doppler shifted frequency to be 1299 cycles per second, about 1300 Hz, abbreviated 1.3 kHz.

Example (Doppler frequency calculation for the moving-reflector case): speed of wave source = 1000 m/sec, operating frequency = 3000 MHz (i.e. 3 GHz), output Doppler frequency = 20000 Hz, i.e. 20 kHz. In processing, one can compute the power spectral density estimate of the slow-time samples using a periodogram function, find the peak frequency, and convert the peak Doppler frequency to speed (e.g. with the dop2speed function). A final exercise: a spacecraft sends a radio-wave message back to Earth at a frequency of 1.50 GHz; at what frequency is the message received on Earth?
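The various statements of the Doppler formula above can be collected into one small helper. This sketch of mine uses the sign convention quoted above (vr positive toward the source, vs positive away from the receiver) and checks it against the 2000 Hz worked example in the text:

```python
def doppler_observed(f0, v, vr=0.0, vs=0.0):
    """Observed frequency f = f0 (v + vr) / (v + vs), where v is the wave
    speed in the medium, vr is the receiver velocity (positive if moving
    toward the source) and vs is the source velocity (positive if moving
    away from the receiver)."""
    return f0 * (v + vr) / (v + vs)

# Worked example from the text: a 2000 Hz source approaching a stationary
# observer at 20 m/s in air (v = 340 m/s); approaching means vs = -20.
f_heard = doppler_observed(2000.0, 340.0, vr=0.0, vs=-20.0)
# f_heard = 2000 * 340 / 320 = 2125 Hz
```

The same function reproduces the qualitative behavior described above: an approaching source (vs negative) raises the observed frequency, a receding source lowers it.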
http://math.stackexchange.com/questions/39071/unique-minimal-basis
# Unique Minimal Basis? Suppose we have a finite collection of finite sets. We pick the minimal sub-collection such that any set in the collection can be expressed as a union of sets in the sub-collection. Is the minimal sub-collection unique? - set-theory tag is not appropriate. Maybe elementary-set-theory is. – Aryabhata May 14 '11 at 16:55 Yes, it is unique. Let us consider two minimal sub-collections $\mathcal A, \mathcal B$. Neither contains the other, by their minimality. Since everything is finite, let $A\in \mathcal A\setminus\mathcal B$ be an element of minimal cardinality. Now $A$ can be expressed as a union of elements of $\mathcal B$, which all need to be of smaller cardinality than $A$ (they are subsets of $A$, and none can equal $A$ since $A\not\in\mathcal B$), but $\mathcal A$ then contains all of them, letting $A$ be expressed as a union of other elements of $\mathcal A$ and contradicting the minimality of $\mathcal A$.
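The uniqueness claim is easy to test by brute force on small examples. The sketch below (my own code, not from the thread) uses the observation that a set S is a union of members of a sub-collection if and only if the union of those members contained in S already equals S:

```python
from itertools import combinations

def expressible(target, sub):
    # target is a union of members of sub iff the union of all members
    # of sub that are subsets of target already equals target.
    u = frozenset()
    for b in sub:
        if b <= target:
            u |= b
    return u == target

def minimal_bases(collection):
    """Return all minimum-size sub-collections that generate every set."""
    sets = sorted({frozenset(s) for s in collection}, key=sorted)
    for k in range(len(sets) + 1):
        found = [set(sub) for sub in combinations(sets, k)
                 if all(expressible(s, sub) for s in sets)]
        if found:
            return found
    return []

# E.g. for the collection {{1}, {2}, {1,2}} the unique minimal basis
# is {{1}, {2}}, since {1,2} = {1} ∪ {2}.
```

Running `minimal_bases` over random small collections always returns a single basis, consistent with the proof above.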
https://arxiv.org/abs/1901.11532?context=math
# Title: Polysymplectic formulation for BF gravity with Immirzi parameter Abstract: The polysymplectic formulation of the CMPR action, which is a BF-type formulation of General Relativity that involves an arbitrary Immirzi parameter, is performed. We implement a particular scheme within this covariant Hamiltonian approach to analyze the constraints that characterize the CMPR model. By means of the privileged $(n-1)$-forms and the Poisson-Gerstenhaber bracket, inherent to the polysymplectic framework, the BF field equations associated to the CMPR action are obtained and, in consequence, the Einstein equations naturally emerge by solving the simplicity constraints of the theory. Further, from the polysymplectic analysis of the CMPR action the De Donder-Weyl Hamiltonian formulation of the Holst action is recovered, which is consistent with the Lagrangian analysis of this model as reported in the literature. Comments: 19 pages, no figures Subjects: General Relativity and Quantum Cosmology (gr-qc); Mathematical Physics (math-ph) MSC classes: 83C05, 70S05, 70S15, 37K05 Journal reference: Class. Quantum Grav. 36 115003 (2019) DOI: 10.1088/1361-6382/ab1365 Cite as: arXiv:1901.11532 [gr-qc] (or arXiv:1901.11532v2 [gr-qc] for this version)
http://cognet.mit.edu/node/31117
## Neural Computation March 2008, Vol. 20, No. 3, Pages 813-843 (doi: 10.1162/neco.2007.12-06-414) © 2008 Massachusetts Institute of Technology Dynamics of Learning Near Singularities in Layered Networks Abstract We explicitly analyze the trajectories of learning near singularities in hierarchical networks, such as multilayer perceptrons and radial basis function networks, which include permutation symmetry of hidden nodes, and show their general properties. Such symmetry induces singularities in their parameter space, where the Fisher information matrix degenerates and odd learning behaviors, especially the existence of plateaus in gradient descent learning, arise due to the geometric structure of singularity. We plot dynamic vector fields to demonstrate the universal trajectories of learning near singularities. The singularity induces two types of plateaus, the on-singularity plateau and the near-singularity plateau, depending on the stability of the singularity and the initial parameters of learning. The results presented in this letter are universally applicable to a wide class of hierarchical models. Detailed stability analysis of the dynamics of learning in radial basis function networks and multilayer perceptrons will be presented in separate work.
http://www.physicsforums.com/showthread.php?p=4159615
# Why positive curvature implies finite universe? by Dmitry67 Tags: curvature, finite, implies, positive, universe P: 2,456 This post is influenced by 3 new threads in our cosmology forum. Recent observational data favors positive curvature of our Universe. The question I have, however, is why positive curvature implies a spatially finite Universe? Yes, it might look quite obvious if we embed curved space into a higher dimensional flat space. But while we can do it, we need not - we can work with GR without embedding, am I correct? Take the Klein bottle as an example. You can't correctly embed it into 3D space without having intersections with itself. Still it is a valid mathematical object, when you forget about intersections. I tend to believe that 'intersections' require an additional axiom, saying that "when 2 different points of space can be mapped to the same point in the higher dimensional space, then it is the same point in the lower dimensional space too". Without this axiom, a space with positive curvature can be infinite - compare a circle and an infinite spring - they both have the same positive curvature... Also without embedding there is yet another option. For a space with constant positive curvature at least 2 finite configurations exist: a sphere and a half-sphere (a sphere cut in half, with diametrically opposite points interconnected on the 'cut' side). So without embedding, full information about curvature doesn't give us the volume! So how do we know that our favorite 'balloon' is not cut in half? P: 2,889 Well, one has to assume some topological conditions other than positive curvature to infer spatial finiteness, like (I'm not sure about all of them): orientability, self-connectedness, compactness... and others. Emeritus PF Gold P: 5,500 Quote by Dmitry67 This post is influenced by 3 new threads in our cosmology forum. Recent observational data favors positive curvature of our Universe. 
The question I have, however, is why positive curvature implies a spatially finite Universe? Yes, it might look quite obvious if we embed curved space into higher dimensional flat space. But while we can do that, we don't have to - we can work with GR without embedding, am I correct? Embedding is irrelevant. Differential geometry has a broad class of theorems that relate curvature to topology. One of them is (IIRC, could be getting the details wrong) that if you have a simply connected three-dimensional Riemannian space with constant, positive curvature, it has the topology of a three-sphere. In a cosmology with positive spatial curvature, this applies to the surfaces of constant cosmological time (i.e., the preferred time coordinate defined by the symmetry of the spacetime). Thanks P: 3,850 ## Why positive curvature implies finite universe? Seems like you should still be able to conclude there's a finite volume under certain assumptions, even if the curvature is not constant. But what exactly does positive or negative curvature mean in that case, since the curvature of a 3-space (the Ricci tensor) has 6 independent components? Sci Advisor PF Gold P: 4,860 I wonder if some form of Myers's theorem is known for pseudo-Riemannian geometry in 4 dimensions: http://en.wikipedia.org/wiki/Myers_theorem (Positive Ricci curvature for the above states that the Ricci tensor contracted with any unit tangent vector (twice) is positive). PF Gold P: 4,860 Quote by PAllen I wonder if some form of Myers's theorem is known for pseudo-Riemannian geometry in 4 dimensions: http://en.wikipedia.org/wiki/Myers_theorem (Positive Ricci curvature for the above states that the Ricci tensor contracted with any unit tangent vector (twice) is positive).
And this appears to be an appropriate generalization: http://intlpress.com/JDG/archive/1979/14-1-105.pdf Emeritus PF Gold P: 5,500 Quote by Bill_K Seems like you should still be able to conclude there's a finite volume under certain assumptions, even if the curvature is not constant. But what exactly does positive or negative curvature mean in that case, since the curvature of a 3-space (the Ricci tensor) has 6 independent components? I think one way of stating it is that for any two orthogonal directions a and b, $R^a_{bab}$ (no implied sum) has a certain sign. But I'm pretty sure that a positive curvature can't be sufficient to prove anything about finite volume. E.g., a hyperboloid of one sheet would be a counterexample in two dimensions. P: 2,456 What about the following toy model? I have a sphere in 3-dimensional space. Every point on it is defined by 2 polar coordinates P and Q. Let's map P and Q into a plane. We assume that the point with P=2*pi is the same as the point with P=0, and Q=2*pi the same as Q=0. But this is an extra assumption. Let's say that the area covered by P and Q is infinite and is not looped back on itself. So, after travelling all around the globe, the traveler finds himself in some different place, not at the same point... This model looks self-consistent to me, even if it can't be correctly embedded. The question is, is it consistent with GR? PF Gold P: 4,860 Quote by Dmitry67 What about the following toy model? I have a sphere in 3-dimensional space. Every point on it is defined by 2 polar coordinates P and Q. Let's map P and Q into a plane. We assume that the point with P=2*pi is the same as the point with P=0, and Q=2*pi the same as Q=0. But this is an extra assumption. Let's say that the area covered by P and Q is infinite and is not looped back on itself. So, after travelling all around the globe, the traveler finds himself in some different place, not at the same point... This model looks self-consistent to me, even if it can't be correctly embedded.
The question is, is it consistent with GR? Have you looked at the references I provided? The one from post #6 seems to be about the best possible answer: if the spacetime meets certain cosmologically plausible conditions (much more general than FRW models), and spatial curvature is positive (in the sense I described in post #5) and bounded below by some number, then not only can you say it is spatially closed, you can put an upper bound on its diameter. P: 2,456 But look at my previous post: my toy Universe has finite diameter, but it is not spatially finite!!! PF Gold P: 4,860 Quote by Dmitry67 But look at my previous post: my toy Universe has finite diameter, but it is not spatially finite!!! Sorry, but the statement that it is a sphere and yet geodesics don't close is a contradiction. A sphere is defined by topology and geometry, not coordinate conventions. A geometric sphere has, as a property, that all geodesics are closed curves. OK, so you propose that there is a 'funny 2-surface' with constant positive curvature everywhere, that is not closed. Well, that contradicts Myers's theorem. Forget embedding; this means that if you go through the steps to actually set it up as a manifold with the proposed properties, you will fail. I suspect where you must fail is that in defining the open sets for your magical surface, you get a contradiction - some point must be near and not near some other point, at the same time.
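Dmitry67's circle-vs-spring remark can be made concrete numerically: a unit circle and a suitably scaled helix have the same constant curvature, yet only the circle closes on itself. A quick sketch (the specific parametrizations are my own illustrative choice, not from the thread; a helix $(a\cos t, a\sin t, bt)$ has curvature $a/(a^2+b^2)$, so $a=b=1/2$ matches the unit circle's curvature of 1):

```python
import numpy as np

def curvature(r, t, h=1e-5):
    # kappa = |r' x r''| / |r'|^3, with derivatives by central differences
    rp = (r(t + h) - r(t - h)) / (2 * h)
    rpp = (r(t + h) - 2 * r(t) + r(t - h)) / h**2
    return np.linalg.norm(np.cross(rp, rpp)) / np.linalg.norm(rp) ** 3

circle = lambda t: np.array([np.cos(t), np.sin(t), 0.0])
helix = lambda t: np.array([0.5 * np.cos(t), 0.5 * np.sin(t), 0.5 * t])

print(curvature(circle, 0.7), curvature(helix, 0.7))  # both ~1.0
```

Both curves have curvature 1 everywhere, but the helix never returns to its starting point. For curves, curvature alone does not force closure, which is the intuition behind the question; the content of Myers-type theorems is that for complete manifolds, a lower bound on the (intrinsic) Ricci curvature does force compactness.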
https://math.stackexchange.com/questions/1985297/is-is-possible-to-find-a-basis-for-the-column-space-of-a-given-reduced-row-ech
# Is it possible to find a basis for the column space of $A$, given the reduced row echelon forms of $A$ and $A^T$? Suppose $A$ is a $3$x$4$ matrix and the reduced row echelon form of $A$ is $\begin{pmatrix}1&0&0&1\\0&1&2&2\\0&0&0&0\end{pmatrix}$ and the reduced row echelon form of $A^T$ is $\begin{pmatrix}1&0&2\\0&1&-1\\0&0&0\\0&0&0\end{pmatrix}$ Find a basis for $R(A)$, where $R(A)$ is the column space of $A$. I don't think this is possible, but in the answer key, it said that $R(A)$ = $R(A^T)^T$, which has basis $$\{(1, 0, 2)^T,(0, 1, -1)^T\}$$ How does this work? • You mean $R(A^T)^\perp$. $S^T$ doesn't mean anything unless $S$ is a matrix (or vector). – Omnomnomnom Oct 26 '16 at 1:10 • @Omnomnomnom What do you mean by $S$? – user59036 Oct 26 '16 at 1:15 • Oh, excuse me. I think your book really means $$R(A) = R[(A^T)^T]$$ I was confused without the extra brackets. Now it makes sense. – Omnomnomnom Oct 26 '16 at 1:15 • Remember that row-reduction does not change the row-space of a matrix – Omnomnomnom Oct 26 '16 at 1:16 • – BCLC Oct 26 '16 at 4:03 TL;DR Row space of $A^T$ = column space of $A$ When we want to find a basis for the row space of a matrix $A$, we could use the rows of $A$, except that it is not always the case that the rows of $A$ are linearly independent. 1. So we have to eliminate rows which can be written as linear combinations of other rows. 2. We do this by performing EROs on $A$ until we reach row-echelon form (or reduced row-echelon form) to get row vectors that, like the original matrix $A$, span the row space of $A$. 3. This time, however, the row vectors (apart from the zero row vectors) that we get are linearly independent. 4. Thus, we have a basis for the row space of $A$. For the column space of $A$, the procedure is the same as above if we replace 'row' with 'column'. ECOs, however, are awkward because we are used to adding vertically as in EROs. So instead of performing ECOs on $A$, we perform EROs on $A^T$.
This gives us row vectors (apart from the zero row vectors) that are linearly independent and span the row space of $A^T$, which is equal to the column space of $A$. • do you agree with the commenter? – Anonymous Oct 26 '16 at 4:11 • @Anonymous Not sure about the first comment but Omnomnomnom seems to be right about the others – BCLC Oct 26 '16 at 4:17 • lol, I remember you, you commented once with that question in one of my solutions and I had no idea what you were actually trying to say. – Anonymous Oct 26 '16 at 4:23 • @Anonymous LOL – BCLC Oct 26 '16 at 5:13 • LOL yeah that one! – Anonymous Oct 26 '16 at 5:16 The non-zero rows of the row echelon form of $A^T$ give you a basis of the column space of $A$ if you transpose them. The row space of a matrix is preserved as we perform elementary row operations. As you perform elementary row operations on the transpose of the matrix, you are actually performing column operations on the original matrix while preserving the column space. It is known that the non-zero rows of a row echelon form are linearly independent and hence form a basis of the row space.
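As a concrete check, here is the procedure in sympy. The question never pins down $A$ itself, so the matrix below is just one possible choice consistent with both reduced row echelon forms given in the question:

```python
from sympy import Matrix

# One matrix whose rref, and whose transpose's rref, match the question.
A = Matrix([[1, 0, 0, 1],
            [0, 1, 2, 2],
            [2, -1, -2, 0]])  # row3 = 2*row1 - row2, so rank 2

# Sanity check: rref(A) is the matrix given in the question.
assert A.rref()[0] == Matrix([[1, 0, 0, 1], [0, 1, 2, 2], [0, 0, 0, 0]])

# EROs on A^T are ECOs on A, so the nonzero rows of rref(A^T),
# transposed back into columns, form a basis for the column space of A.
R, pivots = A.T.rref()
basis = [R.row(i).T for i in range(len(pivots))]
print(basis)  # the book's basis {(1, 0, 2)^T, (0, 1, -1)^T}
```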
https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=111989
## s, p, d, f 204929947 Posts: 76 Joined: Fri Apr 06, 2018 11:03 am ### s, p, d, f What is the difference between the orbitals? 404975170 Posts: 68 Joined: Thu Jul 27, 2017 3:00 am ### Re: s, p, d, f Each sub-shell has a different shape characterized by a different letter (s, p, d, f). They have electrons with different angular momenta, and this sets them apart from each other. Lenaschelzig1C Posts: 21 Joined: Fri Apr 06, 2018 11:05 am ### Re: s, p, d, f Also, different orbitals are different sizes and shapes. Taizha 1C Posts: 36 Joined: Fri Apr 06, 2018 11:01 am ### Re: s, p, d, f What type of midterm questions/practice problems are possible for s, p, d, f? Bianca Nguyen 1B Posts: 36 Joined: Fri Apr 06, 2018 11:04 am ### Re: s, p, d, f I would guess that a possible problem could be giving you something like “4d” and asking you to write out or choose the correct possible four quantum numbers (n, l, ml, and ms) that 4d could have. RubyLake1F Posts: 41 Joined: Fri Apr 06, 2018 11:03 am ### Re: s, p, d, f I find this to be a helpful visual: This is a depiction of the second shell (n=2), in which there are two possible sub-shells (s and p). The s sub-shell has one orbital, which is spherical and holds a maximum of two electrons (which must have opposite spin values). The p sub-shell has 3 orbitals, each of which can hold a maximum of two electrons with opposite spin (for a total of 6 possible electrons in the p sub-shell). The s sub-shell will fill first, then the p sub-shell. In higher shells, there will be more possible sub-shells and more possible orbitals. There are 5 d orbitals, and therefore they can hold a total of 10 electrons, and there are 7 f orbitals, meaning they can hold a total of 14 electrons. Here is the source: (Will Sweatman, 2016) Paywand Baghal Posts: 32 Joined: Fri Apr 06, 2018 11:01 am ### Re: s, p, d, f Taizha 1C wrote:What type of midterm questions/practice problems are possible for s, p, d, f?
there was a homework problem where it asked us to draw it out, so we might be asked to differentiate the different orbitals (s, p, d, f) Kelsey Li 3B Posts: 34 Joined: Fri Sep 28, 2018 12:26 am ### Re: s, p, d, f The difference between the orbitals of s, p, d, and f is the shape of the orbital. Within these subshells are orbitals: s has 1, p has 3, d has 5, and f has 7. Ester Garcia 1F Posts: 29 Joined: Fri Sep 28, 2018 12:17 am ### Re: s, p, d, f Another distinction between the orbitals is the number of nodal planes. For example, the s orbital has none, the p orbitals have one, the d orbitals have two, and the f orbitals have three. BenJohnson1H Posts: 68 Joined: Fri Sep 28, 2018 12:17 am ### Re: s, p, d, f Would it be correct to say that 4d could have n=4, l=2, ml=-1, and ms=+1/2? And if so, would it be correct to assume that each possible configuration of quantum numbers for 4d represents an electron, giving the total number of electrons in that shell? Aria Soeprono 2F Posts: 64 Joined: Fri Sep 28, 2018 12:27 am ### Re: s, p, d, f 505168807 wrote:Would it be correct to say that 4d could have n=4, l=2, ml=-1, and ms=+1/2? and if so, would it be correct to assume that each possible configuration of quantum numbers for 4d represents an electron, giving the total number of electrons in that shell? You are correct that 4d could have those quantum numbers; however, if it doesn't specify otherwise and you are describing the quantum numbers of a 4d orbital, you would say n=4, l=2, ml=-2,-1,0,1,2, and ms=+1/2 or -1/2, in which n, l, and ml give information on the orbital itself and ms signifies which electron it is referring to. RoopshaChatterjee 1G Posts: 30 Joined: Fri Sep 28, 2018 12:17 am ### Re: s, p, d, f Orbitals are the regions of space in which electrons are most likely to be found. Each orbital is denoted by a number and a letter, in which the letter (s, p, d, f) describes the shape of the orbital. An s orbital is spherical and has no nodal planes.
A p orbital has two lobes on either side of the nucleus, and there is a nodal plane with zero probability of e- density. A d orbital has 4 lobes of e- density located in the xy, yz, and zx planes, and there are a total of 5 d orbitals. The shape of the f orbitals is a little more complicated, but they still have nodal planes. 204765696 Posts: 23 Joined: Wed May 16, 2018 3:00 am ### Re: s, p, d, f The orbitals s, p, d, f have different shapes and sizes. Anjali_Kumar1F Posts: 62 Joined: Fri Sep 28, 2018 12:25 am ### Re: s, p, d, f The orbitals s, p, d, f have different sizes and shapes. - s is spherical in shape with 1 orbital - p has 2 lobes on either side of the nucleus with 3 orbitals - d has 4 lobes of e- located in the xy, yz, and zx planes with 5 orbitals - f has more complicated shapes with 7 orbitals Josephine Lu 4L Posts: 62 Joined: Fri Sep 28, 2018 12:18 am ### Re: s, p, d, f On Test 2, would we be required to draw the different types of orbitals or identify various diagrams of orbitals? Dong Hyun Lee 4E Posts: 68 Joined: Fri Sep 28, 2018 12:23 am ### Re: s, p, d, f If I remember correctly, Lavelle stated that he would not make us draw the different orbitals. But I would memorize the number of lobes and planes they do have, as s is spherical and does not have a nodal plane, p orbitals have 2 lobes on either side and a nodal plane, d has 4 lobes, and so on. Emma Randolph 1J Posts: 65 Joined: Fri Sep 28, 2018 12:29 am ### Re: s, p, d, f For test 2, do we need to know how to find the quantum numbers of an electron? How do you do that?
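The counting rules repeated throughout this thread (1/3/5/7 orbitals holding 2/6/10/14 electrons, and the allowed quantum numbers for 4d) follow mechanically from ml ranging over -l..l and ms being +1/2 or -1/2; a small sketch:

```python
# Orbitals per sub-shell come from ml = -l..l (2l+1 values); each orbital
# holds two electrons (ms = +1/2 or -1/2).
subshells = {'s': 0, 'p': 1, 'd': 2, 'f': 3}
capacity = {letter: 2 * (2 * l + 1) for letter, l in subshells.items()}
print(capacity)  # {'s': 2, 'p': 6, 'd': 10, 'f': 14}

# All allowed (n, l, ml, ms) combinations for a 4d electron:
n, l = 4, 2
quantum_numbers = [(n, l, ml, ms) for ml in range(-l, l + 1) for ms in (+0.5, -0.5)]
print(len(quantum_numbers))  # 10: one tuple per electron the 4d sub-shell can hold
```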
https://physics.stackexchange.com/questions/205934/tension-in-the-simple-pendulum-polar-coordinates
# Tension in the simple pendulum (polar coordinates) Let's consider the simple pendulum as is displayed here or over there (page 10). The analysis of Newton's second law in polar coordinates goes as follows: $$\vec{F} = m\frac{d^2\vec{r}}{dt^2}, \\ F_r \hat{r} + F_\theta \hat{\theta} = m\frac{d^2 (r\hat{r})}{dt^2} , \\ F_r \hat{r} + F_\theta \hat{\theta} = m(\ddot{r} - r\dot{\theta}^2) \hat{r} + m(r\ddot{\theta} + 2\dot{r}\dot{\theta}) \hat{\theta} , \\ F_r \hat{r} + F_\theta \hat{\theta} = ma_r \hat{r} + m a_\theta \hat{\theta} .$$ Substituting the forces we get $$-T + mg\cos(\theta) = ma_r = m(\ddot{r} - r\dot{\theta}^2) , \\ -mg\sin(\theta) = ma_\theta = m(r\ddot{\theta} + 2\dot{r}\dot{\theta})$$ Considering the restrictions $r = L$ and $\dot{r} = \ddot{r} = 0$ we get $$-T + mg\cos(\theta) = m(- L \dot{\theta}^2) , \\ -mg\sin(\theta) = m(L\ddot{\theta})$$ The second one is the known pendulum equation $$\ddot{\theta} + \frac{g}{L}\sin(\theta) = 0 ,$$ while the first one is a much less used equation $$T = mL \dot{\theta}^2 + mg\cos(\theta)$$ Is this the correct equation to calculate the tension? Note that this implies that $a_r \neq 0$; in words, the radial acceleration is different from zero, which looks unphysical. Where is the trick? Does it have something to do with noninertial forces? Yes, this is the correct equation for $T$, and yes, $a_r \neq 0$. In fact $$a_r = -L \dot{\theta}^2$$ The particle must accelerate in the radial (centripetal) direction in order to track a circular path. If $a_r=0$ then the path would be a straight line.
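One way to convince yourself the tension formula is right is numerically: integrating the pendulum equation and evaluating $T = mL\dot\theta^2 + mg\cos\theta$ along the trajectory reproduces the closed form $T = mg(3\cos\theta - 2\cos\theta_0)$ that follows from energy conservation when the pendulum is released from rest at $\theta_0$. A rough sketch (the parameter values are arbitrary choices of mine):

```python
import math

g, L, m = 9.81, 1.0, 1.0
theta0 = 1.0                  # released from rest at 1 rad
dt, steps = 1e-5, 200_000     # integrate 2 seconds of motion
theta, omega = theta0, 0.0
for _ in range(steps):
    # semi-implicit Euler for theta'' = -(g/L) sin(theta)
    omega += -(g / L) * math.sin(theta) * dt
    theta += omega * dt

T_numeric = m * L * omega**2 + m * g * math.cos(theta)
T_energy = m * g * (3 * math.cos(theta) - 2 * math.cos(theta0))
print(T_numeric, T_energy)  # the two values agree closely
```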
http://math.stackexchange.com/questions/226874/can-1113-1-be-divided-exactly-by-6/226881
# Can $11^{13}-1$ be divided exactly by 6? Can $11^{13}-1$ be divided exactly by 6? My solution: $$11^2 \equiv 1 \pmod 6$$ $$11^{12} \equiv 1 \pmod 6$$ $$11^{13} \equiv 5 \pmod 6$$ Hence, $(11^{13}-\mathbf{5})$ can be divided exactly by 6. However, according to the solution in my book, $(11^{13}-\mathbf{1})$ can be divided exactly by 6. What's wrong? - The solution according to what? – Thomas Andrews Nov 1 '12 at 16:40 You have shown $11^{13}-5$ can be divided by 6; so too can $11^{13}+1$. So $11^{13}-1$ cannot; $11^{1}-1=10$ cannot either. – Henry Nov 1 '12 at 16:40 $11^{13} - 5 \equiv 0 \mod 6$ so $11^{13} - 1 \equiv 4 \mod 6$. – Graphth Nov 1 '12 at 16:41 $11 \equiv -1 \mod 6$ should help computations like this. – Arthur Nov 1 '12 at 16:41 Probably $\:11^{13}\!-1\:$ is a typo for $\:11^{13}\!+1\equiv (-1)^{13}\!+1\equiv 0\pmod 6.\ \$ – Bill Dubuque Nov 1 '12 at 18:45 ## 3 Answers It is already not divisible by $3$; notice that \begin{align} 11^{13} - 1 &\equiv (-1)^{13} - 1 \\ &\equiv -1 - 1 \\ &\equiv -2 \\ &\equiv 1 \\ &\not\equiv 0 \, (\text{mod} \, 3). \end{align} Note $11 \equiv 2 \equiv -1 \, (\text{mod} \, 3)$. - Assume $11^{13}-1$ were divisible by $6$; then we'd have $$11^{13}-1\equiv 0\pmod 6.$$ In other words, $11^{13}\equiv 1$. However, by your computation $11^{13}\equiv 5$; this is a contradiction because $1$ and $5$ are not congruent modulo $6$. Hence, $11^{13}-1$ is not divisible by $6$. - As $11$ divided by $6$ gives a remainder of $5$, or equivalently $-1$, we take $-1$ as it eases our calculation. 1) $\frac{11^{13}}{6}$ gives remainder $-1$. 2) $\frac{1}{6}$ gives remainder $1$. FIND: RESULT $1$ $-$ RESULT $2$: $-1 - 1 = -2$, which is equivalent to $4$, as $6-2=4$. So the remainder is $4$. It means it is definitely NOT DIVISIBLE by 6. You can check your answers in one of the most trusted sites, wolframalpha, where you can do such calculations involving large numbers. I have done the following for you in the link below.
http://www.wolframalpha.com/input/?i=11%5E13%E2%88%921+mod+6 Even scientific calculators in computers perform the mod operation on such large numbers with ease. These computing sites give confidence in our calculation. -
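The whole computation can also be checked in a few lines of Python, whose built-in `pow` supports modular exponentiation directly:

```python
# Modular exponentiation: pow(base, exponent, modulus)
print(pow(11, 13, 6))    # 5: so 11^13 - 5 is divisible by 6
print((11**13 - 1) % 6)  # 4: 11^13 - 1 is NOT divisible by 6
print((11**13 + 1) % 6)  # 0: 11^13 + 1 is, supporting the typo theory
```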
http://unapologetic.wordpress.com/2010/07/28/fubinis-theorem/?like=1&_wpnonce=9d779cb1c0
# The Unapologetic Mathematician ## Fubini’s Theorem We continue our assumptions that $(X,\mathcal{S},\mu)$ and $(Y,\mathcal{T},\nu)$ are both $\sigma$-finite measure spaces, and we consider the product space $(X\times Y,\mathcal{S}\times\mathcal{T},\mu\times\nu)$. The first step towards the measure-theoretic version of Fubini’s theorem is a characterization of sets of measure zero. Given a measurable subset $E\subseteq X\times Y$, a necessary and sufficient condition for $E$ to have measure zero is that the $X$-section $E_x$ have $\nu(E_x)=0$ for almost all $x\in X$. Another one is that the $Y$-section $E^y$ have $\mu(E^y)=0$ for almost all $y\in Y$. Indeed, writing $\lambda=\mu\times\nu$, the definition of the product measure tells us that $\displaystyle\lambda(E)=\int\nu(E_x)\,d\mu(x)=\int\mu(E^y)\,d\nu(y)$ Since the function $x\mapsto\nu(E_x)$ is integrable and nonnegative, our condition for an integral to vanish says that the integral is zero if and only if $\nu(E_x)=0$ $\mu$-almost everywhere. Similarly, we see that the integral of $\mu(E^y)$ is zero if and only if $\mu(E^y)=0$ $\nu$-almost everywhere. Now if $h$ is a non-negative measurable function on $X\times Y$, then we have the following equalities between the double integral and the two iterated integrals: $\displaystyle\int h\,d(\mu\times\nu)=\iint h\,d\mu\,d\nu=\iint h\,d\nu\,d\mu$ If $h$ is the characteristic function $\chi_E$ of a measurable set $E$, then we find that \displaystyle\begin{aligned}\int\chi_E(x,y)\,d\nu(y)&=\int\chi_{E_x}(y)\,d\nu(y)=\nu(E_x)\\\int\chi_E(x,y)\,d\mu(x)&=\int\chi_{E^y}(x)\,d\mu(x)=\mu(E^y)\end{aligned} and thus \displaystyle\begin{aligned}\iint\chi_E(x,y)\,d\nu\,d\mu=\int\nu(E_x)\,d\mu&=\left[\mu\times\nu\right](E)=\int\chi_E\,d(\mu\times\nu)\\\iint\chi_E(x,y)\,d\mu\,d\nu=\int\mu(E^y)\,d\nu&=\left[\mu\times\nu\right](E)=\int\chi_E\,d(\mu\times\nu)\end{aligned} Next we assume that $h$ is a simple function. Then $h$ is a finite linear combination of characteristic functions of measurable sets.
But clearly all parts of the asserted equalities are linear in the function $h$, and so since they hold for characteristic functions of measurable sets they must hold for any simple function as well. Finally, given any non-negative measurable function $h$, we can find an increasing sequence of simple functions $\{h_n\}$ converging pointwise to $h$. The monotone convergence theorem tells us that $\displaystyle\lim\limits_{n\to\infty}\int h_n\,d(\mu\times\nu)=\int h\,d(\mu\times\nu)$ We define the functions $\displaystyle f_n(x)=\int h_n(x,y)\,d\nu(y)$ and conclude that since $\{h_n\}$ is an increasing sequence, $\{f_n\}$ must be an increasing sequence of non-negative measurable functions as well. For every $x$ the monotone convergence theorem tells us that $\displaystyle\lim\limits_{n\to\infty}f_n(x)=f(x)=\int h(x,y)\,d\nu(y)$ As a limit of a sequence of non-negative measurable functions, $f$ must also be a non-negative measurable function. One last invocation of the monotone convergence theorem tells us that $\displaystyle\lim\limits_{n\to\infty}\int f_n\,d\mu=\int f\,d\mu$ which proves the equality of the double integral and one of the iterated integrals. The other equality follows similarly. And now we come to Fubini’s theorem itself: if $h$ is an integrable function on $X\times Y$, then almost every section of $h$ is integrable. If we define the functions \displaystyle\begin{aligned}f(x)&=\int h(x,y)\,d\nu(y)\\g(y)&=\int h(x,y)\,d\mu(x)\end{aligned} wherever these symbols are defined, then $f$ and $g$ are both integrable, and $\displaystyle\int h\,d(\mu\times\nu)=\int f\,d\mu=\int g\,d\nu$ Since a real-valued function is integrable if and only if both its positive and negative parts are, it suffices to consider non-negative functions $h$. The latter equalities follow, then, from the above discussion. Since the measurable functions $f$ and $g$ have finite integrals, they must be integrable.
And since they’re integrable, they must be finite-valued a.e., which implies the assertions about the integrability of sections of $h$.
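None of this replaces the proof above, but the discrete analogue of the identity between the double integral and the two iterated integrals is easy to eyeball numerically: summing a non-negative grid of values over both indices at once, or one index at a time in either order, gives the same answer. A toy check with $h(x,y)=e^{-xy}$ on $[0,1]^2$ (my own choice of integrand, and only a Riemann-sum analogue of the measure-theoretic statement):

```python
import numpy as np

n = 2000
x = (np.arange(n) + 0.5) / n        # midpoint-rule nodes on [0, 1]
y = (np.arange(n) + 0.5) / n
H = np.exp(-np.outer(x, y))         # h(x, y) = exp(-x*y) >= 0

double_int = H.sum() / n**2                    # "double integral"
iterated_xy = (H.sum(axis=1) / n).sum() / n    # integrate over y, then x
iterated_yx = (H.sum(axis=0) / n).sum() / n    # integrate over x, then y
print(double_int, iterated_xy, iterated_yx)    # three ways, same value
```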
https://planetmath.org/arithmeticalhierarchy
# arithmetical hierarchy The arithmetical hierarchy is a hierarchy of either (depending on the context) formulas or relations. The relations of a particular level of the hierarchy are exactly the relations defined by the formulas of that level, so the two uses are essentially the same. The first level consists of formulas with only bounded quantifiers; the corresponding relations are also called the Primitive Recursive relations (this definition is equivalent to the definition from computer science). This level is called any of $\Delta^{0}_{0}$, $\Sigma^{0}_{0}$ and $\Pi^{0}_{0}$, depending on context. A formula $\phi$ is $\Sigma^{0}_{n}$ if there is some $\Delta^{0}_{0}$ formula $\psi$ such that $\phi$ can be written: $\phi(\vec{k})=\exists x_{1}\forall x_{2}\cdots Qx_{n}\psi(\vec{k},\vec{x})$ $\text{ where }Q\text{ is either }\forall\text{ or }\exists\text{, whichever maintains the pattern of alternating quantifiers}$ The $\Sigma^{0}_{1}$ relations are the same as the Recursively Enumerable relations. Similarly, $\phi$ is a $\Pi^{0}_{n}$ relation if there is some $\Delta^{0}_{0}$ formula $\psi$ such that: $\phi(\vec{k})=\forall x_{1}\exists x_{2}\cdots Qx_{n}\psi(\vec{k},\vec{x})$ $\text{ where }Q\text{ is either }\forall\text{ or }\exists\text{, whichever maintains the pattern of alternating quantifiers}$ A formula is $\Delta^{0}_{n}$ if it is both $\Sigma^{0}_{n}$ and $\Pi^{0}_{n}$. Since each $\Sigma^{0}_{n}$ formula is just the negation of a $\Pi^{0}_{n}$ formula and vice-versa, the $\Sigma^{0}_{n}$ relations are the complements of the $\Pi^{0}_{n}$ relations. The relations in $\Delta^{0}_{1}=\Sigma^{0}_{1}\cap\Pi^{0}_{1}$ are the Recursive relations. Higher levels on the hierarchy correspond to broader and broader classes of relations. A formula or relation which is $\Sigma^{0}_{n}$ (or, equivalently, $\Pi^{0}_{n}$) for some integer $n$ is called arithmetical.
The superscript $0$ is often omitted when it is not necessary to distinguish from the analytic hierarchy. Functions can be described as being in one of the levels of the hierarchy if the graph of the function is in that level.

Title: arithmetical hierarchy
Canonical name: ArithmeticalHierarchy
Date of creation: 2013-03-22 12:55:11
Last modified on: 2013-03-22 12:55:11
Owner: CWoo (3771)
Last modified by: CWoo (3771)
Numerical id: 19
Author: CWoo (3771)
Entry type: Definition
Classification: msc 03B10
Synonyms: arithmetic hierarchy, arithmetic, arithmetical, arithmetic formula, arithmetical formulas
Related topic: AnalyticHierarchy
Defines: sigma-n, pi-n, delta-n, recursive, recursively enumerable, delta-0, delta-1, arithmetical
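The "bounded quantifier" idea at the bottom of the hierarchy is easy to illustrate in code: when every quantifier ranges over an interval bounded by the inputs, the predicate is decided by a finite search. A sketch (the choice of predicates is mine, purely for illustration):

```python
def is_prime(n: int) -> bool:
    # Delta-0 style: "n > 1 and (forall d with 2 <= d < n)(d does not divide n)"
    # -- the only quantifier is bounded by n, so a finite loop decides it.
    return n > 1 and all(n % d != 0 for d in range(2, n))

def sum_of_two_primes(k: int):
    # Sigma-1 shape "(exists p) phi(k, p)" with phi bounded; here the witness
    # search is itself bounded by k, which is what keeps this one decidable.
    return next(((p, k - p) for p in range(2, k)
                 if is_prime(p) and is_prime(k - p)), None)

print([n for n in range(20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
print(sum_of_two_primes(28))                  # (5, 23)
```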
https://cstheory.stackexchange.com/questions/14471/reverse-chernoff-bound/14476
# Reverse Chernoff bound

Is there a reverse Chernoff bound which shows that the tail probability is at least so much? That is, if $X_1,X_2,\ldots,X_n$ are independent binomial random variables and $\mu=\mathbb{E}[\sum_{i=1}^n X_i]$, can we prove $\Pr[\sum_{i=1}^n X_i\geq (1+\delta)\mu]\geq f(\mu,\delta,n)$ for some function $f$?

• Your example is asking too much: with $p=n^{-2/3}$, a standard Chernoff bound shows that $\Pr[|T\cap S_1| \geq \sqrt{1.1}\,n^{1/3}]$ and $\Pr[|T\cap S_2| \leq n^{1/3}/\sqrt{1.1}]$ are at most $\exp(-cn^{1/3})$ for some $c$. Nov 25, 2012 at 21:19

• You are right, I got confused about which term in the Chernoff bound has the square. I have changed the question to reflect a weaker bound. I don't think it will help me in my current application, but it might be interesting for other reasons. Nov 25, 2012 at 21:50

Here is an explicit proof that a standard Chernoff bound is tight up to constant factors in the exponent for a particular range of the parameters. (In particular, whenever the variables are 0 or 1, each is 1 with probability 1/2 or less, $\epsilon\in(0,1/2)$, and the Chernoff upper bound is less than a constant.) If you find a mistake, please let me know.

Lemma 1 (tightness of Chernoff bound). Let $X$ be the average of $k$ independent, 0/1 random variables (r.v.). For any $\epsilon\in(0,1/2]$ and $p\in(0,1/2]$, assuming $\epsilon^2 p k \ge 3$,

(i) If each r.v. is 1 with probability at most $p$, then $$\displaystyle \Pr[X\le (1-\epsilon)p] ~\ge~ \exp\big({-9\epsilon^2 pk}\big).$$

(ii) If each r.v. is 1 with probability at least $p$, then $$\displaystyle \Pr[X\ge (1+\epsilon)p] ~\ge~ \exp\big({-9\epsilon^2 pk}\big).$$

Proof. We use the following observation:

Claim 1. If $1\le \ell \le k-1$, then $$\displaystyle {k \choose \ell} ~\ge~ \frac{1}{e\sqrt{2\pi\ell}} \Big(\frac{k}{\ell}\Big)^{\ell} \Big(\frac{k}{k-\ell}\Big)^{k-\ell}.$$

Proof of Claim 1.
By Stirling's approximation, $i!=\sqrt{2\pi i}\,(i/e)^i e^\lambda$ where $\lambda\in[1/(12i+1),\,1/(12i)]$. Thus ${k\choose \ell}$, which is $\frac{k!}{\ell!\, (k-\ell)!}$, is at least $$\frac{\sqrt{2\pi k}\,(\frac{k}{e})^k} { \sqrt{2\pi \ell}\,(\frac{\ell}{e})^\ell ~~\sqrt{2\pi (k-\ell)}\,(\frac{k-\ell}{e})^{k-\ell} } \exp\Big(\frac{1}{12k+1} - \frac{1}{12\ell} - \frac{1}{12(k-\ell)}\Big)$$ $$~\ge~ \frac{1}{\sqrt{2\pi\ell}} \Big(\frac{k}{\ell}\Big)^{\ell} \Big(\frac{k}{k-\ell}\Big)^{k-\ell}e^{-1}.$$ QED

Proof of Lemma 1 Part (i). Without loss of generality assume each 0/1 random variable in the sum $X$ is 1 with probability exactly $p$. Note $\Pr[X\le (1-\epsilon)p]$ equals the sum $\sum_{i = 0}^{\lfloor(1-\epsilon)pk\rfloor} \Pr[X=i/k]$, and $\Pr[X=i/k] = {k \choose i} p^i (1-p)^{k-i}$.

Fix $\ell = \lfloor(1-2\epsilon)pk\rfloor+1$. The terms in the sum are increasing, so the terms with index $i\ge\ell$ each have value at least $\Pr[X=\ell/k]$, so their sum has total value at least $(\epsilon pk - 2) \Pr[X=\ell/k]$. To complete the proof, we show that $$(\epsilon pk - 2) \Pr[X=\ell/k] ~\ge~ \exp({-9\epsilon^2 pk}).$$

The assumptions $\epsilon^2pk\ge 3$ and $\epsilon\le 1/2$ give $\epsilon pk \ge 6$, so the left-hand side above is at least $\frac{2}{3}\epsilon pk\, {k \choose \ell} p^\ell(1-p)^{k-\ell}$. Using Claim 1 to bound ${k \choose \ell}$, this is in turn at least $A\, B$ where $A = \frac{2}{3e}\epsilon p k/ \sqrt{2\pi \ell}$ and $B= \big(\frac{k}{\ell}\big)^\ell \big(\frac{k}{k-\ell}\big)^{k-\ell} p^\ell (1-p)^{k-\ell}$. To finish we show $A\ge \exp(-\epsilon^2pk)$ and $B \ge \exp(-8\epsilon^2 pk)$.

Claim 2. $A \ge \exp({-\epsilon^2 pk})$.

Proof of Claim 2. The assumptions $\epsilon^2 pk \ge 3$ and $\epsilon\le 1/2$ imply (i) $pk\ge 12$. By definition, $\ell \le pk + 1$. By (i), $1 \le pk/12$. Thus, (ii) $\ell \,\le\, 1.1 pk$. Substituting the right-hand side of (ii) for $\ell$ in $A$ gives (iii) $A \ge \frac{2}{3e} \epsilon \sqrt{p k / 2.2\pi}$.
The assumption $\epsilon^2 pk \ge 3$ implies $\epsilon\sqrt{ pk} \ge \sqrt 3$, which with (iii) gives (iv) $A \ge \frac{2}{3e}\sqrt{3/2.2\pi} \ge 0.1$. From $\epsilon^2pk \ge 3$ it follows that (v) $\exp(-\epsilon^2pk) \le \exp(-3) \le 0.04$. (iv) and (v) together give the claim. QED

Claim 3. $B\ge \exp({-8\epsilon^2 pk})$.

Proof of Claim 3. Fix $\delta$ such that $\ell=(1-\delta)pk$. The choice of $\ell$ implies $\delta\le 2\epsilon$, so the claim will hold as long as $B \ge \exp(-2\delta^2pk)$. Taking each side of this latter inequality to the power $-1/\ell$ and simplifying, it is equivalent to $$\frac{\ell}{p k} \Big(\frac{k-\ell}{(1-p) k}\Big)^{k/\ell-1} ~\le~ \exp\Big(\frac{2\delta^2 pk}{\ell}\Big).$$ Substituting $\ell= (1-\delta)pk$ and simplifying, it is equivalent to $$(1-\delta) \Big(1+\frac{\delta p}{1-p}\Big)^{\displaystyle \frac{1}{(1-\delta)p}-1} ~\le~ \exp\Big(\frac{2\delta^2}{1-\delta}\Big).$$ Taking the logarithm of both sides and using $\ln(1+z)\le z$ twice, it will hold as long as $$-\delta\, +\,\frac{\delta p}{1-p}\Big(\frac{1}{(1-\delta)p}-1\Big) ~\le~ \frac{2\delta^2}{1-\delta}.$$ The left-hand side above simplifies to $\delta^2/\,(1-p)(1-\delta)$, which is less than $2\delta^2/(1-\delta)$ because $p\le 1/2$. QED

Claims 2 and 3 imply $A B \ge \exp({-\epsilon^2pk})\exp({- 8\epsilon^2pk})$. This implies part (i) of the lemma.

Proof of Lemma 1 Part (ii). Without loss of generality assume each random variable is $1$ with probability exactly $p$. Note $\Pr[X\ge (1+\epsilon)p] = \sum_{i = \lceil(1+\epsilon)pk\rceil}^{k} \Pr[X=i/k]$. Fix $\hat\ell = \lceil (1+2\epsilon)pk \rceil - 1$. The first $\epsilon pk$ terms in the sum total at least $(\epsilon pk-2)\Pr[X=\hat\ell/k]$, which is at least $\exp({-9\epsilon^2 pk})$. (The proof of that is the same as for (i), except with $\ell$ replaced by $\hat\ell$ and $\delta$ replaced by $-\hat\delta$ such that $\hat\ell = (1+\hat\delta)pk$.)
QED

• Note: This result has appeared in print as Lemma 4 of doi.org/10.1137/12087222X . Jul 15 at 20:15

The Berry-Esseen theorem can give tail probability lower bounds, as long as they are higher than $n^{-1/2}$.

Another tool you can use is the Paley-Zygmund inequality. It implies that for any even integer $k$ and any real-valued random variable $X$, $$\Pr[|X| \ge \tfrac{1}{2}(\mathbb{E}[X^k])^{1/k}] \geq \frac{\mathbb{E}[X^k]^2}{4\mathbb{E}[X^{2k}]}.$$ Together with the multinomial theorem, for $X$ a sum of $n$ Rademacher random variables Paley-Zygmund can get you pretty strong lower bounds. It also works with bounded-independence random variables. For example you easily get that the sum of $n$ 4-wise independent $\pm 1$ random variables is $\Omega(\sqrt{n})$ with constant probability.

If you are indeed okay with bounding sums of Bernoulli trials (and not, say, bounded random variables), the following is pretty tight.

Slud's Inequality*. Let $\{X_i\}_{i=1}^n$ be i.i.d. draws from a Bernoulli r.v. with $\mathbb{E}(X_1) = p$, and let integer $k\leq n$ be given. If either (a) $p\leq 1/4$ and $np \leq k$, or (b) $np \leq k \leq n(1-p)$, then $$\Pr\big[\sum_i X_i \geq k\big] \geq 1 - \Phi\left(\frac{k-np}{\sqrt{np(1-p)}}\right),$$ where $\Phi$ is the cdf of a standard normal. (Treating the argument to $\Phi$ as transforming the standard normal, this agrees exactly with what the CLT tells you; in fact, it tells us that Binomials satisfying the conditions of the theorem will dominate their corresponding Gaussians on upper tails.)

From here, you can use bounds on $\Phi$ to get something nicer. For instance, in Feller's first book, in the section on Gaussians, it is shown for every $z>0$ that $$\frac{z}{1+z^2}\varphi(z) < 1-\Phi(z) < \frac{1}{z}\varphi(z),$$ where $\varphi$ is the density of a standard normal. There are similar bounds in the Wikipedia article for "Q-function" as well.
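The Gaussian tail bounds quoted from Feller are easy to check numerically. This sketch uses only the standard library, computing $1-\Phi(z)$ as $\tfrac12\,\mathrm{erfc}(z/\sqrt2)$ (the helper names are mine):

```python
import math

def phi(z):
    """Density of a standard normal."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def upper_tail(z):
    """1 - Phi(z), computed via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Verify the sandwich  z/(1+z^2) * phi(z) < 1 - Phi(z) < phi(z)/z  for z > 0.
for z in [0.5, 1.0, 2.0, 4.0]:
    lower = z / (1 + z * z) * phi(z)
    upper = phi(z) / z
    assert lower < upper_tail(z) < upper, (z, lower, upper_tail(z), upper)
```

Both bounds are within a factor $1+1/z^2$ of each other, so for large $z$ they pin down the tail quite sharply.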
Other than that, and what other people have said, you can also try using the Binomial directly, perhaps with some Stirling.

(*) Some newer statements of Slud's inequality leave out some of these conditions; I've reproduced the one in Slud's paper.

The de Moivre-Laplace theorem shows that variables like $|T\cap S_1|$, after being suitably normalised and under certain conditions, will converge in distribution to a normal distribution. That's enough if you want constant lower bounds. For lower bounds like $n^{-c}$, you need a slightly finer tool. Here's one reference I know of (but only by accident - I've never had the opportunity to use such an inequality myself). Some explicit lower bounds on tail probabilities of binomial distributions are given as Theorem 1.5 of the book Random Graphs by Béla Bollobás, Cambridge, 2nd edition, where further references are given to An Introduction to Probability and Its Applications by Feller and Foundations of Probability by Rényi.

The exponent in the standard Chernoff bound as it is stated on Wikipedia is tight for 0/1-valued random variables. Let $0<p<1$ and let $X_1,X_2,\ldots$ be a sequence of independent random variables such that for each $i$, $\Pr[X_i=1]=p$ and $\Pr[X_i=0]=1-p$. Then for every $\varepsilon>0$, $$\frac{2^{-D(p+\varepsilon\| p)\cdot n}}{n+1}\leq \Pr\left[ \sum_{i=1}^n X_i \geq (p+\varepsilon)n\right]\leq 2^{-D(p+\varepsilon\| p)\cdot n}.$$ Here, $D(x\| y)=x \log_2(x/y)+(1-x)\log_2((1-x)/(1-y))$ is the Kullback-Leibler divergence between Bernoulli random variables with parameters $x$ and $y$.

As mentioned, the upper bound in the inequality above is proved on Wikipedia (https://en.wikipedia.org/wiki/Chernoff_bound) under the name "Chernoff-Hoeffding Theorem, additive form". The lower bound can be proved using e.g. the "method of types"; see Lemma II.2 in [1]. This is also covered in the classic textbook on information theory by Cover and Thomas.

[1] Imre Csiszár: The Method of Types.
IEEE Transactions on Information Theory (1998). http://dx.doi.org/10.1109/18.720546

• It is also worth noting that $D(p+\delta p\|p)=\frac{p}{2-2p}\delta^2+O(\delta^3)$, and for the common case of $p=1/2$ it is $\frac{1}{2}\delta^2+O(\delta^4)$. This shows that when $\delta=O(n^{-1/3})$ the typical $e^{-C \delta^2}$ bound is sharp. (And when $\delta=O(n^{-1/4})$ for $p=1/2$.) Jul 26, 2017 at 19:21

The Generalized Littlewood-Offord theorem isn't exactly what you want, but it gives what I think of as a "reverse Chernoff" bound by showing that the sum of random variables is unlikely to fall within a small range around any particular value (including the expectation). Perhaps it will be useful.

Formally, the theorem is as follows. Generalized Littlewood-Offord Theorem: Let $a_1, \ldots, a_n$ and $s>0$ be real numbers such that $|a_i| \ge s$ for $1 \le i \le n$, and let $X_1, \ldots, X_n$ be independent random variables that take the values zero and one. For $0 < p \le \frac{1}{2}$, suppose that $p \le \Pr[X_i = 0] \le 1-p$ for all $1 \le i \le n$. Then, for any $r \in \mathbb{R}$, $$\Pr \left[ r \le \sum_{i=1}^{n}{a_iX_i} < r+s\right] \le \frac{c_p}{\sqrt{n}},$$ where $c_p$ is a constant depending only on $p$.

• It may be helpful to others to know that this type of result is also known as a "small ball inequality" and Nguyen and Vu have a terrific survey people.math.osu.edu/nguyen.1261/cikk/LO-survey.pdf. My perspective here slightly differs from yours. I think of a "reverse Chernoff" bound as giving a lower estimate of the probability mass of the small ball around 0. I think of a small ball inequality as qualitatively saying that the small ball probability is maximized by the ball at 0. In this sense reverse Chernoff bounds are usually easier to prove than small ball inequalities. Jan 15, 2016 at 18:34
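As a sanity check, the two-sided additive-form bounds quoted above can be compared with the exact binomial tail. The sketch below uses only the standard library (`math.comb` needs Python ≥ 3.8); the function names are mine, and the parameters are chosen so that $(p+\varepsilon)n$ is an integer:

```python
import math

def kl(x, y):
    """Kullback-Leibler divergence (in bits) between Bernoulli(x) and Bernoulli(y)."""
    return x * math.log2(x / y) + (1 - x) * math.log2((1 - x) / (1 - y))

def binom_upper_tail(n, p, k):
    """Exact Pr[Bin(n, p) >= k]."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Check  2^{-nD}/(n+1) <= Pr[sum X_i >= (p+eps)n] <= 2^{-nD}.
for n, p, eps in [(50, 0.3, 0.1), (100, 0.5, 0.05), (200, 0.2, 0.1)]:
    q = p + eps
    tail = binom_upper_tail(n, p, round(q * n))
    upper = 2.0 ** (-n * kl(q, p))
    assert upper / (n + 1) <= tail <= upper, (n, p, eps, tail, upper)
```

The $1/(n+1)$ slack on the lower side is the usual method-of-types polynomial factor; in the examples above the exact tail sits comfortably between the two bounds.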
https://abhishekparab.wordpress.com/2010/01/19/562/
Result: Suppose ${v: \mathbb{Q}^* \rightarrow \mathbb Z}$ is a discrete valuation satisfying the following properties:

• ${v}$ is surjective.

• ${v(ab) = v(a) + v(b) \quad \text{for all} \quad a,b\in \mathbb Q^*}$

• ${v(a+b) \geq \min\{ v(a), v(b) \} \quad \text{provided} \quad a+b \neq 0}$

Then $v=v_p$ for some prime $p$, given by $v_p\displaystyle\left(p^r \displaystyle\frac{a}{b} \right) = r$ for $(a,p)=(b,p)=1$.

Proof: It is a fact (cf. Dummit & Foote Ex 39 Sec. 7.4) that ${R = \{ x \in \mathbb{Q}^* : v(x) \geq 0 \}}$ is a local ring with a unique maximal ideal ${\mathfrak m}$ consisting of the elements of positive valuation. (Recollect that an element of ${R}$ is a unit iff its valuation is zero.) Now, ${v(1)=0 \Rightarrow 1 \in R \Rightarrow \mathbb Z \subseteq R}$.

Claim: ${\mathbb Z \cap \mathfrak m = (p)}$ for some prime $p$. First, ${v}$ being surjective, ${\mathbb Z \cap \mathfrak m \neq (0)}$: otherwise, each nonzero integer would have valuation zero and so would every nonzero rational. If ${\mathbb Z \cap \mathfrak m = (n)}$ and we factorize ${n=ab}$ with ${a,b}$ integers, then ${ab\in \mathfrak m \Rightarrow a\in \mathfrak m }$ or ${b \in \mathfrak m}$, since $\mathfrak m$ is maximal, hence prime. Thus ${\mathbb Z \cap \mathfrak m}$ is a nonzero prime ideal of $\mathbb Z$, so it is generated by a prime $p$, and the claim is justified.

Now given ${\displaystyle\frac{a}{b}}$, write ${\displaystyle\frac{a}{b}=p^r \frac{a'}{b'}}$ with ${(p,a')=(p,b')=1}$. I claim that ${a'}$ and ${b'}$ are units. Since ${(p,a')=1}$, we can write ${px+a'y=1}$. If ${a'}$ is not a unit then ${a'\in \mathfrak m}$ and thus ${1\in \mathfrak m}$, a contradiction. A similar argument shows that ${b'}$ is also a unit. Hence, $\displaystyle v\displaystyle\left(\frac{a}{b}\right)=rv(p)+v(a')-v(b')=rv(p)$

Since ${v}$ is surjective, there exists ${\displaystyle\frac{a}{b}=p^r \frac{a'}{b'}}$ such that ${v\displaystyle\left(\displaystyle\frac{a}{b}\right)=rv(p)=1}$. This leaves two possibilities, namely ${v(p)=\pm 1}$. But ${v(p)=-1}$ gives an easy contradiction: ${-1=v(p)\geq \min\{v(1),v(p-1)\} = 0}$.
Thus the only possibility is that ${v(p)=1}$ and thus, ${v=v_p}$. $\blacksquare$
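As a computational sanity check, the valuation $v_p$ characterized above can be implemented in a few lines. This is a sketch using only the standard library; the function name `v_p` simply mirrors the notation:

```python
from fractions import Fraction

def v_p(x, p):
    """p-adic valuation: v_p(p^r * a/b) = r, where p divides neither a nor b."""
    x = Fraction(x)
    assert x != 0, "valuation is only defined on nonzero rationals"
    r, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        r += 1
    while den % p == 0:
        den //= p
        r -= 1
    return r

p = 3
a, b = Fraction(18, 5), Fraction(5, 27)              # v_3 = 2 and -3 respectively
assert v_p(a, p) == 2 and v_p(b, p) == -3
assert v_p(a * b, p) == v_p(a, p) + v_p(b, p)        # v(ab) = v(a) + v(b)
assert v_p(a + b, p) >= min(v_p(a, p), v_p(b, p))    # v(a+b) >= min(v(a), v(b))
```

The two asserted identities are exactly the multiplicativity and ultrametric properties assumed in the Result.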
https://www.cliffsnotes.com/study-guides/algebra/linear-algebra/real-euclidean-vector-spaces/projection-onto-a-subspace
## Projection onto a Subspace

Figure 1

Let S be a nontrivial subspace of a vector space V and assume that v is a vector in V that does not lie in S. Then the vector v can be uniquely written as a sum, v = v_S + v_⊥S, where v_S is parallel to S and v_⊥S is orthogonal to S; see Figure 1. The vector v_S, which actually lies in S, is called the projection of v onto S, also denoted proj_S v. If v_1, v_2, …, v_r form an orthogonal basis for S, then the projection of v onto S is the sum of the projections of v onto the individual basis vectors, a fact that depends critically on the basis vectors being orthogonal:

proj_S v = [(v · v_1)/(v_1 · v_1)] v_1 + [(v · v_2)/(v_2 · v_2)] v_2 + … + [(v · v_r)/(v_r · v_r)] v_r   (*)

Figure 2 shows geometrically why this formula is true in the case of a 2‐dimensional subspace S in R^3.

Figure 2

Example 1: Let S be the 2‐dimensional subspace of R^3 spanned by the orthogonal vectors v_1 = (1, 2, 1) and v_2 = (1, −1, 1). Write the vector v = (−2, 2, 2) as the sum of a vector in S and a vector orthogonal to S.

From (*), the projection of v onto S is the vector

proj_S v = [(v · v_1)/(v_1 · v_1)] v_1 + [(v · v_2)/(v_2 · v_2)] v_2 = (4/6)(1, 2, 1) + (−2/3)(1, −1, 1) = (0, 2, 0)

Therefore, v = v_S + v_⊥S, where v_S = (0, 2, 0) and v_⊥S = v − v_S = (−2, 0, 2).

That v_⊥S = (−2, 0, 2) truly is orthogonal to S is proved by noting that it is orthogonal to both v_1 and v_2:

(−2)(1) + (0)(2) + (2)(1) = 0 and (−2)(1) + (0)(−1) + (2)(1) = 0

In summary, then, the unique representation of the vector v as the sum of a vector in S and a vector orthogonal to S reads as follows:

(−2, 2, 2) = (0, 2, 0) + (−2, 0, 2)

See Figure 3.

Figure 3

Example 2: Let S be a subspace of a Euclidean vector space V. The collection of all vectors in V that are orthogonal to every vector in S is called the orthogonal complement of S:

S^⊥ = { v in V : v · s = 0 for every s in S }

(S^⊥ is read "S perp.") Show that S^⊥ is also a subspace of V.

Proof. First, note that S^⊥ is nonempty, since 0 is in S^⊥. In order to prove that S^⊥ is a subspace, closure under vector addition and scalar multiplication must be established. Let v_1 and v_2 be vectors in S^⊥; since v_1 · s = v_2 · s = 0 for every vector s in S,

(v_1 + v_2) · s = v_1 · s + v_2 · s = 0 + 0 = 0 for every s in S

proving that v_1 + v_2 is in S^⊥. Therefore, S^⊥ is closed under vector addition.
Finally, if k is a scalar, then for any v in S^⊥, (k v) · s = k( v · s) = k(0) = 0 for every vector s in S, which shows that k v is in S^⊥, so S^⊥ is also closed under scalar multiplication. This completes the proof.

Example 3: Find the orthogonal complement of the x−y plane in R^3.

At first glance, it might seem that the x−z plane is the orthogonal complement of the x−y plane, just as a wall is perpendicular to the floor. However, not every vector in the x−z plane is orthogonal to every vector in the x−y plane: for example, the vector v = (1, 0, 1) in the x−z plane is not orthogonal to the vector w = (1, 1, 0) in the x−y plane, since v · w = 1 ≠ 0. See Figure 4. The vectors that are orthogonal to every vector in the x−y plane are only those along the z axis; this is the orthogonal complement in R^3 of the x−y plane. In fact, it can be shown that if S is a k‐dimensional subspace of R^n, then dim S^⊥ = n − k; thus, dim S + dim S^⊥ = n, the dimension of the entire space. Since the x−y plane is a 2‐dimensional subspace of R^3, its orthogonal complement in R^3 must have dimension 3 − 2 = 1. This result would remove the x−z plane, which is 2‐dimensional, from consideration as the orthogonal complement of the x−y plane.

Figure 4

Example 4: Let P be the subspace of R^3 specified by the equation 2x + y − 2z = 0. Find the distance between P and the point q = (3, 2, 1).

The subspace P is clearly a plane in R^3, and q is a point that does not lie in P. From Figure 5, it is clear that the distance from q to P is the length of the component of q orthogonal to P.

Figure 5

One way to find the orthogonal component q_⊥P is to find an orthogonal basis for P, use these vectors to project the vector q onto P, and then form the difference q − proj_P q to obtain q_⊥P. A simpler method here is to project q onto a vector that is known to be orthogonal to P. Since the coefficients of x, y, and z in the equation of the plane provide the components of a normal vector to P, n = (2, 1, −2) is orthogonal to P.
Now, since the projection of q onto n is

proj_n q = [(q · n)/(n · n)] n = (6/9)(2, 1, −2) = (4/3, 2/3, −4/3)

and the length of this vector is (2/3)·3 = 2, the distance between P and the point q is 2.

The Gram‐Schmidt orthogonalization algorithm. The advantage of an orthonormal basis is clear. The components of a vector relative to an orthonormal basis are very easy to determine: a simple dot product calculation is all that is required. The question is, how do you obtain such a basis? In particular, if B is a basis for a vector space V, how can you transform B into an orthonormal basis for V? The process of projecting a vector v onto a subspace S—then forming the difference v − proj_S v to obtain a vector, v_⊥S, orthogonal to S—is the key to the algorithm.

Example 5: Transform the basis B = { v_1 = (4, 2), v_2 = (1, 2) } for R^2 into an orthonormal one.

The first step is to keep v_1; it will be normalized later. The second step is to project v_2 onto the subspace spanned by v_1 and then form the difference v_2 − proj_v1 v_2 = v_⊥1. Since

proj_v1 v_2 = [(v_2 · v_1)/(v_1 · v_1)] v_1 = (8/20)(4, 2) = (8/5, 4/5)

the vector component of v_2 orthogonal to v_1 is

v_⊥1 = (1, 2) − (8/5, 4/5) = (−3/5, 6/5)

as illustrated in Figure 6.

Figure 6

The vectors v_1 and v_⊥1 are now normalized:

v_1/||v_1|| = (4, 2)/√20 = (2/√5, 1/√5) and v_⊥1/||v_⊥1|| = (−3/5, 6/5)/(3/√5) = (−1/√5, 2/√5)

Thus, the basis B = { v_1 = (4, 2), v_2 = (1, 2) } is transformed into the orthonormal basis

B′ = { (2/√5, 1/√5), (−1/√5, 2/√5) }

shown in Figure 7.

Figure 7

The preceding example illustrates the Gram‐Schmidt orthogonalization algorithm for a basis B consisting of two vectors. It is important to understand that this process not only produces an orthogonal basis B′ for the space, but also preserves the subspaces. That is, the subspace spanned by the first vector in B′ is the same as the subspace spanned by the first vector in B, and the subspace spanned by the two vectors in B′ is the same as the subspace spanned by the two vectors in B. In general, the Gram‐Schmidt orthogonalization algorithm, which transforms a basis, B = { v_1, v_2, …, v_r }, for a vector space V into an orthogonal basis, B′ = { w_1, w_2, …, w_r }, for V—while preserving the subspaces along the way—proceeds as follows:

Step 1. Set w_1 equal to v_1.

Step 2.
Project v_2 onto S_1, the space spanned by w_1; then form the difference v_2 − proj_S1 v_2. This is w_2.

Step 3. Project v_3 onto S_2, the space spanned by w_1 and w_2; then form the difference v_3 − proj_S2 v_3. This is w_3.

Step i. Project v_i onto S_{i−1}, the space spanned by w_1, …, w_{i−1}; then form the difference v_i − proj_{S_{i−1}} v_i. This is w_i.

This process continues until Step r, when w_r is formed, and the orthogonal basis is complete. If an orthonormal basis is desired, normalize each of the vectors w_i.

Example 6: Let H be the 3‐dimensional subspace of R^4 with basis { v_1, v_2, v_3 } (the basis vectors are given in the original as a displayed image). Find an orthogonal basis for H and then—by normalizing these vectors—an orthonormal basis for H. What are the components of the vector x = (1, 1, −1, 1) relative to this orthonormal basis? What happens if you attempt to find the components of the vector y = (1, 1, 1, 1) relative to the orthonormal basis?

The first step is to set w_1 equal to v_1. The second step is to project v_2 onto the subspace spanned by w_1 and then form the difference w_2 = v_2 − proj_w1 v_2, the vector component of v_2 orthogonal to w_1. Now, for the last step: project v_3 onto the subspace S_2 spanned by w_1 and w_2 (which is the same as the subspace spanned by v_1 and v_2) and form the difference v_3 − proj_S2 v_3 to give the vector, w_3, orthogonal to this subspace. Since { w_1, w_2 } is an orthogonal basis for S_2, the projection of v_3 onto S_2 is

proj_S2 v_3 = [(v_3 · w_1)/(w_1 · w_1)] w_1 + [(v_3 · w_2)/(w_2 · w_2)] w_2

(The numerical computations appear in the original as displayed images.) Therefore, the Gram‐Schmidt process produces from B an orthogonal basis { w_1, w_2, w_3 } for H. You may verify that these vectors are indeed orthogonal by checking that w_1 · w_2 = w_1 · w_3 = w_2 · w_3 = 0 and that the subspaces are preserved along the way. An orthonormal basis for H is obtained by normalizing the vectors w_1, w_2, and w_3. Relative to the orthonormal basis B′′ = { ŵ_1, ŵ_2, ŵ_3 }, the vector x = (1, 1, −1, 1) has components x · ŵ_1, x · ŵ_2, and x · ŵ_3. These calculations imply that

x = (x · ŵ_1) ŵ_1 + (x · ŵ_2) ŵ_2 + (x · ŵ_3) ŵ_3,

a result that is easily verified.
If the components of y = (1, 1, 1, 1) relative to this basis are desired, you might proceed exactly as above, computing the dot products y · ŵ_1, y · ŵ_2, and y · ŵ_3. These calculations seem to imply that

y = (y · ŵ_1) ŵ_1 + (y · ŵ_2) ŵ_2 + (y · ŵ_3) ŵ_3.

The problem, however, is that this equation is not true, as a direct calculation shows. What went wrong? The problem is that the vector y is not in H, so no linear combination of the vectors in any basis for H can give y. The linear combination

(y · ŵ_1) ŵ_1 + (y · ŵ_2) ŵ_2 + (y · ŵ_3) ŵ_3

gives only the projection of y onto H.

Example 7: If the rows of a matrix form an orthonormal basis for R^n, then the matrix is said to be orthogonal. (The term orthonormal would have been better, but the terminology is now too well established.) If A is an orthogonal matrix, show that A^−1 = A^T.

Let B = { v̂_1, v̂_2, …, v̂_n } be an orthonormal basis for R^n and consider the matrix A whose rows are these basis vectors. The matrix A^T then has these basis vectors as its columns. Since the vectors v̂_1, v̂_2, …, v̂_n are orthonormal,

v̂_i · v̂_j = 1 if i = j and 0 if i ≠ j

Now, because the (i, j) entry of the product AA^T is the dot product of row i in A and column j in A^T,

(AA^T)_ij = v̂_i · v̂_j

so AA^T = I. Thus, A^−1 = A^T. [In fact, the statement A^−1 = A^T is sometimes taken as the definition of an orthogonal matrix (from which it is then shown that the rows of A form an orthonormal basis for R^n).]

An additional fact now follows easily. Assume that A is orthogonal, so A^−1 = A^T. Taking the inverse of both sides of this equation gives

A = (A^T)^−1, that is, (A^T)^−1 = (A^T)^T,

which implies that A^T is orthogonal (because its transpose equals its inverse). The conclusion means that if the rows of a matrix form an orthonormal basis for R^n, then so do the columns.
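The Gram‐Schmidt steps described above translate directly into code. The following sketch (plain Python, no external libraries, helper names mine) orthogonalizes a basis and reproduces the numbers of Example 5:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    """Modified Gram-Schmidt: return an orthogonal basis spanning the same
    nested subspaces S_1, S_2, ... as the input basis."""
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            # subtract the projection of the current residual onto u
            coeff = dot(w, u) / dot(u, u)
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

# Example 5: B = { v1 = (4, 2), v2 = (1, 2) } in R^2
w1, w2 = gram_schmidt([[4, 2], [1, 2]])
assert w1 == [4, 2]                                        # first vector is kept
assert abs(dot(w1, w2)) < 1e-12                            # w2 orthogonal to w1
assert abs(w2[0] + 3 / 5) < 1e-12 and abs(w2[1] - 6 / 5) < 1e-12
```

This is the "modified" variant (it projects the running residual `w` rather than the original `v`), which is numerically more stable; when the intermediate vectors are exactly orthogonal, the two variants agree.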
https://accesspharmacy.mhmedical.com/content.aspx?bookid=2147&sectionid=161351266
## ORGANIZATION OF CLASS

This chapter considers the drugs that mimic the effects of adrenergic nerve stimulation (or stimulation of the adrenal medulla). In other words, these compounds mimic the effects of norepinephrine or epinephrine. These drugs are sometimes referred to as adrenomimetics or sympathomimetics. Remember that the actions of the sympathetic nervous system are mediated through α and β receptors. Remember that:

α1 = most vascular smooth muscle; agonists contract

β1 = heart; agonists increase rate

β2 = respiratory and uterine smooth muscle; agonists relax

There are other effects of sympathetic stimulation, but the three listed in the preceding box are the most important.

The adrenergic agonists are often divided into direct- and indirect-acting agonists. This is a useful distinction for a number of reasons. The indirect-acting drugs do not bind to specific receptors, but act by releasing stored norepinephrine. This means that their actions are nonspecific. The direct-acting drugs bind to the receptors, so specificity of action is a possibility.

The drugs are also sometimes divided into catecholamines and noncatecholamines. This is yet another division based on structure (and our focus here is not on structures). However, this distinction is useful for one concept. Do you remember from Chapter 6 that norepinephrine is metabolized by catechol-O-methyltransferase (COMT) and monoamine oxidase (MAO)? Well, the other catecholamines are also metabolized by these enzymes; however, the noncatecholamines are not.

## DIRECT-ACTING AGONISTS

The focus here is to learn the specificity of the drugs for their receptor targets. If you know the effect of stimulation of the target receptors, then you can deduce the drug actions and adverse effects. Only EPINEPHRINE and NOREPINEPHRINE activate both α and β receptors. Although this is an oversimplification, it provides a useful starting point.
The rest of the direct-acting drugs act on either α or β receptors (Figure 9–1). Epinephrine has approximately equal effects at α and β receptors. In addition, it has approximately equal effects at β1 and β2 receptors.

###### FIGURE 9–1

A classification of adrenergic agonists is presented. Affinity for the α receptors is shown at the top of the diagram and affinity for the β receptors at the bottom. Epinephrine and norepinephrine have affinity for both α and β receptors and are, therefore, placed in the middle.

Epinephrine has a number of uses, including the treatment of allergic reactions and shock, the control of localized bleeding, and the prolongation of the action of local anesthetics. NOREPINEPHRINE has a relatively low affinity for β2 receptors. Norepinephrine activates both α and β receptors, but activates β1 receptors more than ...
https://www.physicsforums.com/threads/thermal-expansion-of-a-shell.203239/
# Thermal expansion of a shell 1. Dec 8, 2007 ### miss photon [SOLVED] thermal expansion hi everybody my question is: a spherical shell is heated. the volume changes according to the equation V(T)=V(0)(1+yT) where y=volume coeff. of thermal expansion. does this volume refer to the volume enclosed by the shell or the volume of the material making up the shell? 2. Dec 8, 2007 ### pixel01 By the shell itself. A spherical shell and a solid one (same dia. and material) should be expanding to the same size. 3. Dec 8, 2007 ### Staff: Mentor It doesn't matter. All volumes expand by the same fraction, whether you take the volume of the shell material or the volume enclosed by the shell. When the material expands, so does the volume it encloses. 4. Dec 9, 2007 ### miss photon let me put it in another way. if the shell of radius R has a spherical cavity of radius r, what will be the change in the two radii on heating? will the increase in both be 'aT' where a=linear coeff of thermal expansion? 5. Dec 9, 2007 ### Staff: Mentor Yes. Assuming the material is isotropic, all linear dimensions expand by the same fraction.
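The point made in the thread — that the cavity and the material expand by the same fraction — can be illustrated with a quick numerical check. In the sketch below the material constant is an assumed example value (roughly that of aluminium, not from the thread); it verifies that the volume coefficient y is about 3a and that the cavity and outer radii grow by the same factor:

```python
# Assumed example values (not from the thread): aluminium-like coefficient.
a = 23e-6            # linear expansion coefficient, 1/K
dT = 100.0           # temperature rise, K
R, r = 0.10, 0.08    # outer radius and cavity radius, m

scale = 1 + a * dT   # every linear dimension is multiplied by this factor
R_new, r_new = R * scale, r * scale

# Volume coefficient y: (1 + a*dT)^3 - 1 is about 3*a*dT, i.e. y = 3a to first order.
y_exact = scale**3 - 1
assert abs(y_exact - 3 * a * dT) / (3 * a * dT) < 1e-2

# Cavity radius and outer radius increase by the same fraction:
assert abs(R_new / R - r_new / r) < 1e-12
```

For an isotropic material this is just the statement that heating rescales every length by the same factor, enclosed volumes included.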
https://onepetro.org/SPEIOGCEC/proceedings-abstract/06IOGCEC/All-06IOGCEC/SPE-100735-MS/141440
This paper presents a comprehensive set of experimental data for the membrane efficiency of four shales when interacting with different water-based and oil-based muds. Pressure transmission tests were used to measure the membrane efficiency using three different cations and two different anions at different concentrations (water activities). It was found that the measured membrane efficiencies of shales when exposed to salt solutions were low, ranging from 0.18% to 4.23%. Useful correlations are presented between the membrane efficiency and other shale properties. Results suggest that the membrane efficiency of shales is directly proportional to the ratio of the cation exchange capacity to the permeability of the shale. Higher cation exchange capacities and lower permeabilities correlate very well with higher membrane efficiencies. Moreover, the ratio of the hydrated solute (ion) size to the shale pore-throat size determines a shale's ability to restrict solutes from entering the pore space and controls its membrane efficiency. Cations and anions with large hydrated radii yielded higher membrane efficiencies, compared to ions with small hydrated diameters. Thus, the formulation of drilling fluids must take into account the types of cation and anion in the water-based fluid. It was also found that the membrane efficiency of oil-based muds was high; however, these membrane efficiencies were not 100% as postulated by many researchers.
http://www.talks.cam.ac.uk/talk/index/20766
# Busy Periods in Fluid Queues with Multiple Emptying Input States

A semi-numerical method is derived to compute the Laplace transform of the equilibrium busy period probability density function in a fluid queue with constant output rate when the buffer is non-empty. The input process is controlled by a continuous time semi-Markov chain (CTSMC) with $n$ states such that in each state the input rate is constant. The holding time in states with net positive output rate (so-called *emptying states*) is assumed to be an exponentially distributed random variable, whereas in states with net positive input rate (*filling states*) it may have an arbitrary probability distribution. The result is demonstrated by applying it to various systems, including fluid queues with two on-off input sources. The latter exercise in part shows consistency with prior results but also solves the problem in the case where there are two emptying states. Numerical results are presented for selected examples which expose discontinuities in the busy period distribution when the number of emptying states changes, e.g. as a result of increasing the fluid arrival rate in one or more states of the controlling CTSMC.

This talk is part of the Optimization and Incentives Seminar series.
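The busy-period quantity analyzed in the talk can be illustrated with a crude Monte Carlo sketch. This is our own illustration, not the talk's semi-numerical Laplace-transform method: a single on-off source (exponential holding times in both states) feeds a buffer drained at constant rate c, and a busy period runs from the moment the buffer leaves zero until it next empties. All parameter values are assumptions chosen for stability.

```python
import random

def simulate_busy_periods(lam_on=2.0, lam_off=1.0, r_in=2.0, c=1.0,
                          n_cycles=2000, seed=0):
    """Crude sketch of busy periods in a fluid queue with one on-off source.
    While 'on' the buffer fills at rate r_in - c > 0 (a filling state);
    while 'off' it drains at rate c (an emptying state)."""
    rng = random.Random(seed)
    periods = []
    for _ in range(n_cycles):
        level, busy = 0.0, 0.0
        # a first on-phase opens the busy period
        t_on = rng.expovariate(lam_on)
        level += (r_in - c) * t_on
        busy += t_on
        # alternate off/on phases until the buffer empties
        while level > 0:
            t_off = rng.expovariate(lam_off)
            drained = min(level, c * t_off)
            busy += drained / c          # time spent draining this phase
            level -= drained
            if level > 0:                # off-phase ended before emptying
                t_on = rng.expovariate(lam_on)
                level += (r_in - c) * t_on
                busy += t_on
        periods.append(busy)
    return periods
```

With these rates the mean inflow per on-phase (0.5) is below the drain capacity per off-phase (1.0), so every busy period terminates with probability one; the empirical distribution of `periods` is the object whose Laplace transform the talk computes analytically.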
https://preprint.impa.br/visualizar?id=1488
Preprint A405/2007

# Affine Skeletons and Monge-Ampère Equations

Moacyr Alvim Silva | Ralph Teixeira | Luiz Velho

Keywords: affine distance | medial axis | skeleton | affine geometry | Monge-Ampère equation | differential propagation

An important question about affine skeletons is the existence of differential equations that are related to the "affine distance" and "area distance" (and hence to the affine skeletons) in the way the Eikonal equation is related to the "euclidean distance" (and the medial axis). We show that some nonlinear second-order PDEs of Monge-Ampère type are, in fact, related to the affine skeletons. We also discuss some consequences and ideas that the PDE formulation suggests.
https://gl.kwarc.info/oaf/alignment-finder/-/blame/6aa200e5fa30e4dc8e241fe91433b104ebf6caac/tex/usecase.tex
usecase.tex

The Viewfinder algorithm is implemented in the MMT system and exposed within the jEdit IDE, allowing us to realize the use case stated in the introduction. A screenshot of Jane's theory of beautiful sets is given in Figure \ref{fig:use:source}; it is based on the (basic higher-order logic) foundation of the Math-in-the-Middle\ednote{cite} library developed natively in MMT.

\begin{figure}[ht]\centering
  \fbox{\includegraphics[width=0.6\textwidth]{beautysource}}
  \fbox{\includegraphics[width=\textwidth]{results}}
  \caption{A Theory of ``Beautiful Sets'' in MMT Surface Syntax and Results of the Viewfinder}\label{fig:use:source}
\end{figure}

Right-clicking anywhere within the theory allows Jane to select \cn{MMT} $\to$ \cn{Find\ Views\ to...} $\to$ \cn{MitM/smglom} (the main Math-in-the-Middle library), telling her (within less than one second) that two views have been found, the most promising of which points to the theory \cn{matroid\_theory} (see Figure \ref{fig:use:target}) in the library.

\begin{figure}[ht]\centering
  \fbox{\includegraphics[width=0.6\textwidth]{matroids}}
  \caption{The Theory of Matroids in the MitM Library}\label{fig:use:target}
\end{figure}

\begin{newpart}{DM: Moved}
\section{Across-Library Viewfinding}\label{sec:across}
We have so far assumed one fixed meta-theory for all theories involved; we will now discuss the situation when looking for views between theories in different libraries (and built on different foundations). Obviously, various differences in available foundational primitives and library-specific best practices and idiosyncrasies can prevent the algorithm from finding desired matches.

There are two approaches to increasing the number of results in these cases:
\begin{itemize}
\item In many instances, the translation between two foundations is too complex to be discovered purely syntactically. In these cases we can provide arbitrary translations between theories, which are applied before computing the encoding.\ednote{Mention/cite alignment-translation paper}
\item We can do additional transformations before preprocessing theories, such as normalizing expressions, eliminating higher-order abstract syntax encodings or encoding-related redundant information (such as the type of a typed equality, which in the presence of subtyping can be different from the types of both sides of an equation), or elaborating abbreviations/definitions.
\end{itemize}

When elaborating definitions, it is important to consider that this may also reduce the number of results, if both theories use similar abbreviations for complex terms, or the same concept is declared axiomatically in one theory but definitionally in the other. For that reason, we can allow \textbf{several abstract syntax trees for the same constant}, such as one with definitions expanded and one ``as is''.

Similarly, certain idiosyncrasies -- such as PVS's common usage of theory parameters -- call for not just matching symbol references, but also variables or possibly even complex expressions. To handle these situations, we additionally allow for \textbf{holes} in the constant lists of an abstract syntax tree, which may be unified with any other symbol or hole, but are not recursed into. The subterms that are to be considered holes can be marked as such during preprocessing.

\subsection{Normalization}\label{sec:preproc}
The common logical framework used for all the libraries at our disposal -- namely LF and extensions thereof -- makes it easy to systematically normalize theories built on various logical foundations. We currently use the following approaches to preprocessing theories:
\begin{itemize}
\item Free variables in a term, often occurrences of theory parameters as e.g. used extensively in the PVS system, are replaced by holes.
\item For foundations that use product types, we curry function types $(A_1 \times\ldots\times A_n)\to B$ to $A_1 \to \ldots \to A_n\to B$. We treat lambda-expressions and applications accordingly.
\item Higher-order abstract syntax encodings are eliminated by raising atomic types, function types, applications and lambdas to the level of the logical framework. This eliminates (redundant) implicit arguments that only occur due to their formalization in the logical framework. This has the advantage that possible differences between the types of the relevant subterms and implicit type arguments (e.g. in the presence of subtyping) do not negatively affect viewfinding.
\item We use the Curry-Howard correspondence to transform axioms and theorems of the form $\vdash (P\Rightarrow Q)$ to function types $\vdash P \to \vdash Q$. Analogously, we transform judgments of the form $\vdash \forall x : A.\;P$ to $\prod_{x:A}\vdash P$.
\item For classical logics, we afterwards rewrite all logical connectives using their usual definitions in terms of negation and conjunction only. Double negations are eliminated.
\item Typed equalities are transformed to untyped ones, again getting rid of the redundant type argument of the equality.
\item The arguments of conjunctions and equalities are reordered (currently only by their number of subterms).
\end{itemize}

\subsection{Implementation}\label{sec:pvs}
Using the above normalization methods, we can, by way of example, write down a theory for a commutative binary operator using the Math-in-the-Middle foundation, while targeting e.g. the PVS Prelude library -- allowing us to find all commutative operators, as in Figure \ref{fig:use:pvs}.

\begin{figure}[ht]\centering
  \fbox{\includegraphics[width=\textwidth]{pvs}}
  \caption{Searching for Commutative Operators in PVS}\label{fig:use:pvs}
\end{figure}
\ednote{8 results for NASA, but NASA doesn't work in jEdit because of limited memory}

This example also hints at a way to iteratively improve the results of the viewfinder: since we can find properties like commutativity and associativity, we can use the results to in turn inform a better normalization of the theory by exploiting these properties. This in turn would potentially allow for finding more views.
\end{newpart}
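One of the normalization steps listed above, currying product-typed functions $(A_1 \times\ldots\times A_n)\to B$ into $A_1 \to \ldots \to A_n\to B$, can be sketched on a toy type representation. This is an illustrative re-implementation of the idea, not the MMT code; the tuple-based AST is our own assumption.

```python
# Toy AST for types: a bare string is an atomic type, ("prod", [t1, ..., tn])
# is a product type, and ("fun", dom, cod) is a function type.
# curry() rewrites any function type whose domain is a product into a chain
# of single-argument function types, recursing into all sub-types.

def curry(t):
    if isinstance(t, str):
        return t
    tag = t[0]
    if tag == "fun":
        _, dom, cod = t
        dom, cod = curry(dom), curry(cod)
        if isinstance(dom, tuple) and dom[0] == "prod":
            # (A1 x ... x An) -> B  becomes  A1 -> ... -> An -> B
            for a in reversed(dom[1]):
                cod = ("fun", a, cod)
            return cod
        return ("fun", dom, cod)
    if tag == "prod":
        return ("prod", [curry(x) for x in t[1]])
    return t

# Example: (A x B) -> C  normalizes to  A -> (B -> C)
t = ("fun", ("prod", ["A", "B"]), "C")
```

The same folding trick applies to lambda-expressions and applications, which the text says are treated accordingly.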
https://www.maa.org/press/periodicals/convergence/thomas-simpson-and-maxima-and-minima
# Thomas Simpson and Maxima and Minima

Author(s): Michel Helfgott

Thomas Simpson (1710-1761) was a self-taught English mathematician who started his working life as a weaver, his father's trade. Quite early he showed a keen interest in mathematics and later in life became an accomplished writer of textbooks on algebra, geometry, the calculus, and other mathematical subjects. His life was quite remarkable, from being a weaver to becoming a fellow of the Royal Society in 1745 (Clarke 1929). Nowadays, Simpson is best remembered for the numerical integration technique that bears his name. Simpson's most widely known book appeared in print in 1750 under the title The Doctrine and Application of Fluxions. The fact that it was reprinted as late as 1823 (Simpson 1823) attests to its wide popularity. By modern standards it is an unusual work in the sense that applications of the calculus appear rather early and pervade all of the book. After a first section on the nature of fluxions, and how to calculate with them, Simpson discusses with great care a collection of twenty-two examples about maxima and minima. Fifteen of these examples are of a geometrical nature, three are applications to kinematics, and only four are strictly mathematical. We will discuss in detail eight of them, none commonly found in contemporary Calculus textbooks, replacing the word "fluxion" with "derivative" whenever the former appears in Simpson's book. As expected, a certain amount of editing has been necessary, but we have kept the core of Simpson's approach and explanations. We share the belief that it is a fruitful endeavor to engage students in the solution of mathematical problems from the past (Swetz 1995). It is to be noted that we will use the concept of function, an idea that took almost two hundred years to mature since Leibniz and Johann Bernoulli introduced it at the end of the seventeenth century.
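As an aside, the numerical integration technique mentioned above that bears Simpson's name can be sketched in a few lines. This is the standard textbook formulation of the composite rule, not something taken from Doctrine itself.

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n subintervals (n even).
    Interior weights follow the 1, 4, 2, 4, ..., 4, 1 pattern."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Simpson's rule is exact for cubics: the integral of x**3 on [0, 1] is 1/4,
# recovered here with a single pair of subintervals.
approx = simpson(lambda x: x**3, 0.0, 1.0, n=2)
```

The rule integrates any polynomial of degree up to three exactly, which is why even `n=2` suffices in the example.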
Euler, the greatest eighteenth century mathematician, used the symbol $f(x)$ starting in 1734 (Siu 1995). The notation or definition of function is nowhere to be found in Doctrine although one might surmise that it is implicitly employed, one way or another, in the work. Furthermore, Simpson does not apply the second derivative test for extrema; it is not even stated in his work. Neither does he use the first derivative test except when discussing example XXII, as we will see later on. Despite this fact, no errors are to be found throughout the section on maxima and minima; the very nature of the problems, mostly applications to geometry and kinematics, helped Simpson avoid any pitfalls. For him it was enough to take the first derivative of the pertinent expression and then find the critical point. Of course, we can check, through the first or second derivative test, that things work well in all the examples of Simpson’s that we will discuss. After these preliminary considerations, let us discuss in detail some of the examples from section II of Doctrine. We will state them almost verbatim, then we will provide a solution patterned on Simpson’s solution, and finally we will make some remarks pertinent to each problem.
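Simpson's procedure as described above, take the first derivative of the pertinent expression and find the critical point, can be sketched numerically. The example function and the bisection helper are our own illustration, not one of Simpson's problems.

```python
def critical_point(df, lo, hi, tol=1e-10):
    """Bisection on the sign change of the derivative df over [lo, hi].
    Assumes df(lo) > 0 > df(hi), i.e. a single interior maximum."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if df(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Maximize A(x) = x * (1 - x); its derivative A'(x) = 1 - 2x vanishes at x = 1/2.
x_star = critical_point(lambda x: 1 - 2 * x, 0.0, 1.0)
```

As the article notes, Simpson stopped here: the geometric nature of his problems guaranteed that the critical point was the desired extremum, with no second-derivative test needed.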
http://tex.stackexchange.com/questions/3799/end-of-theorem-marker-placement/3907
# End of theorem marker placement

I have a question about marker placement after theorems. I am using a style file provided by Oxford University Press, which I am not used to. It is available here: http://www.oxfordjournals.org/our_journals/imrn/for_authors/tex_template.zip

This style file wants to put an \openbox at the end of the statement of each theorem, and a \filledbox at the end of every proof. If one concludes a proof with an equation, I know to use \qedhere to position the end-of-proof marker correctly. But I don't know how to do something similar in the above situation (after the statement of a theorem). Many of the theorems, propositions, etc. end with equations, and if I use the above style sheet the box is placed too low. How do I fix this?

- Did you try using the \qedhere command? If so, what are the results? – Willie Wong Oct 5 '10
- (Also, if you are submitting to IMRN, why not just let the publisher/copyeditor worry about it?) – Willie Wong Oct 6 '10
- I tried using \qedhere, but it puts a black box, rather than a white box, at the desired spot! You are right that I should just let the publisher deal with it, but it is really annoying me (and makes the document more difficult to read). – geordie Oct 6 '10
- I should also mention that when I use \qedhere the white box is replaced by the black box, so it is almost the right thing to do. – geordie Oct 6 '10

## 2 Answers

You can redefine \qedsymbol just before using \qedhere, as in:

```latex
\begin{proposition}
This is a funny equation
\begin{equation*}
a = b + c\,.
\let\qedsymbol\openbox\qedhere
\end{equation*}
\end{proposition}
```

I don't know if it is possible, but you could maybe also suggest to the people from the journal that they fix their class file so that \qedhere works as expected?

- Thanks ... this works very nicely. Willie Wong's suggestion below also works, but the alignment is better with the above option. (That is, it is right-aligned as it should be, not just next to the final equation etc.) – geordie Oct 9 '10
- This is all very well, but it only works if equation numbers are on the right side of the page. The ams document class default is for equation numbers on the left, so simply dropping the qed box into the equation number position doesn't work. That's why the definition of \qedhere in amsthm is so complicated. More comments on other answers. – barbara beeton Jan 6 '11
- @barbara, thanks for your comment! I've checked the link you posted below and indeed it looks very useful. If you could craft a short new answer pointing to the link and with a small example that would be great! – Juan A. Navarro Jan 7 '11

Try this: in the math environment, put the \qedhere command inside of an \mbox, as in

```latex
$.... some numbers and equation. \mbox{\qedhere}$
```

It won't work completely correctly, but now the mark inserted is the openbox, and not the filled box. I think this is a bug with amsthm (see below the cut), so you'd be best off just leaving well enough alone and letting the journal deal with it later.

The oupau class apparently uses amsthm and not ntheorem for its theorem needs. And this is how it defines the open-box symbol for the QED in the theorem environment:

```latex
\providecommand{\qedsymbolthm}{\openbox}
\DeclareRobustCommand{\qedthm}{%
  \ifmmode \mathqed
  \else
    \leavevmode\unskip\penalty9999 \hbox{}\nobreak\hfill
    \quad\hbox{\qedsymbolthm}%
  \fi
}
\def\@begintheorem#1#2[#3]{%
  \pushQED{\qedthm}\deferred@thm@head{\the\thm@headfont \thm@indent
  \@ifempty{#1}{\let\thmname\@gobble}{\let\thmname\@iden}%
  \@ifempty{#2}{\let\thmnumber\@gobble}{\let\thmnumber\@iden}%
  \@ifempty{#3}{\let\thmnote\@gobble}{\let\thmnote\@iden}%
  \thm@swap\swappedhead\thmhead{#1}{#2}{#3}%
  \the\thm@headpunct
  \thmheadnl % possibly a newline.
  \hskip\thm@headsep
  }%
  \ignorespaces}
\def\@endtheorem{\popQED\endtrivlist\@endpefalse}
```

I don't think the problem is actually with Oxford University Press! I think the problem lies in amsthm! See the definition there for \qedhere

```latex
\newcommand{\qedhere}{%
  \begingroup \let\mathqed\math@qedhere
  \let\qed@elt\setQED@elt
  \QED@stack\relax\relax
  \endgroup
}
```

and the definition for the proof environment

```latex
\providecommand{\qedsymbol}{\openbox}%
\newenvironment{proof}[1][\proofname]{\par
  \pushQED{\qed}%
  \normalfont \topsep6\p@\@plus6\p@\relax
  \trivlist
  \item[\hskip\labelsep
        \itshape #1\@addpunct{.}]\ignorespaces
}{%
  \popQED\endtrivlist\@endpefalse
}
```

I'm thinking that the pushQED and popQED commands are defined just so they can accommodate different end symbols! The problem, apparently, lies in the definition of \qedhere, which calls \math@qedhere when it sits in a math environment. And unfortunately, instead of the definitions used in \setQED@elt (which is called in text mode), which process the current qed symbol that's in the QED stack, \math@qedhere depends on

```latex
\newcommand{\mathqed}{\quad\hbox{\qedsymbol}}
\def\linebox@qed{\hfil\hbox{\qedsymbol}\hfilneg}
```

which explicitly references the \qedsymbol, which is defined to be the filled box in oupau.cls.

So in short, the amsthm package uses two different ways of accessing the QED symbol depending on whether the environment ends naturally (with \popQED) or if you insert the symbol using \qedhere inside a math environment. This, I think, is a bug.

- It is a bit hard to say what the "correct" behavior of internal (not documented) macros of amsthm should be. In their definitions \qedsymbol is used all over the place, not just in the definition of \qed, so I'm not sure if they really intended the push/pop pair of commands to accommodate different symbols. – Juan A. Navarro Oct 8 '10
- ...which is why I'm glad you took a look also. I agree it is hard to figure out what the "intent" is. I am a bit surprised that OUP opted for hacking amsthm instead of ntheorem, which, IIRC, already has the same set of defaults (open box for theorems and filled box for proofs), and works better. – Willie Wong Oct 8 '10
- The documentation of the qed handling in amsthm is sketchy, but not entirely absent. See amsclass.pdf -- the source from which amsthm.sty was generated (amsclass.dtx) is identified at the top of that file. I have added a request to our "open" list to improve this documentation. – barbara beeton Jan 6 '11
- It was not originally recognized that "boxes" would be wanted at the end of anything other than proofs, but over the past year there have been numerous requests for this facility. A wholesale upgrade of this feature, including adding the ability to mark non-proofs, is on our to-do list. In the meantime, see this entry in the ams author faq: ams.org/faq?faq_id=212 . It contains a link to an example file that demonstrates various tactics that can be used with amsthm to get different symbols and put them in different locations. – barbara beeton Jan 6 '11
- @SamNead -- sadly, although there have been many requests to make it available also within theorem-class environments, the basic \qedhere works only in the proof environment. (This was a design decision.) It is likely that it will be extended, but certainly not for many months (the upgrade has been delayed many times for other priorities, and may be delayed again). Until then, adjusting the spacing by hand is the only alternative I know of, and I've tried many things. – barbara beeton Jan 9 '13
http://cms.math.ca/10.4153/CJM-2013-031-9
location: Publications → journals → CJM

# Infinitesimal Rigidity of Convex Polyhedra through the Second Derivative of the Hilbert-Einstein Functional

Published: 2013-08-20
Printed: Aug 2014

- Ivan Izmestiev, Institut für Mathematik, Freie Universität Berlin, Arnimallee 2, D-14195 Berlin, Germany

## Abstract

The paper is centered around a new proof of the infinitesimal rigidity of convex polyhedra. The proof is based on studying derivatives of the discrete Hilbert-Einstein functional on the space of "warped polyhedra" with a fixed metric on the boundary. The situation is in a sense dual to using derivatives of the volume in order to prove the Gauss infinitesimal rigidity of convex polyhedra. This latter kind of rigidity is related to the Minkowski theorem on the existence and uniqueness of a polyhedron with prescribed face normals and face areas. In the spherical and in the hyperbolic-de Sitter space, there is a perfect duality between the Hilbert-Einstein functional and the volume, as well as between both kinds of rigidity. We review some of the related work and discuss directions for future research.

Keywords: convex polyhedron, rigidity, Hilbert-Einstein functional, Minkowski theorem

MSC Classifications:
- 52B99 - None of the above, but in this section
- 53C24 - Rigidity results
https://www.physicsforums.com/threads/mond-vs-dm.231224/
# MOND vs DM • Start date • #1 140 0 My question is this: Why are we (as a community) so eager to disregard modifying Newtonian Dynamics (MOND)? Of course it's incredibly accurate on macroscopic scales, but isn't science meant to be progressive? Since Zwicky's observation of galactic cluster rotation in 1922 and the consequent studies of galactic rotation curves, DM has been the much favoured resolution to the "extra mass" problem. The invocation of a non-baryonic (and not directly observable) form of matter seems just as ludicrous to me as altering Newtonian law. Indeed it sounds like a bit of a botch.. We even know classical mechanics breaks down on small scales (QM), so what is there to say it doesn't act differently on larger scales also? Analogous is the search for the Higgs Boson in PP. If the LHC fails to find evidence for this I doubt they'll propose a new unobservable particle to mediate the Higgs field! The standard model, which has been tried and tested for decades, will collapse. What are your thoughts on this? Rob Last edited: • #2 126 0 I think that skepticism is high regarding MOND because the cutoff value re: acceleration is derived empirically, and not from first principles. Of course, so was the initial idea for dark matter, so I think that you're right -- neither method is "better" than the other. Hopefully an alternative derived from first principles will appear in the near future. Last edited: • #3 140 0 I see what you mean and you're quite right, it just seems equally botched to say that we're missing some amount of mass to fit in with our rotation models so this must be dark unobservable matter! • #4 Jonathan Scott Gold Member 2,313 1,018 As far as I know, the cut-off in MOND is there only as a means of explaining why the effect hasn't been detected in laboratory or solar-system experiments. 
I think that MOND has to cut in below the critical acceleration, but it doesn't actually have to cut out until a much higher acceleration, because the effect it would have if it simply added to the Newtonian (or GR) acceleration would be too small to be relevant for intermediate accelerations. I don't understand how MOND can explain how the particles forming a star, which individually have significant accelerations far exceeding the MOND threshold due to the gravity of the star, are affected in such a way that the overall motion of the star is only MOND-like below the overall acceleration cut-off. Some people have told me that relativistic MOND theory claims to address this, but when I've looked into the detail I've so far only found vague assertions that in GR the motion of the whole is not necessarily exactly determined by the motions of the parts. Personally I find MOND quite physically plausible apart from the cut-off; if the universe is finite, it makes sense that a region containing significant mass would have a boundary that is not flat but rather "conical" in a solid angle sense, and this would lead to accelerations proportional to the square root of the mass enclosed and inversely proportional to the radius, as in MOND. However, this effect would persist right down to small masses and even individual particles. (In fact, the MOND acceleration at the surface of a particle becomes equal to the Newtonian gravitational acceleration at something of the order of 100 times the mass of an electron, which seems quite an interesting coincidence). As far as I can tell, if there were no cut-off it should be possible to detect MOND effects at the solar system level (they are well known to be of a similar order of magnitude to the Pioneer anomaly) and even in a laboratory setting, so any MOND effect on that scale should have been spotted by now, even in ordinary Cavendish-type experiments. 
Certainly, the Pioneer anomaly does not match a MOND prediction very well, but there could be other factors involved. I've not seen any specific report of any experiment which specifically rules out MOND effects on this scale, and it's not clear to me whether this would have been detected as a result of other experiments, such as those to measure G, which were not specifically looking for a 1/r acceleration law. Most experiments try to maximize the mass and minimize the distance in order to get the maximum Newtonian force, which reduces the relative strength of the MOND effect. Also, although I've seen reports of experiments checking for gravity variation with higher powers of 1/r than 1/r^2, I've not seen any checking for variation with 1/r. There is also the well-known fact that laboratory experiments to measure G have given results which vary far more widely than expected. I'd be very interested to know if any constraints on MOND-like 1/r acceleration terms have been established from laboratory experiments. • #5 20 0 Hey astrorob, I would like to make some general remarks regarding your original post here. I say general, as I don't really have any detailed knowledge of the MOND theory. First off, any new scientific models proposed are rarely accepted immediately without thorough scrutiny. That would, I hope we can all agree, be very unscientific. Thus it is only to be expected that it will take some time for the theory to gain momentum, even if it were to hold up to complete scrutiny at an early stage, which I don't think it has. An important point in that regard, which has already been mentioned, is that MOND isn't derived from first principles, but rather constructed to fit some observation. As also pointed out, so was the dark matter hypothesis, but there are some important differences. MOND is based on a modification of an already established theory without any real physical motivation (unlike Einstein's "modification" of the same theory).
As far as dark matter goes, it doesn't really change any known theories, but rather has the advantage that the Standard Model of particle physics, as it stands today, is incomplete and thus actually leaves room for some hitherto unknown particle(s). Newtonian theory doesn't have that same obvious room for modifications beyond that of Einstein. Another thing that I've always found somewhat odd about MOND is this: why modify Newtonian theory, rather than relativistic theory? I mean, if MOND is ever to be successful, it would sooner or later have to incorporate relativistic effects, so why not start with the already known relativistic theory? Anyway, to sum up: the reason MOND isn't the talk of the town may be that it hasn't stood up to all scrutiny yet, and that it doesn't have any testable predictions, which at the end of the day is the alpha and omega for any scientific theory. • #6 561 1 As far as I've seen the thing that really killed a lot of the momentum for MOND was the bullet cluster observation. This observation was widely interpreted as requiring dark matter to explain, and is widely considered to be a direct observation (via gravitational lensing) of dark matter. Once you have one example of something in the universe which is basically guaranteed to be dark matter, it gets very hard to convince yourself that that incident was caused by dark matter but others were caused by something else. • #7 140 0 As far as I've seen the thing that really killed a lot of the momentum for MOND was the bullet cluster observation. This observation was widely interpreted as requiring dark matter to explain, and is widely considered to be a direct observation (via gravitational lensing) of dark matter. Very true, I do remember reading that also, but gravitational lensing isn't direct observation, is it? It implicitly defines there to be something there rather than actually detecting what it is.
Spinny said: First off, any new scientific models proposed are rarely accepted immediately without thorough scrutiny. That would, I hope we can all agree, be very unscientific. Thus it is only to be expected that it will take some time for the theory to gain momentum, even if it were to hold up to complete scrutiny at an early stage, which I don't think it has. Again, very true. Rereading my original post, it seems to suggest that I'm pro-MOND, and I didn't mean that to be the case. I'm actually quite impartial at the moment regarding it. What I was trying to ask was why the theory has been so negatively viewed right from the beginning. I'm not saying it should be immediately accepted as gospel, rather that it should've been given more of a fair chance before being completely dismissed in favour of a theory that invokes the existence of unseeable matter. • #8 868 3 As far as I've seen the thing that really killed a lot of the momentum for MOND was the bullet cluster observation. This observation was widely interpreted as requiring dark matter to explain, and is widely considered to be a direct observation (via gravitational lensing) of dark matter. Once you have one example of something in the universe which is basically guaranteed to be dark matter, it gets very hard to convince yourself that that incident was caused by dark matter but others were caused by something else. Yes, it was very convincing to me, then MOG responded: http://www.physorg.com/news113031879.html http://arxiv.org/abs/astro-ph/0702146 MOND was a non-relativistic theory but has since been replaced by the relativistic version, MOG. The cutoff Jonathan speaks of is only a cutoff in the limit, not a cutoff in the sense that the effect disappears completely at higher accelerations. I find the MOG results very exciting but ultimately I must concur that the structure is ad hoc. Even the interpolation function is by design not well defined.
However, even though the structure is ad hoc, this alone does not by itself constitute modeling to fit ALL the data. There is a large variety of rotation curves, and MOG fits them all fairly well. Now even the Bullet Cluster data. This variety of fits was not itself modeled. Some people assume it was due to the use of the Tully-Fisher relation. However, the Tully-Fisher relation alone can be construed as a thorn in the side of dark matter. I think when you ask why we are so eager to disregard MOND, or the relativistic version MOG, you are mistaking a recognition that MOG is not well defined and lacking a mechanism for outright rejection. If this was the case we couldn't discuss it here as anything more than crackpottery, which it is not. Personally I agree that MOND/MOG is not defined well enough to overtake dark matter. I would also say that ignoring it is likely at your own peril. If my rejection of imposing MOG onto our standard model is construed as outright rejection, so be it. It's just too early in the game. • #9 125 0 To answer why MOND doesn't bother with relativistic effects: the galaxy rotation problem involves speeds too small for relativity theory to matter. Most galaxy rotation is at speeds <1000 km/sec, or <0.01c. It seems relativity theory isn't central to understanding the galaxy rotation problem. Why bother using relativity theory to calculate how long your ice cream will take to melt on a sunny day? The only other reason to incorporate relativity theory into MOND would be to make MOND part of a grand theory of everything. But MOND isn't a 'principles' theory; it isn't made to conform with the rest of physics. The principles for MOND are yet to be generally agreed on. • #10 4 0 See arXiv:0804.3804 on how gravitational wave detectors could help settle the MOND vs DM issue. • #11 561 1 Very true, I do remember reading that also, but gravitational lensing isn't direct observation, is it?
It implicitly defines there to be something there rather than actually detecting what it is. Right, but the circumstances place some tight constraints on what that matter could be. This is how I understand things (this is partly copied from a post of mine in a previous thread): In the bullet cluster, two galaxy clusters were observed colliding. The collision looks pretty normal. But, http://www.shef.ac.uk/physics/teaching/phy111/assessment.htm [Broken], you see small areas of heavy gravitational lensing in what appears to be otherwise empty space. Astronomers interpret this like so: when the two clusters collided, the normal matter in them slowed down as it all collided with each other, but the dark matter in each cluster just kept going on its original trajectory. This interpretation is basically compelled, since the matter in these "past the collision" areas cannot be anything that interacts, or else it would have been caught up in the collision too. And it can't interact with or emit light, because we can't see it. But it must be there, since we can observe its presence by the gravitational lensing it causes. Anything which would behave in such a manner would fit our standard definitions of "dark matter". I'm actually not really qualified to interpret the MOND-interpretation-bullet-cluster paper my_wan linked :( so I can't really form a valid opinion on this myself. But it does seem to be my (vague, from reading blogs and articles and stuff) impression that even if the MOND interpretation of this data is sound, it does not seem to have been widely accepted even by some people in the field who were previously excited about MOND. my_wan, I would like to ask: I do personally think MOND is much more philosophically satisfying than dark matter.
But it seems to me that if we accept MOND it should be because it's ultimately simpler-- because it provides a straightforward theoretical explanation for the failures of our models, rather than forcing us to assume invisible structure to the universe just to force observations to coincide with the model again. The problem is though that MOND seems to have quite a lot of parameters which the modelbuilders are free to set in order to make their curves fit; so at a certain point it starts to seem like there is as much there which is arbitrary, fine-tuned or hidden as there was in the dark matter model, and the thing that attracted us to MOND in the first place is gone. Although it is good the MOND people can find a way in their models to accommodate the bullet cluster data, from what little I understand of the paper you link it does seem to me like some of that is happening here-- like the fit of the data does not happen "naturally" but happens because they massaged the model to make it fit. So here is what I wonder: The dark matter explanation of what is happening with that bullet cluster data is very simple and straightforward. It has a "story" to it: The red stuff is luminous matter, the blue stuff is dark matter. The MOND explanation, on the other hand, looking through this paper, it appears you have to really delve very deep into the math to even begin to understand what is happening and why! Is this really the case, or is it just that I'm unfamiliar with the theory? Is there a way to explain why MOND predicts that the bullet cluster lensing would look the way it does, in a way that a layperson could understand and visualize it? Is it possible to make a "story" out of all this math? Last edited by a moderator: • #12 868 3 Right, but the circumstances place some tight constraints on what that matter could be. 
The present problem with MOND is that it does not provide a straightforward theoretical explanation for the failures of our models. However, MOND does not have a lot of parameters; it is more constrained than even dark matter in this respect. Many of the predictions of MOND are generally independent of what values you choose for the open parameters. The fine tuning for rotation curves, etc. is limited mainly to assumptions about the mass-to-light ratio of galaxies. You wanted a simple story that helps explain what MOND is. Essentially we have gravity that is a 1/r^2 force.
MOND assumes there is a tiny 1/r correction to this. That means that in something as small as the solar system such effects would be extremely hard to detect. Notice that even though a 1/r correction would decrease with distance, it would do so much more slowly than gravity. At some point, then, it would be as noticeable as the expected acceleration. Only when you measure the acceleration at great distances and low accelerations is the ratio of the MOND correction to the expected gravitational acceleration large enough for a noticeable effect. The constraints MOND does have raise the question: if dark matter is the source of the anomalous acceleration, why does dark matter always appear distributed in such a way that it remains within the constraints of MOND/MOG? The cutoff point confuses some people. It's not a cutoff where the effect simply goes away. The MOND/non-MOND regimes are smoothly joined by an interpolation function. This function is not well defined (not theoretically specified) by MOND but as it turns out it doesn't need to be to define constraints and make predictions. For more detail ("story"): http://www.astro.umd.edu/~ssm/mond/ Last edited by a moderator: • #13 6 0 MOND vs DM vs Magnetic Confinement? As someone who came into astrophysics from laboratory plasma physics (fusion), I have a different perspective on this. I've read a few articles explaining flat rotation curves using magnetic confinement. One group is in Spain and the other is in Los Alamos. They seem reasonable to me. The field strength required is in the ballpark of values inferred from radio measurements. I've read some of the criticism of this work and it doesn't seem too deadly. The exception is in explaining interactions between galaxies, but that could be a different mechanism anyway. So, given the possibilities for magnetic confinement of spiral galaxies, I don't understand why it doesn't get more attention.
People would rather consider the possibility of new exotic types of matter or a modification to Newton's laws than allow that electromagnetic forces might be responsible, and we know these forces exist. Is there some bias here? Could someone explain this to me? • #14 Jonathan Scott Gold Member 2,313 1,018 The cutoff point confuses some people. It's not a cutoff where the effect simply goes away. The MOND/non-MOND regimes are smoothly joined by an interpolation function. This function is not well defined (not theoretically specified) by MOND but as it turns out it doesn't need to be to define constraints and make predictions. My problem with any sort of cutoff is that it means that a star as a whole (which may be subject to a very small overall acceleration and hence experience MOND effects) is subject to different rules from its constituent particles and fields (which undergo much higher accelerations due to the gravitational forces within the star itself and would therefore seem to be outside the MOND regime). Also, the stars in a double star system would each experience a higher acceleration due to each other and hence would apparently be outside the MOND regime and behave differently from a single star. Neither of these effects seems at all plausible. The only explanation I've seen for this is a very vague assertion (which I don't find convincing) that in relativity it is possible for the motion of the whole to be different from the motion of the parts. • #15 Chronos Gold Member 11,429 743 Dark matter is a stand-alone solution. It can account for all observations. MOND cannot make this claim without invoking some amount of dark matter. The rest is Occam's razor.
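The deep-MOND picture discussed in the thread can be illustrated numerically. This is a sketch added for orientation, not any poster's calculation: the mass `M` is a made-up galaxy-scale figure (~5e10 solar masses), and `A0` is Milgrom's commonly quoted acceleration scale. It compares the Newtonian circular speed, which falls off as 1/sqrt(r), with the deep-MOND speed v = (G*M*a0)^(1/4), which comes out independent of radius — a flat rotation curve:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10     # Milgrom's acceleration scale, m/s^2
M = 1.0e41       # kg; hypothetical baryonic mass (~5e10 solar masses)
KPC = 3.086e19   # metres per kiloparsec

def v_newton(r):
    """Circular speed from Newtonian gravity alone: v^2/r = G*M/r^2."""
    return math.sqrt(G * M / r)

def v_deep_mond(r):
    """Deep-MOND circular speed: a = sqrt(a_N * a0) gives v^4 = G*M*a0,
    which does not depend on r at all."""
    return (G * M * A0) ** 0.25

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    a_n = G * M / r**2  # Newtonian acceleration; drops below A0 at tens of kpc
    print(f"r = {r_kpc:3d} kpc: a_N = {a_n:.2e} m/s^2, "
          f"v_Newton = {v_newton(r)/1000:6.1f} km/s, "
          f"v_MOND = {v_deep_mond(r)/1000:6.1f} km/s")
```

For these (assumed) numbers the deep-MOND speed sits near 170 km/s at every radius, while the Newtonian speed keeps falling — which is the observational puzzle both MOND and dark matter set out to explain.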
http://civilservicereview.com/2016/01/quadratic-formula/
In the previous post, we learned how to solve quadratic equations by factoring. In this post, we are going to learn how to solve quadratic equations using the quadratic formula. In doing this, we must identify the values of $a$, $b$, and $c$ in $ax^2 + bx + c = 0$ and substitute these values into the quadratic formula $x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.

Note that the value of $a$ is the number in the term containing $x^2$, $b$ is the number in the term containing $x$, and $c$ is the value of the constant (without $x$ or $x^2$). The results of this calculation, which are the values of $x$, are the roots of the quadratic equation. Before you calculate using this formula, it is important that you master the properties of radicals and how to calculate with them.

Example 1: Find the roots of $x^2 - 4x - 4 = 0$

Solution

From the equation, we can identify $a = 1$, $b = -4$, and $c = -4$. Substituting these values in the quadratic formula, we have

$x = \dfrac{-(-4) \pm \sqrt{(-4)^2 - 4(1)(-4)}}{2(1)}$

$x = \dfrac{4 \pm \sqrt{16 + 16}}{2}$

$x = \dfrac{4 \pm \sqrt{32}}{2}$.

We know that $\sqrt{32} = \sqrt{(16)(2)} = \sqrt{16} \sqrt{2} = 4 \sqrt{2}$. So, we have

$x = \dfrac{4 \pm 4 \sqrt{2}}{2} = \dfrac{2(2 \pm 2\sqrt{2})}{2} = 2 \pm 2 \sqrt{2}$.

Therefore, we have two roots: $2 + 2 \sqrt{2}$ or $2 - 2 \sqrt{2}$.

Example 2: Find the roots of $2x^2 - 6x = 15$

Solution

Recall that it is easier to identify the values of $a$, $b$, and $c$ if the quadratic equation is in the general form, which is $ax^2 + bx + c = 0$. In order to make the right hand side of the equation above equal to 0, subtract 15 from both sides of the equation. This results in $2x^2 - 6x - 15 = 0$. As we can see, $a = 2$, $b = -6$ and $c = -15$. Substituting these values into the quadratic formula, we have

$x = \dfrac{-(-6) \pm \sqrt{(-6)^2 - 4(2)(-15)}}{2(2)}$

$x = \dfrac{6 \pm \sqrt{36 + 120}}{4}$

$x = \dfrac{6 \pm \sqrt{156}}{4}$.

But $\sqrt{156} = \sqrt{(4)(39)} = \sqrt{4} \sqrt{39} = 2 \sqrt{39}$.
Therefore, $x = \dfrac{6 \pm 2 \sqrt{39}}{4}$. Factoring out 2, we have $\dfrac{2(3 \pm \sqrt{39})}{4} = \dfrac{3 \pm \sqrt{39}}{2}$.

Therefore, we have two roots: $\dfrac{3 + \sqrt{39}}{2}$ or $\dfrac{3 - \sqrt{39}}{2}$.

That's it. In the next post, we are going to learn how to use quadratic equations to solve word problems.
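The procedure in the two examples can be checked with a short script. This is a sketch added for illustration (not part of the original lesson; `solve_quadratic` is a name chosen here): it applies the quadratic formula and reproduces the roots of both examples.

```python
import math

def solve_quadratic(a, b, c):
    """Return the two real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c  # the discriminant b^2 - 4ac
    if disc < 0:
        raise ValueError("no real roots")
    sq = math.sqrt(disc)
    return ((-b + sq) / (2 * a), (-b - sq) / (2 * a))

# Example 1: x^2 - 4x - 4 = 0  ->  roots 2 +/- 2*sqrt(2)
r1, r2 = solve_quadratic(1, -4, -4)
print(r1, r2)  # approximately 4.828 and -0.828

# Example 2: 2x^2 - 6x - 15 = 0  ->  roots (3 +/- sqrt(39)) / 2
r3, r4 = solve_quadratic(2, -6, -15)
print(r3, r4)
```

Substituting each computed root back into the original equation gives (up to rounding) zero, which confirms the hand calculation.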
https://www.lessonplanet.com/teachers/missing-numbers-pre-k-k
# Missing Numbers In this missing numbers worksheet, students solve 6 problems in which a series of numbers with blanks is analyzed. Students fill in the missing numbers. The number range is from 1 to 10 and there is no skip counting.
https://repository.uantwerpen.be/link/irua/97114
# A superfast method for solving Toeplitz linear least squares problems

Abstract: In this paper we develop a superfast O((m + n) log^2(m + n)) complexity algorithm to solve a linear least squares problem with an m × n Toeplitz coefficient matrix. The algorithm is based on the augmented matrix approach. The augmented matrix is further extended to a block circulant matrix and the DFT is applied. This leads to an equivalent tangential interpolation problem where the nodes are roots of unity. This interpolation problem can be solved by a divide-and-conquer strategy in a superfast way. To avoid breakdowns and to stabilize the algorithm, pivoting is used and a technique is applied that selects difficult points and treats them separately. The effectiveness of the approach is demonstrated by several numerical examples.

Source (journal): Linear Algebra and its Applications, New York, N.Y., ISSN 0024-3795, vol. 366 (2003), pp. 441-457. ISI 000182667200025.
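For orientation, here is the plain dense approach the paper's superfast algorithm is meant to beat: build the m × n Toeplitz matrix explicitly and solve the least squares problem through the normal equations, at O(mn^2) cost. This is only an illustrative sketch written for this page (the function names are invented here; the paper's actual method — block circulant embedding, DFT, tangential interpolation — is not implemented):

```python
def toeplitz(col, row):
    """Build an m x n Toeplitz matrix from its first column and first row."""
    m, n = len(col), len(row)
    assert col[0] == row[0], "first column and first row must agree at (0,0)"
    return [[col[i - j] if i >= j else row[j - i] for j in range(n)]
            for i in range(m)]

def lstsq_normal(T, b):
    """Solve min ||T x - b||_2 via the normal equations T^T T x = T^T b,
    using Gaussian elimination with partial pivoting."""
    m, n = len(T), len(T[0])
    A = [[sum(T[k][i] * T[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    y = [sum(T[k][i] * b[k] for k in range(m)) for i in range(n)]
    for i in range(n):                       # forward elimination
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        y[i], y[p] = y[p], y[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            y[r] -= f * y[i]
    x = [0.0] * n                            # back substitution
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Tiny 4 x 2 example with an exactly consistent right-hand side
T = toeplitz([2.0, 1.0, 0.0, 0.0], [2.0, 3.0])
b = [-4.0, -3.0, -2.0, 0.0]                  # equals T @ [1, -2]
print(lstsq_normal(T, b))
```

Forming T^T T explicitly is well known to square the condition number, which is one reason structured algorithms like the one in the abstract are of interest beyond speed.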
http://mathhelpforum.com/calculus/120162-integral-problam.html
# Math Help - Integral problem

1. ## Integral problem (solved)

Heya all I need to integrate the following function for an exercise in general relativity. The question is: Find the proper distance between two spheres $R_1$ and $R_2$ given the metric $ds^2=\frac{dr^2}{1-4r^2} +r^2d \theta^2+r^2\sin^2 \theta\, d\phi^2$. So my function is $\int {\frac{1}{\sqrt{1-4r^2}}\,dr}$. First I thought that I could substitute $u=1-4r^2$. However, this doesn't work out. Then I thought that in general what you would actually use here would be to let $r = \sin \theta$, $dr = \cos \theta \, d\theta$; then your $1 - 4r^2$ becomes $1-4\sin^2 \theta$, hoping to get $\sin\theta$ under the line as well. In fact you're left with $\sqrt{1-4\sin^2\theta}$, which doesn't simplify. How would you compute it?

2. Originally Posted by Diemo Heya all I need to integrate the following function for an exercise in general relativity. The question is: Find the proper distance between two spheres $R_1$ and $R_2$ given the metric $ds^2=\frac{dr^2}{1-4r^2} +r^2d \theta^2+r^2\sin^2 \theta\, d\phi^2$. So my function is $\int {\frac{1}{\sqrt{1-4r^2}}\,dr}$. First I thought that I could substitute $u=1-4r^2$. However, this doesn't work out. Then I thought that in general what you would actually use here would be to let $r = \sin \theta$, $dr = \cos \theta \, d\theta$; then your $1 - 4r^2$ becomes $1-4\sin^2 \theta$, hoping to get $\sin\theta$ under the line as well. In fact you're left with $\sqrt{1-4\sin^2\theta}$, which doesn't simplify. How would you compute it? If you're going to use trigonometric substitution, rearrange so that $\int{\frac{1}{\sqrt{1 - 4r^2}}\,dr} = \int{\frac{1}{\sqrt{4\left(\frac{1}{4} - r^2\right)}}\,dr}$ $= \frac{1}{2}\int{\frac{1}{\sqrt{\frac{1}{4} - r^2}}\,dr}$. Now make the substitution $r = \frac{1}{2}\sin{\theta}$.

3. Simple. Thanks.
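With $r = \frac{1}{2}\sin\theta$ the integral in post #2 evaluates to $\frac{1}{2}\arcsin(2r) + C$, so the proper distance between the spheres is $\frac{1}{2}\left[\arcsin(2R_2) - \arcsin(2R_1)\right]$. A quick numerical sanity check of that closed form against a midpoint-rule quadrature (a sketch added here, not from the thread; the function names are chosen for this example):

```python
import math

def integrand(r):
    """The radial proper-distance integrand from the metric, 1/sqrt(1 - 4r^2)."""
    return 1.0 / math.sqrt(1.0 - 4.0 * r * r)

def antiderivative(r):
    """(1/2) arcsin(2r), from the substitution r = (1/2) sin(theta)."""
    return 0.5 * math.asin(2.0 * r)

def proper_distance(r1, r2):
    """Radial proper distance between the spheres r = r1 and r = r2."""
    return antiderivative(r2) - antiderivative(r1)

# Compare the closed form with a midpoint-rule approximation of the integral
# (note the integrand requires |r| < 1/2 for the square root to be real)
r1, r2, n = 0.1, 0.4, 100_000
h = (r2 - r1) / n
midpoint = sum(integrand(r1 + (k + 0.5) * h) for k in range(n)) * h
print(proper_distance(r1, r2), midpoint)
```

The two numbers agree to many decimal places, confirming the antiderivative obtained by the substitution.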
http://www.esaim-cocv.org/action/displayAbstract?fromPage=online&aid=8137338
## ESAIM: Control, Optimisation and Calculus of Variations

### Entire solutions in $\mathbb{R}^2$ for a class of Allen-Cahn equations

Dipartimento di Scienze Matematiche, Università Politecnica delle Marche, via Brecce Bianche, 60131 Ancona, Italy; [email protected]; [email protected]

Abstract We consider a class of semilinear elliptic equations of the form displayed in the full text, where $a$ is a periodic, positive function and the nonlinearity is modeled on the classical two-well Ginzburg-Landau potential. We look for solutions which verify the asymptotic conditions stated in the full text. We show via variational methods that if $\varepsilon$ is sufficiently small and $a$ is not constant, then the equation admits infinitely many such solutions, distinct up to translations, which do not exhibit one-dimensional symmetries. (Online publication September 15 2005)

Key Words: Heteroclinic solutions; elliptic equations; variational methods.

Mathematics Subject Classification: 34C37; 35B05; 35B40; 35J20; 35J60
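The displayed equation in this abstract did not survive extraction. As a hedged reconstruction for orientation only — the exact dependence on $\varepsilon$ and the direction of the asymptotic conditions are assumptions here, not recovered from the source — the classical Allen-Cahn prototype with a periodic coefficient reads:

```latex
% Hedged sketch of the Allen--Cahn prototype; the paper's exact
% equation was lost in extraction, so treat this as illustrative.
-\Delta u(x,y) + a(x)\, W'\bigl(u(x,y)\bigr) = 0,
\qquad (x,y) \in \mathbb{R}^2,
\qquad W(s) = \tfrac{1}{4}\bigl(s^2 - 1\bigr)^2,
```

with heteroclinic conditions of the type $u(x,y) \to \pm 1$ as $y \to \pm\infty$, uniformly with respect to $x$.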
https://www.arxiv-vanity.com/papers/0807.1481/
# Is gravitational entropy quantized?

Dawood Kothawala, T. Padmanabhan, Sudipta Sarkar

IUCAA, Post Bag 4, Ganeshkhind, Pune - 411 007, India

May 21, 2022

###### Abstract

In Einstein's gravity, the entropy of horizons is proportional to their area. Several arguments given in the literature suggest that, in this context, both area and entropy should be quantized with an equally spaced spectrum for large quantum numbers. But in more general theories (like, e.g., in the black hole solutions of Gauss-Bonnet or Lanczos-Lovelock gravity) the horizon entropy is not proportional to the area, and the question arises as to which of the two (if at all) will have this property. We give a general argument that in all Lanczos-Lovelock theories of gravity, it is the entropy that has an equally spaced spectrum. In the case of Gauss-Bonnet gravity, we use the asymptotic form of quasinormal mode frequencies to explicitly demonstrate this result. Hence, the concept of a quantum of area in Einstein-Hilbert (EH) gravity needs to be replaced by the concept of a quantum of entropy in a more general context.

###### pacs: 04.62.+v, 04.60.-m

It was conjectured by Bekenstein Bekenstein1 long ago that, in a quantum theory, the black hole area would be represented by a quantum operator with a discrete spectrum of eigenvalues. Bekenstein showed that the area of a classical black hole behaves like an adiabatic invariant, and so, according to Ehrenfest's theorem, the corresponding quantum operator must have a discrete spectrum. It was also known that, when a quantum particle is captured by a (non-extremal) black hole, its area increases by a minimum non-zero value chris ; Bekenstein1 ; comment which is independent of the black hole parameters. This argument also suggests an equidistant spacing of area levels, with a well-defined notion of a quantum of area.
The fundamental constants $G$, $\hbar$ and $c$ combine to give a quantity with the dimensions of area, $G\hbar/c^3 \approx 10^{-66}\ \mathrm{cm}^2$, which is quite suggestive zeropoint and sets the scale in area quantization. In Einstein's gravity, the entropy of the horizon is proportional to its area. Hence one could equivalently claim that it is the gravitational entropy which has an equidistant spectrum, with a well-defined notion of a quantum of entropy. But when one considers the natural generalization of Einstein gravity by including higher derivative correction terms in the original Einstein-Hilbert action, no such trivial relationship remains valid between horizon area and the associated entropy. One such higher derivative theory which has attracted a fair amount of attention is Lanczos-Lovelock (LL) gravity lovelock , of which the lowest order correction appears as a Gauss-Bonnet (GB) term in higher dimensions. These lagrangians have the unique feature that the field equations obtained from them are quasi-linear, as a result of which the initial value problem remains well defined. More importantly, several features related to horizon thermodynamics, which were first discovered in the context of Einstein's theory gravtherm , continue to be valid in LL gravity models aseem-sudipta ; ayan . Black hole solutions in LL gravity are well studied in the literature. For these spacetimes, the notion of entropy can be defined using Wald's formalism noether , where entropy is associated with the Noether charge of the diffeomorphism invariance of the theory. The entropy calculated from this approach turns out to be no longer proportional to the horizon area. The question then arises as to whether it is the quantum of area or the quantum of entropy (if at all either) which arises in a natural manner in these models. We attempt to answer this question in this paper.
We will first provide a very general argument which suggests that it is the entropy which is quantized with an equidistant spectrum in the case of LL gravity, and then provide an explicit proof of the result in the context of GB gravity. In any geometrical description of gravity that obeys the principle of equivalence and is based on a nontrivial metric, the propagation of light rays will be affected by gravity. This, in turn, leads to regions of spacetime which are causally inaccessible to classes of observers. (These two features are reasonably independent of the precise field equations which determine the metric.) The fact that any observer has a right to formulate physical theories in a given coordinate system entirely in terms of the variables that an observer using that coordinate system can access imposes strong constraints on the nature of the action functional which can be used to describe gravity paddypatel . Suppose we divide the space-time manifold into two regions separated by a null hypersurface, and choose a coordinate system such that this hypersurface acts as a horizon for the observer on one side (say side 1). The effective theory for the observer on side 1 (with the degrees of freedom formally denoted by $g_1$) is obtained by integrating out the variables on the inaccessible side (side 2). In the semiclassical limit, saddle-point integration leads to the exponential of the classical action evaluated on-shell. The effective theory on side 1 is thus described by the action $A^{\rm eff}_{\rm WKB}(g_1)$, with

$$\exp\left[iA^{\rm eff}_{\rm WKB}(g_1)\right] \simeq \exp\left[i\left(A_{\rm grav}(g_1)+A_{\rm grav}({\rm class})\right)\right]. \qquad (1)$$

Since the effects of the unobserved degrees of freedom can only be encoded in the geometry of the (shared) boundary between regions 1 and 2, we get the constraint that the on-shell value of the action must be expressible in terms of the boundary geometry, which in turn is expressible in terms of $g_1$ itself. That is, $A_{\rm grav}({\rm class}) = A_{\rm sur}(g_1)$, and

$$\exp\left[iA^{\rm eff}_{\rm WKB}(g_1)\right] = \exp\left[i\left(A_{\rm grav}(g_1)+A_{\rm sur}(g_1)\right)\right]. \qquad (2)$$

This is a non-trivial requirement on any geometrical theory of gravity.
Further, since the boundary term arises from the choice of a coordinate system (or foliation) in which the null hypersurface acts as a one-way membrane, $A_{\rm sur}$ will in general depend on the coordinate choice of the observer. Classically, with the boundary variables held fixed, the equations of motion remain unaffected by the existence of a (total divergence) boundary term; hence the fact that the boundary term is not generally covariant is unimportant for the classical theory. This is, of course, not true in semiclassical/quantum theory. But since the quantum theory is governed by $\exp[iA]$ rather than by $A$, the boundary term will have no effect in the quantum theory if the quantum processes keep $\exp[iA_{\rm sur}]$ single-valued. This is equivalent to demanding that the boundary term satisfies the quantization condition $A_{\rm sur} = 2\pi n$. (More precisely, the change in the surface term should be $2\pi n$; this is irrelevant for our purpose when we work in the semiclassical limit of large $n$.) It is now worth noting that the lagrangian in all the LL models (of which the Einstein-Hilbert action is just a special case) can be expressed ayan as a sum of a bulk term and a total divergence, the latter integrating to give a surface term in the action. There is a peculiar 'holographic' relationship between the bulk and surface terms in all these models, with the same information being coded in both [see eq.(41) of ref. ayan]. The on-shell value of the surface term in all these action functionals is proportional to the Wald entropy of the horizon ayan ; aseementropy . We can now see how a condition like $A_{\rm sur} = 2\pi n$ can lead to quantization of the Wald entropy. In the case of the Einstein-Hilbert action, the surface term is well defined and is given by the standard Gibbons-Hawking-York term. As pointed out in ref. paddypatel, the surface term will give the entropy — equal to one quarter of the horizon area — and both will have an equally spaced spectrum.
When we proceed to general LL gravity, the correct (surface) counterterm which should be added to the higher derivative action is unknown, and the action principle, using the metric as dynamical variable, is actually ill-defined. There is, however, an alternative approach we can follow to obtain meaningful results for LL gravity. So far we did not have to specify the exact nature of the degrees of freedom in the above discussion. Interestingly enough, these arguments go through unhindered when one formulates gravity as an emergent phenomenon, without treating the metric as a dynamical variable in the theory emergentpaddy . In this approach one proceeds along the following lines: Around any event in spacetime one can introduce a local inertial frame and — by boosting with a uniform acceleration — a local Rindler frame. The effective long range variables in the emergent gravity approach are the normals $n^a$ to the null surfaces which act as local Rindler horizons. (One can think of $n^a$ as the 'fluid velocity' of a virtual null fluid in the spacetime.) The total action is now emergentpaddy ; aseementropy taken to be $A_{\rm tot} = A_{\rm grav} + A_{\rm matt}$, where

$$A_{\rm grav} = -4\int_V d^Dx\,\sqrt{-g}\; P_{ab}^{\ \ cd}\,\nabla_c n^a\,\nabla_d n^b \qquad (3)$$

is determined by a fourth rank tensor $P_{ab}^{\ \ cd}$ which can be expressed as a derivative of the LL lagrangian, and

$$A_{\rm matt} = \int_V d^Dx\,\sqrt{-g}\; T_{ab}\, n^a n^b. \qquad (4)$$

Maximizing $A_{\rm tot}$ with respect to all $n^a$ leads to the field equations of the LL theory. (All these aspects are described in detail in ref. aseementropy and hence are not repeated here.) The key result we need here is that the on-shell value of the total action is given by

$$A_{\rm tot}\big|_{\rm on-shell} = 4\int_{\partial V} d^{D-1}\Sigma_a\,\left(P^{abcd}\, n_c \nabla_b n_d\right) \qquad (5)$$

which can be shown to be identically equal to the Wald entropy of the horizon in the LL theory ayan ; aseementropy . (In the case of the lowest order LL theory — which is just Einstein gravity — this expression reduces to one quarter of the transverse area of the horizon.)
The emergent gravity approach is strongly motivated by thermodynamic considerations and — classically — the maximization of the action can be thought of as maximization of the entropy. In this context, Eq. (5) will also give the entropy of the local Rindler horizon for each Rindler observer, which can be interpreted as due to integrating out the inaccessible degrees of freedom behind the local Rindler horizon. In the semiclassical limit, the on-shell value of the action will be related to the phase of the semiclassical wave function. This phase, of course, should be generally covariant, but as it stands it explicitly depends on the Rindler observer chosen to define the horizon. Hence we can ensure observer independence of semiclassical gravity only if we assume

$$A_{\rm Wald} = A\big|_{\rm on-shell} = 2\pi n. \qquad (6)$$

Note that we are again obtaining the quantization condition from the phase of the semiclassical wavefunction, which is completely in accord with previous approaches to this problem. The holographic relation between surface and bulk terms underscores how the surface term captures the dynamical information contained in the bulk. While this gives a general result that in LL theories it is the entropy of the horizon that is quantized, it would be nice if the result could be reinforced by an explicit calculation within the standard context. Fortunately, this can be done for GB theory using the arguments suggested by Hod hod based on quasinormal modes of black hole oscillations. Hod started from Bekenstein's arguments regarding the quantum area spectrum of a non-extremal Kerr-Newman black hole, and showed that the spacing of area eigenvalues can be fixed by associating the classical limit of the quasinormal mode frequencies, $\omega_c$, with the large-$n$ limit of the quantum area spectrum, in the spirit of Bohr's correspondence principle ($n$ being the quantum number).
Specifically, for a Schwarzschild black hole of mass $M$ in $D$ dimensions, the absorption of a quantum of energy $\omega_c$ would lead to a change in the black hole area eigenvalues $A_{n+1}-A_n = (\partial A/\partial M)\,\omega_c$, and for entropy $S_{n+1}-S_n = (\partial S/\partial M)\,\omega_c$. In the case of a Schwarzschild black hole, the level spacings of both area and entropy eigenvalues were indeed found to be equidistant, allowing one to associate the notion of a minimum unit, a quantum, of area and entropy. We will use these ideas in the context of GB black holes, using the numerically known form of the quasinormal mode frequencies. We show that the form of the highly damped quasinormal modes of these black holes suggests that it is the entropy which has an equally spaced spectrum. The Gauss-Bonnet (GB) Lagrangian in $D$ dimensions is given by lovelockblack

$$(16\pi G)\,L = R + \alpha_{\rm GB}\left(R^2 - 4R_{ab}R^{ab} + R_{abcd}R^{abcd}\right).$$

Static, spherically symmetric black hole solutions in this theory are of the form

$$ds^2 = -f(r)\,dt^2 + f(r)^{-1}\,dr^2 + r^2\, d\Omega^2_{D-2}$$

where

$$f(r) = 1 + \frac{r^2}{2\alpha}\left[1 - \left(1 + \frac{4\,\alpha\,\varpi}{r^{D-1}}\right)^{1/2}\right].$$

Here $\alpha$ is proportional to $\alpha_{\rm GB}$, and $\varpi$ is related to the ADM mass $M$ by the relationship

$$\varpi = \frac{16\pi G}{(D-2)\,\Sigma_{D-2}}\, M$$

where $\Sigma_{D-2}$ is the volume of the unit $(D-2)$-sphere. The Hawking temperature $T$ and entropy $S$ for this spacetime are

$$T = \frac{D-3}{4\pi r_+}\left[\frac{r_+^2}{r_+^2+2\alpha} + \alpha\left(\frac{D-5}{D-3}\right)\frac{1}{r_+^2+2\alpha}\right]$$

$$S = \frac{A}{4G}\left[1 + 2\alpha\left(\frac{D-2}{D-4}\right)\left(\frac{A}{\Sigma_{D-2}}\right)^{-2/(D-2)}\right]$$

where $A$ is the horizon area. The location of the horizon $r_+$ is found from the roots of $f(r) = 0$, and for the horizon to exist at all the parameters must satisfy an additional constraint. The highly damped quasinormal modes for the GB black holes have been worked out in the literature. These QNM frequencies are given by QnmGB

$$\omega(n) \;\xrightarrow{\;n\to\infty\;}\; T\ln Q + i\,(2\pi T)\,n.$$

(The imaginary part can be understood in terms of a scattering matrix formalism; see e.g. tptirth .) We now use the Hod conjecture to obtain the entropy spacing for this spacetime. Accordingly, we identify as the relevant frequency the real part of $\omega(n)$, i.e., we take $\omega_c = T\ln Q$. The entropy spacing is then given by

$$S_{n+1} - S_n = \frac{\partial S}{\partial M}\,\omega_c = \ln Q.$$

Clearly the spacing is a constant. This result depends essentially only on the fact that $\omega_c \propto T$ while $\partial S/\partial M = 1/T$ by the first law, leading to $S_{n+1}-S_n = \ln Q$, which is a constant.
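The mass-independence of the entropy spacing is easy to check explicitly at lowest (Einstein) order, where the standard $D=4$ Schwarzschild relations $S = 4\pi M^2$ and $T = 1/(8\pi M)$ hold (in units $G = c = \hbar = k_B = 1$). The following numerical sketch is an illustration of my own, not from the paper; it uses $\ln Q = \ln 3$ purely as an example value:

```python
import math

# D = 4 Schwarzschild relations in units G = c = hbar = k_B = 1:
#   S(M) = 4*pi*M^2    (Bekenstein-Hawking entropy, A = 16*pi*M^2, S = A/4)
#   T(M) = 1/(8*pi*M)  (Hawking temperature)

def entropy(M):
    return 4.0 * math.pi * M**2

def temperature(M):
    return 1.0 / (8.0 * math.pi * M)

def entropy_spacing(M, ln_Q=math.log(3)):
    # Hod-style spacing: Delta S = (dS/dM) * omega_c with omega_c = T * ln Q.
    # Central difference of a quadratic is algebraically exact, so dS/dM = 8*pi*M.
    h = 1e-6 * M
    dS_dM = (entropy(M + h) - entropy(M - h)) / (2 * h)
    omega_c = temperature(M) * ln_Q
    return dS_dM * omega_c

# The spacing comes out the same (ln Q) for every mass:
for M in (1.0, 10.0, 250.0):
    print(entropy_spacing(M))
```

Since $\partial S/\partial M = 8\pi M = 1/T$, the factors of $M$ cancel and the loop prints the same value ($\ln 3 \approx 1.0986$) for every mass, which is the content of the spacing formula above.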
For GB black holes the area is a function of entropy which is not linear. Hence, for the area spectrum of this class of black holes, we get

$$A_{n+1} - A_n = \frac{\partial A}{\partial M}\,\omega_c = g(A_n)\ln Q \qquad (7)$$

where $g(A_n)$ is given by

$$g(A_n) = 4\left[1 - 2\alpha\left(\frac{A_n}{\Sigma_3}\right)^{-2/3}\right] \qquad (8)$$

which is correct to first order in $\alpha$. We therefore find that the entropy eigenvalues are discrete and equally spaced, but the area spacing is not equidistant. Hence, for GB gravity, the notion of a quantum of entropy is more natural than a quantum of area. We shall now comment on the value of $\ln Q$, which determines the actual value of the quantum of entropy. Originally, in the case of Einstein gravity, the results of Bekenstein and Hod led to the picture of a quantum black hole with the horizon built out of patches of equal area, the most natural choice for this patch area being the (constant) spacing of the eigenvalues of the area operator. Hod further argued that one should take, for $\omega_c$, the real part of the quasinormal mode frequencies after the imaginary part has been sent to infinity. This leads to an area spacing proportional to $\ln k$, $k$ being some integer. The numerical, as well as later analytical, results for the quasinormal mode frequencies of spherically symmetric black holes in 3+1 dimensions give $k = 3$. Recently, Maggiore mag has put forth another argument which leads to identifying the transition frequency between large levels with the classical limit (rather than the real part of the QNM frequencies, as was done by Hod). This gives an area spacing of $8\pi$ in Planck units, consistent with earlier arguments of Bekenstein. While the specific value of the area spacing is important for a statistical definition of entropy, it does not seem to be absolutely essential since, in a semiclassical description, the number of microstates need not exactly come out to be an integer, as was argued by Maggiore. Recently, all these arguments have been applied to the case of more general black holes in the context of Einstein gravity, and it has been argued that in all such cases, when properly analyzed, one finds an equally spaced area spectrum medvid .
The suggestion by Maggiore mag to associate the classical limit with the transition frequencies between adjacent levels leads to the replacement $\ln Q \to 2\pi$, which gives $S_{n+1} - S_n = 2\pi$. Thus we obtain, for the quantum of entropy, a value of $2\pi$, in agreement with the general arguments given earlier. The broader picture which emerges from this analysis can be summarized along these lines: (a) In any theory which obeys the principle of equivalence, the gravitational field will be described at long wavelengths by a spacetime metric. (b) Around any event in spacetime, one can introduce a local inertial frame and — by boosting with an acceleration — a local Rindler frame. The observers using this coordinate system will have a local Rindler horizon, with a temperature and an entropy associated with the virtual deformations of the horizon. (c) Classically, we interpret the functional in Eq. (3) as the total entropy, which is maximized for all the Rindler observers to give the field equations of the theory (which are the same as the equations of LL gravity). The on-shell value of the action gives the Wald entropy of the horizon, which is interpreted as due to modes which are inaccessible to the given observer. (d) In the semiclassical limit, the same functional is interpreted as an action, and its value will affect the phase of the semiclassical wave function. (e) The observer independence of semiclassical gravity requires this phase — i.e., the Wald entropy of the horizon — to be quantized in units of $2\pi$. (Of course, mathematically, one could have treated it as the action functional even in the classical theory.) When we take the lowest order LL theory, we reproduce Einstein gravity and the quantization condition becomes equivalent to the area quantization of the horizon, as discussed several times in the literature. At the next order, we have the GB theory, for which we have explicitly demonstrated the quantization of entropy.
We believe that, once we have the structure of the QNMs in the case of LL theory — which, as far as we know, has not yet been explicitly worked out — the analysis given above can be repeated to give an explicit demonstration of this result. Since entropy is directly related to information content, the quantum of gravitational entropy points to a new and intriguing relationship between gravity, quantum theory and thermodynamics. DK and SS are supported by the Council of Scientific & Industrial Research, India.
https://math.stackexchange.com/questions/61937/how-can-i-prove-that-all-rational-numbers-are-either-terminating-decimal-or-repe
# How can I prove that all rational numbers are either terminating decimal or repeating decimal numerals?

I am trying to figure out how to prove that all rational numbers are either terminating decimal or repeating decimal numerals, but I am having great difficulty in doing so. Any help will be greatly appreciated.

HINT: Consider what it means for a real $0 < \alpha < 1$ to have a periodic decimal expansion:

$\alpha = 0.a\,\overline{c} = 0.a_1a_2\cdots a_n\overline{c_1c_2\cdots c_k}\ $ in radix $10$

$\iff \beta := 10^n\alpha - a = 0.\overline{c_1c_2\cdots c_k}$

$\iff 10^k\beta = c + \beta$

$\iff (10^k-1)\,\beta = c$

$\iff (10^k-1)\,10^n\alpha \in \mathbb{Z}$

Thus to show that a rational $\alpha$ has such a periodic expansion, it suffices to find $k,n$ as above, i.e. so that $(10^k-1)\,10^n$ serves as a denominator for $\alpha$. Put $\alpha = a/b$, and $b = 2^i 5^j d$, where $2,5\nmid d$. Choosing $n > i,j$ ensures that $10^n\alpha$ has no factors of $2$ or $5$ in its denominator. Hence it remains to find some $k$ such that $10^k-1$ will cancel the remaining factor of $d$ in the denominator, i.e. such that $d\mid 10^k-1$, or $10^k\equiv 1\pmod{d}$. Since $10$ is coprime to $d$, by the Euler-Fermat theorem we may choose $k = \phi(d)$, which completes the proof sketch. For the converse, see this answer.

• Although this post is old, I have a question: what if $k$ is not unique, that is, what if $10^k \equiv 1 \pmod{d}$ has more than one solution? This means there are at least two different representations of $c$, right? This is impossible, hence $k$ is unique, right?
– Peanut Jan 16 '18 at 13:19 • Well, k is not unique since if k_1 is a solution then also 2k_1 is a solution; thus we may choose the least k that satisfy that equation – Peanut Jan 16 '18 at 13:27 • so here $\bar{c}$ is a compact form of writing $\overline{c_1 c_2 \dots c_k}$, and represents the purely periodic part? And then $a$ and it's indexed counterpart represent the possibility of a non repeating part that precedes the periodic part? So the second line of your proof eliminates the non-repeating decimal part which has length $n$. Next you form an integer $c$ by shifting left by order $k$. I'm starting to see the underlying theory behind this stuff, but I'm not quite to the point where I can apply it to find $n$ and $k$ here. I'll keep reading. Thank you. – rocksNwaves Feb 17 '20 at 1:12 • @rocksNwaves Yes, you've correctly understood it. – Bill Dubuque Feb 17 '20 at 2:24 • @BillDubuque I am sorry but I don't understand the leap from step 3 to step 4. Is it possible to elaborate it more? – Andes Lam 2 days ago Recall that in long division, one gets a remainder at each step: $$\begin{array} & & & 0 & . & 2 & 2 & 7 & 2 \\ \hline 22 & ) & 5&.&0&0&0&0&0 \\ & & 4 & & 4 \\ & & & & 6 & 0 & \leftarrow \\ & & & & 4 & 4 \\ & & & & 1 & 6 & 0 \\ & & & & 1 & 5 & 4 \\ & & & & & & 6 & 0 & \leftarrow & \text{repeating}\\ \end{array}$$ 6 is a remainder. The next remainder is 16. Then the next is 6. This brings us back to where we were at an earlier step: Dividing 60 by 22. We have to get the same answer we got the previous time. Hence we have repetition of "27". The answer is $0.2272727\overline{27}\ldots$, where "27" keeps repeating. The question then is: Why must we always return to a remainder that we saw earlier? The answer is that the only possible remainders are $0, 1, 2, 3, \ldots, 21$ (if $22$ is what we're dividing by) and there are only finitely many. If we get 0, the process terminates. 
If we never get 0, we have only 21 possibilities, so we can go at most 21 steps without seeing one that we've seen before. As soon as we get one that we've seen before, the repetition begins. A related question worth asking is how you know that every repeating decimal corresponds to a rational number. E.g., if you're handed $0.2272727\overline{27}\ldots$ with "27" repeating forever, how do you figure out that it's exactly $5/22$? There's a simple algorithm for that too. First, it’s clear that you need only look at proper fractions. Now look at the long division algorithm for calculating the decimal expansion of a rational number. At each stage you get a remainder. What happens if you get a remainder of $0$ at some stage? If you don’t ever get a remainder of $0$, can you keep getting different remainders forever, or must a remainder repeat at some point? What happens if you do get a repeated remainder? Remainders must be less than the divisor. So, for example, if you divide the numerator by the denominator, n, the only remainders can lie between 0 (in which case the division ends) and n-1. What happens if and when you run out of remainders (i.e. you've used all of the numbers between 1 and n-1)? You will be facing division of some remainder, r, by n again. You've been there and done that so you know what the next remainder (hence, the next dividend) will be. And so on...
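The remainder-bookkeeping described in these answers translates directly into a short program. The sketch below is mine, not from any of the answers; it performs the long division digit by digit and stops at the first remainder it has seen before, returning the pre-periodic and periodic digit blocks:

```python
def decimal_expansion(num, den):
    """Return (non-repeating digits, repeating digits) of num/den for 0 <= num < den."""
    digits = []
    seen = {}          # remainder -> index in `digits` where it first produced a digit
    r = num % den
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // den))   # next decimal digit
        r %= den                       # next remainder; only den-1 nonzero values possible
    if r == 0:
        return "".join(digits), ""     # terminating decimal
    start = seen[r]                    # repetition begins where this remainder first appeared
    return "".join(digits[:start]), "".join(digits[start:])

print(decimal_expansion(5, 22))   # -> ('2', '27'), i.e. 5/22 = 0.2(27), as worked above
print(decimal_expansion(1, 8))    # -> ('125', ''), i.e. 1/8 terminates
```

Because only `den - 1` nonzero remainders exist, the loop must hit zero or repeat within `den - 1` steps, which is exactly the pigeonhole argument the answers make.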
http://tex.stackexchange.com/questions/51320/going-back-when-using-hyperref?answertab=active
# Going “back” when using hyperref

I'm including \usepackage{hyperref} so that each instance of \ref (as well as each page number in the index and the table of contents) automatically links to that page. When I'm viewing the document (in Sumatra) and I click a link, I jump to the linked page. Is there any easy way to go "back" to where I was before I clicked the link? (And is the answer any different when using Adobe Acrobat reader?) - Such a correspondence is usually "one to many": which place of the PDF should this point to? All the PDF viewers have a "go back" facility, AFAIK. –  egreg Apr 9 '12 at 20:08 This depends on the PDF viewer - for Adobe Reader, you can find a solution in an answer to How to return to original .pdf presentation after open a .pdf linked file?. –  diabonas Apr 9 '12 at 20:14 This question is also asked at SuperUser ("Back button of Adobe PDF Reader after clicking a hyperlink whose target is on the same document"). Many PDF viewers, including Adobe Reader, use [ALT]+[Left Arrow]. –  Jess Riedel Nov 20 '13 at 8:16 Actually this has nothing to do with TeX … There's no default, so one needs to check the viewer's menus and shortcuts, because each application can use its own method. However, on MS Windows the keys are the same for Adobe Reader, SumatraPDF and PDF XChange Viewer (and probably some others which I can't test now): Alt plus the left cursor key for "Go back to last view" and Alt plus the right cursor key for "Go to next view". The latter is only active once the former has been used at least once. Despite the same key associations, the different readers do not behave exactly the same. Enrico Gregorio (egreg) reported that on Mac OS X it's Cmd + [ and Cmd + ] (except for Adobe Reader). - Of course, this depends on the operating system and, possibly, on the language (at least on Mac OS X). –  egreg Apr 9 '12 at 21:04 Something about: "one needs to check the viewer's menus and shortcuts because each application can use its own method".
For example on Mac OS X it's Command+[ and Command+] (except for Adobe Reader). –  egreg Apr 9 '12 at 21:33 For the evince PDF viewer it is not so easy to find how to add this functionality. A solution involving the "back" button can be found here. I had to add the icon for "back" (which is a left arrow) to the toolbar as described above. Note that "back" means "previously viewed page" and not "previous page", which is denoted by the up-arrow icon. - To expand on what others said: this back functionality has nothing to do with TeX because you wouldn't encode the back link (from the link destination back to the link source) in the PDF document itself. Rather, the functionality is built into the PDF viewer; when you click a link, taking you to a different page in the PDF document, the PDF viewer remembers where you were and can take you back if you press the right 'back' button. As egreg pointed out, it's common to have many links to the same place, so it wouldn't be clear where the back links should point. Wikipedia is somewhat of an exception in that some link destinations (such as Note #2 here) have a list of all the link sources in the entire document. For Wikipedia, they are labeled with lowercase letters ('a', 'b', ...), and you can just click them one-by-one to see every place in the main text that references a particular note. I don't know if PDFs support this functionality or, if they do, whether you can create such a PDF with TeX. -
http://wilga.ise.pw.edu.pl/?q=node/2125
# Femtoscopy as a tool for studying phase transition phenomena at STAR/BES energies in the context of femtoscopic analysis at NICA

Femtoscopy is a tool for studying the space-time evolution of hot and dense matter in high energy collisions by using two-particle correlations. Femtoscopic and flow measurements at RHIC and LHC energies were well reproduced by hydrodynamic models containing an equation of state (EoS) with a crossover-type transition from Quark-Gluon Plasma to the hadron gas phase. Similar studies were performed at the AGS and SPS accelerators and are now performed in the Beam Energy Scan (BES) program at the Relativistic Heavy Ion Collider, exploring the phase diagram of strongly interacting matter. I present femtoscopic observables calculated for Au-Au collisions at $\sqrt{s_{NN}} = 7.7 - 62.4$ GeV from the viscous hydro + cascade model \texttt{vHLLE+UrQMD} with two types of EoS - one corresponding to a 1st order phase transition (PT) and one corresponding to a crossover PT. I also discuss perspectives of femtoscopic measurements at the NICA energy scale $\sqrt{s_{NN}} = 4 - 11$ GeV.

Author: Daniel Wielanek
http://mathhelpforum.com/calculus/99868-finding-limit-using-algebra.html
# Thread: Finding this limit using Algebra 1. ## Finding this limit using Algebra $\lim_{x \to -2}\frac{x-2}{x^2-4}$ I got 4 using algebra, but I did the thing where you put -1.999 instead of -2 into the function and got 1/2, and I tried an online limit calculator and got infinity... I'm so confused. 2. $\lim_{x \to 2}\frac{x-2}{x^2-4}$ $\lim_{x \to 2}\frac{x-2}{(x-2)(x+2)}$ $\lim_{x \to 2}\frac{1}{x+2} = \frac{1}{2+2}= \frac{1}{4}$ 3. Originally Posted by pickslides $\lim_{x \to 2}\frac{x-2}{x^2-4}$ $\lim_{x \to 2}\frac{x-2}{(x-2)(x+2)}$ $\lim_{x \to 2}\frac{1}{x+2} = \frac{1}{2+2}= \frac{1}{4}$ That would be right, but the question is asking as x approaches -2. 4. Apply L'Hospital's rule and you get $\frac{-1}{4}$: $\lim_{x\rightarrow-2}\frac{x-2}{x^2 - 4} = \lim_{x\rightarrow-2}\frac{\frac{d}{dx}(x-2)}{\frac{d}{dx}(x^2 - 4)} = \lim_{x\rightarrow-2}\frac{1}{2x} = \frac{1}{2(-2)} = \frac{-1}{4}$ 5. Originally Posted by eXist Apply L'Hospital's rule and you get $\frac{-1}{4}$: $\lim_{x\rightarrow-2}\frac{x-2}{x^2 - 4} = \lim_{x\rightarrow-2}\frac{\frac{d}{dx}(x-2)}{\frac{d}{dx}(x^2 - 4)} = \lim_{x\rightarrow-2}\frac{1}{2x} = \frac{1}{2(-2)} = \frac{-1}{4}$ Ok, thanks for the answer. If it's not too much trouble, can I ask what the d/dx thing is? 6. When you apply L'Hospital's rule you take the derivative of the top over the derivative of the bottom. That notation: $\frac{d}{dx}$ just means that I'm taking the derivative of those two as if they were their own monomial, and ignoring the fact that one is actually in the denominator. Just a notation I use. 7. Originally Posted by eXist When you apply L'Hospital's rule you take the derivative of the top over the derivative of the bottom. That notation: $\frac{d}{dx}$ just means that I'm taking the derivative of those two as if they were their own monomial, and ignoring the fact that one is actually in the denominator. Just a notation I use. Ah yes, thank you very very much. 8. L'Hospital's rule is:
If $f(x)=\frac{g(x)}{h(x)}$ and $\lim_{x\to a} g(x) = \lim_{x\to a} h(x) = 0$, then $\lim_{x\to a} f(x) = \lim_{x\to a}\frac{g'(x)}{h'(x)}$. Can L'Hospital's rule be used here? 9. Originally Posted by pickslides L'Hospital's rule is: If $f(x)=\frac{g(x)}{h(x)}$ and $\lim_{x\to a} g(x) = \lim_{x\to a} h(x) = 0$, then $\lim_{x\to a} f(x) = \lim_{x\to a}\frac{g'(x)}{h'(x)}$. Can L'Hospital's rule be used here? Yes it can 10. Originally Posted by usmelikchees Yes it can Actually, you can't, since the numerator doesn't approach either infinity or 0 when $x\to -2$. In fact, the limit $\lim_{x \to -2}\frac{x-2}{x^2-4}$ does not exist: $\lim_{x \to -2}\frac{x-2}{x^2-4} = \lim_{x \to -2}\frac{1}{x+2}$ But, $\lim_{x \to -2^+}\frac{1}{x+2} = +\infty$ And: $\lim_{x \to -2^-}\frac{1}{x+2} = -\infty$ Thus, the limit does not exist when $x\to -2$. 11. Originally Posted by eXist Apply L'Hospital's rule and you get $\frac{-1}{4}$: $\lim_{x\rightarrow-2}\frac{x-2}{x^2 - 4} = \lim_{x\rightarrow-2}\frac{\frac{d}{dx}(x-2)}{\frac{d}{dx}(x^2 - 4)} = \lim_{x\rightarrow-2}\frac{1}{2x} = \frac{1}{2(-2)} = \frac{-1}{4}$ L'Hopital's rule is not applicable here, as the numerator is not indeterminate at $x=-2$; the limit does not exist (it is not $+$ or $-\infty$, as the sign as $x$ approaches $-2$ from the left and right is different). CB
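Post 10's conclusion is easy to confirm numerically; here is a quick sketch in Python (my own check, not part of the original thread) probing the function from both sides of $x=-2$:

```python
# Probe f(x) = (x-2)/(x^2-4) as x approaches -2 from the left and right.
def f(x):
    return (x - 2) / (x**2 - 4)

left = [f(-2 - 10**-k) for k in range(1, 6)]   # approach from the left
right = [f(-2 + 10**-k) for k in range(1, 6)]  # approach from the right
print(left)   # increasingly large negative values: f -> -infinity
print(right)  # increasingly large positive values: f -> +infinity
```

The two one-sided sequences diverge with opposite signs, so the two-sided limit does not exist, in agreement with posts 10 and 11.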
http://d2l.ai/chapter_appendix/notation.html
# 15.1. List of Main Symbols

The main symbols used in this book are listed below.

## 15.1.1. Numbers

| Symbol | Meaning |
| --- | --- |
| $x$ | Scalar |
| $\mathbf{x}$ | Vector |
| $\mathbf{X}$ | Matrix |
| $\mathsf{X}$ | Tensor |

## 15.1.2. Sets

| Symbol | Meaning |
| --- | --- |
| $\mathcal{X}$ | Set |
| $\mathbb{R}$ | Real numbers |
| $\mathbb{R}^n$ | Vectors of real numbers in $n$ dimensions |
| $\mathbb{R}^{a \times b}$ | Matrix of real numbers with $a$ rows and $b$ columns |

## 15.1.3. Operators

| Symbol | Meaning |
| --- | --- |
| $(\cdot)^\top$ | Vector or matrix transposition |
| $\odot$ | Element-wise multiplication |
| $\lvert\mathcal{X}\rvert$ | Cardinality (number of elements) of the set $\mathcal{X}$ |
| $\|\cdot\|_p$ | $L_p$ norm |
| $\|\cdot\|$ | $L_2$ norm |
| $\sum$ | Series addition |
| $\prod$ | Series multiplication |

## 15.1.4. Functions

| Symbol | Meaning |
| --- | --- |
| $f(\cdot)$ | Function |
| $\log(\cdot)$ | Natural logarithm |
| $\exp(\cdot)$ | Exponential function |

## 15.1.5. Calculus

| Symbol | Meaning |
| --- | --- |
| $\frac{dy}{dx}$ | Derivative of $y$ with respect to $x$ |
| $\partial_{x} y$ | Partial derivative of $y$ with respect to $x$ |
| $\nabla_{\mathbf{x}} y$ | Gradient of $y$ with respect to $\mathbf{x}$ |

## 15.1.6. Probability and Statistics

| Symbol | Meaning |
| --- | --- |
| $\Pr(\cdot)$ | Probability distribution |
| $z \sim \Pr$ | Random variable $z$ obeys the probability distribution $\Pr$ |
| $\Pr(x \mid y)$ | Conditional probability of $x$ given $y$ |
| $\mathbf{E}_{x} [f(x)]$ | Expectation of $f$ with respect to $x$ |

## 15.1.7. Complexity

| Symbol | Meaning |
| --- | --- |
| $\mathcal{O}$ | Big O notation |
| $\mathcal{o}$ | Little o notation (grows much more slowly than) |
http://mathhelpforum.com/number-theory/2465-amc-12b-20-a.html
# Thread: Amc 12b #20 1. ## Amc 12b #20 Let $\displaystyle x$ be chosen at random from the interval $\displaystyle (0,1)$. What is the probability that $\displaystyle [\log_{10}(4x)]-[\log_{10}(x)]=0$? Here $\displaystyle [x]$ denotes the greatest integer that is less than or equal to $\displaystyle x$. This is multiple choice, but I don't think posting the possibilities is necessary. 2. Originally Posted by Jameson Let $\displaystyle x$ be chosen at random from the interval $\displaystyle (0,1)$. What is the probability that $\displaystyle [\log_{10}(4x)]-[\log_{10}(x)]=0$? Here $\displaystyle [x]$ denotes the greatest integer that is less than or equal to $\displaystyle x$. This is multiple choice, but I don't think posting the possibilities is necessary. I do not have much time now, but I was thinking maybe you can do this. Consider $\displaystyle [x]=[x+k],\ 0<k<1$. Then, it seems to me (I did not formally prove it) that $\displaystyle j<x<1-k+j$ are all solutions for each integer $\displaystyle j$. We want $\displaystyle [\log (4x)]-[\log x]=0$, and since $\displaystyle 0<\log 4<1$, this is $\displaystyle [\log 4+\log x]=[\log x]$ 3. Let us solve, for $\displaystyle 0<x<1$, $\displaystyle [\log 4x]=[\log x]$ We have $\displaystyle [\log 4+\log x]=[\log x]$ Thus, all solutions for $\displaystyle u=\log x$ are $\displaystyle u\in \bigcup_{j\in\mathbb{Z}}(j,1-\log 4+j]$ Thus, we see that all solutions satisfy $\displaystyle j<\log x\leq 1-\log 4+j$ i.e., $\displaystyle 10^j<x\leq 2.5\cdot 10^j$. Note, $\displaystyle j$ cannot be zero or positive because it would violate the inequality $\displaystyle 0<x<1$. Thus, $\displaystyle j=-1,-2,-3,-4,...$ all work. Thus, $\displaystyle .1<x\leq .25$ thus, length =.15 $\displaystyle .01<x\leq .025$ length=.015 $\displaystyle .001<x\leq .0025$ length=.0015 and so on "ad infinitum" (I am so cool using latin phrases).
Thus, we have the total length of the solutions to be: $\displaystyle .15+.015+.0015+\dots=.1666\ldots=1/6$ (this is a regular infinite geometric series). My point is that we can "intuitively" think of probability as the length of the successes (which are the solutions) divided by the total possibilities (which is the length of the interval). Thus, the probability is $\displaystyle \frac{1}{6}$.
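The measure-based argument can be sanity-checked with a quick Monte Carlo simulation; this is my own sketch in Python, not part of the original thread:

```python
import math
import random

# Estimate P( floor(log10(4x)) == floor(log10(x)) ) for x uniform on (0,1).
random.seed(0)
trials = 200_000
hits = 0
for _ in range(trials):
    x = random.random()
    if x == 0.0:  # log10(0) is undefined; skip the measure-zero endpoint
        continue
    if math.floor(math.log10(4 * x)) == math.floor(math.log10(x)):
        hits += 1
print(hits / trials)  # close to 1/6 ~ 0.1667
```

The empirical frequency lands near $1/6$, matching the sum of interval lengths computed above.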
http://mathhelpforum.com/pre-calculus/118120-geometric-series.html
1. ## Geometric Series Hello, I've been trying to solve this problem: "How many generations must a person go back to have at least 1000 ancestors?" I've been trying ways to do it.. but not sure how. I know that the series goes like 2 + 4 + 8 and so on, so I know my a=2 and r=2. Don't really know where to go from there though. The question says at least 1000, so 1000 isn't a value I can plug into my equation. Not asking for anyone to solve, maybe just point me in the right direction on how to start. Thank you! 2. $S_n = \frac{a(1-r^n)}{1-r}$ $1000 \leq \frac{2(1-2^n)}{1-2}$ solve for n. 3. Originally Posted by pickslides $S_n = \frac{a(1-r^n)}{1-r}$ $1000 \leq \frac{2(1-2^n)}{1-2}$ solve for n. So I got this: $2^n \geq 501$ Is that right? I don't get how I find n. 4. Given your arithmetic is correct then $2^n \geq 501$ $n \geq \log_2(501)$ or if you don't like/know logs you can use trial and error on n.
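The closed-form route sketched in the thread can be written out in a few lines of Python (my own illustration, not from the thread): solve $2(2^n-1)\geq 1000$ for the smallest integer $n$.

```python
import math

# Sum of n generations of ancestors: S_n = 2 + 4 + ... + 2**n = 2*(2**n - 1).
target = 1000
n = math.ceil(math.log2(target / 2 + 1))  # smallest n with 2*(2**n - 1) >= target
total = 2 * (2**n - 1)
print(n, total)  # 9 generations give 1022 ancestors, the first count >= 1000
```

Trial and error on $n$ gives the same answer: $n=8$ yields only $510$ ancestors, while $n=9$ yields $1022$.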
https://quantumcomputing.stackexchange.com/questions/6248/understanding-the-implementation-of-grovers-diffusion-operator-using-the-oracle
# Understanding the implementation of Grover's diffusion operator using the oracle qubit I have two gates here for the Grover diffusion operator. The first gate is completely understandable for me, so I implemented it myself after studying some papers that I read. This is the first gate (understandable for me): Recently, however, I have seen an implementation of the Grover algorithm in which the diffusion operator still uses the oracle qubit. My question is, what does this circuit do? And why does it work? ## 1 Answer Let's take a look at the part of the diffusion operator between the columns of Hadamard gates. This part is supposed to perform a conditional phase shift, giving a phase of $$-1$$ to the state $$|0...0\rangle$$ and leaving the rest of the basis states unmodified. For the first circuit, the bottom 3 wires are controls, wrapped in NOT gates, i.e., they are anti-controls: the operator is applied on the 4th wire from the bottom only if each of the bottom 3 wires is in the $$|0\rangle$$ state. The operator performed on the 4th wire in this case is described by this circuit (two of the three NOT gates in the middle cancel right away): Here are the transformations done by this circuit to a qubit in the starting state $$\alpha |0\rangle + \beta |1\rangle$$: $$\alpha |0\rangle + \beta |1\rangle \xrightarrow{\oplus} \beta |0\rangle + \alpha |1\rangle \xrightarrow{\text{H}} \beta |+\rangle + \alpha |-\rangle \xrightarrow{\oplus} \beta |+\rangle - \alpha |-\rangle \xrightarrow{\text{H}} \beta |0\rangle - \alpha |1\rangle \xrightarrow{\oplus} -\alpha |0\rangle + \beta |1\rangle$$ This is exactly what we're looking for - a $$-1$$ phase applied to the $$|0...0\rangle$$ state.
For the second circuit, the bottom 3 wires are already anti-controls, so the circuit we need to apply to the top two wires is the following: The transformation is the following: $$(\alpha |0\rangle + \beta |1\rangle) \otimes |-\rangle = \frac{1}{\sqrt2} (\alpha |0\rangle |0\rangle - \alpha |0\rangle |1\rangle + \beta |1\rangle |0\rangle - \beta |1\rangle |1\rangle) \xrightarrow{\text{CNOT}_0}$$ $$\frac{1}{\sqrt2} (\alpha |0\rangle \color{blue}{|1\rangle} - \alpha |0\rangle \color{blue}{|0\rangle} + \beta |1\rangle |0\rangle - \beta |1\rangle |1\rangle) = (\color{blue}{-} \alpha |0\rangle + \beta |1\rangle) \otimes |-\rangle$$ This transformation applies the same conditional phase shift to the bottom wire and does not modify the $$|-\rangle$$ state of the top wire, so it turns out to be equivalent to the first circuit.
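As a numerical sanity check (my own sketch, assuming the single-qubit gate sequence $\oplus$, H, $\oplus$, H, $\oplus$ read off the transformation chain in the answer), the product of these five gates is exactly $-Z$, i.e. a $-1$ phase on $|0\rangle$ and no change to $|1\rangle$:

```python
import numpy as np

# Single-qubit gates.
X = np.array([[0, 1], [1, 0]], dtype=float)
H = np.array([[1, 1], [1, -1]], dtype=float) / np.sqrt(2)

# Gates act on the state left to right (first X first), so the combined
# operator is the matrix product in reverse order: U = X H X H X.
U = X @ H @ X @ H @ X

# U maps a|0> + b|1>  to  -a|0> + b|1>, i.e. U equals -Z = diag(-1, 1).
print(np.round(U, 12))
```

This agrees with the algebraic chain above, since $HXH = Z$ and $XZX = -Z$.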
http://www.phys.virginia.edu/Announcements/talk-list.asp?SELECT=SID:76
# Colloquia Colloquium Friday, February 4, 2000 4:00 PM Physics Building, Room 204 Note special time. ## "Breaking a one-dimensional chain: fracture in 1 + 1 dimensions" Eugene Kolomeisky, University of Virginia [Host: Joseph Poon] ABSTRACT: The breaking rate of an atomic chain stretched at zero temperature by a constant force can be calculated in a quasiclassical approximation by finding the localized solutions ("bounces") of the equations of classical dynamics in imaginary time. We show that this theory is related to the critical cracks of stressed solids, because the world lines of the atoms in the chain form a two-dimensional crystal, and the bounce is a crack configuration in (unstable) mechanical equilibrium. Thus the tunneling time, Action, and the breaking rate in the limit of small forces are determined by the classical results of Griffith. For the limit of large forces we give an exact bounce solution that describes the quantum fracture and classical crack close to the limit of mechanical stability. This limit can be viewed as a critical phenomenon for which we establish a Levanyuk-Ginzburg criterion of weakness of fluctuations, and propose a scaling argument for the critical regime. The post-tunneling dynamics is understood by the analytic continuation of the bounce solutions to real time.
https://en.wikisource.org/wiki/Translation:Attempt_of_a_Theory_of_Electrical_and_Optical_Phenomena_in_Moving_Bodies/Section_II
Translation:Attempt of a Theory of Electrical and Optical Phenomena in Moving Bodies/Section II Electric phenomena in ponderable bodies that are moving with constant velocity through the stationary aether. Transformation of the fundamental equations. § 19. From now on it will be assumed that the bodies to be considered are moving at a steady velocity of translation ${\displaystyle {\mathfrak {p}}}$, by which, in almost all applications, we shall understand the speed of the earth in its motion around the sun. It would be interesting at first to further develop the theory for stationary bodies, but for brevity's sake let us immediately turn to the more general case. Besides, one may still set ${\displaystyle {\mathfrak {p}}=0}$. The treatment of the problems that are now coming into play is simplest when, instead of the coordinate system used above, we introduce another one which is rigidly connected with ponderable matter and therefore shares its displacement. While the coordinates of a point with respect to the fixed system were called x, y, z, let those which refer to the moving system, and which I call the relative coordinates, be denoted by (x), (y), (z) for the time being. Until now, all the variable parameters were seen as functions of x, y, z, t; from now on ${\displaystyle {\mathfrak {d}}_{x}}$, ${\displaystyle {\mathfrak {d}}_{y}}$, etc. shall be seen as functions of (x), (y), (z) and t. Under a fixed point, we now understand one that has a steady position with respect to the new axes; in the same way, by rest or motion of a physical particle we shall mean the relative rest or the relative motion in relation to ponderable matter. We shall have to deal with ions that move in this sense of the word as soon as the displaced matter is the seat of electric motions. By ${\displaystyle {\mathfrak {v}}}$ we shall not represent the real velocity, but the velocity of the previously mentioned relative motion.
The real velocity is thus ${\displaystyle {\mathfrak {p}}+{\mathfrak {v}}{,}}$ and hereby ${\displaystyle {\mathfrak {v}}}$ is to be replaced in equations (4) and (V). In addition, we have, instead of the derivatives with respect to x, y, z and t, to establish such with respect to (x), (y), (z) and t. The first mentioned derivative I denote by ${\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}},\ \left({\frac {\partial }{\partial t}}\right)_{1}{,}}$ however, the latter by ${\displaystyle {\frac {\partial }{\partial (x)}},\ {\frac {\partial }{\partial (y)}},\ {\frac {\partial }{\partial (z)}},\ \left({\frac {\partial }{\partial t}}\right)_{2}.}$ Now we have, by application to an arbitrary function, ${\displaystyle {\frac {\partial }{\partial x}}={\frac {\partial }{\partial (x)}},\ {\frac {\partial }{\partial y}}={\frac {\partial }{\partial (y)}},\ {\frac {\partial }{\partial z}}={\frac {\partial }{\partial (z)}}{,}}$ ${\displaystyle \left({\frac {\partial }{\partial t}}\right)_{1}=\left({\frac {\partial }{\partial t}}\right)_{2}-{\mathfrak {p}}_{x}{\frac {\partial }{\partial (x)}}-{\mathfrak {p}}_{y}{\frac {\partial }{\partial (y)}}-{\mathfrak {p}}_{z}{\frac {\partial }{\partial (z)}}.}$ By that it follows, that we can write for ${\displaystyle Div\ {\mathfrak {A}}}$ the expression ${\displaystyle {\frac {\partial {\mathfrak {A}}_{x}}{\partial (x)}}+{\frac {\partial {\mathfrak {A}}_{y}}{\partial (y)}}+{\frac {\partial {\mathfrak {A}}_{z}}{\partial (z)}}{,}}$ and for the components of ${\displaystyle Rot\ {\mathfrak {A}}}$ ${\displaystyle {\frac {\partial {\mathfrak {A}}_{z}}{\partial (y)}}-{\frac {\partial {\mathfrak {A}}_{y}}{\partial (z)}}{\text{, etc.}}}$ The expressions ${\displaystyle Div\ {\mathfrak {A}}}$ and ${\displaystyle Rot\ {\mathfrak {A}}}$ have still the meaning given in § 4, g and h, if, after having abandoned the old coordinates one and for all, for simplification we don't indicate the 
new ones with (x), (y), (z), but with x, y, z. We also want, after we have passed to the new coordinates, to use the sign ${\displaystyle {\tfrac {\partial }{\partial t}}}$ instead of ${\displaystyle \left({\tfrac {\partial }{\partial t}}\right)_{2}}$ for a differentiation with respect to time at constant relative coordinates, so that ${\displaystyle \left({\frac {\partial }{\partial t}}\right)_{1}={\frac {\partial }{\partial t}}-{\mathfrak {p}}_{x}{\frac {\partial }{\partial x}}-{\mathfrak {p}}_{y}{\frac {\partial }{\partial y}}-{\mathfrak {p}}_{z}{\frac {\partial }{\partial z}}}$ (18) The derivatives with respect to time, which occur in the basic equations (I) - (V), are all of the kind indicated by ${\displaystyle \left({\tfrac {\partial }{\partial t}}\right)_{1}}$. We will maintain this sign as an abbreviation for the longer term (18). In contrast, a point over a letter shall henceforth, just like ${\displaystyle \partial /\partial t}$, indicate a differentiation with respect to time at constant relative coordinates. Thus the terms ${\displaystyle {\dot {\mathfrak {d}}}}$ and ${\displaystyle {\dot {\mathfrak {H}}}}$ in (4) and (IV) may not be left unaltered.
By ${\displaystyle {\dot {\mathfrak {d}}}}$, for example, we understood a vector with components ${\displaystyle \left({\frac {\partial {\mathfrak {d}}_{x}}{\partial t}}\right)_{1}{\text{, etc.,}}}$ or ${\displaystyle \left({\frac {\partial }{\partial t}}-{\mathfrak {p}}_{x}{\frac {\partial }{\partial x}}-{\mathfrak {p}}_{y}{\frac {\partial }{\partial y}}-{\mathfrak {p}}_{z}{\frac {\partial }{\partial z}}\right){\mathfrak {d}}_{x}{\text{, etc.}}}$ We can suitably write this vector ${\displaystyle \left({\frac {\partial {\mathfrak {d}}}{\partial t}}\right)_{1}{,}}$ while ${\displaystyle {\dot {\mathfrak {d}}}}$ or ${\displaystyle {\frac {\partial {\mathfrak {d}}}{\partial t}}}$ will mean the vector with components ${\displaystyle {\frac {\partial {\mathfrak {d}}_{x}}{\partial t}}{\text{, etc.}}}$ Referred to the system of axes associated with ponderable matter, the fundamental equations finally become ${\displaystyle Div\ {\mathfrak {d}}=\rho {,}}$ (Ia) ${\displaystyle {\mathfrak {S}}=\rho ({\mathfrak {p}}+{\mathfrak {v}})+\left({\frac {\partial {\mathfrak {d}}}{\partial t}}\right)_{1}{,}}$ (4a) ${\displaystyle Div\ {\mathfrak {H}}=0{,}}$ (IIa) ${\displaystyle Rot\ {\mathfrak {H}}=4\pi {\mathfrak {S}}{,}}$ (IIIa) ${\displaystyle -4\pi V^{2}Rot\ {\mathfrak {d}}=\left({\frac {\partial {\mathfrak {H}}}{\partial t}}\right)_{1}{,}}$ (IVa) ${\displaystyle {\mathfrak {E}}=4\pi V^{2}{\mathfrak {d}}+[{\mathfrak {p}}.{\mathfrak {H}}]+[{\mathfrak {v.H}}].}$ (Va) § 20. For some purposes, a different form of some equations is more appropriate.
The first of the three (IV) summarized relations is namely ${\displaystyle -4\pi V^{2}\left({\frac {\partial {\mathfrak {d}}_{z}}{\partial y}}-{\frac {\partial {\mathfrak {d}}_{y}}{\partial z}}\right)={\frac {\partial {\mathfrak {H}}_{x}}{\partial t}}-{\mathfrak {p}}_{x}{\frac {\partial {\mathfrak {H}}_{x}}{\partial x}}-{\mathfrak {p}}_{y}{\frac {\partial {\mathfrak {H}}_{x}}{\partial y}}-{\mathfrak {p}}_{z}{\frac {\partial {\mathfrak {H}}_{x}}{\partial z}}{,}}$ where, by equation (IIa), we can write for the last three members ${\displaystyle \left({\mathfrak {p}}_{x}{\frac {\partial {\mathfrak {H}}_{y}}{\partial y}}-{\mathfrak {p}}_{y}{\frac {\partial {\mathfrak {H}}_{x}}{\partial y}}\right)-\left({\mathfrak {p}}_{z}{\frac {\partial {\mathfrak {H}}_{x}}{\partial z}}-{\mathfrak {p}}_{x}{\frac {\partial {\mathfrak {H}}_{z}}{\partial z}}\right){,}}$ which is nothing else than the first component of ${\displaystyle Rot\ [{\mathfrak {p.H}}].}$ Accordingly, we obtain instead of (IVa) ${\displaystyle Rot\ \left\{4\pi V^{2}{\mathfrak {d}}+[{\mathfrak {p.H}}]\right\}=-{\dot {\mathfrak {H}}}.}$ Furthermore, the current ${\displaystyle {\mathfrak {S}}}$ can be entirely eliminated. 
The first of equations (IIIa) becomes, when we consider (4a) and (Ia), ${\displaystyle {\frac {\partial {\mathfrak {H}}_{z}}{\partial y}}-{\frac {\partial {\mathfrak {H}}_{y}}{\partial z}}=4\pi \rho \left({\mathfrak {p}}_{x}+{\mathfrak {v}}_{x}\right)+4\pi \left({\frac {\partial {\mathfrak {d}}_{x}}{\partial t}}-{\mathfrak {p}}_{x}{\frac {\partial {\mathfrak {d}}_{x}}{\partial x}}-{\mathfrak {p}}_{y}{\frac {\partial {\mathfrak {d}}_{x}}{\partial y}}-\right.}$ ${\displaystyle \left.-{\mathfrak {p}}_{z}{\frac {\partial {\mathfrak {d}}_{x}}{\partial z}}\right)=4\pi \rho {\mathfrak {v}}_{x}+4\pi \left\{\left({\mathfrak {p}}_{x}{\frac {\partial {\mathfrak {d}}_{y}}{\partial y}}-{\mathfrak {p}}_{y}{\frac {\partial {\mathfrak {d}}_{x}}{\partial y}}\right)-\left({\mathfrak {p}}_{z}{\frac {\partial {\mathfrak {d}}_{x}}{\partial z}}-\right.\right.}$ ${\displaystyle \left.\left.-{\mathfrak {p}}_{x}{\frac {\partial {\mathfrak {d}}_{z}}{\partial z}}\right)\right\}+4\pi {\frac {\partial {\mathfrak {d}}_{x}}{\partial t}}.}$ By that it follows, if we define a new vector ${\displaystyle {\mathfrak {H}}'}$ by means of the equation ${\displaystyle {\mathfrak {H}}'={\mathfrak {H}}-4\pi [{\mathfrak {p.d}}]{,}}$ thus ${\displaystyle Rot\ {\mathfrak {H}}'=4\pi \rho {\mathfrak {v}}+4\pi {\dot {\mathfrak {d}}}.}$ If we now introduce the sign ${\displaystyle {\mathfrak {F}}}$ for the electric force-action on stationary ions, we obtain the following set of formulas ${\displaystyle Div\ {\mathfrak {d}}=\rho {,}}$ (Ib) ${\displaystyle Div\ {\mathfrak {H}}=0{,}}$ (IIb) ${\displaystyle Rot\ {\mathfrak {H}}'=4\pi \rho {\mathfrak {v}}+4\pi {\dot {\mathfrak {d}}}{,}}$ (IIIb) ${\displaystyle Rot\ {\mathfrak {F}}=-{\dot {\mathfrak {H}}}{,}}$ (IVb) ${\displaystyle {\mathfrak {F}}=4\pi V^{2}{\mathfrak {d}}+[{\mathfrak {p.H}}]{,}}$ (Vb) ${\displaystyle {\mathfrak {H}}'={\mathfrak {H}}-4\pi [{\mathfrak {p.d}}]{,}}$ (VIb) ${\displaystyle {\mathfrak {E}}={\mathfrak {F}}+[{\mathfrak {v.H}}].}$ (VIIb) § 21. 
From equations (Ia) - (Va) (§ 19) some formulas can also be derived, each of which contains only one of the magnitudes ${\displaystyle {\mathfrak {d}}_{x}}$, ${\displaystyle {\mathfrak {d}}_{y}}$, ${\displaystyle {\mathfrak {d}}_{z}}$, ${\displaystyle {\mathfrak {H}}_{x}}$, ${\displaystyle {\mathfrak {H}}_{y}}$, ${\displaystyle {\mathfrak {H}}_{z}}$. First, it follows from (IVa) ${\displaystyle -4\pi V^{2}Rot\ Rot\ {\mathfrak {d}}=Rot\left({\frac {\partial {\mathfrak {H}}}{\partial t}}\right)_{1}=\left({\frac {\partial Rot\ {\mathfrak {H}}}{\partial t}}\right)_{1}.}$ If we consider here what has been said in § 4, h, as well as the relations (Ia), (IIIa) and (4a), we arrive at the three formulas ${\displaystyle V^{2}\Delta {\mathfrak {d}}_{x}-\left({\frac {\partial ^{2}{\mathfrak {d}}_{x}}{\partial t^{2}}}\right)_{1}=V^{2}{\frac {\partial \rho }{\partial x}}+\left({\frac {\partial }{\partial t}}\right)_{1}\left\{\rho \left({\mathfrak {p}}_{x}+{\mathfrak {v}}_{x}\right)\right\}{\text{, etc.}}}$ (A) Similarly, we find ${\displaystyle V^{2}\Delta {\mathfrak {H}}_{x}-\left({\frac {\partial ^{2}{\mathfrak {H}}_{x}}{\partial t^{2}}}\right)_{1}=4\pi V^{2}\left[{\frac {\partial }{\partial z}}\left\{\rho \left({\mathfrak {p}}_{y}+{\mathfrak {v}}_{y}\right)\right\}-\right.}$ ${\displaystyle \left.-{\frac {\partial }{\partial y}}\left\{\rho \left({\mathfrak {p}}_{z}+{\mathfrak {v}}_{z}\right)\right\}\right]{\text{, etc.}}}$ (B) The last members of these six equations are completely known once we know how the ions are moving. Application to electrostatics. § 22. We want to calculate by which forces the ions act on one another, when all of them are at rest with respect to ponderable matter. In this case a state occurs, where at every point ${\displaystyle {\mathfrak {d}}}$ and ${\displaystyle {\mathfrak {H}}}$ are independent of time.
We have ${\displaystyle \left({\frac {\partial }{\partial t}}\right)_{1}=-\left({\mathfrak {p}}_{x}{\frac {\partial }{\partial x}}+{\mathfrak {p}}_{y}{\frac {\partial }{\partial y}}+{\mathfrak {p}}_{z}{\frac {\partial }{\partial z}}\right){,}}$ (19) and equations (A) and (B) will be reduced, when for brevity's sake the operation ${\displaystyle \Delta -{\frac {1}{V^{2}}}\left({\mathfrak {p}}_{x}{\frac {\partial }{\partial x}}+{\mathfrak {p}}_{y}{\frac {\partial }{\partial y}}+{\mathfrak {p}}_{z}{\frac {\partial }{\partial z}}\right)^{2}}$ is indicated by ${\displaystyle \Delta '}$, to ${\displaystyle \Delta '{\mathfrak {d}}_{x}={\frac {\partial \rho }{\partial x}}-{\frac {{\mathfrak {p}}_{x}}{V^{2}}}\left({\mathfrak {p}}_{x}{\frac {\partial \rho }{\partial x}}+{\mathfrak {p}}_{y}{\frac {\partial \rho }{\partial y}}+{\mathfrak {p}}_{z}{\frac {\partial \rho }{\partial z}}\right){\text{, etc.}}}$ (A') and ${\displaystyle \Delta '{\mathfrak {H}}_{x}=4\pi \left({\mathfrak {p}}_{y}{\frac {\partial \rho }{\partial z}}-{\mathfrak {p}}_{z}{\frac {\partial \rho }{\partial y}}\right){\text{, etc.}}}$ (B') To fulfill these conditions, we determine a function ${\displaystyle \omega }$ by ${\displaystyle \Delta '\omega =\rho }$ and put ${\displaystyle {\mathfrak {d}}_{x}={\frac {\partial \omega }{\partial x}}-{\frac {{\mathfrak {p}}_{x}}{V^{2}}}\left({\mathfrak {p}}_{x}{\frac {\partial \omega }{\partial x}}+{\mathfrak {p}}_{y}{\frac {\partial \omega }{\partial y}}+{\mathfrak {p}}_{z}{\frac {\partial \omega }{\partial z}}\right){\text{, etc.,}}}$ (20) ${\displaystyle {\mathfrak {H}}_{x}=4\pi \left({\mathfrak {p}}_{y}{\frac {\partial \omega }{\partial z}}-{\mathfrak {p}}_{z}{\frac {\partial \omega }{\partial y}}\right){\text{, etc.,}}}$ (21) i.e., values that really satisfy the fundamental equations (Ia) - (IVa). 
From (Va) it also follows ${\displaystyle {\mathfrak {E}}_{x}=4\pi \left(V^{2}-{\mathfrak {p}}^{2}\right){\frac {\partial \omega }{\partial x}}{\text{, etc.,}}}$ (22) so that the sought forces are found. Without prejudice to generality, we may assume that the translation happens in the direction of the x-axis. It is then ${\displaystyle {\mathfrak {p}}_{y}={\mathfrak {p}}_{z}=0}$, and the formula for the determination of ${\displaystyle \omega }$ is transformed into ${\displaystyle \left(1-{\frac {{\mathfrak {p}}^{2}}{V^{2}}}\right){\frac {\partial ^{2}\omega }{\partial x^{2}}}+{\frac {\partial ^{2}\omega }{\partial y^{2}}}+{\frac {\partial ^{2}\omega }{\partial z^{2}}}=\rho .}$ (23) § 23. To clearly define the meaning of the above formulas, we will compare the considered system ${\displaystyle S_{1}}$ with a second one ${\displaystyle S_{2}}$. The latter shall not move, and it arises from ${\displaystyle S_{1}}$ by increasing all the dimensions that have the direction of the x-axis (and therefore the relevant dimensions of the ions as well) in the ratio ${\displaystyle \textstyle {\sqrt {V^{2}-{\mathfrak {p}}^{2}}}}$ to ${\displaystyle V}$; or: between the coordinates x, y, z of a point of ${\displaystyle S_{1}}$ and the coordinates ${\displaystyle x',y',z'}$ of the corresponding point of ${\displaystyle S_{2}}$ we let the relations ${\displaystyle x=x'{\sqrt {1-{\frac {{\mathfrak {p}}^{2}}{V^{2}}}}},\ y=y',\ z=z'}$ (24) hold. In addition, mutually corresponding volume elements, and therefore also the ions, shall have the same charges in ${\displaystyle S_{1}}$ and ${\displaystyle S_{2}}$.
If we distinguish by a prime all magnitudes relating to the second system, then ${\displaystyle \rho '=\rho {\sqrt {1-{\frac {{\mathfrak {p}}^{2}}{V^{2}}}}}{,}}$ and ${\displaystyle {\frac {\partial ^{2}\omega '}{\partial x'^{2}}}+{\frac {\partial ^{2}\omega '}{\partial y'^{2}}}+{\frac {\partial ^{2}\omega '}{\partial z'^{2}}}=\rho '=\rho {\sqrt {1-{\frac {{\mathfrak {p}}^{2}}{V^{2}}}}}.}$ The equation (23) can now be written in the form ${\displaystyle {\frac {\partial ^{2}\omega }{\partial x'^{2}}}+{\frac {\partial ^{2}\omega }{\partial y'^{2}}}+{\frac {\partial ^{2}\omega }{\partial z'^{2}}}=\rho }$ so that ${\displaystyle \omega ={\frac {\omega '}{\sqrt {1-{\frac {{\mathfrak {p}}^{2}}{V^{2}}}}}}{,}}$ and since in the second system ${\displaystyle {\mathfrak {E'}}_{x}=4\pi V^{2}{\frac {\partial \omega '}{\partial x'}}{\text{, etc.}}}$ thus ${\displaystyle {\mathfrak {E}}_{x}={\mathfrak {E'}}_{x},\ {\mathfrak {E}}_{y}={\sqrt {1-{\frac {{\mathfrak {p}}^{2}}{V^{2}}}}}{\mathfrak {E'}}_{y},\ {\mathfrak {E}}_{z}={\sqrt {1-{\frac {{\mathfrak {p}}^{2}}{V^{2}}}}}{\mathfrak {E'}}_{z}.}$ Since the charges in ${\displaystyle S_{1}}$ and ${\displaystyle S_{2}}$ are equal, the same relations that exist between the components of ${\displaystyle {\mathfrak {E}}}$ and ${\displaystyle {\mathfrak {E}}'}$ also hold between the components of the force acting on an ion. If in the second system ${\displaystyle {\mathfrak {E}}'=0}$ at certain places, then ${\displaystyle {\mathfrak {E}}}$ vanishes at the corresponding points of the first system. § 24. Several implications of this theorem are obvious. From ordinary electrostatics we know, for example, that an excess of positive (or negative) ions can be distributed over a conductor, namely over its surface ${\displaystyle \Sigma }$, so that in the interior no electric force is acting.
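The transformation claimed here, that the substitution (24) turns the operator of equation (23) into the ordinary Poisson operator in the primed coordinates, can be checked symbolically. The following is a small sketch (using sympy; the trial profile sin(x) is an arbitrary illustration, not taken from the text):

```python
import sympy as sp

p, V = sp.symbols("p V", positive=True)
x, xp = sp.symbols("x x'")
k = sp.sqrt(1 - p**2 / V**2)   # the ratio appearing in eq. (24), x = k*x'

# Any concrete trial profile omega(x) will do for the check.
omega = sp.sin(x)

# x-term of the operator in eq. (23), evaluated at x = k*x':
lhs = ((1 - p**2 / V**2) * sp.diff(omega, x, 2)).subs(x, k * xp)

# Second derivative of omega(k*x') with respect to the primed coordinate,
# i.e. the x'-term of the ordinary Laplacian in the primed frame:
rhs = sp.diff(omega.subs(x, k * xp), xp, 2)

assert sp.simplify(lhs - rhs) == 0
```

The y- and z-terms are unchanged by (24), so the full operator of (23) indeed goes over into the plain Laplacian, which is what licenses the comparison with the electrostatic system S2.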
If we take this distribution for the system ${\displaystyle S_{2}}$ and derive from it a system ${\displaystyle S_{1}}$ by the above-discussed transformation, then in the latter too an excess of positive ions exists only at a certain surface ${\displaystyle \Sigma }$, while at all interior points the electric force ${\displaystyle {\mathfrak {E}}}$ vanishes. The fact that an electric charge is located at the surface of a conductor is thus not changed by the translation of the ponderable matter. Similar considerations apply to two or more bodies. If a conductor C faces a charged body K, then, according to a known theorem, there always exists a certain distribution of charge on the surface of C which, together with K, exerts no action on the ions in the interior of the conductor. This theorem remains valid if the ponderable matter is moving, and it is even still permissible to assume that, under the influence of K, an "induced" charge forms by itself upon C, which just cancels the effect of K on the interior points. Since by (22) the components of ${\displaystyle {\mathfrak {E}}}$ are proportional to the derivatives of ω, we can also say that the inducing and induced charges together produce a constant ω at all points of C. It then follows by means of equations (20), (21) and (Va) that a moving ion in the interior of C also experiences no force-action from the two charges. Finally, it should be noted that by our formulas the distribution of a charge over a given conductor, as well as the attraction or repulsion of charged bodies, must be changed by the motion of the earth. But this influence is limited to the second order, if the fraction ${\displaystyle {\mathfrak {p}}/V}$ is called a magnitude of first order and accordingly the fraction ${\displaystyle {\mathfrak {p}}^{2}/V^{2}}$ a magnitude of second order.
Since ${\displaystyle {\mathfrak {p}}/V=1/10000}$, we cannot hope, apart from some very special cases, to find in electrical and optical phenomena an influence of the earth's motion that depends on ${\displaystyle {\mathfrak {p}}^{2}/V^{2}}$. The only thing that could be observed in relation to bodies at rest on the earth is the magnetic force (21). At first glance, we might expect a corresponding effect on current elements. We will return to this question in § 26. Values of ${\displaystyle {\mathfrak {d}}}$ and ${\displaystyle {\mathfrak {H}}}$ for a stationary current. § 25. On the basis of equations (A) and (B) we again tackle the problem treated in § 11. We consider, as there, the mean values, and take into account that for them the simplification (19) is permitted in stationary states; moreover, we assume at first that the conductors carry no appreciable charge, so that ${\displaystyle {\bar {\rho }}=0}$. It is natural to interpret the vector ${\displaystyle {\overline {\rho {\mathfrak {v}}}}}$ as a "current". We think of it as solenoidally distributed and denote it by ${\displaystyle {\bar {\mathfrak {S}}}}$, although for the time being it remains undecided whether this is also the mean value of the vector occurring in (4a).
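The orders of magnitude quoted here are easy to reproduce numerically. In this sketch only the ratio p/V = 1/10000 is taken from the text; the absolute speeds are the usual round values for the earth's orbital velocity and the speed of light:

```python
# Round figures: earth's orbital speed ~30 km/s, speed of light ~300000 km/s.
p = 30.0        # km/s, translation of the earth
V = 300_000.0   # km/s, speed of light

first_order = p / V            # a "magnitude of first order"
second_order = (p / V) ** 2    # a "magnitude of second order"

assert abs(first_order - 1e-4) < 1e-12   # the 1/10000 of the text
assert abs(second_order - 1e-8) < 1e-16  # far below 19th-century measurement accuracy
```

A relative effect of one part in a hundred million is why second-order influences of the earth's motion were considered hopeless to detect with the instruments of the time.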
We now derive from (A) and (B) ${\displaystyle V^{2}\Delta '{\bar {\mathfrak {d}}}_{x}=-\left({\mathfrak {p}}_{x}{\frac {\partial }{\partial x}}+{\mathfrak {p}}_{y}{\frac {\partial }{\partial y}}+{\mathfrak {p}}_{z}{\frac {\partial }{\partial z}}\right){\bar {\mathfrak {S}}}_{x}{\text{, etc.,}}}$ ${\displaystyle \Delta '{\overline {{\mathfrak {H}}_{x}}}=4\pi \left({\frac {\partial {\overline {\mathfrak {S}}}_{y}}{\partial z}}-{\frac {\partial {\overline {\mathfrak {S}}}_{z}}{\partial y}}\right){\text{, etc.}}}$ If we thus determine the three auxiliary magnitudes ${\displaystyle \chi _{x}}$, ${\displaystyle \chi _{y}}$, ${\displaystyle \chi _{z}}$[1] by means of the equations ${\displaystyle \Delta '\chi _{x}={\overline {\mathfrak {S}}}_{x},\ \Delta '\chi _{y}={\overline {\mathfrak {S}}}_{y},\ \Delta '\chi _{z}={\overline {\mathfrak {S}}}_{z}{,}}$ then everywhere we have ${\displaystyle {\overline {{\mathfrak {d}}_{x}}}=-{\frac {1}{V^{2}}}\left({\mathfrak {p}}_{x}{\frac {\partial }{\partial x}}+{\mathfrak {p}}_{y}{\frac {\partial }{\partial y}}+{\mathfrak {p}}_{z}{\frac {\partial }{\partial z}}\right)\chi _{x}{\text{, etc.,}}}$ (25) ${\displaystyle {\overline {{\mathfrak {H}}_{x}}}=4\pi \left({\frac {\partial \chi _{y}}{\partial z}}-{\frac {\partial \chi _{z}}{\partial y}}\right){\text{, etc.,}}}$ (26) and, by (Va), for the electric force acting on stationary ions, ${\displaystyle {\overline {{\mathfrak {E}}_{x}}}=-4\pi {\frac {\partial }{\partial x}}\left({\mathfrak {p}}_{x}\chi _{x}+{\mathfrak {p}}_{y}\chi _{y}+{\mathfrak {p}}_{z}\chi _{z}\right){\text{, etc.}}}$ (27) At first glance it therefore seems as if a current streaming through a conductor acts on a stationary ion with a force of first order. On closer reflection, however, we find that the force (27) is just compensated by another force.
The values (27) are in fact in perfect agreement with the expressions (22), if we substitute ${\displaystyle \omega =-{\frac {{\mathfrak {p}}_{x}\chi _{x}+{\mathfrak {p}}_{y}\chi _{y}+{\mathfrak {p}}_{z}\chi _{z}}{V^{2}-{\mathfrak {p}}^{2}}}}$ (28) By § 22, ω would belong to an electric charge whose density is ${\displaystyle \rho =\Delta '\omega {,}}$ or, by the given formulas, ${\displaystyle \rho =-{\frac {{\mathfrak {p}}_{x}{\overline {{\mathfrak {S}}_{x}}}+{\mathfrak {p}}_{y}{\overline {{\mathfrak {S}}_{y}}}+{\mathfrak {p}}_{z}{\overline {{\mathfrak {S}}_{z}}}}{V^{2}-{\mathfrak {p}}^{2}}}}$ (29) Let us imagine for a moment that the current does not exist, but that there is a charge with the average density ρ. This charge would of course exist only in the conductor, and its total sum would be zero, as follows from (29) and ${\displaystyle Div\ {\overline {\mathfrak {S}}}=0}$ Obviously this ion distribution, left to itself, would soon vanish. This can also be expressed by saying that the charge, by its action on resting ions, would set these in motion, and that therefore eventually another charge with the average density -ρ would arise besides it, namely ${\displaystyle {\frac {{\mathfrak {p}}_{x}{\overline {{\mathfrak {S}}_{x}}}+{\mathfrak {p}}_{y}{\overline {{\mathfrak {S}}_{y}}}+{\mathfrak {p}}_{z}{\overline {{\mathfrak {S}}_{z}}}}{V^{2}-{\mathfrak {p}}^{2}}}}$ Since the current that we considered initially acts on resting ions exactly as the charge (29), it will also generate such a charge A after a short time; this cancels the effects on stationary ions, not only at outer points but also, at least with respect to the mean values of the forces, in the interior of the conductor. I want to call this charge A the compensation charge. Once it has been generated, the conductor causes no motion of electricity in a neighboring body.
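The stated agreement between (27) and (22) under the substitution (28) is a one-line piece of algebra, and can be checked symbolically. In this sketch the symbol `P2` stands for p² and `pchi(x)` abbreviates the sum p_x·χ_x + p_y·χ_y + p_z·χ_z; both shorthands are introduced here and are not notation from the text:

```python
import sympy as sp

x = sp.Symbol("x")
V, P2 = sp.symbols("V P2", positive=True)   # P2 plays the role of p^2
pchi = sp.Function("pchi")(x)               # shorthand for p_x*chi_x + p_y*chi_y + p_z*chi_z

omega = -pchi / (V**2 - P2)                           # eq. (28)
E_from_22 = 4*sp.pi*(V**2 - P2) * sp.diff(omega, x)   # eq. (22), x-component
E_from_27 = -4*sp.pi * sp.diff(pchi, x)               # eq. (27), x-component

# The prefactor (V^2 - p^2) of (22) cancels the denominator of (28),
# leaving exactly the force (27).
assert sp.simplify(E_from_22 - E_from_27) == 0
```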
A stationary current in a wire moving with the earth therefore exerts no inductive action on a circuit which is also at rest with respect to the earth, regardless of the earth's motion[2]. It should now be noted that in the finally occurring state of the system, ρ and ${\displaystyle {\mathfrak {d}}}$ have certain values of order ${\displaystyle {\mathfrak {p}}}$. If magnitudes of second order are neglected, it then really follows from (4a) that ${\displaystyle {\overline {\mathfrak {S}}}={\overline {\rho {\mathfrak {v}}}}.}$ Interaction between a charged body K and a conductor. § 26. After the foregoing we have to assume that in the conductor, besides the current ${\displaystyle {\overline {\mathfrak {S}}}}$, there exists the compensation charge A, and also (at the surface of the conductor) the electrostatic induction charge B caused by K. For simplicity, we imagine that ${\displaystyle {\overline {\mathfrak {S}}}}$, ${\displaystyle A}$ and ${\displaystyle B}$ co-exist as independent ion systems[3]. Each of the four systems ${\displaystyle {\overline {\mathfrak {S}}}}$, ${\displaystyle A}$, ${\displaystyle B}$ and ${\displaystyle K}$ now impresses a particular state on the aether, and thereby acts on each of the others. To indicate these actions briefly, we shall write ${\displaystyle \left({\overline {\mathfrak {S}}},K\right)}$ for the action exerted, for example, by ${\displaystyle {\overline {\mathfrak {S}}}}$ on ${\displaystyle K}$; here we have to note that ${\displaystyle \left({\overline {\mathfrak {S}}},K\right)}$ and ${\displaystyle \left(K,{\overline {\mathfrak {S}}}\right)}$ are perhaps not equal and opposite, and that actions such as ${\displaystyle \left({\overline {\mathfrak {S}}},{\overline {\mathfrak {S}}}\right)}$ may also exist, namely forces which act on one of the ion systems in consequence of changes of state in the aether that were caused by that system itself.
In easily understandable symbols we can now write for the total action on K ${\displaystyle (K,K)+(B,K)+({\mathfrak {\overline {S}}},K)+(A,K){,}}$ which, however, since by § 25 ${\displaystyle ({\overline {\mathfrak {S}}},K)+(A,K)=0}$ is reduced to the first two terms and thus becomes independent of the current. On the other hand, the forces which act on the conductor can be represented by an expression consisting of 12 members, since the actions of ${\displaystyle K}$, ${\displaystyle {\overline {\mathfrak {S}}}}$, ${\displaystyle A}$ and ${\displaystyle B}$ on each of ${\displaystyle {\overline {\mathfrak {S}}}}$, ${\displaystyle A}$ and ${\displaystyle B}$ have to be considered. Now ${\displaystyle (K,{\overline {\mathfrak {S}}})+(B,{\overline {\mathfrak {S}}})=0,\ (K,A)+(B,A)=0{,}}$ so that of the aforementioned expression only ${\displaystyle (K,B)+(B,B)+(A,{\overline {\mathfrak {S}}})+({\overline {\mathfrak {S}}},{\overline {\mathfrak {S}}}).}$ (30) remains. The forces represented by the first two members would also exist if ${\displaystyle {\overline {\mathfrak {S}}}=0}$, and the last two members are independent of the charged body K. An action of K on the conductor as such does not exist. Moreover, in each of the four members of (30), the part that depends on ${\displaystyle {\mathfrak {p}}}$ is of second order. For ${\displaystyle (K,B)+(B,B)}$ we already know this, since it represents an electrostatic effect. ${\displaystyle (A,{\overline {\mathfrak {S}}})}$ and ${\displaystyle ({\overline {\mathfrak {S}}},{\overline {\mathfrak {S}}})}$, however, represent forces acting on a current in which the mean electric density is zero. As can be seen from (Va), such forces are determined by the value of ${\displaystyle {\mathfrak {H}}}$ belonging to the acting system.
Inasmuch as the ${\displaystyle {\mathfrak {H}}}$ belonging to ${\displaystyle {\overline {\mathfrak {S}}}}$ depends on ${\displaystyle {\mathfrak {p}}}$, it is of second order (§ 25), and the compensation charge A produces by its velocity ${\displaystyle {\mathfrak {p}}}$ only a magnetic force of second order, since its density already contains the factor ${\displaystyle {\mathfrak {p}}/V}$. Electrodynamic actions. § 27. The question how these actions are influenced by the earth's motion can now easily be answered. If we denote the currents in two conductors by ${\displaystyle {\overline {\mathfrak {S}}}}$ and ${\displaystyle {\overline {{\mathfrak {S}}'}}}$, and the corresponding compensation charges by A and ${\displaystyle A'}$, then the action exerted on the second conductor is ${\displaystyle ({\overline {\mathfrak {S}}},{\overline {{\mathfrak {S}}'}})+(A,{\overline {{\mathfrak {S}}'}})+({\overline {\mathfrak {S}}},A')+(A,A'){,}}$ in which the last two terms cancel each other. That ${\displaystyle (A,{\overline {{\mathfrak {S}}'}})}$ and the ${\displaystyle {\mathfrak {p}}}$-dependent part of ${\displaystyle ({\overline {\mathfrak {S}}},{\overline {{\mathfrak {S}}'}})}$ are of order ${\displaystyle {\mathfrak {p}}^{2}/V^{2}}$ follows from considerations like those communicated above. Induction in a linear conductor. § 28. A closed secondary wire B shall be displaced from position ${\displaystyle B_{1}}$ into position ${\displaystyle B_{2}}$, while at the same time a primary conductor A passes from position ${\displaystyle A_{1}}$ to ${\displaystyle A_{2}}$ and the intensity of the primary current increases from ${\displaystyle i_{1}}$ to ${\displaystyle i_{2}}$. At the beginning and at the end of the time T in which these processes take place, the two conductors shall be at rest and the primary current shall be constant; if no other electromotive force acts on B, this wire will eventually be, as before, without current.
We want to determine the quantity of electricity which has passed in the time T through a cross-section of the wire, and we shall consider only the convection current at this place. After the expiry of the whole process, the surface of B nowhere carries an electric charge. It follows that the quantity of electricity that has streamed through is the same for all cross-sections, and that the conductor can be decomposed into infinitely thin current tubes, such that through all cross-sections of each tube the same quantity of electricity streams. We consider one of these tubes in detail, and call ds an element of its length, ω a perpendicular cross-section, N dt the number of positive ions which pass through it during the time dt in the assumed positive direction s, N' dt the number of negative ions which move in the opposite direction, e the charge of a positive and ${\displaystyle -e'}$ the charge of a negative ion. The total quantity of electricity that has passed through ω is then ${\displaystyle i=\int (N\ e+N'\ e')\ d\ t.}$ (31) Furthermore, let ${\displaystyle {\mathfrak {E}}_{s}}$ and ${\displaystyle {\mathfrak {E}}'_{s}}$ be the electric forces acting in the direction of ds which come into consideration for a positive or a negative ion. In accordance with Ohm's law we shall assume that the motion of the ions under these forces is determined in such a way that N and ${\displaystyle N'}$ are proportional to their mean values; this, and the proportionality to ω, we express by ${\displaystyle N=p{\overline {{\mathfrak {E}}_{s}}}\omega ,\ N'=q{\overline {{\mathfrak {E}}'_{s}}}\omega {,}}$ where p and q are constant factors. It is now necessary to distinguish between the velocity of the considered conductor element and the relative velocity of an ion in the wire. The former shall be called ${\displaystyle {\mathfrak {v}}}$ and the latter ${\displaystyle {\mathfrak {w}}}$.
From (Va) we obtain ${\displaystyle {\mathfrak {E}}=4\pi V^{2}{\mathfrak {d}}+[{\mathfrak {p.H}}]+[{\mathfrak {v.H}}]+[{\mathfrak {w.H}}].}$ Now, the velocity ${\displaystyle {\mathfrak {w}}}$ has the direction of ds; consequently we have ${\displaystyle [{\mathfrak {w.H}}]_{s}=0}$, and for positive as well as for negative ions ${\displaystyle {\mathfrak {E}}_{s}={\mathfrak {E}}'_{s}=4\pi V^{2}{\mathfrak {d}}_{s}+[{\mathfrak {p.H}}]_{s}+[{\mathfrak {v.H}}]_{s}.}$ Equation (31) thus transforms into ${\displaystyle i=c\omega \int \left\{4\pi V^{2}{\bar {{\mathfrak {d}}_{s}}}+[{\mathfrak {p.{\bar {H}}}}]_{s}+[{\mathfrak {v.{\bar {H}}}}]_{s}\right\}\ d\ t{,}}$ ${\displaystyle c=pe+qe'.}$ Let us divide by ${\displaystyle c\ \omega }$, multiply by ds, and integrate over the whole current-line. If we consider that i has everywhere the same value along the current-line, and if we put ${\displaystyle \int {\frac {d\ s}{c\ \omega }}={\frac {1}{C}}{,}}$ we find ${\displaystyle i=C\int \left\{4\pi V^{2}\int {\bar {{\mathfrak {d}}_{s}}}\ d\ s+\int [{\mathfrak {p.{\bar {H}}}}]_{s}\ d\ s+\int [{\mathfrak {v.{\bar {H}}}}]_{s}\ d\ s\right\}\ d\ t.}$ (32) § 29. The following discussion is intended to derive the known fundamental law of induction from this formula. Imagine an area σ on which the current-line constantly lies during its motion, and consider the integral ${\displaystyle \int {\overline {{\mathfrak {H}}_{n}}}\ d\ \sigma =P{,}}$ (33) taken over the part of the surface cut off by the line. This quantity, which is usually called "the number of magnetic force-lines covered by s", changes over time, and for two reasons. First, ${\displaystyle {\bar {\mathfrak {H}}}}$ varies at each point, and second, the area of integration changes.
During the time dt, the first cause produces the following increase of P: ${\displaystyle d\ t\int {\dot {\overline {{\mathfrak {H}}_{n}}}}\ d\ \sigma .}$ As to the second variation, it should be noted that each element ds describes an infinitely small parallelogram on the surface, and that the value of the surface integral ${\displaystyle \textstyle {\int {\overline {{\mathfrak {H}}_{n}}}\ d\ \sigma }}$ taken over this parallelogram enters dP with suitably chosen sign. This value is given by the volume of the parallelepiped whose sides are ${\displaystyle d\ s}$, the displacement ${\displaystyle {\mathfrak {v}}\ d\ t}$, and ${\displaystyle {\mathfrak {H}}}$. We find for it ${\displaystyle -d\ t[{\mathfrak {v.{\bar {H}}}}]_{s}\ d\ s{,}}$ and for the whole increase of (33) ${\displaystyle d\ P=d\ t\int {\dot {\overline {{\mathfrak {H}}_{n}}}}\ d\ \sigma -d\ t\int [{\mathfrak {v.{\bar {H}}}}]_{s}\ d\ s{,}}$ or, if the relations (IVb) and (Vb), as well as the theorem stated in (1) (§ 4, h), are taken into account, ${\displaystyle -d\ t\int \left\{4\pi V^{2}{\bar {{\mathfrak {d}}_{s}}}+[{\mathfrak {p.{\bar {H}}}}]_{s}\right\}\ d\ s-d\ t\int [{\mathfrak {v.{\bar {H}}}}]_{s}\ d\ s.}$ Consequently, (32) transforms into ${\displaystyle i=-C\int d\ P=C\left(P_{1}-P_{2}\right){,}}$ where ${\displaystyle P_{1}}$ and ${\displaystyle P_{2}}$ belong to the beginning and the end of the considered time. The magnitude P depends on the different parts of ${\displaystyle {\mathfrak {H}}}$. Since an induced current exists neither at the beginning nor at the end of the time T, we commit no error when in (33) we substitute for ${\displaystyle {\mathfrak {H}}}$ only the magnetic force generated by the primary current. The bar above the letter can be omitted here, and if the induced wire is very thin, we may calculate with the same P for all current-lines.
Finally, if ${\displaystyle C_{1}}$ is the sum of all the numbers C (i.e., the conductivity of the induced circuit), then the integral current which we wished to calculate becomes ${\displaystyle I=C_{1}\left(P_{1}-P_{2}\right){,}}$ which is consistent with a known theorem. The motion of the earth was nowhere neglected in the given derivation; consequently the formula admits a conclusion about the influence of this motion on the phenomena of induction. Only magnitudes of second order come into account here. The ${\displaystyle {\mathfrak {H}}}$ which serves to determine the magnitude P is composed of the vector specified by (26) and the magnetic force generated by the compensation charge. The latter magnetic force is of order ${\displaystyle {\mathfrak {p}}^{2}/V^{2}}$, and since in the equations (§ 25) that serve to determine ${\displaystyle \chi _{x}}$, ${\displaystyle \chi _{y}}$, ${\displaystyle \chi _{z}}$ likewise only the square of ${\displaystyle {\mathfrak {p}}}$ occurs, the values (26) differ only to second order from the expressions that hold for a stationary earth. By proving that no first-order influence on the phenomena of induction is to be expected, we have obtained the explanation of the negative result of Des Coudres[4]. 1. These magnitudes differ only by a constant factor from the components of the vector potential when ${\displaystyle {\mathfrak {p}}=0}$. 2. It should be remembered that Mr. Budde (Wied. Ann., Vol. 10, p. 553, 1880), on the basis of Clausius' law, reached the same conclusions as are drawn here by me. His value for the density of the compensation charge even agrees completely with the one found above, if ${\displaystyle {\mathfrak {p}}^{2}}$ is neglected. 3. This way of picturing the matter, however, is in no way necessary.
To show that the considerations communicated in the text are correct, we need not assume that the ions which form the charges A and B remain at rest and are altogether uninfluenced by the current existing next to them. We can also imagine that all ions move about in a most irregular manner, as in an electrolyte. But a constant mean value ${\displaystyle {\bar {\rho }}}$ different from zero is quite possible; for it is this that constitutes the charges designated by A and B (i.e., ${\displaystyle {\bar {\rho }}}$ is the sum of two terms ${\displaystyle {\bar {\rho _{A}}}}$ and ${\displaystyle {\bar {\rho _{B}}}}$), while the current ${\displaystyle {\overline {\mathfrak {S}}}}$ is determined by ${\displaystyle {\overline {\rho {\mathfrak {v}}}}}$. If in (A) and (B) all members are replaced by their mean values, one easily sees that each of the vectors ${\displaystyle {\overline {\mathfrak {d}}}}$ and ${\displaystyle {\overline {\mathfrak {H}}}}$ consists of two parts, one of which depends only on ${\displaystyle {\bar {\rho }}}$ and the other only on ${\displaystyle {\overline {\rho {\mathfrak {v}}}}}$. Now, since the actions on external bodies are determined by these vectors, they are exactly as if the charge and the current were not connected with each other at all. The same holds for the actions exerted on the conductor.
Namely, if ${\displaystyle {\mathfrak {d}}}$ and ${\displaystyle {\mathfrak {H}}}$ are the changes of state caused in the aether by external causes, then by (Va) the force acting on a volume element is given by ${\displaystyle 4\pi V^{2}\rho {\mathfrak {d}}\ d\ \tau +\rho [{\mathfrak {p.H}}]\ d\ \tau +\rho [{\mathfrak {v.H}}]\ d\ \tau .}$ The action to which a noticeable part of the body is subjected can thus be calculated by taking, per unit volume, ${\displaystyle 4\pi V^{2}{\bar {\rho }}{\mathfrak {d}}+{\bar {\rho }}[{\mathfrak {p.H}}]+[{\bar {\rho {\mathfrak {v}}}}.{\mathfrak {H}}]{,}}$ which again decomposes into two parts, one depending on ${\displaystyle {\bar {\rho }}}$ and the other on ${\displaystyle {\overline {\rho {\mathfrak {v}}}}}$. Strictly speaking, a third charge would also have to be taken into account. The current cannot exist without a potential gradient, and this cannot exist without electric charges on the parts of the conductor. These charges, however, play no essential role in the questions considered, and may the more readily be left out of account as we can think of them as vanishingly small by assuming a very high conductivity. 4. Actually, we would now have to consider, taking the earth's motion into account, the inductive effect on a galvanometer. In the experiments of Des Coudres (Wied. Ann., Vol. 38, p. 71, 1889) an induction coil was located between two primary coils connected in series, through which the current flowed in such a way that their effects just compensated each other. Since, whatever influence the translation may otherwise have, the galvanometer must remain at rest if I vanishes, we may infer from the theory that, neglecting magnitudes of second order, the compensation is not disturbed by the earth's motion.
# Yet another simple factorizing question

1. Mar 1, 2008

### alpha01

[SOLVED] yet another simple factorizing question..

I won't re-write the full question, just one line from the numerator. From the solutions, (a + 1)^2 − (a − 1)^2 factorizes to:

((a + 1) − (a − 1))((a + 1) + (a − 1))

I can see that this is just another form of:

(a + 1)(a + 1) − (a − 1)(a − 1)

but why is the former, and not the latter, used? Does it make it easier to go to the next step and complete the factorization process?

2. Mar 1, 2008

### cristo

Staff Emeritus

Well, it's going to depend on what the question asks next!

3. Mar 1, 2008

### alpha01

The solution continues on like this:

= (a + 1 − a + 1)(a + 1 + a − 1)

= 4a

(The question is to factorize; I don't know what you mean by "what does it ask next".)

Last edited: Mar 1, 2008

4. Mar 1, 2008

### cristo

Staff Emeritus

Ok, I think I get what you mean now. Your first expression is in the form x^2 − y^2, which is a difference of two squares. We know that the factorisation of a difference of two squares is (x + y)(x − y); it just turns out that in this case the expression simplifies further. The second expression you give in your first post is not a factorisation of (a + 1)^2 − (a − 1)^2, but an expansion.

5. Mar 1, 2008

### alpha01

Thank you, understood.
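For what it's worth, the identity discussed in this thread can be verified symbolically. A small sketch using sympy (not part of the original thread):

```python
import sympy as sp

a = sp.symbols("a")

expr = (a + 1)**2 - (a - 1)**2

# Difference of two squares, x^2 - y^2 = (x - y)(x + y):
factored = ((a + 1) - (a - 1)) * ((a + 1) + (a - 1))

# The two forms are equal, and both collapse to 4a.
assert sp.expand(expr - factored) == 0
assert sp.expand(expr) == 4*a
```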
# Partial Panoramas using ROI in PTViewer

If you have a panorama that is not fully 360°x180° and you still want to use PTViewer to immerse your audience in your panorama, there are a few methods to do that. You can expand your panorama with blank space around it, and use the normal way of displaying a panorama in PTViewer. The disadvantage of this is that if you put the picture online, the download times can be significantly longer because of all the blank space. To avoid this, it is possible to use a Region Of Interest (ROI) picture to display the panorama. This will download only the partial panorama. We will have to tell PTViewer where to place the picture, how far the user may pan left and right, and how much they can tilt up and down. Note that this is not an explanation of the syntax of PTViewer, but rather a tutorial on how to calculate the different parameters. For the syntax of PTViewer you can visit: PTViewer Documentation

Good luck. Richard Korff

## Gathering Information

From the ROI picture we need some basic information:

• Width in pixels (ROI Width): 800 px

• Height in pixels (ROI Height): 541 px

• Position of the horizon from the top of the picture (Horizon pos): 227 px

The picture must be in equirectangular projection. From the stitcher we should be able to get the Horizontal Field of View (HFOV), in this case 160°. From these 4 numbers we can calculate the parameters necessary for PTViewer to display a partial panorama.

## Calculating the parameters for PTViewer

Since we know the ROI Width of the picture as well as the Horizontal Field of View (HFOV), we can calculate the field of view of one pixel. In this case $\frac{160^\circ}{800\text{px}} = \frac{0.2^\circ}{\text{px}}$ We need this number to convert from pixels to degrees and vice versa. You need an accuracy of a couple of decimal places, otherwise it won't work.
The objective is to place the ROI picture inside the 360°x180° panorama with the horizon of the ROI image over the horizontal 0° line of the pano, and the middle of the ROI image in the middle of the panorama.

## pwidth and pheight

To do that we first need to calculate the total size of the panorama image of which the ROI image is a part. The calculation uses the number of degrees per pixel: since we know the degrees, we can calculate the number of pixels.

\begin{align} \text{Panorama Width}\, pwidth & = \frac{360^\circ}{\frac{0.2^\circ}{\text{px}}} = 1800\text{px} \\ \text{Panorama Height}\, pheight & = \frac{180^\circ}{\frac{0.2^\circ}{\text{px}}} = 900\text{px} \end{align}

## x and y insertion point

To calculate the y position of the insertion point (the point where the picture needs to be placed), take half of the panorama height and subtract the horizon position in the ROI.

$Y \text{ position of the insertion point} = \frac{900\text{px}}{2} - 227\text{px} = 223\text{px}$

Similarly we can calculate the x offset. In most circumstances you either don't know, or don't care about, the direction the picture was taken. In that case it is good practice to place the ROI in the middle of the large pano, so that 0° is the middle of the picture. You can do this by taking half of the total panorama width and subtracting half the width of the picture.

$X \text{ position of the insertion point} = \frac{1800\text{px}}{2} - \frac{800\text{px}}{2} = 500\text{px}$

## panmin, panmax, tiltmin and tiltmax

To limit the freedom the user has in moving around your pano, you want to restrict the pan and tilt angles. This is relatively easy to calculate with the information gathered above. The pan and tilt angles are given in degrees. Because the ROI is horizontally in the middle, the user may pan half the width of the image to the left and to the right, converted to degrees.
\begin{align} \text{Minimum pan} & = \frac{-800\text{px}}{2} \cdot \frac{0.2^\circ}{\text{px}} = -80^\circ\\ \text{Maximum pan} & = \frac{800\text{px}}{2} \cdot \frac{0.2^\circ}{\text{px}} = 80^\circ \end{align}

The minimum tilt is calculated as the height of the ROI minus the position of the horizon, converted to degrees.

$\text{Minimum tilt} = -(541\text{px} - 227\text{px}) \cdot \frac{0.2^\circ}{\text{px}} = -62.8^\circ \approx -62^\circ$

The maximum tilt is calculated as the position of the horizon, converted to degrees.

$\text{Maximum tilt} = 227\text{px} \cdot \frac{0.2^\circ}{\text{px}} = 45.4^\circ \approx 45^\circ$

Because PTViewer does not take fractions of degrees, you throw away the fraction.

Using these numbers in PTViewer should give you a good partial panorama. If you see blank space at the edges of the panorama, you may want to make a 1-degree change to the minimum and maximum pan and tilt until it does not show up anymore.

## HTML code

To see how these calculations translate into HTML code, the above sample could result in the following:

<source lang="html">
<applet archive="ptviewer27L2.jar" code="ptviewer.class" WIDTH="300" HEIGHT="200" mayscript=true>
<param name=pwidth value=1800>
<param name=pheight value=900>
<param name=roi0 value="i'sample.jpg' x500 y223">
<param name=panmin value=-80>
<param name=panmax value=80>
<param name=tiltmax value=45>
<param name=tiltmin value=-62>
</applet>
</source>

Have fun.
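For readers who prefer to script these conversions, here is a small sketch (in Python, my own addition, not part of the original wiki page) that reproduces the example numbers above from the four inputs:

```python
# Parameter calculations above, for an equirectangular ROI of 800x541 px
# with the horizon 227 px from the top and an HFOV of 160 degrees.
roi_w, roi_h, horizon, hfov = 800, 541, 227, 160.0
deg_per_px = hfov / roi_w                       # 0.2 deg per pixel

pwidth  = round(360 / deg_per_px)               # 1800
pheight = round(180 / deg_per_px)               # 900
x_ins   = pwidth // 2 - roi_w // 2              # 500
y_ins   = pheight // 2 - horizon                # 223

panmin  = int(-(roi_w / 2) * deg_per_px)        # -80
panmax  = int((roi_w / 2) * deg_per_px)         # 80
tiltmin = int(-(roi_h - horizon) * deg_per_px)  # -62 (fraction thrown away)
tiltmax = int(horizon * deg_per_px)             # 45

print(pwidth, pheight, x_ins, y_ins, panmin, panmax, tiltmin, tiltmax)
```

Note that `int()` truncates toward zero, which matches "throw away the fraction" above.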
http://mathoverflow.net/questions/106417/solving-lyapunov-like-equation
# Solving a Lyapunov-like equation

The following matrix equation might be a Lyapunov-like equation, but it seems hard for me to develop a simpler way to solve it. To reduce the computational effort, I need some help solving the following special case of the Lyapunov equation.

Let $X$ be an $n\times n$ symmetric matrix, $I$ the identity matrix, and $A$ an invertible matrix whose entries are all between 0 and 1. I need to solve for $X$ in the equation
$$AX+XA^T=I$$

Previously, I found some articles discussing the use of Krylov subspaces to solve the Lyapunov equation
$$AX+XA^T=b \cdot b^T$$
where $b$ is a vector. Because $b \cdot b^T$ is a rank-one matrix, the Krylov subspace approach is highly efficient. Now in my case the right-hand side is the identity matrix $I$, but $X$ in my case is symmetric. I found that in my equation $AX$ and $XA^T$ are symmetric. So by letting $Y=AX$, my equation can be reduced to
$$Y+Y^T=I \quad \textrm{with } \ \ Y=AX.$$
I don't know how to continue from here.

Another common way is to use the tensor product to rewrite my equation as
$$(I \otimes A + A \otimes I) \, vec(X) = vec(I)$$
but the LHS of the above equation has size $n^2 \times n^2$, which is too large to solve. Is there any other efficient way to solve this? Any advice is warmly welcome!

- How large is your $n$ in practice? (an order of magnitude will be enough) – Federico Poloni Sep 5 '12 at 13:15

What exactly does "solve" mean for you in this context? Do you want a numerical solution for a given $A$ or a closed-form formula that you can analyze? Or do you want to know when the equation is solvable? (That's an easy one: iff $A$ has "regular inertia", see the Ostrowski-Schneider theorem.) Btw, it is a Lyapunov equation alright. – Felix Goldberg Sep 6 '12 at 19:43

As Federico Poloni pointed out, the Hessenberg-Schur algorithm, used by MATLAB's lyap.m function, is a much better choice. It is a refined version of the older Bartels-Stewart algorithm (which also works pretty well).
Here's the original paper for Hessenberg-Schur, by Golub-Nash-Van Loan: https://www.cs.cornell.edu/cv/ResearchPDF/Hessenberg.Schur.Method.pdf

In your case, since $A=B^T$, things are even a bit simpler, in that only one matrix needs to be decomposed.

- If the $n\times n$ matrix $A$ is negatively stable (i.e. $\mbox{Re} \; \lambda_i <0$ for all $i=1,...,n$, where $\lambda_i$ are the eigenvalues of $A$), then for any $n\times n$ matrix $C$ there exists a unique $X$ such that
$$AX+XA^T = C.$$
See Theorem 6.4.2 of Ortega ("Matrix Theory: A Second Course", 1987).

- This follows from the integral representation in Suvrit's answer. – Federico Poloni Jun 14 '14 at 9:39

Let me first summarize some basic facts. It is known that the equation
\begin{equation*} AX + XA^T = B, \end{equation*}
has a unique solution if the matrix $A$ is positively stable (i.e., has spectrum in the right half plane). If $A$ is diagonal with entries $a_1,\ldots,a_n$, then the solution to the equation can be given in closed form
\begin{equation*} X = D \circ B, \end{equation*}
where [EDIT:] $D$ is a matrix with entries $1/(\bar{a}_i+a_j)$. In the more general case, for positively stable $A$, the solution to the above equation can be represented as
\begin{equation*} X = \int_0^\infty e^{-tA}B(e^{-tA})^Tdt \end{equation*}
but that does not seem computationally very nice.

If $n$ is largish, one can still solve the linear system written using tensor products with an iterative algorithm, as long as the iterative algorithm (e.g., conjugate gradient or a related method) depends only on matrix-vector products: you would only need to compute $(A \otimes I + I \otimes A)x$ several times, and that can be done with ordinary matrix multiplications without actually forming the tensor products.

- In your closed-form solution, when $B=I$, can we compute $$\int_{0}^{\infty} e^{-tA} \cdot {({e^{-tA}})}^T dt$$ in an easier way?
– Hellen Sep 5 '12 at 13:02

Let me underline that $X$ is symmetric whenever $B$ is (proof: clear from the integral formula). If I am interpreting it correctly, by "$AX$ and $XA^T$ are symmetric" Hellen meant that $AX$ and $XA^T$ are each other's transposes. So what she thinks is a special case is in fact the general behaviour for symmetric $B$. – Federico Poloni Sep 5 '12 at 21:07

Why are there two indices (i and j) in your expression for the entries of the diagonal matrix D? – Vidit Nanda Jun 13 '14 at 19:06

@ViditNanda: thanks for catching that typo! $D$ is not diagonal (because I already assumed $A$ to be diagonal). – Suvrit Jun 13 '14 at 20:04

Let $A$ be an invertible $n\times n$ matrix which is antisymmetric: $A^T=-A$, e.g. the symplectic matrix $$\begin{pmatrix} 0&1 \\\\ -1&0 \end{pmatrix}.$$ The equation $AX+XA^T=I$ cannot have a matrix solution $X$, since that would imply $$AX-XA=I,$$ which is impossible because $\mathrm{trace}(AX-XA)=0$. Of course that does not contradict the previous answer, but it shows that some further conditions should be imposed on $A$.
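As a concrete illustration of the tensor-product formulation mentioned in the question, here is a small sketch (my own, not from the thread) that forms $(I \otimes A + A \otimes I)$ explicitly and solves for $vec(X)$ with column-stacking vectorization. This is exactly the approach that becomes infeasible for large $n$, but it is fine for small examples:

```python
import numpy as np

def solve_lyapunov_kron(A, C):
    """Solve A X + X A^T = C via the Kronecker linearization
    (I (x) A + A (x) I) vec(X) = vec(C), column-stacking vec."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A) + np.kron(A, I)          # n^2 x n^2 system matrix
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape((n, n), order="F")

# A has eigenvalues 2 and 3 (positively stable), so a unique X exists.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
X = solve_lyapunov_kron(A, np.eye(2))
print(np.allclose(A @ X + X @ A.T, np.eye(2)))  # True
```

Consistent with Federico Poloni's comment, the computed $X$ is symmetric because the right-hand side is.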
https://iris.unife.it/handle/11392/1867517
### BUNDLE: A reasoner for probabilistic ontologies

#### Abstract

Representing uncertain information is very important for modeling real world domains. Recently, the DISPONTE semantics has been proposed for probabilistic description logics. In DISPONTE, the axioms of a knowledge base can be annotated with a set of variables and a real number between 0 and 1. This real number represents the probability of each version of the axiom in which the specified variables are instantiated. In this paper we present the algorithm BUNDLE for computing the probability of queries from DISPONTE knowledge bases that follow the $\mathcal{ALC}$ semantics. BUNDLE exploits an underlying DL reasoner, such as Pellet, that is able to return explanations for queries. The explanations are encoded in a Binary Decision Diagram from which the probability of the query is computed. The experiments performed by applying BUNDLE to probabilistic knowledge bases show that it can handle ontologies of realistic size and is competitive with the system PRONTO for the probabilistic description logic P-$\mathcal{SHIQ}$(D).

2013

9783642396656, 9783642396663

Keywords: Probabilistic Ontologies; Probabilistic Description Logics; OWL; Probabilistic Logic Programming; Distribution Semantics

https://hdl.handle.net/11392/1867517
http://mathhelpforum.com/advanced-applied-math/9783-speed-wind.html
# Math Help - Speed of wind

1. ## Speed of wind

Question: An aircraft flies due north from A to B, where AB = 280 km. The speed of the aircraft in still air is 320 km/h. There is a wind blowing from the direction 300 degrees. Given that the course set by the pilot is in the direction 357 degrees, calculate the speed of the wind. Find also the time, in minutes, of the flight.

--------------------------------------------------------------------------

My question is that I think the course set by the pilot is wrong. The direction should be in the North East region since the wind is blowing 357 degree. Thus the resultant direction will be North, so that the aircraft is able to reach B.

By the way, this forum is awesome!!

2. Originally Posted by acc100jt

Question: An aircraft flies due north from A to B, where AB = 280 km. The speed of the aircraft in still air is 320 km/h. There is a wind blowing from the direction 300 degrees. Given that the course set by the pilot is in the direction 357 degrees, calculate the speed of the wind. Find also the time, in minutes, of the flight.

My question is that I think the course set by the pilot is wrong. The direction should be in the North East region since the wind is blowing 357 degree. Thus the resultant direction will be North, so that the aircraft is able to reach B.

By the way, this forum is awesome!!

The course taken by the pilot is okay. The wind is blowing from about the northwest (from 300 degrees, measured from north). The "horizontal" component of the wind is going eastward. To counteract that, the plane's velocity should have a horizontal component that is going westward. Hence the 357-degree course, which is almost northward but slightly westward, is correct.

Draw the figure on paper.

a) First, the velocity vectors in relation to line AB.
In the northwest quadrant (the quadrant from 270 deg to 360 deg), say AB is the north axis. Draw the wind vector coming from 300 deg, i.e. 30 deg above the 270-deg (west) axis, pointing to A. Draw the plane vector going 357 deg, i.e. 3 deg west of AB, originating at A. Here the angle between the wind and plane vectors is 90 - 30 - 3 = 57 deg.

b) Then, the closed triangle of the 3 distances: AB, the wind's distance after time t hours, and the plane's distance after time t hours. It is an obtuse triangle, with these:

---vertical side = AB = 280 km.
---bottom side = wt, where w is the wind's speed in kph and t is the time in hours.
---righthand side = 320t, the plane's distance after time t.
---angle between wt and 320t = 57 deg.
---angle between wt and AB = 120 deg.
---angle between AB and 320t = 3 deg.

So it is a triangle with 3 known angles and a known side. Hence the Law of Sines will find the two unknown sides wt and 320t.

280/sin(57deg) = 320t/sin(120deg)
Cross multiply,
280sin(120deg) = 320t*sin(57deg)
t = [280sin(120deg)] / [320sin(57deg)]
t = 0.90354 hr
t = 0.90354 * 60 = 54.2 minutes ----- the length of the flight, answer.

280/sin(57deg) = wt/sin(3deg)
Cross multiply,
280sin(3deg) = wt*sin(57deg)
w = [280sin(3deg)] / [0.90354sin(57deg)]
w = 19.34 km/hr --------------------------- the wind's speed, answer.
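The law-of-sines computation above can be reproduced with a short script (a Python sketch of my own, not part of the original thread):

```python
import math

# Triangle from the solution above: side AB = 280 km opposite the 57 deg
# angle, the plane's distance 320t opposite 120 deg, and the wind's
# distance wt opposite 3 deg.
AB, plane_speed = 280.0, 320.0
opp_AB, opp_plane, opp_wind = map(math.radians, (57.0, 120.0, 3.0))

# 280/sin(57 deg) = 320t/sin(120 deg)  ->  t in hours
t = AB * math.sin(opp_plane) / (plane_speed * math.sin(opp_AB))

# 280/sin(57 deg) = wt/sin(3 deg)  ->  wind speed w in km/h
w = AB * math.sin(opp_wind) / (t * math.sin(opp_AB))

print(round(t * 60, 1), round(w, 2))  # 54.2 19.34
```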
https://brilliant.org/discussions/thread/soln/
# SoLn.

Our problem can be classified as an indirect problem of kinematics. Thus from (1.3) & (1.2) we have to move towards (1.1). It is of utmost importance in any kind of problem to know your direction (even if you don't know the path).

Using (1.3),

$\cfrac { dv }{ dt } =a \\ dv=adt \\ \int _{ u }^{ v }{ dv } =\int _{ 0 }^{ t }{ adt }$

As the time changes from 0 to t, the velocity changes from u to v. So on the left-hand side the summation is made over v from u to v, whereas on the right-hand side the summation is made over time from 0 to t. Evaluating the integrals we get

${ [v] }_{ u }^{ v }=a{ [t] }_{ 0 }^{ t } \\ v-u=at \\ v=u+at$

Using (1.2), the last equation may be written as

$\cfrac { dx }{ dt } =v=u+at \\ dx=(u+at)dt \\ \int _{ 0 }^{ x }{ dx } =\int _{ 0 }^{ t }{ (u+at)dt }$

At t=0 the particle is at x=0. As time changes from 0 to t, the position changes from 0 to x. So on the left-hand side the summation is made over position from 0 to x, whereas on the right-hand side the summation is made over time from 0 to t. Evaluating the integrals we get

${ [x] }_{ 0 }^{ x }=\int _{ 0 }^{ t }{ udt } +\int _{ 0 }^{ t }{ atdt } \\ x=u{ [t] }_{ 0 }^{ t }+a{ { [t }^{ 2 }/2] }_{ 0 }^{ t } \\ x=ut+(a{ t }^{ 2 })/2$

Using the above two derived expressions,

${ v }^{ 2 }={ (u+at) }^{ 2 } \\ ={ u }^{ 2 }+2uat+{ a }^{ 2 }{ t }^{ 2 } \\ ={ u }^{ 2 }+2a[ut+\cfrac { 1 }{ 2 } a{ t }^{ 2 }] \\ ={ u }^{ 2 }+2ax$

The equations

$v=u+at$

$x=ut+\frac { 1 }{ 2 } a{ t }^{ 2 }$

${ v }^{ 2 }={ u }^{ 2 }+2ax$

are used very frequently while solving problems in kinematics involving constant acceleration. If, however, the acceleration isn't constant, these three equations are not useful. We then need other tools & procedures.

Note by Soumo Mukherjee 3 years, 7 months ago
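As a quick sanity check (my own addition, not from the note), the three constant-acceleration relations derived above can be verified numerically for arbitrary values of u, a and t:

```python
# Check v = u + a t,  x = u t + a t^2 / 2,  v^2 = u^2 + 2 a x.
u, a, t = 5.0, 2.0, 3.0          # arbitrary initial speed, acceleration, time
v = u + a * t                    # first equation
x = u * t + 0.5 * a * t * t      # second equation
print(abs(v * v - (u * u + 2 * a * x)) < 1e-9)  # True: third equation holds
```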
http://math.stackexchange.com/questions/264168/derivative-of-compositum-function-with-log
# Derivative of composite function with log

I have the following two functions that I'm not completely sure I'm solving correctly; mainly what bugs me is $\log(x)$.

1st Function: $$f(x) = \sin(2x^2 - 3\log(x))$$ I simply treated this as a composite function and used the rule for such functions to solve it this way: $$f'(x) = \cos(2x^3 - 3\log(x)) \cdot 4x - \frac{1}{3\ln(x)}$$ Would this be a correctly calculated derivative of said function?

2nd Function: $$f(x) = x\log(x^5) \cdot \cos(2x - e^x)^2$$ I am not fully sure how to solve this as a composite first and then as two separate functions, so I did it this way: $$f'(x) = \left(\frac{1}{x\ln(x^5)} \right) \cdot \cos(2x - e^x)^2 + x\log(x^5) \cdot (-2\sin(2x -e^x))$$

- The derivative of $\log_e(x)$ is $\dfrac{1}{x}$. The derivative of $-3\log_e(x)$ is $-\dfrac{3}{x}$. You have several other basic errors – Henry Dec 23 '12 at 16:12

Please don't completely rewrite your question – Hurkyl Dec 26 '12 at 1:32

You have a typo, and you're missing some required parentheses, but I think that you probably did the first differentiation correctly: if $f(x)=\sin(2x^2-3\ln x)$, then $$f\,'(x)=\cos(2x^2-3\ln x)\left(4x-\frac3x\right)=\left(4x-\frac3x\right)\cos(2x^2-3\ln x)\;.$$ (I prefer to put the $\cos$ factor second in order to avoid any possible ambiguity.)

You took the right approach to the second problem, using the product rule first, but some of the details are wrong. You have $f(x) = x\ln x^5\cos(2x - e^x)^2$, where you've interpreted the last factor as $\cos^2(2x-e^x)$. I would interpret it as $\cos\left((2x-e^x)^2\right)$, which changes the derivative considerably. On your interpretation the derivative is \begin{align*} f\,'(x)&=\left(x\ln x^5\right)'\cos^2(2x-e^x)+x\ln x^5\left(\cos^2(2x-e^x)\right)'\\ &=\left(5x\ln x\right)'\cos^2(2x-e^x)+5x\ln x\left(\cos^2(2x-e^x)\right)'\\ &=5(1+\ln x)\cos^2(2x-e^x)-10x\ln x\sin(2x-e^x)\cos(2x-e^x)\cdot(2-e^x)\\ &=5(1+\ln x)\cos^2(2x-e^x)-10x(2-e^x)\ln x\sin(2x-e^x)\cos(2x-e^x)\;.
\end{align*}

On my interpretation the derivative is \begin{align*} f\,'(x)&=\left(5x\ln x\right)'\cos(2x-e^x)^2+5x\ln x\left(\cos(2x-e^x)^2\right)'\\ &=5(1+\ln x)\cos(2x-e^x)^2-\left(5x\ln x\sin(2x-e^x)^2\right)\big(2(2x-e^x)(2-e^x)\big)\\ &=5(1+\ln x)\cos(2x-e^x)^2-10x(2x-e^x)(2-e^x)\ln x\sin(2x-e^x)^2\;. \end{align*}

- In the first function it's $-3\log(x)$, not $-3\ln x$; I used the rule $\log_a(x) \to \frac{1}{x\ln a}$ – kellax Dec 23 '12 at 16:23

For many authors $\log x$ is $\ln x$; what is your definition? – Brian M. Scott Dec 23 '12 at 16:25

Hm, I added the picture from my PDF; that's how the example is written. – kellax Dec 23 '12 at 16:27

@kellax: With nothing else to go on I'd assume that $\log(x)=\ln x$. I'd also interpret $\cos(2x-e^x)^2$ as having the square inside the cosine, not as $\cos^2(2x-e^x)$. – Brian M. Scott Dec 23 '12 at 16:30

Irritatingly, at least in the US, in most high school math classes $\log$ is most commonly taken to be base 10. – user7530 Dec 26 '12 at 15:35

For the first question: We want the derivative of $\sin(g(x))$. By the Chain Rule this is $g'(x)\cos(g(x))$. Apart from a typo, you have the $\cos(g(x))$ part right. The $g'(x)$ should be $\left(4x-\dfrac{3}{x}\right)$.

For the second question: It is not clear what the function is. Are we dealing with $\left[\cos(2x-e^x)\right]^2$ or $\cos((2x-e^x)^2)$? The Product Rule part is handled fine. But at a certain stage you need the derivative of $x\ln(x^5)$. Probably the simplest way to handle that is to use $\log(x^5)=5\log x$. So we want the derivative of $5x\log x$. Use the Product Rule. Or else we can work directly with the expression as is, and use the Product Rule and Chain Rule.

For the other half, you need to use the Chain Rule. Details depend on the interpretation of the question. The first step is to get the parentheses right.

Let $f(x)=\log x$.
Then $$f'(x)=\lim_{h \to 0}\frac{\log(x+h)-\log x}{h}=\lim_{h \to 0}\frac{1}{h}\cdot\log \frac{x+h}{x}=\lim_{h \to 0}\frac{1}{x}\cdot\frac{x}{h}\cdot\log \left(1+\frac{h}{x}\right)=\frac{1}{x}\lim_{z \to 0}\frac{1}{z}\cdot\log (1+z)=\frac{1}{x}$$ where $z=\frac{h}{x}$ (and $\log$ denotes the natural logarithm).

As you seem to be using $\log(x)$ and $\ln{(x)}$ to mean something different, I assume $\log(x)=\log_{a}(x)$, where $a\not=\mathrm{e}$? Therefore we first need to find: $$\frac{d}{dx}(\log_{a}(x))=\frac{d}{dx}\left(\frac{1}{\ln{a}}\ln{x}\right)=\frac{1}{\ln{a}}\frac{d}{dx}(\ln{x})=\frac{1}{x\ln{a}}$$

Therefore, in order to differentiate your first function, $f_{1}(x)\equiv\sin(2x^{2}-3\log_{a}(x))$, we use the chain rule: $$\frac{df_{1}}{dx}=\frac{d}{dx}(2x^{2}-3\log_{a}(x))\cdot\cos(2x^{2}-3\log_{a}(x))=\left(4x-\frac{3}{x\ln{a}}\right)\cdot\cos(2x^{2}-3\log_{a}(x))$$

And the second function, $f_{2}(x)\equiv x\log_{a}(x^{5})\cdot\cos^{2}(2x-\mathrm{e}^{x})$, using the product rule and chain rule: $$\frac{df_{2}}{dx}=\log_{a}(x^{5})\cdot\cos^{2}(2x-\mathrm{e}^{x})+x\cdot\frac{d}{dx}(\log_{a}(x^{5}))\cdot\cos^{2}(2x-\mathrm{e}^{x})+x\log_{a}(x^{5})\cdot\frac{d}{dx}(\cos^{2}(2x-\mathrm{e}^{x}))\\=\log_{a}(x^{5})\cos^{2}(2x-\mathrm{e}^{x})+\frac{5}{\ln{a}}\cos^{2}(2x-\mathrm{e}^{x})+2x\log_{a}(x^{5})(\mathrm{e}^{x}-2)\cos(2x-\mathrm{e}^{x})\sin(2x-\mathrm{e}^{x})$$
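A quick numerical check (my own addition, taking $\log=\ln$) of the chain-rule derivative for the first function, comparing against a central finite difference:

```python
import math

# f(x) = sin(2x^2 - 3 ln x)  and its chain-rule derivative
# f'(x) = (4x - 3/x) cos(2x^2 - 3 ln x).
def f(x):
    return math.sin(2 * x * x - 3 * math.log(x))

def fprime(x):
    return (4 * x - 3 / x) * math.cos(2 * x * x - 3 * math.log(x))

x, h = 1.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference, O(h^2) error
print(abs(numeric - fprime(x)) < 1e-6)     # True
```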
https://mathoverflow.net/questions/188763/fourier-mukai-functors-being-identity-on-objects
Fourier-Mukai functors being identity on objects

Let $X$ be a projective variety over $\mathbb{C}$, and denote by $D^b(X)$ the bounded derived category of coherent sheaves on $X$. Suppose we have a Fourier-Mukai functor $\Phi_{X\rightarrow X}^\mathcal{P}:D^b(X)\rightarrow D^b(X)$ that is an auto-equivalence of $D^b(X)$, and further assume that $\Phi_{X\rightarrow X}^\mathcal{P}$ acts as the identity on objects. Is it then possible that $\Phi_{X\rightarrow X}^\mathcal{P}$ fails to be the identity functor? How does one find a simple example to illustrate this?

If $X$ is smooth and projective, then any such FM functor is in fact naturally isomorphic to the identity functor. This follows immediately from Corollary 5.23 of Huybrechts' book on Fourier-Mukai transforms. Briefly, the idea is that the hypotheses ensure that $\mathcal{P}$ is quasi-isomorphic to a sheaf on $X\times X$ that is supported set-theoretically on the diagonal and moreover is flat over $X$ via either projection map. One then argues that $\mathcal{P}$ is of the form $\mathcal{O}_\sigma\otimes L$, where $\mathcal{O}_\sigma$ is the structure sheaf of the graph of an automorphism of $X$ and $L$ is a line bundle pulled back from $X$.

When $X$ is not smooth, I'm not completely sure what happens. If $\mathcal{P}$ is in fact a perfect complex on $X\times X$, then this reasoning still goes through. Otherwise, one needs to worry about the difference between $D^b(X)$ and $D_{perf}(X)$, the derived category of perfect complexes on $X$.
https://www.physicsforums.com/threads/scalar-function-on-a-surface.232558/
Scalar Function on a Surface

1. May 1, 2008

Eidos

Hi guys and gals. This is a conceptual question. Let's say I have a scalar function $$f(x,y,z)$$ defined throughout $$\mathbb{R}^3$$. Further, I have some bounded surface S embedded in $$\mathbb{R}^3$$. How would I find the function f defined on the surface S? Would it be the inner product of f and S, $$<f|S>$$, or a functional composition like $$f \circ S$$?

2. May 1, 2008

mathman

f is a scalar, so the inner product of S and f makes no sense. I don't know what you have in mind by functional composition.

3. May 1, 2008

ice109

You mean you want to parameterize f by S? As in restrict f to S? Like for the purposes of a surface integral?

4. May 2, 2008

Eidos

From what I understand, the inner product <f|g> is $$\int_{-\infty}^{\infty}f(t)g^{*}(t)dt$$. The mistake I made was to think that f and g are scalar functions as well, even though they are complex functions. Sorry about that. The closest thing I've come to inner products for functions was the orthonormality of the basis functions for Fourier series.

This is exactly what I had in mind. Sorry, I should have been more explicit about where I was going with it. I understand what we are doing if we have a vector field $$\textbf{F}$$ and want to find out how it permeates a surface S (e.g. the flux through the surface) by dotting it with the unit normal of the surface and integrating over the surface. This is actually what made me think of the inner product: $$\iint\textbf{F}\cdot\textbf{n}\,\mathrm{dS}$$

Thanks for the replies.

5. May 2, 2008

HallsofIvy

Staff Emeritus

That is the "inner product" only if you are thinking of f and g as vectors in L2. You have a function f(x,y,z) and are given a surface S. You don't say how you are "given" the surface, but since it is two-dimensional, it is always possible to parameterize it with two variables: on S, x = x(u,v), y = y(u,v), z = z(u,v). Replace x, y, and z in f with those: f(x(u,v), y(u,v), z(u,v)).
For example, suppose you have the parabolic surface z= x2+ 2y2 and some function f(x,y,z). Then you can take x and y themselves as parameters and, restricted to that surface, your function is f(x,y,x2+ 2y2). 6. May 2, 2008 Eidos Thanks HallsofIvy that cleared things up :) Similar Discussions: Scalar Function on a Surface
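HallsofIvy's recipe (parameterize the surface, then substitute into f) can be sketched in a few lines of Python; the particular field f below is a hypothetical example:

```python
# Restrict a scalar field f on R^3 to a parameterized surface.
# The field f here is a hypothetical example.
def f(x, y, z):
    return x**2 + y * z

# Parameterize the surface z = x^2 + 2*y^2 by (u, v) = (x, y),
# following the substitution described above.
def f_on_surface(u, v):
    return f(u, v, u**2 + 2 * v**2)

print(f_on_surface(1.0, 1.0))  # 4.0  (= f(1, 1, 3))
```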
https://nrich.maths.org/13205/clue
Almost One

Age 11 to 14 Challenge Level:

You could begin by choosing a fraction bigger than $\frac{1}{2}$ and adding on smaller fractions to get close to 1. You could approximate each fraction by fractions that you are familiar with (with small denominators) and then use your approximations to estimate possible sums. It is often easiest to add fractions when they have the same denominator...
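The hint's strategy can be tried out with exact rational arithmetic; the particular fractions below are just an illustrative choice, not a solution:

```python
from fractions import Fraction

# Start from a fraction bigger than 1/2, then add on smaller fractions,
# keeping the running total just below 1 (example terms only).
terms = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 7)]
total = sum(terms)
print(total)      # 41/42
print(1 - total)  # 1/42, the gap still left to reach 1
```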
https://brilliant.org/discussions/thread/a-confusing-statement/
# A confusing statement

If a man says 'all men are liars', is he telling the truth or is he lying?

I tried this problem, but I am getting some strange answers. When I assume that he is telling the truth, his statement 'all men are liars' is true, so he would also be a liar. If he were a liar, then the statement 'all men are liars' would be false; it would mean that all men are telling the truth, so he would also be telling the truth, and then the process goes on.........

Note: for the time being we are assuming that either all the men are telling the truth or all of them are lying; it cannot happen in this case that some of them are liars and some are truth-tellers.

Note by Syed Hissaan 2 years ago

I think the scenario you present is equivalent to the scenario in the classical Liar's paradox. - 1 year, 12 months ago

@Calvin Lin can you help me out with this one? - 2 years ago

I believe you want to talk about Russell's Paradox, but there are some flaws in your logic.
First, the negation of the statement "all men are liars" is "there exists a truth-teller", not "all men are truth-tellers". The question is solvable: the man is a liar. Therefore, the statement "all men are liars" is false; in fact, there exists a man who is telling the truth. - 2 years ago

I am not asking the exact same question; my question is "if a man says all men are liars, is he telling the truth or lying?" - 2 years ago

Who is the man? (#who is telling the truth) - 2 years ago
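Under the note's all-or-none restriction the statement is genuinely paradoxical, while dropping the restriction leaves exactly the consistent scenario described above (the speaker lies, someone else tells the truth). A small brute-force check, with hypothetical men numbered 0..n-1 and the speaker as man 0:

```python
from itertools import product

# Brute-force the scenario: man 0 says "all men are liars".
# True = truth-teller, False = liar; an assignment is consistent when
# the speaker's status matches the truth value of his statement.
def consistent_assignments(n):
    found = []
    for assign in product([True, False], repeat=n):
        statement = not any(assign)  # "all men are liars"
        if assign[0] == statement:   # speaker truthful iff statement true
            found.append(assign)
    return found

# Under the note's all-or-none restriction there is no consistent case:
uniform = [a for a in consistent_assignments(2) if len(set(a)) == 1]
print(uniform)  # []

# Without the restriction, the speaker lies and someone else is truthful:
print(consistent_assignments(2))  # [(False, True)]
```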
http://stats.stackexchange.com/questions/84106/unbiased-estimates-and-mle-of-central-moments-and-of-standardized-moments
# unbiased estimates and MLE of central moments and of standardized moments?

I have heard of unbiased estimates and MLEs of the variance, and something about those of the kurtosis. Are there general results about

• unbiased estimates of k-th order central moments?
• MLEs of k-th order central moments?
• unbiased estimates of k-th order standardized moments?
• MLEs of k-th order standardized moments?

Thanks!

## 1 Answer

• Yes, they can be constructed, although they are not simple. See http://www.jstor.org/stable/2985201 .
• Using the invariance of the MLE, it follows that the MLE of the k-th order central moments and of the k-th order standardized moments is obtained simply by plugging the MLE into the expression of the corresponding quantity of interest. For instance $\hat{\mu} = \int x f(x;\hat{\theta})dx$.

- Thanks! For standardized moments, is the MLE also obtained by the invariance of the MLE, and how about its unbiased estimates? – Tim Feb 2 at 1:31
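As an illustration of the plug-in idea for k = 2 under an assumed normal model (where, by invariance, the MLE of the second central moment is the divisor-n sample moment, and the divisor-(n-1) version is the unbiased estimate of the variance), a minimal sketch with hypothetical sample parameters:

```python
import numpy as np

# Plug-in (MLE) vs unbiased estimate of the 2nd central moment, under an
# assumed normal model N(mu, sigma^2). Sample parameters are hypothetical.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)
n = len(x)

# By invariance, the MLE of the 2nd central moment plugs the MLEs of
# (mu, sigma) into the moment formula; for the normal model this is the
# divisor-n sample central moment (biased).
m2_mle = np.mean((x - x.mean()) ** 2)

# The unbiased estimate of the variance uses divisor n - 1.
m2_unbiased = m2_mle * n / (n - 1)

print(m2_mle, m2_unbiased)
```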
https://www.sarthaks.com/7704/bucket-moving-circular-path-what-minimum-velocity-which-water-inside-bucket-will-not-spill?show=7706
# A bucket is moving in a circular path and what is the minimum velocity at which the water inside the bucket will not spill out

in Physics

The critical point is the top of the circular path: there, the water stays in the bucket as long as gravity does not exceed the required centripetal force, mg ≤ mv²/r. The minimum velocity is therefore √(gr).
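A quick numerical check of v = √(gr); the radius below is just an example value:

```python
import math

# Minimum speed at the top of the loop: m*g = m*v**2 / r  =>  v = sqrt(g*r)
def v_min(r, g=9.81):
    return math.sqrt(g * r)

print(v_min(1.0))  # about 3.13 m/s for a 1 m radius
```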
http://www.di.ens.fr/~feret/CMSB2017-tool-paper/
# KaDE tool paper: supplementary resources.

This website provides some supplementary resources about the following paper: KaDE: a tool to compile Kappa rules into (reduced) ODEs models, accepted to the tool paper track of CMSB2017.

```@InProceedings{feret:CMSB2017, title = "KaDE: a tool to compile Kappa rules into (reduced) ODEs models", booktitle = "Fifteenth International Workshop on Static Analysis and Systems Biology (SASB'17)", series = "LNCS/LNBI", publisher = "springer", volume = "10545", note = "to appear, Supplementary information available at \url{www.di.ens.fr/~feret/CMSB2017-tool-paper}", author = "Ferdinanda Camporesi and J{\'e}r{\^o}me Feret and Kim Quy{\^e}n L{\'y}"}```

# Kappa

### Software distribution

The nightly-build binaries are available here. These versions come without the graphical interface. The bundle contains binaries for Ubuntu, macOS, and Windows. Under macOS, the binaries are in the zip file, in the directory Kappapp.app/Contents/Resources/bin.

```git clone https://github.com/Kappa-Dev/KaSim.git
cd KaSim```

You need the OCaml native compiler, version 4.03.0 or above, as well as ocamlbuild, findlib, and the Yojson library. To check whether you have them, type

```ocamlfind ocamlopt -version
ocamlfind query yojson```

If you use a package manager (or opam, the OCaml package manager), the OCaml compilers, ocamlbuild, and findlib are very likely provided by it.
For instance, you may use the following instruction under opam:

`opam install ocamlbuild ocamlfind yojson`

Otherwise, OCaml native compilers can be downloaded from INRIA's website. The Windows bundle contains ocamlbuild and findlib. Findlib sources are available on camlcity.org. Ocamlbuild is on github. If you don't have any easier way to install it (opam, apt, rpm, cygwin, ...), Yojson sources are available here (note that you'll also need to compile and install its dependencies (cppo, easy-format, biniou) from the same website).

To create the binaries, simply type:

`make all`

The compilation of the graphical interface requires Tk and labltk. Instructions to install Tk may be found here. If you are using opam, labltk may be installed this way:

`opam install labltk`

Once you have labltk, compile with the option USE_TK=1:

```make clean
make USE_TK=1```

If compiled, the GUI may be launched by the following instruction (without any command-line option):

`./KaDE`

In the following, we recommend creating symbolic links in a directory in your path. For instance, assuming that the KaSim repository is directly in your home directory, and that the directory local/bin is in your path, the following instructions will do it:

```cd ..
ln -sf ~/KaSim/bin/* ~/local/bin/```

KaDE can then be invoked simply as `KaDE`.

### Kappa handbook

The Kappa handbook may be found here.

# Other software

## BioNetGen

The BioNetGen distribution may be found here. BNG2.pl is written in the Perl language; version 5.8 or above is required. Most Mac OS and Linux machines, and Windows machines under Cygwin, already have it installed.

## Erode

The ERODE distribution may be found here. ERODE requires Java 8. If ERODE starts, shows the logo, and then nothing happens, it is very likely that you do not have Java 8 properly installed. Under macOS, Oracle may fail to bind Java appropriately; in that case, the installation will report a success, but the new version will not be usable. MacPorts may be a good alternative to install Java and bind it correctly.
## CellDesigner

The CellDesigner distribution may be found here. KaDE SBML output is compatible with CellDesigner. CellDesigner offers tools to visualize and simulate reaction networks.

## SBML2LaTeX

The SBML2LaTeX distribution may be found here. KaDE SBML output is compatible with SBML2LaTeX. SBML2LaTeX translates SBML files into LaTeX.

# Examples of the paper

## Small tutorial

We describe a case study. We consider the following rules: two symmetric dimerization rules, binding two sites x and two sites y respectively, and an asymmetric one, binding a site x to a site y (in the Kappa file below, the asymmetric rule appears second). We denote as γ1, γ2, and γ3 the corrected rate constants of these rules. With the so-called Biochemist convention, which roughly speaking consists in dividing rates by the number of automorphisms of the left hand side of rules that are preserved on the right hand side, we have:

• γ1 = k1/2 ;
• γ2 = k2/2 ;
• γ3 = k3.

Indeed, the third rule makes a difference between the two agents: its non-trivial automorphism on the left hand side is not preserved on the right hand side. Rules 1 and 2 have two automorphisms, whereas rule 3 has only one. As a consequence, the rules are symmetric with respect to both sites if and only if 2γ1 = 2γ2 = γ3, that is to say k1 = k2 = k3.

The model is encoded in Kappa in the following file. The code is given as follows:

1  #sym.ka
2
3  %agent: A(x,y)
4
5  %init: 100 A()
6
7  %var: k 1
8  %obs: asym |A(x!1),A(y!1)|
9
10  A(x,y),A(x,y) -> A(x!1,y),A(x!1,y) @k
11  A(x,y),A(x,y) -> A(x!1,y),A(x,y!1) @k
12  A(x,y),A(x,y) -> A(x,y!1),A(x,y!1) @k
13
14  #We use the third convention (consider only the automorphisms in the lhs
15  # that are preserved in the rhs). There are two of them in the first and
16  # in the third rule. Only one in the second one. Hence the rate of the first
18  # and third are divided by 2.

Line 3 defines the signature of the agent A, and line 5 defines its initial concentration. Line 7 defines a parameter. Any further model reduction remains valid if we change this parameter. Line 8 defines the observable: the concentration of asymmetric dimers will be tracked during the simulation.
Lines 10 to 12 define the rules of the model. Their rate constant is set to k, which is treated as an uninterpreted variable.

We use the following command line to generate the ODE semantics in OCTAVE:

`KaDE --rule-rate-convention Biochemist sym.ka`

By default, equivalent sites are not analysed and the OCTAVE backend is used. The result is dumped in the following file. Integration parameters are given from line 19 to line 26.

19  tinit=0;
20  tend=1;
21  initialstep=1e-05;
22  maxstep=0.02;
23  reltol=0.001;
24  abstol=0.001;
25  period=0.01;
26  nonnegative=false;

We notice at line 29:

29  nodevar=5;

that the ODE system has 5 variables: 4, one for each kind of bio-molecular species, plus 1 for time advance. The initial state is defined from line 173 to line 184:

173  function Init=ode_init()
174
175  global nodevar
176  global init
177  Init=zeros(nodevar,1);
178
179  Init(1) = init(1); % A(x, y)
180  Init(2) = init(2); % A(x!1, y), A(x!1, y)
181  Init(3) = init(3); % A(x!1, y), A(x, y!1)
182  Init(4) = init(4); % A(x, y!1), A(x, y!1)
183  Init(5) = init(5); % t
184  end

At line 183 we notice that a special variable is introduced for time. Then, each instruction is annotated with some information referring to the Kappa file: variables are annotated with their corresponding name in the Kappa file, and initial species are annotated with the Kappa expression describing the corresponding bio-molecular species.
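The 1/2 coefficients that appear in the generated ODEs come from the Biochemist rate-correction convention described at the beginning of the tutorial. A minimal sketch of that correction (rule names and rate values are hypothetical):

```python
# "Biochemist" rate-correction convention (sketch): divide each rule's
# rate constant by the number of automorphisms of its left-hand side
# that are preserved in the right-hand side.
# Rule names and rate values below are hypothetical.
preserved_autos = {"sym_xx": 2, "asym_xy": 1, "sym_yy": 2}
k = {"sym_xx": 1.0, "asym_xy": 1.0, "sym_yy": 1.0}

gamma = {rule: k[rule] / preserved_autos[rule] for rule in k}
print(gamma)  # {'sym_xx': 0.5, 'asym_xy': 1.0, 'sym_yy': 0.5}
```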
The ODEs are given from line 204 to line 228: 204  dydt=zeros(nodevar,1); 205 206  % rule    : A(x,y), A(x,y) -> A(x,y!1), A(x,y!1) 207  % reaction: A(x, y) + A(x, y) -> A(x, y!1), A(x, y!1) 208 209  dydt(1)=dydt(1)-1/2*k(3)*y(1)*y(1); 210  dydt(1)=dydt(1)-1/2*k(3)*y(1)*y(1); 211  dydt(4)=dydt(4)+2/2*k(3)*y(1)*y(1); 212 213  % rule    : A(x,y), A(x,y) -> A(x!1,y), A(x,y!1) 214  % reaction: A(x, y) + A(x, y) -> A(x!1, y), A(x, y!1) 215 216  dydt(1)=dydt(1)-k(2)*y(1)*y(1); 217  dydt(1)=dydt(1)-k(2)*y(1)*y(1); 218  dydt(3)=dydt(3)+k(2)*y(1)*y(1); 219 220  % rule    : A(x,y), A(x,y) -> A(x!1,y), A(x!1,y) 221  % reaction: A(x, y) + A(x, y) -> A(x!1, y), A(x!1, y) 222 223  dydt(1)=dydt(1)-1/2*k(1)*y(1)*y(1); 224  dydt(1)=dydt(1)-1/2*k(1)*y(1)*y(1); 225  dydt(2)=dydt(2)+2/2*k(1)*y(1)*y(1); 226  dydt(5)=1; 227 228  end Each contribution is annotated with the corresponding reaction and the underlying Kappa rule. The Jacobian of the ODEs is given from line 231 to line 288: 231  function jac=ode_jacobian(t,y) 232 233  global nodevar 234  global max_stoc_coef 235  global jacvar 236  global var 237  global k 238  global kd 239  global kun 240  global kdun 241  global stoc 242 243  global jack 244  global jackd 245  global jackun 246  global jackund 247  global jacstoc 248 249  var(2)=y(3); % asym 250 251  k(1)=var(1); 252  k(2)=var(1); 253  k(3)=var(1); 254  jacvar(2,3)=1; 255 256 257  jac=sparse(nodevar,nodevar); 258 259  % rule    : A(x,y), A(x,y) -> A(x,y!1), A(x,y!1) 260  % reaction: A(x, y) + A(x, y) -> A(x, y!1), A(x, y!1) 261 262  jac(1,1)=jac(1,1)-1/2*k(3)*y(1); 263  jac(1,1)=jac(1,1)-1/2*k(3)*y(1); 264  jac(1,1)=jac(1,1)-1/2*k(3)*y(1); 265  jac(1,1)=jac(1,1)-1/2*k(3)*y(1); 266  jac(4,1)=jac(4,1)+2/2*k(3)*y(1); 267  jac(4,1)=jac(4,1)+2/2*k(3)*y(1); 268 269  % rule    : A(x,y), A(x,y) -> A(x!1,y), A(x,y!1) 270  % reaction: A(x, y) + A(x, y) -> A(x!1, y), A(x, y!1) 271 272  jac(1,1)=jac(1,1)-k(2)*y(1); 273  jac(1,1)=jac(1,1)-k(2)*y(1); 274  jac(1,1)=jac(1,1)-k(2)*y(1); 
275  jac(1,1)=jac(1,1)-k(2)*y(1);
276  jac(3,1)=jac(3,1)+k(2)*y(1);
277  jac(3,1)=jac(3,1)+k(2)*y(1);
278
279  % rule    : A(x,y), A(x,y) -> A(x!1,y), A(x!1,y)
280  % reaction: A(x, y) + A(x, y) -> A(x!1, y), A(x!1, y)
281
282  jac(1,1)=jac(1,1)-1/2*k(1)*y(1);
283  jac(1,1)=jac(1,1)-1/2*k(1)*y(1);
284  jac(1,1)=jac(1,1)-1/2*k(1)*y(1);
285  jac(1,1)=jac(1,1)-1/2*k(1)*y(1);
286  jac(2,1)=jac(2,1)+2/2*k(1)*y(1);
287  jac(2,1)=jac(2,1)+2/2*k(1)*y(1);
288  end

The observables are defined from line 297 to line 301:

297  t =y(5);
298  var(2)=y(3);
299
300  obs(1)=t; % [T]
301  obs(2)=var(2); % asym

We now wonder whether the sites are equivalent or not. We use the following command line:

`KaDE --rule-rate-convention Biochemist sym.ka --show-symmetries`

The status of equivalent sites is described in the log:

+ compute symmetric sites...
Symmetries:
In rules:
************
Agent: A
-Equivalence classes of sites for bindings states: {x,y}
-Equivalence classes of sites (both): {x,y}
************
In rules and initial states:
************
Agent: A
-Equivalence classes of sites for bindings states: {x,y}
-Equivalence classes of sites (both): {x,y}
************
In rules and algebraic expression:
************

The set of rules and the initial state are symmetric with respect to the pair of sites. This is not the case for the observable. Thus, only backward bisimulation may be used to reduce the system. Indeed, if we ignore the difference between sites x and y, we can no longer express the concentration of the sites x that are free; this excludes forward bisimulation. Backward bisimulation may still be used, since the concentration of each species can be computed from the overall concentration of its equivalence class: the concentrations of two equivalent species are always inversely proportional to their numbers of automorphisms.
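As a sketch of this recovery, here is how individual species concentrations can be read off a lumped class (species labels and the lumped value are hypothetical; "xx", "xy", "yy" denote the dimer bound via two sites x, one site of each kind, and two sites y):

```python
# Recover individual species concentrations from a lumped class (sketch).
# Occurrence weights within the class are inversely proportional to the
# species' numbers of automorphisms (2, 1, 2). Values are hypothetical.
automorphisms = {"xx": 2, "xy": 1, "yy": 2}
inv = {s: 1.0 / a for s, a in automorphisms.items()}
z = sum(inv.values())                          # 1/2 + 1 + 1/2 = 2
weights = {s: w / z for s, w in inv.items()}   # xx: 1/4, xy: 1/2, yy: 1/4

y2 = 100.0          # lumped dimer variable, counted in embeddings
total = y2 / 2      # the representative dimer has 2 automorphisms
asym = total * weights["xy"]
print(asym)  # 25.0, i.e. y2 / 4
```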
The command line:

`KaDE --rule-rate-convention Biochemist sym.ka --with-symmetries Forward --output ode_with_fwd_sym --output-plot data_fwd.csv`

gives this octave file. There is indeed no reduction. This is because we observe the concentration of asymmetric dimers (where a site x is bound to a site y); forward bisimulation would ignore the difference between symmetric and asymmetric dimers.

The command line:

`KaDE --rule-rate-convention Biochemist sym.ka --with-symmetries Backward --output ode_with_bwd_sym --output-plot data_bwd.csv`

reduces the system by ignoring the difference between sites x and y. This is done by replacing, in each product of a reaction, every species by an arbitrary representative of its equivalence class, and, in each algebraic expression, each species concentration by the product of the concentration of its representative and the relative weight of this species in its equivalence class (which is constant and inversely proportional to its number of automorphisms). The OCTAVE output is this file. We notice at line 29 that only three variables remain:

29  nodevar=3;

The meaning of these variables is given from line 173 to line 182:

173  function Init=ode_init()
174
175  global nodevar
176  global init
177  Init=zeros(nodevar,1);
178
179  Init(1) = init(1); % A(x, y)
180  Init(2) = init(2); % A(x, y!1), A(x, y!1)
181  Init(3) = init(3); % t
182  end

Thus, there is one variable for time advance, one for free As, and one for dimers. KaDE has gathered the three kinds of dimers into a single equivalence class (no matter which sites are bound). For instance, at lines 211 and 212:

211  % rule    : A(x,y), A(x,y) -> A(x!1,y), A(x,y!1)
212  % reaction: A(x, y) + A(x, y) -> A(x, y!1), A(x, y!1)

the production of an asymmetric dimer is replaced with the production of a dimer in which the bond is on both sites y.
The definition of observables is given from line 289 to line 301:

289  function obs=ode_obs(y)
290
291  global nobs
292  global var
293  obs=zeros(nobs,1);
294
295  t = y(3);
296  var(2)=y(2)/4; % asym
297
298  obs(1)=t; % [T]
299  obs(2)=var(2); % asym
300
301  end

We are interested in asymmetric dimers only. We notice that their concentration is obtained by dividing the overall quantity of dimers by 4. To understand why, we shall have a closer look at the meaning of each variable. There exist two conventions: a variable may denote the number of occurrences of a bio-molecular species, or the number of embeddings between this bio-molecular species and the current state of the system. Both conventions are related by the fact that the number of embeddings is equal to the number of occurrences multiplied by the number of automorphisms of the bio-molecular species. The reduction of the model replaces each occurrence of a dimer with a dimer made of two proteins bound via their sites y. As indicated at line 15:

15  %% variables (init(i),y(i)) denote numbers of embeddings

the convention is to count in numbers of embeddings. Thus the total number of dimers is y(2)/2, and only half of them are asymmetric dimers, which gives y(2)/4.

SBML2LaTeX may be used to convert the output of KaDE into PDF. First, we translate the different versions of the model into SBML.

```KaDE sym.ka --ode-backend SBML --output network
KaDE sym.ka --ode-backend SBML --output network_fwd --with-symmetries Forward
KaDE sym.ka --ode-backend SBML --output network_bwd --with-symmetries Backward
```

The following command line:

`java -jar`

launches the graphical interface of SBML2LaTeX.
We compile the LaTeX files thanks to the following instructions:

```pdflatex network.tex
pdflatex network.tex
pdflatex network_fwd.tex
pdflatex network_fwd.tex
pdflatex network_bwd.tex
pdflatex network_bwd.tex```

We obtain the following PDF files: initial model -- reduced model (fwd) -- reduced model (bwd).

Let us check the soundness of our tools by integrating the three ODE systems.

```octave ode.m
octave ode_fwd.m
octave ode_bwd.m```

We obtain the three following files: data.csv -- data_fwd.csv -- data_bwd.csv. We notice that the first two data sets are identical (this is expected since there is no reduction). The third data set is almost the same: although the equations have the same solutions, errors due to numerical integration may differ.

The concentration of asymmetric dimers may be plotted thanks to gnuplot. We use the following gnuplot files: plot.gplot -- plot_fwd.gplot -- plot_bwd.gplot.

```gnuplot plot.gplot
gnuplot plot_fwd.gplot
gnuplot plot_bwd.gplot```

We obtain the following plots:

## Parametric examples

The paper considers three examples with a parametric size, which we denote by n. We give the Kappa files for each of the three examples with parameter n = 2.

#### kinase/phosphatase

1  %var: Stot 100
2  %var: kKS 0.01
3  %var: kdKS 1.
4  %var: kpS 0.1 5  %var: kPS 0.001 6  %var: kdPS 0.1 7  %var: kuS 0.01 8  %agent: K(s) 9  %agent: P(s) 10  %agent: S(x1~u~p,x2~u~p) 11 12  %init: Stot K(s) 13  %init: Stot P(s) 14  %init: Stot S(x1~u,x2~u) 15 16 17  K(s) , S(x1~u) <-> K(s!1) , S(x1~u!1) @ kKS,kdKS 18  K(s!1) , S(x1~u!1) -> K(s) , S(x1~p) @ kpS 19  P(s) , S(x1~p) <-> P(s!1) , S(x1~p!1) @ kPS,kdPS 20  P(s!1) , S(x1~p!1) -> P(s) , S(x1~u) @ kuS 21  K(s) , S(x2~u) <-> K(s!1) , S(x2~u!1) @ kKS,kdKS 22  K(s!1) , S(x2~u!1) -> K(s) , S(x2~p) @ kpS 23  P(s) , S(x2~p) <-> P(s!1) , S(x2~p!1) @ kPS,kdPS 24  P(s!1) , S(x2~p!1) -> P(s) , S(x2~u) @ kuS #### multiple phosphorylation sites 1  %var: kp0 3 2  %var: ku1 14 3  %var: kp1 15 4  %var: ku2 98 5  %var: kp2 75 6  %var: ku3 686 7  %agent: A(s1~u~p,s2~u~p) 8 9  %init: 100 A(s1~u,s2~u) 10 11  A(s1~p,s2~u) -> A(s1~p,s2~p) @kp1 12  A(s1~p,s2~u) -> A(s1~u,s2~u) @ku1 13 14 15  A(s1~p,s2~p) -> A(s1~p,s2~u) @ku2 16  A(s1~p,s2~p) -> A(s1~u,s2~p) @ku2 17 18 19  A(s1~u,s2~u) -> A(s1~u,s2~p) @kp0 20  A(s1~u,s2~u) -> A(s1~p,s2~u) @kp0 21 22 23  A(s1~u,s2~p) -> A(s1~u,s2~u) @ku1 24  A(s1~u,s2~p) -> A(s1~p,s2~p) @kp1 #### multiple phosphorylation sites with counter 1  %var: kp0 3 2  %var: ku1 14 3  %var: kp1 15 4  %var: ku2 98 5  %agent: A(s1~u~p,s2~u~p,p) 6  %agent: P(l,r) 7 8  %init: 100 A(p!1) , P(l!1,r) 9 10 11 12 13  A(s1~u,p!1) , P(l!1,r) -> A(s1~p,p!1) , P(l!2,r) , P(l!1,r!2) @kp0 14  A(s2~u,p!1) , P(l!1,r) -> A(s2~p,p!1) , P(l!2,r) , P(l!1,r!2) @kp0 15  A(s1~u,p!1) , P(l!1,r!2) , P(l!2,r) -> A(s1~p,p!1) , P(l!2,r!3) , P(l!3,r) , P(l!1,r!2) @kp1 16  A(s1~p,p!1) , P(l!2,r) , P(l!1,r!2) -> A(s1~u,p!1) , P(l!1,r) @ku1 17  A(s2~u,p!1) , P(l!1,r!2) , P(l!2,r) -> A(s2~p,p!1) , P(l!2,r!3) , P(l!3,r) , P(l!1,r!2) @kp1 18  A(s2~p,p!1) , P(l!2,r) , P(l!1,r!2) -> A(s2~u,p!1) , P(l!1,r) @ku1 19  A(s1~p,p!1) , P(l!2,r!3) , P(l!3,r) , P(l!1,r!2) -> A(s1~u,p!1) , P(l!1,r!2) , P(l!2,r) @ku2 20  A(s2~p,p!1) , P(l!2,r!3) , P(l!3,r) , P(l!1,r!2) -> A(s2~u,p!1) , P(l!1,r!2) , P(l!2,r) 
@ku2

All the files for the description of the models, both in Kappa and in BNGL, may be found in this tarball. All the files, including both the input models and the reduced ones, may be found in this tarball.

### kinase/phosphatase

#### Building Kappa and BNGL models

Here is the OCaml source code to generate the model. The following instructions:

```ocamlopt.opt kinase_phosphatase.ml -o kinase_phosphatase
mkdir generated_models
mkdir generated_models/kin_phos
./kinase_phosphatase 1 10
```

will generate the models, in the directory generated_models/kin_phos, in Kappa and in BNGL, for the parameter n ranging from 1 to 10.

`ls generated_models/kin_phos`

The Kappa model matches the BNGL model with distinct sites.

#### Tarball

The following tarball contains all the input/output files for this model.

#### Input/output files

Each file is available individually in the following table: n Kappa file KaDE (ground system) KaDE (forward bisimulation) KaDE (backward bisimulation) BNGL file (with distinct sites) Network BNGL file (with multiple sites) Network 1 Kappa DotNet DotNet DotNet BNGL DotNet BNGL DotNet 2 Kappa DotNet DotNet DotNet BNGL DotNet BNGL DotNet 3 Kappa DotNet DotNet DotNet BNGL DotNet BNGL DotNet 4 Kappa DotNet DotNet DotNet BNGL DotNet BNGL DotNet 5 Kappa DotNet DotNet DotNet BNGL DotNet BNGL DotNet 6 Kappa DotNet DotNet DotNet BNGL DotNet BNGL DotNet 7 Kappa DotNet DotNet DotNet BNGL DotNet BNGL DotNet 8 Kappa DotNet DotNet DotNet BNGL BNGL DotNet 9 Kappa DotNet DotNet DotNet BNGL BNGL DotNet 10 Kappa DotNet DotNet DotNet BNGL BNGL DotNet

#### Benchmarks

We obtain the following benchmarks: n KaDE_wo_sym KaSa KaDE_fwd KaDE_bwd bngl bngl_sym erode_initial (FB) erode_initial (NFB) erode_initial (BB) erode_initial (NBB) erode_reduced (FB) erode_reduced (NFB) erode_reduced (BB) erode_reduced (NBB) 1 0.00542 0.002386 0.007474 0.004767 0.02 0.01 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 2 0.009713 0.003688 0.0011809 0.010309 0.04 0.02 0.001 0.001 0.001 0.001
0.001 0.003 0.001 0.001 3 0.03105 0.004322 0.010267 0.016402 0.22 0.06 0.001 0.001 0.001 0.001 0.001 0.006 0.002 0.002 4 0.159678 0.005659 0.035457 0.039075 1.23 0.15 0.004 0.008 0.004 0.005 0.002 0.007 0.003 0.003 5 1.26683 0.007863 0.079153 0.084271 7.06 0.33 0.016 0.042 0.024 0.025 0.003 0.009 0.004 0.003 6 16.1219 0.011796 0.183597 0.201491 39.71 0.63 0.063 0.156 0.088 0.085 0.005 0.010 0.004 0.003 7 344.108 0.014771 0.677966 0.672631 222.91 1.15 0.244 0.800 0.499 0.463 0.005 0.010 0.005 0.005 8 ? 0.020017 4.22518 4.17969 ? 1.97 ? ? ? ? 0.006 0.008 0.006 0.003 9 ? 0.028944 37.8643 38.0085 ? 3.17 ? ? ? ? 0.005 0.011 0.007 0.006 10 ? 0.036031 421.908 424.09 ? 4.91 ? ? ? ? 0.007 0.015 0.009 0.005

Each computation has been made with a 10-minute time-out. Computations have been made on a MacBookPro with a 2.8 GHz Intel Core i7 CPU and 16 GB of 1600 MHz DDR3 memory. In particular, we compare the computation times of several pipelines for several functionalities.

1. Firstly, we compare the computation time to generate the ground network with BNGL and with KaDE. We obtain the following plots:

2. Secondly, we compare the computation time to generate the network reduced by forward bisimulation. In the first pipeline, we use KaDE to generate a reduced network, then we use ERODE to prove the optimality of the reduction (using the algorithm NFB). In the second pipeline, we specify explicitly in the BNGL model which sites are equivalent, we use BNGL to generate the reduced model, and then we use ERODE to prove the optimality. In the third pipeline, we use BNGL to generate the ground network, and then we use ERODE to reduce this network by means of forward bisimulation (using the faster, but potentially incomplete, algorithm FB). It is worth stressing that equivalent sites are inferred automatically by KaDE and ERODE, whereas they have to be specified explicitly by the end-user in BNGL.
ERODE can also go further by inferring the coarsest bisimulation that refines a given partition of the set of bio-molecular species. We obtain the following plots:

3. Thirdly, we compare the time taken to generate the network reduced by backward bisimulation. In the first pipeline, we use KaDE to generate a reduced network, then we use ERODE to prove the optimality of the reduction (using the algorithm NBB). In the second pipeline, we specify explicitly in the BNGL model which sites are equivalent, we use BNGL to generate the reduced model, and then we use ERODE to search for further potential reductions (since ERODE focuses on uniform bisimulations, it cannot prove the optimality of the reduction). In the third pipeline, we use BNGL to generate the ground network, and then we use ERODE to reduce this network by means of backward bisimulation (using the faster, but potentially incomplete, algorithm BB). It is worth stressing that equivalent sites are inferred automatically by KaDE and ERODE, whereas they have to be specified explicitly by the end-user in BNGL. We obtain the following plots:

### multi-phosphorylation sites

#### Building Kappa and BNGL models

Here is the OCaml source code to generate the model. The following instructions:

```
ocamlopt.opt multi_phos.ml -o multi_phos
mkdir generated_models
mkdir generated_models/multi_phos
./multi_phos 1 10
```

will generate the models, in the directory generated_models/multi_phos, in Kappa and in BNGL, for the parameter n ranging from 1 to 10:

`ls generated_models/multi_phos`

The Kappa model matches the BNGL model with distinct sites.

#### Tarball

The following tarball contains all the input/output files for this model.
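For intuition about the kind of symmetry reduction benchmarked in this section, here is a small self-contained sketch, independent of KaDE and ERODE and with hypothetical rate constants: a protein with n equivalent, independent phosphorylation sites has a ground ODE system over 2^n site configurations, but the lumped variables counting phosphorylated sites satisfy a closed system over n+1 species.

```python
from itertools import combinations

# Toy model: a protein with n equivalent, independent phosphorylation sites;
# each free site is phosphorylated at rate kp, each occupied site is
# dephosphorylated at rate kd (hypothetical values, not from the benchmarks).
n, kp, kd = 4, 1.0, 0.5

# Ground system: one variable per subset of phosphorylated sites (2^n species).
states = [frozenset(c) for j in range(n + 1) for c in combinations(range(n), j)]
idx = {s: i for i, s in enumerate(states)}

def ground_rhs(c):
    dc = [0.0] * len(c)
    for s in states:
        i = idx[s]
        for site in range(n):
            if site in s:                       # dephosphorylation: s -> s \ {site}
                dc[i] -= kd * c[i]
                dc[idx[s - {site}]] += kd * c[i]
            else:                               # phosphorylation: s -> s + {site}
                dc[i] -= kp * c[i]
                dc[idx[s | {site}]] += kp * c[i]
    return dc

# Reduced system: one variable per number of phosphorylated sites (n+1 species).
def reduced_rhs(x):
    dx = [0.0] * (n + 1)
    for j in range(n + 1):
        dx[j] -= (kp * (n - j) + kd * j) * x[j]
        if j < n: dx[j + 1] += kp * (n - j) * x[j]
        if j > 0: dx[j - 1] += kd * j * x[j]
    return dx

# Integrate both with explicit Euler from the fully unphosphorylated state.
c = [0.0] * len(states); c[idx[frozenset()]] = 1.0
x = [0.0] * (n + 1); x[0] = 1.0
dt = 1e-3
for _ in range(2000):
    dc, dx = ground_rhs(c), reduced_rhs(x)
    c = [a + dt * b for a, b in zip(c, dc)]
    x = [a + dt * b for a, b in zip(x, dx)]

# The lumped ground trajectory coincides with the reduced one.
lumped = [sum(c[idx[s]] for s in states if len(s) == j) for j in range(n + 1)]
for j in range(n + 1):
    assert abs(lumped[j] - x[j]) < 1e-9
```

The lumping map is linear and commutes with both right-hand sides, which is why the two integrations agree step by step; this is the elementary version of what the forward-bisimulation reductions above establish automatically.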
#### Input/output files

Each file is available individually in the following table:

| n | Kappa file | KaDE (ground system) | KaDE (forward bisimulation) | KaDE (backward bisimulation) | BNGL file (with distinct sites) | Network | BNGL file (with multiple sites) | Network |
|---|---|---|---|---|---|---|---|---|
| 1 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 2 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 3 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 4 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 5 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 6 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 7 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 8 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 9 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | |
| 10 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | |

#### Benchmarks

We obtain the following benchmarks:

| n | KaDE_wo_sym | KaSa | KaDE_fwd | KaDE_bwd | bngl | bngl_sym | erode_initial (FB) | erode_initial (NFB) | erode_initial (BB) | erode_initial (NBB) | erode_reduced (FB) | erode_reduced (NFB) | erode_reduced (BB) | erode_reduced (NBB) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.001394 | 0.000722 | 0.002418 | 0.002514 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 2 | 0.003474 | 0.002305 | 0.003985 | 0.004139 | 0.02 | 0.01 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 3 | 0.004863 | 0.004226 | 0.007032 | 0.007262 | 0.03 | 0.03 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 4 | 0.008634 | 0.008914 | 0.020552 | 0.018946 | 0.12 | 0.07 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 5 | 0.026137 | 0.025071 | 0.067181 | 0.069171 | 0.38 | 0.26 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 6 | 0.069118 | 0.073423 | 0.311556 | 0.33069 | 1.44 | 1.43 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 7 | 0.25431 | 0.224123 | 1.57005 | 1.60526 | 5.72 | 11.10 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 8 | 2.19546 | 0.708071 | 7.88053 | 7.96044 | 24.52 | 106.26 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 9 | 10.1247 | 2.29603 | 41.7526 | 43.3281 | 110.73 | | 0.003 | 0.003 | 0.003 | 0.002 | 0.001 | 0.001 | 0.001 | 0.001 |
| 10 | 48.572 | 7.7361 | 250.509 | 253.594 | 518.90 | | 0.005 | 0.007 | 0.008 | 0.008 | 0.001 | 0.001 | 0.001 | 0.001 |

Each computation has been made with a 10-minute time-out. Computations have been made on a MacBook Pro with a 2.8 GHz Intel Core i7 CPU and 16 GB of 1600 MHz DDR3 memory. In particular, we propose to compare several pipelines for several functionalities.

1. Firstly, we compare the time taken to generate the ground network with BNGL and with KaDE. We obtain the following plots:
2. Secondly, we compare the time taken to generate the network reduced by forward bisimulation. In the first pipeline, we use KaDE to generate a reduced network, then we use ERODE to prove the optimality of the reduction (using the algorithm NFB). In the second pipeline, we specify explicitly in the BNGL model which sites are equivalent, we use BNGL to generate the reduced model, and then we use ERODE to prove the optimality. In the third pipeline, we use BNGL to generate the ground network, and then we use ERODE to reduce this network by means of forward bisimulation (using the faster, but potentially incomplete, algorithm FB). It is worth stressing that equivalent sites are inferred automatically by KaDE and ERODE, whereas they have to be specified explicitly by the end-user in BNGL. ERODE can also go further by inferring the coarsest bisimulation that refines a given partition of the set of bio-molecular species. We obtain the following plots:
3. Thirdly, we compare the time taken to generate the network reduced by backward bisimulation. In the first pipeline, we use KaDE to generate a reduced network, then we use ERODE to prove the optimality of the reduction (using the algorithm NBB). In the second pipeline, we specify explicitly in the BNGL model which sites are equivalent, we use BNGL to generate the reduced model, and then we use ERODE to search for further potential reductions (since ERODE focuses on uniform bisimulations, it cannot prove the optimality of the reduction).
In the third pipeline, we use BNGL to generate the ground network, and then we use ERODE to reduce this network by means of backward bisimulation (using the faster, but potentially incomplete, algorithm BB). It is worth stressing that equivalent sites are inferred automatically by KaDE and ERODE, whereas they have to be specified explicitly by the end-user in BNGL. We obtain the following plots:

### multi-phosphorylation sites (encoded with an explicit counter)

#### Building Kappa and BNGL models

Here is the OCaml source code to generate the model. The following instructions:

```
ocamlopt.opt multi_phos_with_counter.ml -o multi_phos_with_counter
mkdir generated_models
mkdir generated_models/multi_phos_with_counter
./multi_phos_with_counter 1 10
```

will generate the models, in the directory generated_models/multi_phos_with_counter, in Kappa and in BNGL, for the parameter n ranging from 1 to 10:

`ls generated_models/multi_phos_with_counter`

The Kappa model matches the BNGL model with distinct sites.

#### Tarball

The following tarball contains all the input/output files for this model.
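The point of the counter encoding can be seen with a quick combinatorial sanity check (illustrative Python, not part of the tool chain): the 2^n site configurations of a protein with n equivalent sites collapse onto n+1 counter values, each counter value j lumping C(n, j) configurations.

```python
from math import comb

n = 10
ground = 2 ** n      # one species per configuration of the n sites
counter = n + 1      # one species per value of the explicit counter

assert ground == 1024 and counter == 11

# Each counter value j lumps comb(n, j) ground species, and the
# lumped classes partition the ground state space exactly.
assert sum(comb(n, j) for j in range(n + 1)) == ground
```

This is why the counter-encoded model stays tractable where the site-explicit encoding blows up exponentially.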
#### Input/output files

Each file is available individually in the following table:

| n | Kappa file | KaDE (ground system) | KaDE (forward bisimulation) | KaDE (backward bisimulation) | BNGL file (with distinct sites) | Network | BNGL file (with multiple sites) | Network |
|---|---|---|---|---|---|---|---|---|
| 1 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 2 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 3 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 4 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 5 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 6 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 7 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 8 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 9 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |
| 10 | Kappa | DotNet | DotNet | DotNet | BNGL | DotNet | BNGL | DotNet |

#### Benchmarks

We obtain the following benchmarks:

| n | KaDE_wo_sym | KaSa | KaDE_fwd | KaDE_bwd | bngl | bngl_sym | erode_initial (FB) | erode_initial (NFB) | erode_initial (BB) | erode_initial (NBB) | erode_reduced (FB) | erode_reduced (NFB) | erode_reduced (BB) | erode_reduced (NBB) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.00225 | 0.001286 | 0.004046 | 0.003873 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 2 | 0.004318 | 0.003685 | 0.007873 | 0.008793 | 0.02 | 0.01 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 3 | 0.008547 | 0.006645 | 0.017025 | 0.020627 | 0.06 | 0.02 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 4 | 0.018465 | 0.018465 | 0.036521 | 0.037319 | 0.14 | 0.04 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 5 | 0.041986 | 0.026849 | 0.06203 | 0.087131 | 0.37 | 0.08 | 0.001 | 0.002 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| 6 | 0.101352 | 0.052174 | 0.106849 | 0.138138 | 1.00 | 0.13 | 0.001 | 0.003 | 0.002 | 0.003 | 0.001 | 0.001 | 0.001 | 0.001 |
| 7 | 0.255698 | 0.100566 | 0.191022 | 0.251259 | 2.58 | 0.23 | 0.003 | 0.006 | 0.002 | 0.002 | 0.001 | 0.001 | 0.001 | 0.001 |
| 8 | 0.670282 | 0.194196 | 0.324019 | 0.459048 | 6.96 | 0.37 | 0.004 | 0.004 | 0.005 | 0.005 | 0.001 | 0.001 | 0.001 | 0.001 |
| 9 | 1.76816 | 0.332928 | 0.54416 | 0.791958 | 18.89 | 0.62 | 0.004 | 0.005 | 0.007 | 0.005 | 0.001 | 0.001 | 0.001 | 0.001 |
| 10 | 5.16022 | 0.588872 | 0.899689 | 1.37989 | 54.40 | 1.03 | 0.008 | 0.011 | 0.012 | 0.012 | 0.001 | 0.001 | 0.001 | 0.001 |

Each computation has been made with a 10-minute time-out. Computations have been made on a MacBook Pro with a 2.8 GHz Intel Core i7 CPU and 16 GB of 1600 MHz DDR3 memory. In particular, we propose to compare several pipelines for several functionalities.

1. Firstly, we compare the time taken to generate the ground network with BNGL and with KaDE. We obtain the following plots:
2. Secondly, we compare the time taken to generate the network reduced by forward bisimulation. In the first pipeline, we use KaDE to generate a reduced network, then we use ERODE to prove the optimality of the reduction (using the algorithm NFB). In the second pipeline, we specify explicitly in the BNGL model which sites are equivalent, we use BNGL to generate the reduced model, and then we use ERODE to prove the optimality. In the third pipeline, we use BNGL to generate the ground network, and then we use ERODE to reduce this network by means of forward bisimulation (using the faster, but potentially incomplete, algorithm FB). It is worth stressing that equivalent sites are inferred automatically by KaDE and ERODE, whereas they have to be specified explicitly by the end-user in BNGL. ERODE can also go further by inferring the coarsest bisimulation that refines a given partition of the set of bio-molecular species. We obtain the following plots:
3. Thirdly, we compare the time taken to generate the network reduced by backward bisimulation. In the first pipeline, we use KaDE to generate a reduced network, then we use ERODE to prove the optimality of the reduction (using the algorithm NBB). In the second pipeline, we specify explicitly in the BNGL model which sites are equivalent, we use BNGL to generate the reduced model, and then we use ERODE to search for further potential reductions (since ERODE focuses on uniform bisimulations, it cannot prove the optimality of the reduction).
In the third pipeline, we use BNGL to generate the ground network, and then we use ERODE to reduce this network by means of backward bisimulation (using the faster, but potentially incomplete, algorithm BB). It is worth stressing that equivalent sites are inferred automatically by KaDE and ERODE, whereas they have to be specified explicitly by the end-user in BNGL. We obtain the following plots:

# Other examples

We have manually translated each model of the BNGL distribution into Kappa. We have used the static analyzer KaSa to check that there is no dead code in the models. We found that the rule for transphosphorylation of Fyn by SH2-bound Lyn was wrong in each BNGL model; we corrected it in the BNGL models as well as in the Kappa models.

#### Remark on equivalent sites

It is worth noticing that the operational semantics of equivalent sites in BNGL does not match the intuitive encoding with multiple identified sites. Let us consider an example with a protein A with a site x that may take the state u or p, and two sites l that may take the state u, p, or q. We consider the following rule in BNGL:

```
1  A(x~u,l~u) -> A(x~p,l~u) k
```

This means that the site x of a protein A may get the state p at rate k, provided that at least one site l is in state u. An intuitive encoding with identified sites would be the following:

```
1  A(x~u,l1~u) -> A(x~p,l1~u) k
2  A(x~u,l2~u) -> A(x~p,l2~u) k
```

This encoding is quantitatively wrong: both rules may be used to activate the site x of a protein A with both sites l1 and l2 in state u, giving an overall rate of 2k (instead of k in the BNGL model with equivalent sites). A correct encoding requires refining the states of sites l1 and l2:

```
1  A(x~u,l1~u,l2~u) -> A(x~p,l1~u,l2~u) k
2  A(x~u,l1~u,l2~p) -> A(x~p,l1~u,l2~p) k
3  A(x~u,l1~u,l2~q) -> A(x~p,l1~u,l2~q) k
4  A(x~u,l1~p,l2~u) -> A(x~p,l1~p,l2~u) k
5  A(x~u,l1~q,l2~u) -> A(x~p,l1~q,l2~u) k
```

This approach can be generalised. Consider a protein with some equivalent sites.
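Before generalising, the quantitative discrepancy between the naive and the refined encodings of the concrete example above can be checked by brute-force enumeration of the states of (l1, l2) (illustrative Python, independent of BNGL):

```python
from itertools import product

k = 1.0
states = list(product("upq", repeat=2))      # all 9 states of (l1, l2)

# Naive encoding: one copy of the rule per identified site l1, l2,
# so the effective rate adds one k per site in state u.
def naive_rate(l1, l2):
    return k * ((l1 == "u") + (l2 == "u"))

# Refined encoding: the five mutually exclusive refinements listed above,
# each firing at rate k, so at most one applies in any state.
refined = [("u", "u"), ("u", "p"), ("u", "q"), ("p", "u"), ("q", "u")]
def refined_rate(l1, l2):
    return k * sum(1 for r in refined if r == (l1, l2))

for l1, l2 in states:
    # BNGL equivalent-site semantics: rate k iff at least one l is u.
    want = k if "u" in (l1, l2) else 0.0
    assert refined_rate(l1, l2) == want
    if (l1, l2) == ("u", "u"):
        assert naive_rate(l1, l2) == 2 * k   # the naive encoding double-counts
```

The two encodings agree on every state except (u, u), where the naive one fires at 2k, exactly the discrepancy described above.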
Consider a rule that tests some of these sites. For each occurrence of an agent and each kind of equivalent sites, the sites of this kind may be partitioned into isomorphism classes (two distinct classes stand for two distinct properties specifying the context of the site). Let us assume that there are n equivalence classes. Then we have to consider every function mapping each of the equivalent sites to a subset of these equivalence classes. The interpretation of a function is that each site matches the context of (or the property denoted by) each equivalence class in its image. Each function is then associated with a set of rules, obtained by refining the initial rule by enforcing, for each site, the properties to be satisfied (for each equivalence class in the image of the site) and the negation of the properties that are not (for each equivalence class that is not in the image of the site). Since the language has no negation, enforcing the negation of a property requires covering all the other cases by a set of mutually incompatible conditions. Some conditions (positive and/or negative) may not be compatible; thus, a solver may be used to cut unrealisable refinements on the fly.

#### Input files

This table contains the examples that are provided in the BNGL repository. For each model, we provide the BNGL file and the Kappa model.
| Id | Example name | BNGL file (with multiple sites) | BNGL file (with distinct sites) | Kappa file | Number of bio-molecular species in the initial model | Number of bio-molecular species in the reduced model | Note/current status |
|---|---|---|---|---|---|---|---|
| 1 | test_continue | BNGL | BNGL | Kappa | 22 | 22 | |
| 2 | Repressilator | BNGL | BNGL | Kappa | 51 | 16 | |
| 3 | egfr_net | BNGL | BNGL | Kappa | 356 | 356 | |
| 4 | egfr_net_red | BNGL | BNGL | Kappa | 40 | 40 | |
| 5 | fceri_ji | BNGL | BNGL | Kappa | 654 | 354 | |
| 6 | fceri_ji_red | BNGL | BNGL | Kappa | 654 | 172 | |
| 7 | fceri_lyn_745 | BNGL | BNGL | Kappa | 1411 | 745 | |
| 8 | fceri_fyn | BNGL | BNGL | Kappa | 2457 | 1281 | Dead rules have been corrected |
| 9 | fceri_fyn_lig | BNGL | BNGL | Kappa | 4858 | 2506 | Dead rules have been corrected |
| 10 | fceri_gamma2 | BNGL | BNGL | Kappa | 6646 | 3786 | |
| 11 | fceri_trimer | BNGL | BNGL | Kappa | time out | 2954 | |
| 12 | fceri_fyn_trimer | BNGL | BNGL | Kappa | time-out | time-out | Dead rules have been corrected |

#### Benchmarks

We obtain the following benchmarks:

n KaDE_wo_sym KaSa KaDE_fwd KaDE_bwd bngl bngl_sym erode_initial (FB) erode_initial (NFB) erode_initial (BB) erode_initial (NBB) erode_reduced (FB) erode_reduced (NFB) erode_reduced (BB) erode_reduced (NBB)

1 0.009526 0.007193 0.02239 0.019819 0.10 0.11 0.10 0.005 0.002 0.002 0.009 0.004
2 0.008076 0.041444 0.034518 0.034876 0.08 0.06 0.001 0.003 0.001 0.001 0.001 0.002 0.001 0.001
3 0.009402 0.045191 1.1848 1.00853 7.68 8.22 0.001 0.001 0.001 0.001 0.053 0.033
4 0.009039 0.047175 0.04607 0.049943 0.16 0.20 0.001 0.001 0.001 0.001 0.001 0.004 0.002 0.002
5 13.0687 0.059046 0.824042 0.834501 5.19 0.053 0.127 0.093 0.099 0.005 0.016 0.010 0.011
6 13.1851 0.05224 0.263311 0.273658 1.85 0.052 0.134 0.094 0.095 0.002 0.007 0.002 0.004
7 13.2498 0.089625 3.48695 3.57872 13.02 0.054 0.127 0.106 0.076 0.015 0.031 0.021 0.022
8 13.1831 0.106721 6.94145 6.9275 21.36 0.066 0.138 0.092 0.088 0.035 0.064 0.043 0.044
9 13.2876 0.107225 24.8929 24.7812 46.48 0.149 0.212 0.141 0.119 0.074 0.149 0.096 0.102
10 13.2621 0.198361 151.554 151.188 121.18 0.058 0.133 0.088 0.103 0.104 0.284 0.148 0.145
11 13.1249 9.38133 119.584 118.768 128.35 0.051 0.135
0.093 0.094 0.051 0.259 0.117 0.115
12 13.232 69.7763 0.054 0.138 0.092 0.135

Each computation has been made with a 10-minute time-out. Computations have been made on a MacBook Pro with a 2.8 GHz Intel Core i7 CPU and 16 GB of 1600 MHz DDR3 memory. In particular, we propose to compare several pipelines for several functionalities.

1. Firstly, we compare the time taken to generate the ground network with BNGL and with KaDE. We obtain the following plots:
2. Secondly, we compare the time taken to generate the network reduced by forward bisimulation. In the first pipeline, we use KaDE to generate a reduced network, then we use ERODE to prove the optimality of the reduction. In the second pipeline, we skip the proof of optimality. In the third pipeline, we specify explicitly in the BNGL model which sites are equivalent, we use BNGL to generate the reduced model, and then we use ERODE to prove the optimality. In the fourth pipeline, we skip the proof of optimality. In the fifth pipeline, we use BNGL to generate the ground network, and then we use ERODE to reduce this network by means of forward bisimulation. It is worth stressing that equivalent sites are inferred automatically by KaDE and ERODE, whereas they have to be specified explicitly by the end-user in BNGL. ERODE can also go further by inferring the coarsest bisimulation that refines a given partition of the set of bio-molecular species. We obtain the following plots:
3. Thirdly, we compare the time taken to generate the network reduced by backward bisimulation. In the first pipeline, we use KaDE to generate a reduced network, then we use ERODE to prove the optimality of the reduction (using the algorithm NBB).
In the second pipeline, we specify explicitly in the BNGL model which sites are equivalent, we use BNGL to generate the reduced model, and then we use ERODE to search for further potential reductions (since ERODE focuses on uniform bisimulations, it cannot prove the optimality of the reduction). In the third pipeline, we use BNGL to generate the ground network, and then we use ERODE to reduce this network by means of backward bisimulation (using the faster, but potentially incomplete, algorithm BB). It is worth stressing that equivalent sites are inferred automatically by KaDE and ERODE, whereas they have to be specified explicitly by the end-user in BNGL. We obtain the following plots:

# References

Tools

1. Boutillier, P., Feret, J., Krivine, J., Ly, K.Q.: KaSim development homepage, http://kappalanguage.org.
2. Cardelli, L., Tribastone, M., Tschaikowski, M., Vandin, A.: ERODE: A tool for the evaluation and reduction of ordinary differential equations. In: Legay, A., Margaria, T. (eds.) Tools and Algorithms for the Construction and Analysis of Systems - 23rd International Conference, TACAS 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings, Part II, pp. 310-328 (2017)
3. Dräger, A., Planatscher, H., Motsou Wouamba, D., Schröder, A., Hucka, M., Endler, L., Golebiewski, M., Müller, W., Zell, A.: SBML2LaTeX: conversion of SBML files into human-readable reports. Bioinformatics 25(11) (2009)
# 19.62

In the circuit shown in the figure (Figure 1), the 6.0 Ω resistor is consuming energy at a rate of 23.0 J/s when the current through it flows as shown.

A. Find the current through the ammeter A.

B. What are the polarity and emf of the battery ℰ, assuming it has negligible internal resistance?
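The figure is not reproduced here, so parts A and B cannot be answered directly; however, the current through the 6.0 Ω resistor itself follows from the dissipated power, P = I²R. A quick check (plain Python):

```python
from math import sqrt

# Power dissipated in a resistor: P = I^2 * R, hence I = sqrt(P / R).
P = 23.0   # J/s dissipated in the 6.0-ohm resistor
R = 6.0    # ohms
I = sqrt(P / R)
print(round(I, 2))  # prints 1.96 (amperes)
```

With the figure, the ammeter current in part A would then follow from Kirchhoff's current law at the junction, and the emf in part B from Kirchhoff's voltage law around the loop.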
# Is time continuous?

While working on physics simulation software, I noticed that I had implemented discrete time (the only type possible on computers). By that I mean that I had an update mechanism that advanced the simulation for a fixed amount of time repeatedly, emulating a changing system. I was a bit intrigued by the concept. Is the real world advancing continuously, or in really tiny, but discrete time intervals? - Why don't you use a timer (for each item in your array) that is set to fire at the desired time, instead of firing the timer every 60 seconds? Assuming the app would be running all the time. –  user11951 Sep 5 '12 at 12:36 Hey, I appreciate the comment and all, but this doesn't really answer my question. I asked about the real, physical world, and provided my simulations only as a source of that curiosity. –  jco Sep 5 '12 at 13:09 Also, I'm not sure what you mean. I do have timers, but I advance the world in discrete time intervals. –  jco Sep 5 '12 at 13:09 I would also add that using n timers to run a simulation is a terrible design. –  C. Lawrence Wenham Sep 5 '12 at 15:02 Time is an illusion - Lunchtime doubly so. ;) –  Wayne Werner Sep 6 '12 at 0:00 As we cannot resolve arbitrarily small time intervals, what is "really" the case cannot be decided. But in classical and quantum mechanics (i.e., in most of physics), time is treated as continuous. Physics would become very awkward if expressed in terms of a discrete time. Edit: If time appears discrete (or continuous) at some level, it could still be continuous (or discrete) at higher resolution. This is due to general reasons that have nothing to do with time per se. I explain it by analogy: for example, line spectra look discrete, but upon higher resolution one sees that they have a line width with a physical meaning. Thus one cannot definitely resolve the question with finitely many observations of finite accuracy, no matter how contrived the experiment. 
- I disagree that it's necessarily impossible to distinguish discrete from continuous time: discretization even on the smallest scale can impact large-scale measurements. Recall the impact of Planck's quantization of radiation energy, which impacted black-body radiation and the rest of physics in the most profound way. With the discretization of time too, it's possible that it would have significant statistical implications. –  Michael Jan 14 at 19:56 @Michael: Quantization of radiation energy is, in today's terms, not a phenomenon of discretization - the energy spectrum remains continuous. And all of quantum mechanics is based on a continuous time! –  Arnold Neumaier Jan 16 at 9:02 Given fixed radiation frequency $\nu$, the energy of that radiation is discrete, at least in QM, $E=Nh\nu$ for integer N, I believe. But that wasn't quite the point: I was just saying that it's not inconceivable that a discretization, even on the tiniest scale, would have a macroscopic effect. Therefore I disagree with the assertion that one cannot possibly tell continuous time (or any other parameter) from a very finely discretized one. –  Michael Jan 16 at 17:17 @Michael: But any discrete structure can be approximated by a continuous structure, and conversely, arbitrarily well, so macroscopic evidence is always ambiguous with respect to deciding between discrete and continuous. For example, quantum jumps, which were once successfully understood as discrete, can nowadays be continuously resolved so that one can see a gradual "jump". –  Arnold Neumaier Jan 22 at 15:55 Disagree with everything stated. The argument that we can't decide what is the case is unsound. The argument that physics would be awkward if expressed in discrete terms is a curious example of academic laziness in its worst form. Most of physics as we currently understand it is based on mere statements of invariant properties at macro scale. 
My conjecture is that QM, GR, and SR are emergent from an underlying fully discrete super-relational theory. Such a theory may have properties of lazy evaluation, but will be impenetrable to the lazy mind. –  Halfdan Faber Jul 24 at 17:45 I'd say there's no conclusive evidence, but in quantum physics, Planck time is sometimes cited as a possible smallest unit of time. The source for my data is Quantum Gods: Creation, Chaos, and the Search for Cosmic Consciousness by Victor J. Stenger. In there, he goes into a lot of detail about this in one chapter. - From Wikipedia: Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change. So it's not necessarily the smallest unit of time, just the smallest one we're capable of using. –  Brendan Long Sep 5 '12 at 14:34 @BrendanLong - Except there's the philosophical question of "If there's no way to measure it, does it even exist?". Largely, for example, the answer for Heisenberg's uncertainty principle is that the information about a particle's position and velocity doesn't actually physically exist simultaneously. So, if we can't measure a unit of time smaller than Planck time, if it's physically impossible, then perhaps it doesn't even exist. –  Omnifarious Sep 5 '12 at 15:17 My interpretation of the Planck time is that it's the smallest meaningful unit of time. Time itself is continuous, i.e. intervals shorter than the Planck time exist. But these shorter intervals are trivial, so time may as well be discrete. Additionally, if time is discrete then distance as well must be discrete. It's weird to think of the universe as pixelated... –  chharvey Sep 5 '12 at 23:25 -1. Quantum mechanics regards spacetime as continuous, and that includes time too! 
–  Dimensio1n0 Jun 22 '13 at 14:16 My understanding of the fundamental issue of time is that if we base it upon physical transactions, then we are (not only) dealing with a discretized system (e.g. quantum interactions) - but that moreover time then may have geometric properties that further confound the question. - What you are talking about is similar to the problem of quantum gravity. Since gravity is an effect of the curvature of spacetime, to have a quantum theory of it, you need to quantize the spacetime manifold. This is done with spin foams, which are little units of volume in spacetime that have spins associated with them. They connect together like total angular momentum and build up into various kinds of geometry. This is just a theory, but it comes from the very real problem of "what is the quantum field theory of gravity". Also, it answers the question "Higher power is needed to resolve smaller dimensions (sizes). To resolve small enough distances, the power eventually gets large enough to couple to the metric of spacetime. How do we talk about spacetime when the uncertainty in the injected energy transfers to uncertainty in the metric?" - I think it's important to note that quantum or quantized time is not equal to discrete time. For instance, we have "quantized" space. By this we mean that it receives quantum treatment. But the underlying coordinates still form a continuum. So even if you live on a finite circle and only consider wavefunctions so that you get a countable set of basis functions from which to form all the others, you can still in principle measure incidence of particles at any point, again forming a continuum. Therefore, if we take quantum time in analogy to quantum space, we would have to conclude that quantum mechanically it would still form a continuum. Of course none of this proves how the universe really works, which is your question. The only honest direct answer to your question is "We don't know". 
Physical theories do not describe how the universe actually works; the only thing we know is that their predictions match the experimental results we currently possess. So even if the best physical theories we currently possess use a continuum of temporal coordinates, we cannot by any means conclude that the way the universe actually works matches our description. - We have quantized space? News to me. We do have quantized angular momentum and other variables, which behave the way you describe quantized variables as working. –  Peter Shor Jun 18 '13 at 22:28 I just mean that there exist quantum observables corresponding to position, and their outcomes in general form a continuum. Time is another issue. –  SMeznaric Jun 19 '13 at 20:38 Obviously space is continuous, and so is time. What is not continuous is the conception of numbers which we are using in computers and measurements. There is no reason for something so fundamental to be discrete, since continuity is more general (and amazing) than discreteness. - Hmm, I'd need some argumentation. And personally, I like discreteness more than continuity. –  jco Sep 6 '12 at 20:14 Is your thinking process continuous? If so, so is the universe and its time. There is always something in between discrete stuff; this emptiness makes the whole system continuous. Discreteness is always embedded in continuity, since it needs a separator. –  Asphir Dom Sep 6 '12 at 20:27 That's quite contradictory. Also, you cannot objectively judge your thinking process. It might appear continuous, although neural activity does have steps, but that's not proof that it is continuous. –  jco Sep 6 '12 at 21:47 The answer to this question is not known presently. Current physics is, as stated by other answers, based on fully continuous mathematical models, which particularly assume spacetime to be continuous. 
On the other hand you could argue that these models are isomorphic to discrete constructive models, with the general view that the continuous is the limit of the discrete. Some modern spacetime theories assume an underlying network/relational structure, and are fully discrete. My personal belief is that continuous structures do not exist in the physical world. This is, however, just a belief. - By the very fact of calling it time (i.e. assuming division), infinity appears as discrete. Otherwise, it is continuous. The same goes for space, since it's the other side of the same coin. - There is no continuous time or space. Only events are happening. Reading this answer, say, is an event; then looking at the roof is another event. Combining these two based on the measure of elapsed time gives the actual motion of events, the same as in the movies. - To those downvoting this, I'd like to point out that it is not completely without merit. The work 'Science without Numbers' and the resulting research efforts have successfully formulated various fields of physics without reference to any mathematical objects (numbers, functions, sets, categories, calculus etc.) and, with relevance to this post, without coordinatising space. See also Tarski's axioms (Euclidean geometry without sets) and these notes goo.gl/vxYtOA from a lecture by Prof Frank Arntzenius. –  ComptonScattering Sep 11 '13 at 23:17 Was this also suggested by you yourself? –  Dimensio1n0 Sep 13 '13 at 1:50 @ComptonScattering: But this is non-mainstream. –  Dimensio1n0 Sep 13 '13 at 1:51 The edit was not proposed by me. Also, I disagree that geometric constructions of science are in any sense fringe, if that is what you are suggesting. They are the accepted works of respected scientists. It was merely an exercise to show that algebra, though useful, is not fundamental to what science does, and so one should not promote the algebraic to the status of the existential when attempting to interpret a theory. 
In other words, something that was invoked to do a calculation cannot reasonably be said to therefore exist. –  ComptonScattering Sep 13 '13 at 8:14 @ComptonScattering: I was asking wilfred, not you. –  Dimensio1n0 Sep 13 '13 at 12:15 Due to the work of Julian Barbour and others, time is defined (in a closed system) by keeping track of all the changes (of particles and so on). In this respect we would say that in a classical (macroscopic) system, time would be continuous, since the motions of such objects are essentially continuous and the way that you parameterize the changes would then be continuous. In a quantum mechanical system, I think this gets trickier, because the formalism is set up from the point of view of a "scientist in a lab", so that time is a continuous classical external parameter for the macroscopic scientist. In some formulations of QM, position is a continuous variable and particles have definite (but uncertain) positions; in this context you can still have a continuous time parameter.
https://www.physicsforums.com/threads/power-in-ac-circuit.652561/
# Power in AC Circuit 1. Nov 15, 2012 ### ResonantW In an AC circuit, the average power dissipated is given by $P = VI\cos(\phi)$. Does that mean that in a highly inductive, or highly capacitive, circuit where $\phi$ approaches $\pm \pi/2$, the power can be made arbitrarily small? Even if a resistor were present? Does that mean it wouldn't heat up at all? 2. Nov 15, 2012 ### Staff: Mentor As a fraction of apparent power, real power can be small, but adding a capacitor doesn't reduce the actual value of the real power. 3. Nov 15, 2012 ### Staff: Mentor In a highly inductive element, there is only a very small component of current that is in phase with the voltage (leaving most of it in phase quadrature). But if resistance is added, then $\phi$ will no longer be close to $\pi/2$. If a current I (RMS) passes through a resistance R, the power loss is I²R. ALWAYS.
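The mentor's point, that the resistor dissipates I²R no matter what the phase angle is, can be checked numerically. Below is a minimal sketch for a series RL circuit; the component values (230 V rms, 10 Ω, 0.5 H, 50 Hz) are assumed for illustration only:

```python
import cmath
import math

# Assumed illustrative values: 230 V rms source driving a series R-L load at 50 Hz
Vrms, R, L, f = 230.0, 10.0, 0.5, 50.0

w = 2 * math.pi * f
Z = complex(R, w * L)        # series RL impedance: R + jwL
Irms = Vrms / abs(Z)         # rms current magnitude
phi = cmath.phase(Z)         # phase angle between voltage and current

P_from_phase = Vrms * Irms * math.cos(phi)   # P = V I cos(phi)
P_in_resistor = Irms**2 * R                  # I^2 R dissipated in the resistor

# The two expressions agree exactly: cos(phi) = R/|Z|, so V*I*cos(phi) = I^2 R.
print(P_from_phase, P_in_resistor)
```

Increasing L drives φ toward π/2 and does shrink the real power, but only because the current shrinks with it; the resistor still dissipates I²R for whatever current flows.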
https://par.nsf.gov/biblio/10282303-physics-constrained-dictionary-learning-selective-laser-melting-process-monitoring
Physics-Constrained Dictionary Learning for Selective Laser Melting Process Monitoring Compressed sensing (CS), as a new data acquisition technique, has been applied to monitor manufacturing processes. With a few measurements, sparse coefficient vectors can be recovered by solving an inverse problem, and the original signals can be reconstructed. Dictionary learning methods have been developed and applied in combination with CS to improve the sparsity level of the recovered coefficient vectors. In this work, a physics-constrained dictionary learning approach is proposed to solve both reconstruction and classification problems by optimizing the measurement, basis, and classification matrices simultaneously while taking application-specific restrictions into account. It is applied to image acquisition in selective laser melting (SLM). The proposed approach performs the optimization in two stages. In the first stage, with the basis matrix fixed, the measurement matrix is optimized by determining the pixel locations for sampling in each image. The optimized measurement matrix includes only one non-zero entry in each row. The optimization of pixel locations is solved with a constrained FrameSense algorithm. In the second stage, with the measurement matrix fixed, the basis and classification matrices are optimized based on the K-SVD algorithm. With the optimized basis matrix, the coefficient vector can be recovered with CS. The original signal can be… NSF-PAR ID: 10282303 Journal Name: Proceedings of 2021 IISE Annual Conference & Expo National Science Foundation More Like this 1. Abstract: Coded aperture X-ray computed tomography (CT) has the potential to revolutionize X-ray tomography systems in medical imaging and air and rail transit security - both areas of global importance.
It allows either a reduced set of measurements in X-ray CT without degradation in image reconstruction, or the measurement of multiplexed X-rays to simplify the sensing geometry. Measurement reduction is of particular interest in medical imaging to reduce radiation, and airport security often imposes practical constraints leading to limited-angle geometries. Coded aperture compressive X-ray CT places a coded aperture pattern in front of the X-ray source in order to obtain patterned projections onto a detector. Compressive sensing (CS) reconstruction algorithms are then used to recover the image. To date, the coded illumination patterns used in conventional CT systems have been random. This paper addresses the code optimization problem for general tomography imaging based on the point spread function (PSF) of the system, which is used as a measure of the sensing matrix quality and connects to the restricted isometry property (RIP) and coherence of the sensing matrix. The methods presented are general, simple to use, and can be easily extended to other imaging systems. Simulations are presented where… 3. This paper proposes a representational model for image pairs such as consecutive video frames that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1). The model couples the following two components: (1) the vector representations of local contents of images and (2) the matrix representations of local pixel displacements caused by the relative motions between the agent and the objects in the 3D scene. When the image frame undergoes changes due to local pixel displacements, the vectors are multiplied by the matrices that represent the local displacements. Thus the vector representation is equivariant, as it varies according to the local displacements. Our experiments show that our model can learn Gabor-like filter pairs of quadrature phases. The profiles of the learned filters match those of simple cells in macaque V1. Moreover, we demonstrate that the model can learn to infer local motions in either a supervised or unsupervised manner. With such a simple model, we achieve competitive results on optical flow estimation. 4. This paper is concerned with the estimation of time-varying networks for high-dimensional nonstationary time series. Two types of dynamic behaviors are considered: structural breaks (i.e., abrupt change points) and smooth changes.
To simultaneously handle these two types of time-varying features, a two-step approach is proposed: multiple change point locations are first identified on the basis of comparing the difference between the localized averages on sample covariance matrices, and then graph supports are recovered on the basis of a kernelized time-varying constrained L1-minimization for inverse matrix estimation (CLIME) estimator on each segment. We derive the rates of convergence for estimating the change points and precision matrices under mild moment and dependence conditions. In particular, we show that this two-step approach is consistent in estimating the change points and the piecewise smooth precision matrix function, under a certain high-dimensional scaling limit. The method is applied to the analysis of the network structure of the S&P 500 index between 2003 and 2008. 5. Glow discharge optical emission spectroscopy elemental mapping (GDOES EM), enabled by spectral imaging strategies, is an advantageous technique for direct multi-elemental analysis of solid samples in rapid timeframes. Here, a single-pixel, or point scan, spectral imaging system based on compressed sensing image sampling is developed and optimized in terms of matrix density, compression factor, sparsifying basis, and reconstruction algorithm for coupling with GDOES EM. It is shown that a 512 matrix density at a compression factor of 30% provides the highest spatial fidelity in terms of the peak signal-to-noise ratio (PSNR) and complex wavelet structural similarity index measure (cw-SSIM) while maintaining fast measurement times. The background equivalent concentration (BEC) of Cu I at 510.5 nm is improved when implementing the discrete wavelet transform (DWT) sparsifying basis and the Two-step Iterative Shrinking/Thresholding Algorithm for Linear Inverse Problems (TwIST) reconstruction algorithm.
Utilizing these optimum conditions, a GDOES EM of a flexible, etched-copper circuit board was then successfully demonstrated with the compressed sensing single-pixel spectral imaging system (CSSPIS). The newly developed CSSPIS allows taking advantage of the significant cost-efficiency of point-scanning approaches (>10× vs. intensified array detector systems), while overcoming (by up to several orders of magnitude) their inherent and substantial throughput limitations. Ultimately, it…
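The abstracts above all rely on the same compressed-sensing primitive: recovering a sparse coefficient vector from a small number of linear measurements. A minimal sketch of that step is below, using a random Gaussian measurement matrix and greedy orthogonal matching pursuit; the dimensions and the choice of OMP are illustrative assumptions, not the specific algorithms (FrameSense, K-SVD, TwIST) used in these papers:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3                              # signal length, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix

# Ground-truth k-sparse signal and its compressed measurements y = Phi x
x = np.zeros(n)
support_true = rng.choice(n, size=k, replace=False)
x[support_true] = rng.standard_normal(k)
y = Phi @ x

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then least-squares re-fit on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(Phi.T @ residual)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef   # sparse estimate of the original signal
```

With far more measurements than the sparsity level (m = 20 vs. k = 3 here), this kind of greedy recovery typically finds the true support; dictionary learning methods such as K-SVD replace the fixed random basis with one trained to make the coefficients even sparser.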
http://radio.kasi.re.kr/kvn/status_report_2014/observing_mode.html
## Multi-frequency observations Simultaneous multi-frequency observation is a unique capability of KVN, with which we can calibrate out the short-term phase fluctuations in the higher-frequency data by referencing the phase solutions obtained from the lower-frequency data. This phase-referencing technique allows us to integrate the data for time scales much longer than the coherence time scale of atmospheric phase fluctuations, and so to observe weak sources at mm wavelengths efficiently. For multi-frequency observations, we can select no more than 4 IFs among the 8 IF signals (= 4 receivers × 2 polarizations). ## Fast position switching observations The slewing speed and acceleration rate of the KVN antenna are 3 deg/s and 3 deg/s², respectively. Thanks to this high speed and acceleration rate, the KVN antenna can switch its pointing from target to calibrator within a short period. Korean VLBI Network (KVN)
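Given the quoted 3 deg/s top speed and 3 deg/s² acceleration, the switching time for a given target-calibrator separation follows from a trapezoidal (or, for short slews, triangular) velocity profile. A minimal sketch, which ignores settling time and any axis-dependent limits (both assumptions on my part):

```python
import math

V_MAX = 3.0   # deg/s, KVN antenna slewing speed
A_MAX = 3.0   # deg/s^2, KVN antenna acceleration rate

def slew_time(theta_deg: float) -> float:
    """Time to slew theta_deg with an accelerate/cruise/decelerate profile."""
    # Distance needed to accelerate to V_MAX and decelerate back to rest
    theta_crit = V_MAX**2 / A_MAX
    if theta_deg < theta_crit:
        # Triangular profile: top speed is never reached
        return 2.0 * math.sqrt(theta_deg / A_MAX)
    # Trapezoidal profile: cruise at V_MAX plus accel/decel overhead
    return theta_deg / V_MAX + V_MAX / A_MAX

# A 9-degree throw between target and calibrator takes 4 s under these assumptions
print(slew_time(9.0))
```

This back-of-the-envelope model is why fast position switching is practical: even multi-degree throws complete in a few seconds, well inside the atmospheric coherence time at cm wavelengths.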
https://www.educator.com/mathematics/ap-calculus-ab/hovasapian/example-problems-for-the-fundamental-theorem.php
INSTRUCTORS: Raffi Hovasapian, John Zhu Example Problems for the Fundamental Theorem Section 1: Limits and Derivatives Overview & Slopes of Curves 42m 8s Intro 0:00 Overview & Slopes of Curves 0:21 Differential and Integral 0:22 Fundamental Theorem of Calculus 6:36 Differentiation or Taking the Derivative 14:24 What Does the Derivative Mean and How do We Find it? 15:18 Example: f'(x) 19:24 Example: f(x) = sin (x) 29:16 General Procedure for Finding the Derivative of f(x) 37:33 More on Slopes of Curves 50m 53s Intro 0:00 Slope of the Secant Line along a Curve 0:12 Slope of the Tangent Line to f(x) at a Particular Point 0:13 Slope of the Secant Line along a Curve 2:59 Instantaneous Slope 6:51 Instantaneous Slope 6:52 Example: Distance, Time, Velocity 13:32 Instantaneous Slope and Average Slope 25:42 Slope & Rate of Change 29:55 Slope & Rate of Change 29:56 Example: Slope = 2 33:16 Example: Slope = 4/3 34:32 Example: Slope = 4 (m/s) 39:12 Example: Density = Mass / Volume 40:33 Average Slope, Average Rate of Change, Instantaneous Slope, and Instantaneous Rate of Change 47:46 Example Problems for Slopes of Curves 59m 12s Intro 0:00 Example I: Water Tank 0:13 Part A: Which is the Independent Variable and Which is the Dependent?
2:00 Part B: Average Slope 3:18 Part C: Express These Slopes as Rates-of-Change 9:28 Part D: Instantaneous Slope 14:54 Example II: y = √(x-3) 28:26 Part A: Calculate the Slope of the Secant Line 30:39 Part B: Instantaneous Slope 41:26 Part C: Equation for the Tangent Line 43:59 Example III: Object in the Air 49:37 Part A: Average Velocity 50:37 Part B: Instantaneous Velocity 55:30 Desmos Tutorial 18m 43s Intro 0:00 Desmos Tutorial 1:42 Desmos Tutorial 1:43 Things You Must Learn To Do on Your Particular Calculator 2:39 Things You Must Learn To Do on Your Particular Calculator 2:40 Example I: y=sin x 4:54 Example II: y=x³ and y = d/(dx) (x³) 9:22 Example III: y = x² {-5 <= x <= 0} and y = cos x {0 < x < 6} 13:15 The Limit of a Function 51m 53s Intro 0:00 The Limit of a Function 0:14 The Limit of a Function 0:15 Graph: Limit of a Function 12:24 Table of Values 16:02 lim x→a f(x) Does not Say What Happens When x = a 20:05 Example I: f(x) = x² 24:34 Example II: f(x) = 7 27:05 Example III: f(x) = 4.5 30:33 Example IV: f(x) = 1/x 34:03 Example V: f(x) = 1/x² 36:43 The Limit of a Function, Cont. 38:16 Infinity and Negative Infinity 38:17 Does Not Exist 42:45 Summary 46:48 Example Problems for the Limit of a Function 24m 43s Intro 0:00 Example I: Explain in Words What the Following Symbols Mean 0:10 Example II: Find the Following Limit 5:21 Example III: Use the Graph to Find the Following Limits 7:35 Example IV: Use the Graph to Find the Following Limits 11:48 Example V: Sketch the Graph of a Function that Satisfies the Following Properties 15:25 Example VI: Find the Following Limit 18:44 Example VII: Find the Following Limit 20:06 Calculating Limits Mathematically 53m 48s Intro 0:00 Plug-in Procedure 0:09 Plug-in Procedure 0:10 Limit Laws 9:14 Limit Law 1 10:05 Limit Law 2 10:54 Limit Law 3 11:28 Limit Law 4 11:54 Limit Law 5 12:24 Limit Law 6 13:14 Limit Law 7 14:38 Plug-in Procedure, Cont. 16:35 Plug-in Procedure, Cont. 
16:36 Example I: Calculating Limits Mathematically 20:50 Example II: Calculating Limits Mathematically 27:37 Example III: Calculating Limits Mathematically 31:42 Example IV: Calculating Limits Mathematically 35:36 Example V: Calculating Limits Mathematically 40:58 Limits Theorem 44:45 Limits Theorem 1 44:46 Limits Theorem 2: Squeeze Theorem 46:34 Example VI: Calculating Limits Mathematically 49:26 Example Problems for Calculating Limits Mathematically 21m 22s Intro 0:00 Example I: Evaluate the Following Limit by Showing Each Application of a Limit Law 0:16 Example II: Evaluate the Following Limit 1:51 Example III: Evaluate the Following Limit 3:36 Example IV: Evaluate the Following Limit 8:56 Example V: Evaluate the Following Limit 11:19 Example VI: Calculating Limits Mathematically 13:19 Example VII: Calculating Limits Mathematically 14:59 Calculating Limits as x Goes to Infinity 50m 1s Intro 0:00 Limit as x Goes to Infinity 0:14 Limit as x Goes to Infinity 0:15 Let's Look at f(x) = 1 / (x-3) 1:04 Summary 9:34 Example I: Calculating Limits as x Goes to Infinity 12:16 Example II: Calculating Limits as x Goes to Infinity 21:22 Example III: Calculating Limits as x Goes to Infinity 24:10 Example IV: Calculating Limits as x Goes to Infinity 36:00 Example Problems for Limits at Infinity 36m 31s Intro 0:00 Example I: Calculating Limits as x Goes to Infinity 0:14 Example II: Calculating Limits as x Goes to Infinity 3:27 Example III: Calculating Limits as x Goes to Infinity 8:11 Example IV: Calculating Limits as x Goes to Infinity 14:20 Example V: Calculating Limits as x Goes to Infinity 20:07 Example VI: Calculating Limits as x Goes to Infinity 23:36 Continuity 53m Intro 0:00 Definition of Continuity 0:08 Definition of Continuity 0:09 Example: Not Continuous 3:52 Example: Continuous 4:58 Example: Not Continuous 5:52 Procedure for Finding Continuity 9:45 Law of Continuity 13:44 Law of Continuity 13:45 Example I: Determining Continuity on a Graph 15:55 Example II: Show 
Continuity & Determine the Interval Over Which the Function is Continuous 17:57 Example III: Is the Following Function Continuous at the Given Point? 22:42 Theorem for Composite Functions 25:28 Theorem for Composite Functions 25:29 Example IV: Is cos(x³ + ln x) Continuous at x=π/2? 27:00 Example V: What Value of A Will make the Following Function Continuous at Every Point of Its Domain? 34:04 Types of Discontinuity 39:18 Removable Discontinuity 39:33 Jump Discontinuity 40:06 Infinite Discontinuity 40:32 Intermediate Value Theorem 40:58 Intermediate Value Theorem: Hypothesis & Conclusion 40:59 Intermediate Value Theorem: Graphically 43:40 Example VI: Prove That the Following Function Has at Least One Real Root in the Interval [4,6] 47:46 Derivative I 40m 2s Intro 0:00 Derivative 0:09 Derivative 0:10 Example I: Find the Derivative of f(x)=x³ 2:20 Notations for the Derivative 7:32 Notations for the Derivative 7:33 Derivative & Rate of Change 11:14 Recall the Rate of Change 11:15 Instantaneous Rate of Change 17:04 Graphing f(x) and f'(x) 19:10 Example II: Find the Derivative of x⁴ - x² 24:00 Example III: Find the Derivative of f(x)=√x 30:51 Derivatives II 53m 45s Intro 0:00 Example I: Find the Derivative of (2+x)/(3-x) 0:18 Derivatives II 9:02 f(x) is Differentiable if f'(x) Exists 9:03 Recall: For a Limit to Exist, Both Left Hand and Right Hand Limits Must Equal to Each Other 17:19 Geometrically: Differentiability Means the Graph is Smooth 18:44 Example II: Show Analytically that f(x) = |x| is Not Differentiable at x=0 20:53 Example II: For x > 0 23:53 Example II: For x < 0 25:36 Example II: What is f(0) and What is the lim |x| as x→0? 30:46 Differentiability & Continuity 34:22 Differentiability & Continuity 34:23 How Can a Function Not be Differentiable at a Point? 39:38 How Can a Function Not be Differentiable at a Point?
39:39 Higher Derivatives 41:58 Higher Derivatives 41:59 Derivative Operator 45:12 Example III: Find (dy)/(dx) & (d²y)/(dx²) for y = x³ 49:29 More Example Problems for The Derivative 31m 38s Intro 0:00 Example I: Sketch f'(x) 0:10 Example II: Sketch f'(x) 2:14 Example III: Find the Derivative of the Following Function Using the Definition 3:49 Example IV: Determine f, f', and f'' on a Graph 12:43 Example V: Find an Equation for the Tangent Line to the Graph of the Following Function at the Given x-value 13:40 Example VI: Distance vs. Time 20:15 Example VII: Displacement, Velocity, and Acceleration 23:56 Example VIII: Graph the Displacement Function 28:20 Section 2: Differentiation Differentiation of Polynomials & Exponential Functions 47m 35s Intro 0:00 Differentiation of Polynomials & Exponential Functions 0:15 Derivative of a Function 0:16 Derivative of a Constant 2:35 Power Rule 3:08 If C is a Constant 4:19 Sum Rule 5:22 Exponential Functions 6:26 Example I: Differentiate 7:45 Example II: Differentiate 12:38 Example III: Differentiate 15:13 Example IV: Differentiate 16:20 Example V: Differentiate 19:19 Example VI: Find the Equation of the Tangent Line to a Function at a Given Point 12:18 Example VII: Find the First & Second Derivatives 25:59 Example VIII 27:47 Part A: Find the Velocity & Acceleration Functions as Functions of t 27:48 Part B: Find the Acceleration after 3 Seconds 30:12 Part C: Find the Acceleration when the Velocity is 0 30:53 Part D: Graph the Position, Velocity, & Acceleration Graphs 32:50 Example IX: Find a Cubic Function Whose Graph has Horizontal Tangents 34:53 Example X: Find a Point on a Graph 42:31 The Product, Power & Quotient Rules 47m 25s Intro 0:00 The Product, Power and Quotient Rules 0:19 Differentiate Functions 0:20 Product Rule 5:30 Quotient Rule 9:15 Power Rule 10:00 Example I: Product Rule 13:48 Example II: Quotient Rule 16:13 Example III: Power Rule 18:28 Example IV: Find dy/dx 19:57 Example V: Find dy/dx 24:53 Example VI: Find
dy/dx 28:38 Example VII: Find an Equation for the Tangent to the Curve 34:54 Example VIII: Find d²y/dx² 38:08 Derivatives of the Trigonometric Functions 41m 8s Intro 0:00 Derivatives of the Trigonometric Functions 0:09 Let's Find the Derivative of f(x) = sin x 0:10 Important Limits to Know 4:59 d/dx (sin x) 6:06 d/dx (cos x) 6:38 d/dx (tan x) 6:50 d/dx (csc x) 7:02 d/dx (sec x) 7:15 d/dx (cot x) 7:27 Example I: Differentiate f(x) = x² - 4 cos x 7:56 Example II: Differentiate f(x) = x⁵ tan x 9:04 Example III: Differentiate f(x) = (cos x) / (3 + sin x) 10:56 Example IV: Differentiate f(x) = e^x / (tan x - sec x) 14:06 Example V: Differentiate f(x) = (csc x - 4) / (cot x) 15:37 Example VI: Find an Equation of the Tangent Line 21:48 Example VII: For What Values of x Does the Graph of the Function x + 3 cos x Have a Horizontal Tangent? 25:17 28:23 Example IX: Evaluate 33:22 Example X: Evaluate 36:38 The Chain Rule 24m 56s Intro 0:00 The Chain Rule 0:13 Recall the Composite Functions 0:14 Derivatives of Composite Functions 1:34 Example I: Identify f(x) and g(x) and Differentiate 6:41 Example II: Identify f(x) and g(x) and Differentiate 9:47 Example III: Differentiate 11:03 Example IV: Differentiate f(x) = -5 / (x² + 3)³ 12:15 Example V: Differentiate f(x) = cos(x² + c²) 14:35 Example VI: Differentiate f(x) = cos⁴x +c² 15:41 Example VII: Differentiate 17:03 Example VIII: Differentiate f(x) = sin(tan x²) 19:01 Example IX: Differentiate f(x) = sin(tan² x) 21:02 More Chain Rule Example Problems 25m 32s Intro 0:00 Example I: Differentiate f(x) = sin(cos(tanx)) 0:38 Example II: Find an Equation for the Line Tangent to the Given Curve at the Given Point 2:25 Example III: F(x) = f(g(x)), Find F' (6) 4:22 Example IV: Differentiate & Graph both the Function & the Derivative in the Same Window 5:35 Example V: Differentiate f(x) = ( (x-8)/(x+3) )⁴ 10:18 Example VI: Differentiate f(x) = sec²(12x) 12:28 Example VII: Differentiate 14:41 Example VIII: Differentiate 19:25 Example IX: 
Find an Expression for the Rate of Change of the Volume of the Balloon with Respect to Time 21:13 Implicit Differentiation 52m 31s Intro 0:00 Implicit Differentiation 0:09 Implicit Differentiation 0:10 Example I: Find (dy)/(dx) by both Implicit Differentiation and Solving Explicitly for y 12:15 Example II: Find (dy)/(dx) of x³ + x²y + 7y² = 14 19:18 Example III: Find (dy)/(dx) of x³y² + y³x² = 4x 21:43 Example IV: Find (dy)/(dx) of the Following Equation 24:13 Example V: Find (dy)/(dx) of 6sin x cos y = 1 29:00 Example VI: Find (dy)/(dx) of x² cos² y + y sin x = 2sin x cos y 31:02 Example VII: Find (dy)/(dx) of √(xy) = 7 + y²e^x 37:36 Example VIII: Find (dy)/(dx) of 4(x²+y²)² = 35(x²-y²) 41:03 Example IX: Find (d²y)/(dx²) of x² + y² = 25 44:05 Example X: Find (d²y)/(dx²) of sin x + cos y = sin(2x) 47:48 Section 3: Applications of the Derivative Linear Approximations & Differentials 47m 34s Intro 0:00 Linear Approximations & Differentials 0:09 Linear Approximations & Differentials 0:10 Example I: Linear Approximations & Differentials 11:27 Example II: Linear Approximations & Differentials 20:19 Differentials 30:32 Differentials 30:33 Example III: Linear Approximations & Differentials 34:09 Example IV: Linear Approximations & Differentials 35:57 Example V: Relative Error 38:46 Related Rates 45m 33s Intro 0:00 Related Rates 0:08 Strategy for Solving Related Rates Problems #1 0:09 Strategy for Solving Related Rates Problems #2 1:46 Strategy for Solving Related Rates Problems #3 2:06 Strategy for Solving Related Rates Problems #4 2:50 Strategy for Solving Related Rates Problems #5 3:38 Example I: Radius of a Balloon 5:15 12:52 Example III: Water Tank 19:08 Example IV: Distance between Two Cars 29:27 Example V: Line-of-Sight 36:20 More Related Rates Examples 37m 17s Intro 0:00 0:14 Example II: Particle 4:45 Example III: Water Level 10:28 Example IV: Clock 20:47 Example V: Distance between a House and a Plane 29:11 Maximum & Minimum Values of a Function 40m 44s Intro 0:00 
Maximum & Minimum Values of a Function, Part 1 0:23 Absolute Maximum 2:20 Absolute Minimum 2:52 Local Maximum 3:38 Local Minimum 4:26 Maximum & Minimum Values of a Function, Part 2 6:11 Function with Absolute Minimum but No Absolute Max, Local Max, and Local Min 7:18 Function with Local Max & Min but No Absolute Max & Min 8:48 Formal Definitions 10:43 Absolute Maximum 11:18 Absolute Minimum 12:57 Local Maximum 14:37 Local Minimum 16:25 Extreme Value Theorem 18:08 Theorem: f'(c) = 0 24:40 Critical Number (Critical Value) 26:14 Procedure for Finding the Critical Values of f(x) 28:32 Example I: Find the Critical Values of f(x) x + sinx 29:51 Example II: What are the Absolute Max & Absolute Minimum of f(x) = x + 4 sinx on [0,2π] 35:31 Example Problems for Max & Min 40m 44s Intro 0:00 Example I: Identify Absolute and Local Max & Min on the Following Graph 0:11 Example II: Sketch the Graph of a Continuous Function 3:11 Example III: Sketch the Following Graphs 4:40 Example IV: Find the Critical Values of f (x) = 3x⁴ - 7x³ + 4x² 6:13 Example V: Find the Critical Values of f(x) = |2x - 5| 8:42 Example VI: Find the Critical Values 11:42 Example VII: Find the Critical Values f(x) = cos²(2x) on [0,2π] 16:57 Example VIII: Find the Absolute Max & Min f(x) = 2sinx + 2cos x on [0,(π/3)] 20:08 Example IX: Find the Absolute Max & Min f(x) = (ln(2x)) / x on [1,3] 24:39 The Mean Value Theorem 25m 54s Intro 0:00 Rolle's Theorem 0:08 Rolle's Theorem: If & Then 0:09 Rolle's Theorem: Geometrically 2:06 There May Be More than 1 c Such That f'( c ) = 0 3:30 Example I: Rolle's Theorem 4:58 The Mean Value Theorem 9:12 The Mean Value Theorem: If & Then 9:13 The Mean Value Theorem: Geometrically 11:07 Example II: Mean Value Theorem 13:43 Example III: Mean Value Theorem 21:19 Using Derivatives to Graph Functions, Part I 25m 54s Intro 0:00 Using Derivatives to Graph Functions, Part I 0:12 Increasing/ Decreasing Test 0:13 Example I: Find the Intervals Over Which the Function is Increasing & 
Decreasing 3:26 Example II: Find the Local Maxima & Minima of the Function 19:18 Example III: Find the Local Maxima & Minima of the Function 31:39 Using Derivatives to Graph Functions, Part II 44m 58s Intro 0:00 Using Derivatives to Graph Functions, Part II 0:13 Concave Up & Concave Down 0:14 What Does This Mean in Terms of the Derivative? 6:14 Point of Inflection 8:52 Example I: Graph the Function 13:18 Example II: Function x⁴ - 5x² 19:03 Intervals of Increase & Decrease 19:04 Local Maxes and Mins 25:01 Intervals of Concavity & X-Values for the Points of Inflection 29:18 Intervals of Concavity & Y-Values for the Points of Inflection 34:18 Graphing the Function 40:52 Example Problems I 49m 19s Intro 0:00 Example I: Intervals, Local Maxes & Mins 0:26 Example II: Intervals, Local Maxes & Mins 5:05 Example III: Intervals, Local Maxes & Mins, and Inflection Points 13:40 Example IV: Intervals, Local Maxes & Mins, Inflection Points, and Intervals of Concavity 23:02 Example V: Intervals, Local Maxes & Mins, Inflection Points, and Intervals of Concavity 34:36 Example Problems III 59m 1s Intro 0:00 Example I: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 0:11 Example II: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 21:24 Example III: Cubic Equation f(x) = Ax³ + Bx² + Cx + D 37:56 Example IV: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 46:19 L'Hospital's Rule 30m 9s Intro 0:00 L'Hospital's Rule 0:19 Indeterminate Forms 0:20 L'Hospital's Rule 3:38 Example I: Evaluate the Following Limit Using L'Hospital's Rule 8:50 Example II: Evaluate the Following Limit Using L'Hospital's Rule 10:30 Indeterminate Products 11:54 Indeterminate Products 11:55 Example III: L'Hospital's Rule & Indeterminate Products 13:57 Indeterminate Differences 17:00 Indeterminate Differences 17:01 Example IV: L'Hospital's Rule & Indeterminate Differences 18:57 Indeterminate 
Powers 22:20 Indeterminate Powers 22:21 Example V: L'Hospital's Rule & Indeterminate Powers 25:13 Example Problems for L'Hospital's Rule 38m 14s Intro 0:00 Example I: Evaluate the Following Limit 0:17 Example II: Evaluate the Following Limit 2:45 Example III: Evaluate the Following Limit 6:54 Example IV: Evaluate the Following Limit 8:43 Example V: Evaluate the Following Limit 11:01 Example VI: Evaluate the Following Limit 14:48 Example VII: Evaluate the Following Limit 17:49 Example VIII: Evaluate the Following Limit 20:37 Example IX: Evaluate the Following Limit 25:16 Example X: Evaluate the Following Limit 32:44 Optimization Problems I 49m 59s Intro 0:00 Example I: Find the Dimensions of the Box that Gives the Greatest Volume 1:23 Fundamentals of Optimization Problems 18:08 Fundamental #1 18:33 Fundamental #2 19:09 Fundamental #3 19:19 Fundamental #4 20:59 Fundamental #5 21:55 Fundamental #6 23:44 Example II: Demonstrate that of All Rectangles with a Given Perimeter, the One with the Largest Area is a Square 24:36 Example III: Find the Points on the Ellipse 9x² + y² = 9 Farthest Away from the Point (1,0) 35:13 Example IV: Find the Dimensions of the Rectangle of Largest Area that can be Inscribed in a Circle of Given Radius R 43:10 Optimization Problems II 55m 10s Intro 0:00 Example I: Optimization Problem 0:13 Example II: Optimization Problem 17:34 Example III: Optimization Problem 35:06 Example IV: Revenue, Cost, and Profit 43:22 Newton's Method 30m 22s Intro 0:00 Newton's Method 0:45 Newton's Method 0:46 Example I: Find x2 and x3 13:18 Example II: Use Newton's Method to Approximate 15:48 Example III: Find the Root of the Following Equation to 6 Decimal Places 19:57 Example IV: Use Newton's Method to Find the Coordinates of the Inflection Point 23:11 Section 4: Integrals Antiderivatives 55m 26s Intro 0:00 Antiderivatives 0:23 Definition of an Antiderivative 0:24 Antiderivative Theorem 7:58 Function & Antiderivative 12:10 x^n 12:30 1/x 13:00 e^x 13:08 cos x 
13:18 sin x 14:01 sec² x 14:11 secxtanx 14:18 1/√(1-x²) 14:26 1/(1+x²) 14:36 -1/√(1-x²) 14:45 Example I: Find the Most General Antiderivative for the Following Functions 15:07 Function 1: f(x) = x³ -6x² + 11x - 9 15:42 Function 2: f(x) = 14√(x) - 27 ⁴√x 19:12 Function 3: f(x) = cos x - 14 sinx 20:53 Function 4: f(x) = (x⁵+2√x )/( x^(4/3) ) 22:10 Function 5: f(x) = (3e^x) - 2/(1+x²) 25:42 Example II: Given the Following, Find the Original Function f(x) 26:37 Function 1: f'(x) = 5x³ - 14x + 24, f(2) = 40 27:55 Function 2: f'(x) = 3 sinx + sec²x, f(π/6) = 5 30:34 Function 3: f''(x) = 8x - cos x, f(1.5) = 12.7, f'(1.5) = 4.2 32:54 Function 4: f''(x) = 5/(√x), f(2) = 15, f'(2) = 7 37:54 Example III: Falling Object 41:58 Problem 1: Find an Equation for the Height of the Ball after t Seconds 42:48 Problem 2: How Long Will It Take for the Ball to Strike the Ground? 48:30 Problem 3: What is the Velocity of the Ball as it Hits the Ground? 49:52 Problem 4: Initial Velocity of 6 m/s, How Long Does It Take to Reach the Ground? 50:46 The Area Under a Curve 51m 3s Intro 0:00 The Area Under a Curve 0:13 Approximate Using Rectangles 0:14 Let's Do This Again, Using 4 Different Rectangles 9:40 Approximate with Rectangles 16:10 Left Endpoint 18:08 Right Endpoint 25:34 Left Endpoint vs.
Right Endpoint 30:58 Number of Rectangles 34:08 True Area 37:36 True Area 37:37 Sigma Notation & Limits 43:32 When You Have to Explicitly Solve Something 47:56 Example Problems for Area Under a Curve 33m 7s Intro 0:00 Example I: Using Left Endpoint & Right Endpoint to Approximate Area Under a Curve 0:10 Example II: Using 5 Rectangles, Approximate the Area Under the Curve 11:32 Example III: Find the True Area by Evaluating the Limit Expression 16:07 Example IV: Find the True Area by Evaluating the Limit Expression 24:52 The Definite Integral 43m 19s Intro 0:00 The Definite Integral 0:08 Definition to Find the Area of a Curve 0:09 Definition of the Definite Integral 4:08 Symbol for Definite Integral 8:45 Regions Below the x-axis 15:18 Associating Definite Integral to a Function 19:38 Integrable Function 27:20 Evaluating the Definite Integral 29:26 Evaluating the Definite Integral 29:27 Properties of the Definite Integral 35:24 Properties of the Definite Integral 35:25 Example Problems for The Definite Integral 32m 14s Intro 0:00 Example I: Approximate the Following Definite Integral Using Midpoints & Sub-intervals 0:11 Example II: Express the Following Limit as a Definite Integral 5:28 Example III: Evaluate the Following Definite Integral Using the Definition 6:28 Example IV: Evaluate the Following Integral Using the Definition 17:06 Example V: Evaluate the Following Definite Integral by Using Areas 25:41 Example VI: Definite Integral 30:36 The Fundamental Theorem of Calculus 24m 17s Intro 0:00 The Fundamental Theorem of Calculus 0:17 Evaluating an Integral 0:18 Lim as x → ∞ 12:19 Taking the Derivative 14:06 Differentiation & Integration are Inverse Processes 15:04 1st Fundamental Theorem of Calculus 20:08 1st Fundamental Theorem of Calculus 20:09 2nd Fundamental Theorem of Calculus 22:30 2nd Fundamental Theorem of Calculus 22:31 Example Problems for the Fundamental Theorem 25m 21s Intro 0:00 Example I: Find the Derivative of the Following Function 0:17 Example II: 
Find the Derivative of the Following Function 1:40 Example III: Find the Derivative of the Following Function 2:32 Example IV: Find the Derivative of the Following Function 5:55 Example V: Evaluate the Following Integral 7:13 Example VI: Evaluate the Following Integral 9:46 Example VII: Evaluate the Following Integral 12:49 Example VIII: Evaluate the Following Integral 13:53 Example IX: Evaluate the Following Graph 15:24 Local Maxes and Mins for g(x) 15:25 Where Does g(x) Achieve Its Absolute Max on [0,8] 20:54 On What Intervals is g(x) Concave Up/Down? 22:20 Sketch a Graph of g(x) 24:34 More Example Problems, Including Net Change Applications 34m 22s Intro 0:00 Example I: Evaluate the Following Indefinite Integral 0:10 Example II: Evaluate the Following Definite Integral 0:59 Example III: Evaluate the Following Integral 2:59 Example IV: Velocity Function 7:46 Part A: Net Displacement 7:47 Part B: Total Distance Travelled 13:15 Example V: Linear Density Function 20:56 Example VI: Acceleration Function 25:10 Part A: Velocity Function at Time t 25:11 Part B: Total Distance Travelled During the Time Interval 28:38 Solving Integrals by Substitution 27m 20s Intro 0:00 Table of Integrals 0:35 Example I: Evaluate the Following Indefinite Integral 2:02 Example II: Evaluate the Following Indefinite Integral 7:27 Example III: Evaluate the Following Indefinite Integral 10:57 Example IV: Evaluate the Following Indefinite Integral 12:33 Example V: Evaluate the Following 14:28 Example VI: Evaluate the Following 16:00 Example VII: Evaluate the Following 19:01 Example VIII: Evaluate the Following 21:49 Example IX: Evaluate the Following 24:34 Section 5: Applications of Integration Areas Between Curves 34m 56s Intro 0:00 Areas Between Two Curves: Function of x 0:08 Graph 1: Area Between f(x) & g(x) 0:09 Graph 2: Area Between f(x) & g(x) 4:07 Is It Possible to Write as a Single Integral?
8:20 Area Between the Curves on [a,b] 9:24 Absolute Value 10:32 Formula for Areas Between Two Curves: Top Function - Bottom Function 17:03 Areas Between Curves: Function of y 17:49 What if We are Given Functions of y? 17:50 Formula for Areas Between Two Curves: Right Function - Left Function 21:48 Finding a & b 22:32 Example Problems for Areas Between Curves 42m 55s Intro 0:00 Instructions for the Example Problems 0:10 Example I: y = 7x - x² and y=x 0:37 Example II: x=y²-3, x=e^((1/2)y), y=-1, and y=2 6:25 Example III: y=(1/x), y=(1/x³), and x=4 12:25 Example IV: y=15-2x² and y=x²-5 15:52 Example V: x=(1/8)y³ and x=6-y² 20:20 Example VI: y=cos x, y=sin(2x), [0,π/2] 24:34 Example VII: y=2x², y=10x², 7x+2y=10 29:51 Example VIII: Velocity vs. Time 33:23 Part A: At 2.187 Minutes, Which Car is Further Ahead? 33:24 Part B: If We Shaded the Region between the Graphs from t=0 to t=2.187, What Would This Shaded Area Represent? 36:32 Part C: At 4 Minutes Which Car is Ahead? 37:11 Part D: At What Time Will the Cars be Side by Side? 37:50 Volumes I: Slices 34m 15s Intro 0:00 Volumes I: Slices 0:18 Rotate the Graph of y=√x about the x-axis 0:19 How can I use Integration to Find the Volume?
3:16 Slice the Solid Like a Loaf of Bread 5:06 Volumes Definition 8:56 Example I: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 12:18 Example II: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 19:05 Example III: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 25:28 Volumes II: Volumes by Washers 51m 43s Intro 0:00 Volumes II: Volumes by Washers 0:11 Rotating Region Bounded by y=x³ & y=x around the x-axis 0:12 Equation for Volumes by Washer 11:14 Process for Solving Volumes by Washer 13:40 Example I: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 15:58 Example II: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 25:07 Example III: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 34:20 Example IV: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 44:05 Volumes III: Solids That Are Not Solids-of-Revolution 49m 36s Intro 0:00 Solids That Are Not Solids-of-Revolution 0:11 Cross-Section Area Review 0:12 Cross-Sections That Are Not Solids-of-Revolution 7:36 Example I: Find the Volume of a Pyramid Whose Base is a Square of Side-length S, and Whose Height is H 10:54 Example II: Find the Volume of a Solid Whose Cross-sectional Areas Perpendicular to the Base are Equilateral Triangles 20:39 Example III: Find the Volume of a Pyramid Whose Base is an Equilateral Triangle of Side-Length A, and Whose Height is H 29:27 Example IV: Find the Volume of a Solid Whose Base is Given by the Equation 16x² + 4y² = 64 36:47 Example V: Find the Volume of a Solid Whose Base is the Region 
Bounded by the Functions y=3-x² and the x-axis 46:13 Volumes IV: Volumes By Cylindrical Shells 50m 2s Intro 0:00 Volumes by Cylindrical Shells 0:11 Find the Volume of the Following Region 0:12 Volumes by Cylindrical Shells: Integrating Along x 14:12 Volumes by Cylindrical Shells: Integrating Along y 14:40 Volumes by Cylindrical Shells Formulas 16:22 Example I: Using the Method of Cylindrical Shells, Find the Volume of the Solid 18:33 Example II: Using the Method of Cylindrical Shells, Find the Volume of the Solid 25:57 Example III: Using the Method of Cylindrical Shells, Find the Volume of the Solid 31:38 Example IV: Using the Method of Cylindrical Shells, Find the Volume of the Solid 38:44 Example V: Using the Method of Cylindrical Shells, Find the Volume of the Solid 44:03 The Average Value of a Function 32m 13s Intro 0:00 The Average Value of a Function 0:07 Average Value of f(x) 0:08 What if The Domain of f(x) is Not Finite? 2:23 Let's Calculate Average Value for f(x) = x² on [2,5] 4:46 Mean Value Theorem for Integrals 9:25 Example I: Find the Average Value of the Given Function Over the Given Interval 14:06 Example II: Find the Average Value of the Given Function Over the Given Interval 18:25 Example III: Find the Number A Such that the Average Value of the Function f(x) = -4x² + 8x + 4 Equals 2 Over the Interval [-1,A] 24:04 Example IV: Find the Average Density of a Rod 27:47 Section 6: Techniques of Integration Integration by Parts 50m 32s Intro 0:00 Integration by Parts 0:08 The Product Rule for Differentiation 0:09 Integrating Both Sides Retains the Equality 0:52 Differential Notation 2:24 Example I: ∫ x cos x dx 5:41 Example II: ∫ x² sin(2x)dx 12:01 Example III: ∫ (e^x) cos x dx 18:19 Example IV: ∫ (sin^-1) (x) dx 23:42 Example V: ∫₁⁵ (lnx)² dx 28:25 Summary 32:31 Tabular Integration 35:08 Case 1 35:52 Example: ∫x³sinx dx 36:39 Case 2 40:28 Example: ∫e^(2x) sin 3x 41:14 Trigonometric Integrals I 24m 50s Intro 0:00 Example I: ∫ sin³ (x) dx 1:36 Example II: ∫
cos⁵(x)sin²(x)dx 4:36 Example III: ∫ sin⁴(x)dx 9:23 Summary for Evaluating Trigonometric Integrals of the Following Type: ∫ (sin^m) (x) (cos^p) (x) dx 15:59 #1: Power of sin is Odd 16:00 #2: Power of cos is Odd 16:41 #3: Powers of Both sin and cos are Odd 16:55 #4: Powers of Both sin and cos are Even 17:10 Example IV: ∫ tan⁴ (x) sec⁴ (x) dx 17:34 Example V: ∫ sec⁹(x) tan³(x) dx 20:55 Summary for Evaluating Trigonometric Integrals of the Following Type: ∫ (sec^m) (x) (tan^p) (x) dx 23:31 #1: Power of sec is Odd 23:32 #2: Power of tan is Odd 24:04 #3: Power of sec is Odd and/or Power of tan is Even 24:18 Trigonometric Integrals II 22m 12s Intro 0:00 Trigonometric Integrals II 0:09 Recall: ∫tanx dx 0:10 Let's Find ∫secx dx 3:23 Example I: ∫ tan⁵ (x) dx 6:23 Example II: ∫ sec⁵ (x) dx 11:41 Summary: How to Deal with Integrals of Different Types 19:04 Identities to Deal with Integrals of Different Types 19:05 Example III: ∫cos(5x)sin(9x)dx 19:57 More Example Problems for Trigonometric Integrals 17m 22s Intro 0:00 Example I: ∫sin²(x)cos⁷(x)dx 0:14 Example II: ∫x sin²(x) dx 3:56 Example III: ∫csc⁴ (x/5)dx 8:39 Example IV: ∫( (1-tan²x)/(sec²x) ) dx 11:17 Example V: ∫ 1 / (sinx-1) dx 13:19 Integration by Partial Fractions I 55m 12s Intro 0:00 Integration by Partial Fractions I 0:11 Recall the Idea of Finding a Common Denominator 0:12 Decomposing a Rational Function to Its Partial Fractions 4:10 2 Types of Rational Function: Improper & Proper 5:16 Improper Rational Function 7:26 Improper Rational Function 7:27 Proper Rational Function 11:16 Proper Rational Function & Partial Fractions 11:17 Linear Factors 14:04 15:02 Case 1: D(x) is a Product of Distinct Linear Factors 17:10 Example I: Integration by Partial Fractions 20:33 Case 2: D(x) is a Product of Linear Factors 40:58 Example II: Integration by Partial Fractions 44:41 Integration by Partial Fractions II 42m 57s Intro 0:00 Case 3: D(x) Contains Irreducible Factors 0:09 Example I: Integration by Partial Fractions 5:19
Example II: Integration by Partial Fractions 16:22 Case 4: D(x) has Repeated Irreducible Quadratic Factors 27:30 Example III: Integration by Partial Fractions 30:19 Section 7: Differential Equations Introduction to Differential Equations 46m 37s Intro 0:00 Introduction to Differential Equations 0:09 Overview 0:10 Differential Equations Involving Derivatives of y(x) 2:08 Differential Equations Involving Derivatives of y(x) and Function of y(x) 3:23 Equations for an Unknown Number 6:28 What are These Differential Equations Saying? 10:30 Verifying that a Function is a Solution of the Differential Equation 13:00 Verifying that a Function is a Solution of the Differential Equation 13:01 Verify that y(x) = 4e^x + 3x² + 6x + e^π is a Solution of this Differential Equation 17:20 General Solution 22:00 Particular Solution 24:36 Initial Value Problem 27:42 Example I: Verify that a Family of Functions is a Solution of the Differential Equation 32:24 Example II: For What Values of K Does the Function Satisfy the Differential Equation 36:07 Example III: Verify the Solution and Solve the Initial Value Problem 39:47 Separation of Variables 28m 8s Intro 0:00 Separation of Variables 0:28 Separation of Variables 0:29 Example I: Solve the Following Initial Value Problem 8:29 Example II: Solve the Following Initial Value Problem 13:46 Example III: Find an Equation of the Curve 18:48 Population Growth: The Standard & Logistic Equations 51m 7s Intro 0:00 Standard Growth Model 0:30 Definition of the Standard/Natural Growth Model 0:31 Initial Conditions 8:00 The General Solution 9:16 Example I: Standard Growth Model 10:45 Logistic Growth Model 18:33 Logistic Growth Model 18:34 Solving the Initial Value Problem 25:21 What Happens When t → ∞ 36:42 Example II: Solve the Following Initial Value Problem 41:50 Relative Growth Rate 46:56 Relative Growth Rate 46:57 Relative Growth Rate Version for the Standard Model 49:04 Slope Fields 24m 37s Intro 0:00 Slope Fields 0:35 Slope Fields 0:36
Graphing the Slope Fields, Part 1 11:12 Graphing the Slope Fields, Part 2 15:37 Graphing the Slope Fields, Part 3 17:25 Steps to Solving Slope Field Problems 20:24 Example I: Draw or Generate the Slope Field of the Differential Equation y'=x cos y 22:38 Section 8: AP Practice Exam AP Practice Exam: Section 1, Part A No Calculator 45m 29s Intro 0:00 0:10 Problem #1 1:26 Problem #2 2:52 Problem #3 4:42 Problem #4 7:03 Problem #5 10:01 Problem #6 13:49 Problem #7 15:16 Problem #8 19:06 Problem #9 23:10 Problem #10 28:10 Problem #11 31:30 Problem #12 33:53 Problem #13 37:45 Problem #14 41:17 AP Practice Exam: Section 1, Part A No Calculator, cont. 41m 55s Intro 0:00 Problem #15 0:22 Problem #16 3:10 Problem #17 5:30 Problem #18 8:03 Problem #19 9:53 Problem #20 14:51 Problem #21 17:30 Problem #22 22:12 Problem #23 25:48 Problem #24 29:57 Problem #25 33:35 Problem #26 35:57 Problem #27 37:57 Problem #28 40:04 AP Practice Exam: Section I, Part B Calculator Allowed 58m 47s Intro 0:00 Problem #1 1:22 Problem #2 4:55 Problem #3 10:49 Problem #4 13:05 Problem #5 14:54 Problem #6 17:25 Problem #7 18:39 Problem #8 20:27 Problem #9 26:48 Problem #10 28:23 Problem #11 34:03 Problem #12 36:25 Problem #13 39:52 Problem #14 43:12 Problem #15 47:18 Problem #16 50:41 Problem #17 56:38 AP Practice Exam: Section II, Part A Calculator Allowed 25m 40s Intro 0:00 Problem #1: Part A 1:14 Problem #1: Part B 4:46 Problem #1: Part C 8:00 Problem #2: Part A 12:24 Problem #2: Part B 16:51 Problem #2: Part C 17:17 Problem #3: Part A 18:16 Problem #3: Part B 19:54 Problem #3: Part C 21:44 Problem #3: Part D 22:57 AP Practice Exam: Section II, Part B No Calculator 31m 20s Intro 0:00 Problem #4: Part A 1:35 Problem #4: Part B 5:54 Problem #4: Part C 8:50 Problem #4: Part D 9:40 Problem #5: Part A 11:26 Problem #5: Part B 13:11 Problem #5: Part C 15:07 Problem #5: Part D 19:57 Problem #6: Part A 22:01 Problem #6: Part B 25:34 Problem #6: Part C 28:54

## Transcription

1 answer. Last reply by: Professor Hovasapian, Mon Feb 8, 2016 1:56 AM
Post by nathan lau on January 31, 2016: at 13:30, isn't it 2pi/3, not just pi/3?

### Example Problems for the Fundamental Theorem

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. • Intro 0:00 • Example I: Find the Derivative of the Following Function 0:17 • Example II: Find the Derivative of the Following Function 1:40 • Example III: Find the Derivative of the Following Function 2:32 • Example IV: Find the Derivative of the Following Function 5:55 • Example V: Evaluate the Following Integral 7:13 • Example VI: Evaluate the Following Integral 9:46 • Example VII: Evaluate the Following Integral 12:49 • Example VIII: Evaluate the Following Integral 13:53 • Example IX: Evaluate the Following Graph 15:24 • Local Maxes and Mins for g(x) • Where Does g(x) Achieve Its Absolute Max on [0,8] • On What Intervals is g(x) Concave Up/Down?
• Sketch a Graph of g(x)

### Transcription: Example Problems for the Fundamental Theorem

Hello, welcome back to www.educator.com, welcome back to AP Calculus.0000 Today, we are going to be doing some example problems for the fundamental theorem of calculus that we just covered.0004 These are going to be extraordinarily straightforward and simple, almost surprisingly.0010 Our first example is find the derivative of the following function.0017 The function is g(x) and it is defined as the integral from 1 to x, whatever x might be,0021 of 5/(t³ + 9) dt, nice and simple.0027 Since the upper limit is x, we would literally just put x where we have the t.0032 Our final answer is just 5/(x³ + 9), and we are done.0039 That is the beauty of the fundamental theorem.0046 When a function is defined as an integral whose upper limit is a variable,0048 this particular variable right here x, we just literally put that in there.0054 Let me correct this, a little bit of a notational issue here.0064 g’ because we are looking for the derivative.0071 g’(x), that is a little more clear, I think.0075 We have 5/(x³ + 9).0079 Literally, when we take the derivative of this, we are just getting rid of this integral sign0084 because differentiation and integration or anti-differentiation are inverse processes.0090 That is the whole idea.0095 That is it, nice, straightforward, and simple.0097 Find the derivative of the following function.0102 g(x), upper variable is x, we do the same thing.0105 Therefore, the derivative of this g’(x), all we have to do is we put an x where we see a t.0110 We get x² sin³ x, that is it, very nice.0117 As long as this upper limit of integration is the variable that is inside the function defining it,0128 we can just put it straight into the function itself.0144 Let us try this one, find the derivative of the following function.0150 g(x) = the integral from 1 to x⁴ of sin(t) dt.0155 Now this is x, this time, we have not x but we have a function of x.0161 The only thing
we do know is we do the same thing.0168 We are just going to be putting in the x here.0171 g’(x) is equal to sin(x).0173 But because this is not x but it is a function of x, by the chain rule,0177 we are just going to multiply by the derivative of this function.0181 Let us write that out.0186 When the upper limit is not just x but a function of x, some more complicated function of x, we do the following.0188 This is the long process, the easy part is just multiply by the derivative of the upper limit, whatever the function happens to be.0220 Essentially, what you are doing is you are letting x⁴, the upper limit integration equals some letter u.0227 Essentially, you are just doing a substitution.0239 And then, du dx is going to equal 4x³.0240 And then, you rewrite this g(x) = the integral from 1 to u of sin(t) dt.0250 Then, when you do this, when you put this into here, in other words, when you do g’(x) is equal to the sin(u),0263 because u is a function of x, by the chain rule, the derivative is sin(u) du dx.0277 You are multiplying by du dx.0285 What you end up getting is g’(x) is equal to sin(x)⁴ × 4x³.0292 This part is exactly the same, you are just putting the upper limit into this variable to get the sin x⁴.0303 And then, you are just multiplying by the derivative of that because now it is a function of x, it is not just x itself.0311 I hope that makes sense.0321 Essentially, what you are doing is you are doing the same thing as you did before.0322 If this were just an x, you would put it in and it would be sin(x).0325 The derivative of x is just 1, you were technically doing du dx.0331 The derivative of x is just 1, which is why it just ends up staying that way.0336 But if it is a function, just multiply by its derivative.0342 Find the derivative of the following functions, same thing.0356 Now π/3, the upper limit is cos(x).0359 We are just going to put this into the t's and then multiply everything by the derivative of cos(x).0363 g’(x), we would literally 
just read it off.0372 It is going to be 3 × cos(x) + 7 × cos²(x) × the derivative of that function cos(x) which is –sin(x).0375 We are done, that is all.0391 Just do what the first fundamental theorem of calculus says which is just put the upper limit into those things.0399 Then, multiply by d/dx of the upper limit.0415 That is all, moving along nicely.0431 Evaluate the following integral.0435 We want to evaluate a definite integral.0437 We have a lower limit, an upper limit, they are both numbers.0440 We are going to find the antiderivative, the integral.0443 And then, we are going to put in 5, we are going to put in 1,0446 and we are going to subtract the value of what we get from 1, from the value of the upper limit 5.0450 Let us go ahead and do that.0456 The integral of this, let us go ahead and actually multiply this out first because we have (x³ - 4x)².0457 This is going to be the integral from 1 to 5 of x⁶ - 4x⁴ - another 4x⁴ + 16x².0468 That is x⁶ - 8x⁴ + 16x², I just multiply this out, dx.0484 And that is going to equal x⁷/7 - 8x⁵/5 + 16x³/3.0495 I'm going to evaluate this from 1 to 5.0512 In other words, I’m going to put 5 in for all of these and get a number, then I'm going to subtract by 1.0514 Putting them into all the x here.0520 When I put 5 into this, I get 78,125/7, since 5⁷ is 78,125.0522 When I put 5 into here, multiply by 8 and divide by 5.0535 I get - 25,000/5, I hope you are going to be confirming my arithmetic.0539 I’m notorious for arithmetic mistakes.0544 + 2000/3 that takes care of the 5.0546 I’m going to subtract by putting 1 in.0553 This is going to be 1/7 - 8/5 + 16/3.0556 Find the antiderivative of the integrand, and then evaluate.0565 Put the upper number into this, you are going to get a value.0570 Put the lower number into the x values, you are going to get this and then you subtract.0573 The rest is just arithmetic.0577 I'm just going to go ahead and leave it like this.0579 You can go ahead and do the arithmetic, if you want,
to get a single answer.0581 Evaluate the following integral, once again, we are using the second fundamental theorem of calculus.0588 The integral of f from 2 to 8 is going to be F(8) – F(2), that is what the second fundamental theorem says.0594 Remember, it is this one, the integral from a to b of f(x) dx is just equal to F(b) – F(a), where F is the antiderivative of f.0611 This is going to be the integral from 2 to 8, I’m going to separate these out.0629 I’m going to write this as x^(2/3), it is going to be x/x^(2/3) - 3/x^(2/3).0638 The integral from 2 to 8 of x/x^(2/3) dx - the integral from 2 to 8 of 3/x^(2/3) dx.0646 That is going to equal the integral from 2 to 8 of x^(1/3) dx - the integral from 2 to 8 of 3 × x^(-2/3) dx.0662 I can go ahead and integrate these, not a problem.0678 1/3 + 1 is 4/3, I get x^(4/3)/(4/3) - 3 × x^(1/3)/(1/3), since -2/3 + 1 is 1/3.0681 That is going to equal ¾ x^(4/3) - 9x^(1/3).0700 I'm going to evaluate that from 2 to 8.0712 When I put 8 in for here, I'm going to get 12.0716 When I put 8 into here, it is going to be -18, that is the first one.0721 Minus, when I put 2 into here, I end up with 1.89,0727 and then, minus, when I put 2 into here, I end up with 11.34.0734 Once you have the antiderivative, you put the upper limit into the x value to evaluate, that gives you the first term.0742 You put the lower limit into the x, that gives you the second term.0753 You subtract the second term from the first term, F(b) - F(a).0758 Nice and straightforward.0765 Another good example, the integral from π/6 to 2π/3 of cosθ dθ.0772 This is going to equal where the integral of cosθ dθ is going to be sin(θ).0780 We are going to evaluate sin(θ) from π/6 to 2π/3.0786 Not a problem, that is going to be the sin(2π/3) - sin(π/6).0796 We are just putting in upper and lower limit into the actual variable of integration.0809 The sin(2π/3) is √3/2 – sin(π/6) which is ½.0815 The integral from 1 to 3 of 5/(x² + 1).0835 You should recognize that the integral of 1/(x² + 1) is equal to the inv tan.0841 Remember, the integral or
the antiderivative, whatever you want to call it,0847 of 1/(1 + x²) dx, that was equal to the inv tan(x).0852 Therefore, this is, that is fine, I will go ahead and separate it out.0860 This is just a constant, it is 5 × the integral from 1 to 3 of 1/(x² + 1) dx = 5 × inv tan(x), evaluated from 1 to 3.0866 That is it, that is all it is.0885 This is just 5 × (inv tan(3) – inv tan(1)).0887 That is all, you can go ahead and multiply the 5 through.0901 You can do 5 or you can evaluate this from here to here.0904 This - this and then multiply the constant through.0911 The order actually does not matter.0914 I’m just somebody who tends to like to see the constant on the outside, personal choice.0916 g(x) is equal to the integral from 0 to x of f(t) dt.0928 f being the function whose graph is given below.0934 This graph right here, this is the graph of f.0937 We take a look at this graph, it looks like some sort of a modified sin or cos graph, definitely periodic.0945 We are taking a look real quickly, we notice that we have about a 3.
something, then we have a 6.2 something.0952 It turns out that these are actually just π/2, π.0961 I’m going to go ahead and label them that way.0970 I left the labels just as is, but it is nice to recognize the numbers for what it is that they are.0972 Especially, since you are talking about a periodic function, then more than likely the graphs that you are going to be dealing with0980 are going to pass through the major points, π/2, π, π/6, π/3, π/4, things like that.0985 In this case, let me mark these off.0992 3.14, this is π.1003 This over here looks like it is π/2.1010 Over here, this looks like 2π and this looks like 3π/ 2, 4π/ 2.1017 It looks like we have got 5π/ 2.1028 At least that way we have some numbers that we are accustomed to seeing,1030 instead of just referring them as 3.1, 1.5, things like that.1033 Give the values of x for which g(x) achieves its local maxes and mins.1040 You have to be very careful here, this is a graph of f.1047 g is the integral of f from 0 to x.1052 Therefore, g is actually the area under the graph.1059 The graph actually falls below the x axis, the area is going to become negative.1066 Give the values of x for which g(x) achieves its local maxes and mins.1075 As we start moving from 0 x, moving forward g(x), as x gets bigger and bigger,1080 this value is just the area under the graph itself.1091 It is going to hit a high point, when I hit π/2.1095 And then, at π/2, because the graph itself falls below the axis, as I keep going,1101 as x keeps getting bigger and bigger, the integral itself is going to be negative.1108 The area is going to hit a maximum and then the integral is going to start to go negative.1115 The area is going to diminish.1120 At this point, the area is going to go down.1123 At some point, it is going to hit 0.1127 The area is going to rise to a certain number, that is up to here.1130 We are going to start subtracting from the area, at some point it is going to hit 0.1134 Over here, it is going to hit a 
minimum because now this area under the x axis is a lot bigger than the area above the x axis.1138 The integral from 0 to π is going to be a negative number.1147 It is actually going to hit a maximum.1154 One of the maximums that it is going to hit is right there.1157 And then, it is going to hit a minimum right there.1161 After that, it is going to climb again, the area under the graph.1164 That is what we are doing, g(x) is the area of f, the area under the graph of f because this is the graph of f.1169 The area is going to get bigger again.1178 At this point, it is going to hit another maximum.1180 And then at this point, the area is going to be negative.1185 We are going to hit another minimum, 2π.1187 We are going to increase the area again.1191 It is going to pass this and it is going to hit another maximum.1194 That is what is going to happen.1198 Give the values of x for which g(x) achieves its local maxes and mins.1199 It is going to achieve its local maxes and mins where the area under the graph hits a maximum and a minimum.1205 It is a maximum at π/2, a minimum at π.1215 Maximum at 3π/ 2, a local minimum at 2π, local max and min at 5π/ 2.1218 We have π/2, π, 3π/ 2, 2π, and 5π/ 2.1225 The reason is because g(x) is the net area under the graph of f.1241 This red is the graph of f.1248 Where does g(x) achieves its absolute max, between 0 and 8, this point right here.1255 Let us go ahead and draw this out.1266 Basically, what we are going to have is this.1267 Let me do this in black.1270 g(x) is going to look like this, it is going to rise.1272 It is going to hit a maximum here and it is going to hit a minimum.1277 It is going to start rising again.1282 It is going to hit a maximum and it is going to hit a minimum, and it is going to hit a maximum.1286 It is going to start coming down to there.1296 The area keeps getting bigger and bigger.1300 The absolute value, the area keeps getting bigger and bigger.1304 Our highest point is going to be this point right 
here, which is going to be our 5π/2. Again, that is what we want: where does g(x) achieve its absolute max? g(x) is the net area under the graph — positive if it is above the x axis, negative if it is below. This is the graph of g.

On what intervals is g(x) concave up or concave down? For concave up or concave down, we want to look at the sign of g''. g' is equal to f: by the fundamental theorem of calculus, if I take the derivative of this, I get rid of this and I just have f(x). This is g', which means g'' is equal to f'. f' is just the derivative of the red graph. The derivative is the slope. It is going to be concave up where the slope is positive. It is going to be concave down where the slope is negative. If the slope is positive, that is f', and f' happens to be g''. Therefore, from here to here, it is concave up. From here to here, it is concave down. From here to here, concave up. From here to here, concave down. From here to here, concave up. And from here to 8, we have concave down, which is exactly what we see. Concave up graph — this is g that we are looking at. Now it is concave down, then it starts to be concave up again up to here. From here to here, it is concave down. It matches. I hope that makes sense.

You are given a graph f. If g is defined as the integral from 0 to x of f, it is the net area under that particular graph. Let us take a look at what it actually looks like. This one, this is the actual g(x). This was our f(x). It is a maximum here, it is a minimum here. It hits a local max here, it is right there. It is a local min here, it is right there. It hits a local max here, that is right there. It is exactly what we said before.

Thank you so much for joining us here at www.educator.com. We will see you next time, bye.
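The two facts used in this lesson — g'(x) = f(x) by the fundamental theorem of calculus, so g has a local max where f crosses from positive to negative, and g''(x) = f'(x), so g is concave up exactly where f is increasing — can be checked numerically. The sketch below uses f(x) = cos(x) as a stand-in (this is an assumption for illustration, not the exact graph from the lesson), so that g(x) = sin(x) and the answers are known:

```python
import math

# Illustrative sketch only: f(x) = cos(x) is a stand-in for the red graph
# in the lesson. Then g(x) = integral from 0 to x of f(t) dt = sin(x).

def f(x):
    return math.cos(x)

# Build g on a grid over [0, 2*pi] with the trapezoid rule:
# g[i] approximates the net area under f from 0 to xs[i].
n = 10000
a, b = 0.0, 2 * math.pi
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]
g = [0.0]
for i in range(1, n + 1):
    g.append(g[-1] + 0.5 * h * (f(xs[i - 1]) + f(xs[i])))

# FTC: g' = f, so g hits a local max where f crosses from + to -.
# For f = cos, that is x = pi/2, where g = sin reaches 1.
i_max = max(range(len(g)), key=lambda i: g[i])
print(round(xs[i_max], 3))          # location of the max, approx pi/2
print(round(g[i_max], 3))           # value of the max, approx 1.0

# g'' = f', so g is concave down where f is decreasing (f' < 0)
# and concave up where f is increasing (f' > 0).
def fprime(x, eps=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

print(fprime(math.pi / 2) < 0)      # True: g concave down at pi/2
print(fprime(3 * math.pi / 2) > 0)  # True: g concave up at 3*pi/2
```

The same grid-and-sign-check approach works for a graph read off by eye: mark where f crosses zero to place the extrema of g, and where f turns around to place the inflection points.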