http://www.maa.org/publications/periodicals/loci/joma/guidelines-for-joma-authors-mathematical-notation
# Guidelines for JOMA Authors - Mathematical Notation

Author(s): David Smith and Kyle Siegrist

Mathematical notation has always presented a special problem for authors of web articles, since HTML provides only very limited mark-up for mathematics (mainly the <var> tag for a variable, and the <sub> and <sup> tags for subscripts and superscripts). Here is a sample mathematical expression rendered with basic HTML: y = a₀ + a₁x + a₂x²

For an article with relatively simple mathematical notation, these basic HTML tags may be sufficient. If not, there are a couple of approaches that JOMA authors can use:

MathML, the Mathematical Markup Language, is an XML language that provides a very complete specification of mathematical notation. MathML is an open W3C standard and is now supported by the Mozilla Firefox browser and by the latest versions of Internet Explorer on the Windows platform via the free MathPlayer plug-in. Moreover, in keeping with the best practices discussed previously, MathML encodes the structure of the mathematics much more completely than previous mark-up languages (such as TeX). Because of this, mathematical expressions in MathML can be imported from one MathML-aware program (an HTML document, for example) to another (Maple, for example). On the downside, MathML is difficult to author without special editing tools (precisely because so much information is encoded), and it is not supported on older browsers. Without a doubt, however, MathML is the future of mathematics on the web. In spite of the difficulties, a major goal of JOMA is to promote and encourage its use.

Another approach is to convert mathematical expressions into small graphics (typically in the PNG or GIF format). The graphics can be created with special tools (such as MathType) or with special converters (such as TeX to HTML). Remember, however, that "best practices" would also require alternate text-based descriptions of the mathematical expressions (in TeX or MathML, for example), attached via the alt or title attributes of the <img> tag.

For more information on writing mathematical expressions in HTML and in MathML, see Mathematics with Structure and Style.

David Smith and Kyle Siegrist, "Guidelines for JOMA Authors - Mathematical Notation," Loci (May 2006)

## JOMA Journal of Online Mathematics and its Applications
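For illustration only (this snippet is an editorial addition, not from the original guidelines), the sample expression above could be written in presentation MathML roughly as follows; exact attributes vary by authoring tool:

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>y</mi><mo>=</mo>
  <msub><mi>a</mi><mn>0</mn></msub><mo>+</mo>
  <msub><mi>a</mi><mn>1</mn></msub><mi>x</mi><mo>+</mo>
  <msub><mi>a</mi><mn>2</mn></msub><msup><mi>x</mi><mn>2</mn></msup>
</math>
```

Note how much more structure is explicit here than in the equivalent TeX (y = a_0 + a_1 x + a_2 x^2); that verbosity is exactly why MathML is hard to hand-author but easy for software to consume.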
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8508806824684143, "perplexity": 2807.4724220005696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931013466.18/warc/CC-MAIN-20141125155653-00090-ip-10-235-23-156.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-statistics/141019-choose-y-balls-how-many-distinct-colors-can-you-get.html
Math Help - choose y balls, how many distinct colors can you get

1. choose y balls, how many distinct colors can you get

We have y urns of balls. In each urn, we have x balls of the same color, but balls in different urns have different colors. Now, without replacement, we draw y balls randomly. What is the chance that we draw from z different urns or more?

2. Let's see. I'll assume there are enough balls in each urn that it can be picked from indefinitely. (That is to say, each urn would have y balls.)

If y = 1, then the chance is 0% (trivial, since you can't choose a different urn if you only have one).

If y = 2, then the possibilities are (letting numbers be the colors) 1 1, 1 2, 2 1, 2 2, so the probability is 1/2.

If y = 3, then there are 27 possibilities. I counted 18 that weren't all a single color, so the probability would be 2/3.

At this point, I smell an induction proof, and I'm sure there's an easier way with permutations and combinations.

3. Hi, thanks!! We have y urns of balls. In each urn, we have x balls of the same color, but balls in different urns have different colors. Now, without replacement, we draw y balls randomly. What is the chance that we get z (z >= 2) different colors or more?
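For the question as finally stated (drawing y balls without replacement from y urns of x same-colored balls each), a quick way to get numbers is simulation. This sketch is an editorial addition, not from the thread; the parameter names x, y, z follow the posts:

```python
import random

def prob_at_least_z_colors(x, y, z, trials=100_000):
    """Monte Carlo estimate: draw y balls without replacement from
    y urns holding x identically colored balls each; return the
    fraction of trials showing at least z distinct colors."""
    balls = [color for color in range(y) for _ in range(x)]  # color labels
    hits = 0
    for _ in range(trials):
        draw = random.sample(balls, y)   # sampling without replacement
        if len(set(draw)) >= z:
            hits += 1
    return hits / trials

print(prob_at_least_z_colors(x=3, y=4, z=2))
```

Such an estimate is useful for checking any closed-form answer derived by the combinatorial route the second poster suggests.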
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9561421275138855, "perplexity": 836.0945255053406}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461848.26/warc/CC-MAIN-20150226074101-00119-ip-10-28-5-156.ec2.internal.warc.gz"}
https://em.geosci.xyz/content/maxwell2_static/governing_equations/solving_dc_equations.html
# Solving DC Equations

(338) $$\boldsymbol{\nabla} \cdot \sigma\boldsymbol{\nabla}\phi = \boldsymbol{\nabla}\cdot\mathbf{j}_{source}.$$

## Numeric

For an arbitrary conductivity model, equation (338) cannot be solved exactly. In order to simulate a geophysical survey over an earth with a complicated conductivity distribution we need to solve an approximate discrete form of this equation. The equation can be discretized directly using, for example, standard finite difference, finite element, or finite volume methods. However, if we use a mimetic discretization of the full Maxwell equations, we can derive a discretization of the DC equation from the discrete Maxwell equations. A brief overview of this approach can be found in Solving Maxwell's Equations. The notation for the discrete system in this section comes from that page.

The discrete potential field condition is $$\tilde{\mathbf{e}} = \mathbf{G}\tilde{\phi}$$. Substituting that into the discrete time-domain quasi-static Ampere equation gives

$$\mathbf{C}^T \mathbf{M}_{\mu^{-1}}^f \tilde{\mathbf{b}} - \mathbf{M}_{\sigma}^e\mathbf{G}\tilde{\phi} = \tilde{\mathbf{s}},$$

where the tilde symbol denotes a grid function. Using the fact that the discrete divergence operator is equal to $$-\mathbf{G}^T$$, we take the discrete divergence of Ampere's law to get

(339) $$-\mathbf{G}^T\mathbf{C}^T \mathbf{M}_{\mu^{-1}}^f \tilde{\mathbf{b}} + \mathbf{G}^T\mathbf{M}_{\sigma}^e\mathbf{G}\tilde{\phi} = - \mathbf{G}^T\tilde{\mathbf{s}}.$$

Since we used a mimetic discretization method, $$\mathbf{G}^T\mathbf{C}^T$$ is identically zero, which corresponds to the vector calculus identity $$\boldsymbol{\nabla\cdot}\left(\boldsymbol{\nabla\times}\mathbf{b}\right) = 0$$. Hence the first term of equation (339) vanishes, which yields the discrete DC potential equation

(340) $$\mathbf{G}^T\mathbf{M}_{\sigma}^e\mathbf{G} \tilde{\phi} = -\mathbf{G}^T\tilde{\mathbf{s}}.$$
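To make the structure of equation (340) concrete, here is a minimal 1D finite-volume-style sketch (an editorial illustration, not code from this site or any particular package). G plays the role of $\mathbf{G}$, M plays $\mathbf{M}_\sigma^e$, and a nodal dipole current stands in for the source term; one node is grounded to remove the constant null space of the operator:

```python
import numpy as np

# 1D analogue of G^T M_sigma G phi = q on a uniform grid:
# potentials phi on the n+1 nodes, fluxes on the n edges.
n, h = 100, 0.01
G = (np.eye(n, n + 1, k=1) - np.eye(n, n + 1)) / h   # node-to-edge difference, n x (n+1)
sigma = np.ones(n)
sigma[40:60] = 0.1                                   # a resistive block
M = np.diag(sigma * h)                               # sigma-weighted edge mass matrix
A = G.T @ M @ G                                      # (n+1) x (n+1), singular up to constants

q = np.zeros(n + 1)
q[20], q[80] = 1.0, -1.0                             # +/- current electrodes (assumed demo source)
A[0, :] = 0.0; A[0, 0] = 1.0; q[0] = 0.0             # ground node 0 (Dirichlet pin)
phi = np.linalg.solve(A, q)
print(phi[:5])
```

The same pattern (difference operator, diagonal material mass matrix, grounded node) carries over to 2D and 3D meshes, where G and M become sparse matrices.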
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9700717329978943, "perplexity": 345.4676183969443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255071.27/warc/CC-MAIN-20190519161546-20190519183546-00233.warc.gz"}
https://www.open.edu/openlearn/ocw/mod/oucontent/view.php?id=68098&section=2.2
Forensic psychology Start this free course now. Just create an account and sign in. Enrol and complete the course for a free statement of participation or digital badge if available. Free course

# 2.2 Re-evaluating the statements

You know now that a question can suggest a response, that the co-witnessing effect tends to make evidence from two witnesses very similar, and that people tend to judge height and weight badly as the judgements are made relative to their own build. To help you evaluate the statements taken by DI Bullet, in the next activity you look at potential ways that evidence can become biased. For each question, you will be provided with a specific statement that was obtained by DI Bullet in the audio you heard. Your job is to decide whether the statement is likely to have been biased by one of these three factors:

• Biased as it was suggested by the question asked – for example, did DI Bullet ask a question in a way that suggested what the answer should be, or suggest information to the witness that they had not mentioned before?

• Biased because of the co-witnessing effect – did the witness remember the information themselves, or had they heard another witness say something similar?

• Biased as witnesses tend to over- and underestimate height, weight, distance and time.

Alternatively, the evidence might not have been influenced by any of the above factors, in which case you should select:

• Potentially unbiased.

The statements that appear in the quiz are all taken from the recording of DI Bullet you heard in DI Bullet: initial statements. Do feel free to re-listen to this audio as many times as you like.

## Activity 2 Evaluating bias (DI Bullet)

Q1. ‘Lila states that the suspects drove up and jumped out of a car.’ This piece of evidence is: a. Biased as it was suggested by the question asked. b. Biased because of the co-witnessing effect. c. Biased as witnesses tend to over- and underestimate height, weight, distance and time. d. Potentially unbiased.

a. Well done! DI Bullet asks ‘I understand you mentioned that the suspects drove up and jumped out of a car, could either of you describe it?’, so his question suggested that the suspects drove up and jumped out of a car. Do re-listen to the audio of DI Bullet taking the statements, and consider how this piece of evidence was obtained.

Q2. ‘Seth states that the car was large and silver.’ This piece of evidence is: a. Biased as it was suggested by the question asked. b. Biased because of the co-witnessing effect. c. Biased as witnesses tend to over- and underestimate height, weight, distance and time. d. Potentially unbiased.

b. Well done! This piece of evidence was provided only after the witness had heard it stated by another witness. Do re-listen to the audio of DI Bullet taking the statements, and consider how this piece of evidence was obtained.

Q3. ‘Lila states that the kidnapper was the driver.’ This piece of evidence is: a. Biased as it was suggested by the question asked. b. Biased because of the co-witnessing effect. c. Biased as witnesses tend to over- and underestimate height, weight, distance and time. d. Potentially unbiased.

b. This is correct. This piece of evidence was provided only after the witness had heard it stated by another witness. Do re-listen to the audio of DI Bullet taking the statements, and consider how this piece of evidence was obtained.

Q4.
‘Lila states that the two masked perpetrators aimed guns at the guards, while the unmasked perpetrator cut free a case chained to one of the guards.’ This piece of evidence is: a. Biased as it was suggested by the question asked. b. Biased because of the co-witnessing effect. c. Biased as witnesses tend to over- and underestimate height, weight, distance and time. d. Potentially unbiased.

d. This is correct. This evidence does not involve any estimation about height etc., and the witness provided the statement without the information first being suggested in a question or having heard it first stated by another witness. Do re-listen to the audio of DI Bullet taking the statements, and consider how this piece of evidence was obtained.

Q5. ‘Lila states that the driver was about 6 foot 4.’ This piece of evidence is: a. Biased as it was suggested by the question asked. b. Biased because of the co-witnessing effect. c. Biased as witnesses tend to over- and underestimate height, weight, distance and time. d. Potentially unbiased.

c. Well done! The witness is estimating the height of a suspect, and is likely to be biased by their own height. Do re-listen to the audio of DI Bullet taking the statements, and consider how this piece of evidence was obtained.

Q6. ‘Seth states that the perpetrators looked stocky and like bouncers.’ This piece of evidence is: a. Biased as it was suggested by the question asked. b. Biased because of the co-witnessing effect. c. Biased as witnesses tend to over- and underestimate height, weight, distance and time. d. Potentially unbiased.

b. This is correct. This piece of evidence was provided only after the witness had heard it stated by another witness. Do re-listen to the audio of DI Bullet taking the statements, and consider how this piece of evidence was obtained.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8094682693481445, "perplexity": 2417.218282142005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00019.warc.gz"}
https://nuit-blanche.blogspot.com/2014/07/phase-retrieval-via-wirtinger-flow.html
## Wednesday, July 16, 2014

### Phase Retrieval via Wirtinger Flow: Theory and Algorithms - implementation -

We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal x in C^n about which we have phaseless samples of the form y_r = |< a_r,x >|^2, r = 1,2,...,m (knowledge of the phase of these samples would yield a linear system). This paper develops a non-convex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of non-convex optimization schemes that may have implications for computational problems beyond phase retrieval.

The attendant project page with code and data is on Mahdi's code page.
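The two-stage scheme the abstract describes (spectral initialization, then gradient-style refinement) can be prototyped in a few lines of numpy. Everything below (step size, iteration count, scaling of the initializer) is a guess at reasonable defaults for a toy problem, not the paper's tuned algorithm; see the linked project page for the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 8 * 64                          # signal length, number of measurements
x = rng.normal(size=n) + 1j * rng.normal(size=n)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2                     # phaseless measurements y_r = |<a_r, x>|^2

# Spectral initialization: leading eigenvector of (1/m) sum_r y_r a_r a_r^*
Y = (A.conj().T * y) @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(y.mean())           # scale heuristic: E[y_r] ~ ||x||^2

# Wirtinger-flow-style gradient updates (fixed step size is an assumption)
mu = 0.2
for _ in range(500):
    Az = A @ z
    grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
    z = z - (mu / np.linalg.norm(z) ** 2) * grad

# Report error up to the inherent global phase ambiguity
phase = np.vdot(z, x) / abs(np.vdot(z, x))
print(np.linalg.norm(x - z * phase) / np.linalg.norm(x))
```

With these settings the relative error typically drops to numerical-noise levels, illustrating the geometric convergence the abstract claims for properly initialized iterates.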
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.858547568321228, "perplexity": 775.840880574117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648178.42/warc/CC-MAIN-20180323044127-20180323064127-00269.warc.gz"}
https://eprints.soton.ac.uk/397114/
University of Southampton Institutional Repository

Leptogenesis and residual CP symmetry

Cheng, Peng, Ding, Gui-Jun and King, Stephen F. (2016) Leptogenesis and residual CP symmetry. Journal of High Energy Physics, 2016 (206), 1-36.

Abstract: We discuss flavour dependent leptogenesis in the framework of lepton flavour models based on discrete flavour and CP symmetries applied to the type-I seesaw model. Working in the flavour basis, we analyse the case of two general residual CP symmetries in the neutrino sector, which corresponds to all possible semi-direct models based on a preserved Z_2 in the neutrino sector, together with a CP symmetry, which constrains the PMNS matrix up to a single free parameter which may be fixed by the reactor angle. We systematically study and classify this case for all possible residual CP symmetries, and show that the R-matrix is tightly constrained up to a single free parameter, with only certain forms being consistent with successful leptogenesis, leading to possible connections between leptogenesis and PMNS parameters. The formalism is completely general in the sense that the two residual CP symmetries could result from any high energy discrete flavour theory which respects any CP symmetry. As a simple example, we apply the formalism to a high energy S_4 flavour symmetry with a generalized CP symmetry, broken to two residual CP symmetries in the neutrino sector, recovering familiar results for PMNS predictions, together with new results for flavour dependent leptogenesis.

Accepted/In Press date: 21 March 2016. Published date: 30 March 2016.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.820055365562439, "perplexity": 3708.579093243552}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00597.warc.gz"}
https://robjlow.blogspot.com/2017/06/finding-greatest-common-divisors-and.html
# Greatest Common Divisor

The problem is simple: given two positive integers, $$a$$ and $$b$$, how do we find $$\gcd(a,b)$$, the greatest common divisor of $$a$$ and $$b$$ (i.e. the largest integer that $$a$$ and $$b$$ are both multiples of)?

Just about everybody learns how to do this at school. You break each of $$a$$ and $$b$$ down into a product of prime factors (where a prime number is one with exactly two factors, itself and $$1$$) and then multiply together all the common prime factors to get the biggest number that each is a multiple of. But this actually relies on quite a deep result. The procedure works because the Fundamental Theorem of Arithmetic says that any positive integer can be expressed as a product of primes, and it can only be done in one way (except for the order of the factors).

So, there are two issues here. The first is in plain sight. It is the question of why you should believe the fundamental theorem of arithmetic. This might not seem like an issue. In fact, after a lot of experience in factorizing integers you may well feel that it is pretty obvious. But it isn't, and part of the intention of this post is to persuade you that it isn't. The other is hidden, but comes into view when the numbers $$a$$ and $$b$$ get larger. It is the problem that working out the prime factorization of a number is really quite hard when the number is large.

There is, fortunately, an answer for both of these: Euclid's algorithm gives us a different way to compute the greatest common divisor of two integers, and, as a bonus, we can use it to prove that every integer has a unique prime factorization. This algorithm is to be found in Euclid's Elements, and is probably the oldest non-trivial algorithm in significant current use, being a vital ingredient in the RSA public key cryptosystem.

## Euclid's Algorithm

The algorithm relies on a familiar fact from basic arithmetic, which goes by the appallingly inappropriate name of the division algorithm. It isn't an algorithm: an algorithm is a set of step-by-step instructions which tell you how to do something. This just tells you that a certain type of question has a particular type of answer. A couple of sources call it the division theorem because of this, but, alas, the terminology is very well established. I would rather be right than popular, though, so I will call it the division theorem. But enough whining. What does this result say?

The Division Theorem: If $$a$$ and $$b$$ are positive integers, then there is a unique non-negative integer $$q$$, and a unique non-negative integer $$r \lt b$$ such that $$a=qb+r$$.

In other words, if you divide $$a$$ by $$b$$ you get a quotient and remainder, and there's only one right answer. This is something so entrenched in our experience that we take it very much for granted, and that's just what I'm going to do. But it has consequences. If the remainder, $$r$$, is zero, then $$a$$ is a multiple of $$b$$: we also say that $$b$$ divides $$a$$.

Back to the original question: I have two positive integers, $$a$$ and $$b$$, and I want to know the biggest number that divides into both of them. Let's suppose that $$a \gt b$$. (If not, I can swap them over.) Now, by the division theorem, I know that I can write $$a$$ as $$qb+r$$, where $$0\le r \lt b$$. What's more, if $$d$$ is any number that divides $$a$$ and $$b$$, then since $$a-qb=r$$, $$d$$ divides $$r$$. And since $$a=qb+r$$, if $$d$$ divides into $$b$$ and $$r$$, it also divides $$a$$.
That means that $$b$$ and $$r$$ have exactly the same common divisors as $$a$$ and $$b$$. But something really useful has happened. I've replaced the problem of finding the greatest common divisor of $$a$$ and $$b$$ by finding the greatest common divisor of $$b$$ and $$r$$, and $$r$$ is smaller than $$b$$, which is smaller than $$a$$. I've made the problem easier. There are two possibilities: if $$r=0$$, then $$b$$ divides into $$a$$, so it is the greatest common divisor of $$a$$ and $$b$$. If not, I can repeat the trick. Eventually the remainder will be $$0$$, and I will have the greatest common divisor of my original $$a$$ and $$b$$. This procedure is the essence of Euclid's algorithm.

Let's see how it works with an example. What is $$\gcd(60,25)$$? We calculate as follows: $\begin{split} 60 &= 2 \times 25 + 10\\ 25 &= 2 \times 10 + 5\\ 10 &= 2 \times 5 \end{split}$ This tells us that $$\gcd(60,25)=5$$. But we get something not at all obvious from it. We can rewind the calculation like this: $\begin{split} 5 &= 25 - 2 \times 10\\ &= 25-2 \times (60-2 \times 25)\\ &= 5 \times 25 - 2 \times 60 \end{split}$ This tells us that we can express $$5$$ as a combination of $$60$$ and $$25$$. But there's nothing special about these numbers: I could have rewound any application of Euclid's algorithm in just the same way. So, given any $$a$$ and $$b$$, we can use Euclid's algorithm to find $$\gcd(a,b)$$, and also to find integers $$m,n$$ with the property that $$\gcd(a,b)=ma+nb$$. How to do this wasn't at all obvious from the prime factorization approach. In fact, with the prime factorization approach it would never have occurred to me that this could even be done.

Thinking about this approach to finding the gcd of two numbers, we can see that $$\gcd(a,b)$$ isn't just the biggest number that divides into both $$a$$ and $$b$$: it is actually a multiple of any other divisor. The reasoning is just what was used above: any divisor of $$a$$ and $$b$$ is also a divisor of $$r$$, the remainder. Repeating this, we eventually find that it must be a divisor of the last non-zero remainder, i.e. it must be a divisor of $$\gcd(a,b)$$.

You might wonder who cares? I think there are two immediate reasons to care, namely that this addresses both of the issues I raised earlier. The first reason is that you don't have to take it on trust that prime factorization works as advertised, since you can see just why this works. The second is more practical. It is that for large numbers, this algorithm is very fast: enormously fast compared to the best known algorithms for factorizing large numbers, which is important. It is the fact that factorizing large integers is (as far as we know) expensive while finding the greatest common divisor is cheap that makes RSA public key cryptography work.

There are other, less immediate reasons too. Using this, we can actually prove that prime factorization works as advertised. But before getting there, we need to see something useful that this tells us about prime numbers.

## Prime numbers

Suppose that $$p$$ is a prime number, and $$a$$ is any number which is not a multiple of $$p$$. Then we must have $$\gcd(a,p)=1$$, which tells us that there are integers $$m,n$$ such that $$ma+np=1$$. Now comes the clever bit. Let $$p$$ be a prime number, and suppose that $$a$$ and $$b$$ are integers such that $$ab$$ is a multiple of $$p$$. Then either $$a$$ or $$b$$ must be a multiple of $$p$$. "But that's obvious!" you may be tempted to object.
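As an aside before tackling that objection: the algorithm and the rewinding step fit in a few lines of Python. This sketch is an editorial addition, not from the original post; the loop carries the Bezout coefficients along instead of rewinding afterwards:

```python
def gcd(a, b):
    """Euclid's algorithm: replace (a, b) by (b, a mod b) until b = 0."""
    while b:
        a, b = b, a % b
    return a

def extended_gcd(a, b):
    """Return (g, m, n) with g = gcd(a, b) and g = m*a + n*b."""
    m0, n0, m1, n1 = 1, 0, 0, 1          # invariant: a = m0*A + n0*B, b = m1*A + n1*B
    while b:
        q, r = divmod(a, b)
        a, b = b, r
        m0, n0, m1, n1 = m1, n1, m0 - q * m1, n0 - q * n1
    return a, m0, n0

print(gcd(60, 25))            # 5
print(extended_gcd(60, 25))   # (5, -2, 5): 5 = -2*60 + 5*25, matching the rewind above
```

With that aside done, back to the objection.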
But if you think it's obvious, think about why: it's almost certainly because you're so accustomed to how prime factorization works. Unfortunately that's exactly the thing that I want to prove. So, let's suppose that $$ab$$ is a multiple of $$p$$. If $$a$$ is a multiple of $$p$$, we're done. But what if $$a$$ is not a multiple of $$p$$? Well, in that case $$\gcd(a,p)=1$$, so there are integers $$m,n$$ such that $$ma+np=1$$. Now multiply this equation by $$b$$. We get $b=mab+npb.$ But $$ab$$ is a multiple of $$p$$ and $$npb$$ is a multiple of $$p$$, so their sum - which is $$b$$ - must also be a multiple of $$p$$. That's what we needed. Now we have everything we need to prove the

# Fundamental Theorem of Arithmetic

## At last, the proof

We can now consider an integer, $$n$$. There are two possibilities: either $$n$$ is a prime, or $$n$$ is the product of two smaller factors. Then we think about each of them in turn. Eventually, we must reach a point where each of our factors is a prime. But this is only part of the problem, and the easy part at that. How do we know that there is only one way to express $$n$$ as a product of primes?

This is where we use that last fact. Suppose we have two prime factorizations of $$n$$, say $n=p_1p_2 \ldots p_k = q_1q_2 \ldots q_l$ Then $$p_1$$ is a factor of $$q_1q_2 \ldots q_l$$. If $$p_1$$ is not a factor of $$q_1$$ (if it were, we would have $$p_1=q_1$$, since both are prime), then $$p_1$$ is a factor of $$q_2\ldots q_l$$, and by repeating this argument we find that $$p_1$$ must be one of the $$q_i$$. Then divide $$n$$ by $$p_1$$ and repeat the argument. Eventually we find that each of the prime factors in either factorization must also be a factor in the other factorization, so the two are actually the same (except possibly in a different order).

## What is a prime number again?

I rushed past this idea up at the top, with the brief statement that a prime number is one with exactly two factors, itself and $$1$$. This isn't quite satisfactory if we allow negative numbers, so let's be a little more general than that, and try to give a definition which will work in more generality. We say that a number is a unit if it has a multiplicative inverse, and then that a factorization is proper if no factor is a unit. Then a prime is a number with no proper factorization. So this works when we allow negative numbers, since although we can write $$3=1\times 3 = -1 \times -3$$, in each case one of the factors is a unit.

But this property wasn't really what we used to prove the fundamental theorem of arithmetic. We used this to show that any integer has a prime factorization, but not to show that it was unique. For that part of the argument, the important fact was that if a product is a multiple of a prime, then at least one of the factors in the product must be a multiple of the prime. In fact there are two ways of characterizing prime numbers. One characterization is that $$p$$ is prime if it has no proper factorization. The other is that $$p$$ is prime if, whenever $$ab$$ is a multiple of $$p$$ then at least one of $$a$$ and $$b$$ must be a multiple of $$p$$. We saw above that every prime has this property: we just have to make sure that any number with this property is prime. But if $$n$$ has a proper factorization into $$ab$$ then $$n$$ divides into $$ab$$, but since $$n$$ is larger than both $$a$$ and $$b$$ neither is a multiple of $$n$$, so no number with a proper factorization can satisfy this property. But it doesn't have to be like that.
## Another number system

The integers aren't the only number system we could work with. For example, we might consider the Gaussian integers $$\mathbb{Z}[i]$$, i.e. the complex numbers whose real and imaginary parts are both integers. More interestingly, we could, and we will, do something rather more exotic. We consider numbers of the form $$m+n\sqrt{-5}$$, where $$m$$ and $$n$$ are both integers, and call this collection $$\mathbb{Z}[\sqrt{-5}]$$. Note that as for the integers, the units here are just $$\pm 1$$.

We now get something happening which cannot happen with the integers. First, notice that $$|m+n\sqrt{-5}| = \sqrt{m^2+5n^2}$$, and since these objects are complex numbers, if $$M,N \in \mathbb{Z}[\sqrt{-5}]$$, $$|MN|=|M||N|$$. Then in this number system, just as in $$\mathbb{Z}$$, neither $$2$$ nor $$3$$ has a proper factorization; and neither do $$1 \pm \sqrt{-5}$$. You can check this in each case by writing down all the numbers with modulus less than the target value, and checking that you can't get the target by multiplying any collection of them together (unless one is a unit). But $2 \times 3 = (1+\sqrt{-5})(1-\sqrt{-5}) = 6.$ So in this number system we see that $$1+\sqrt{-5}$$ is a factor of $$2 \times 3$$ but it is not a factor of either $$2$$ or $$3$$. So in this case we do not get a unique factorization. It can't really be obvious that there is only one way to factorize a number into a product of terms which can't be factorized any further, because there are number systems where it doesn't happen!

## Prime versus irreducible

In this more general context, it is necessary to split up our two notions of prime, because they no longer coincide. The standard way to do this is to call a number with no proper factorization an irreducible number, and to define a prime to be a number which has the property that if it is a factor of $$ab$$ then it must be a factor of at least one of $$a$$ or $$b$$. (This is why on your first encounter with abstract algebra, the definition of prime can seem to have nothing to do with the familiar prime numbers.) Then any prime number must be irreducible, but a number can be irreducible without being prime. For each number to have a unique prime factorization, it must also be the case that every irreducible number is prime.

# Moving on

This may have already been more than enough for you. But if not, there are lots of extensions to play with here. We could play the game again with the Gaussian integers, and it turns out that again there is a unique factorization into irreducibles. And we could consider variations, such as numbers of the form $$m+n\sqrt{-k}$$ where $$k$$ is a positive integer. How does the choice of $$k$$ affect the behaviour? We already know that when $$k=5$$, unique factorization fails. But is this typical, or does it happen for other values? Is there a property of $$k$$ that we can calculate that tells us?

We could try with polynomials with real coefficients: it turns out that division of polynomials is similar enough to division of integers that we have a division theorem and a Euclidean algorithm, that prime and irreducible coincide, and that any polynomial can be expressed uniquely as a product of primes (which here means linear factors or quadratics with no roots). We could ask how things change if we take different types of coefficients, say complex, or rational, or integer. These two alone give a large toy box to play with and explore these ideas, but they're only a start. Explore, investigate, make up your own variations.
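As a footnote to the $$\mathbb{Z}[\sqrt{-5}]$$ example above, a few lines of Python (an editorial addition) verify the norm bookkeeping. The norm $$N(m+n\sqrt{-5}) = m^2+5n^2$$ is multiplicative, no element has norm 2 or 3, and the two factorizations of 6 really do multiply out to the same number:

```python
# Norm in Z[sqrt(-5)]: N(m + n*sqrt(-5)) = m^2 + 5n^2 (the squared modulus).
def norm(m, n):
    return m * m + 5 * n * n

print(norm(2, 0), norm(3, 0), norm(1, 1), norm(1, -1))   # 4 9 6 6

# No element has norm 2 or 3, which is why norm-4, -6 and -9 elements
# are irreducible: a proper factor would need norm 2 or 3.
print([ (m, n) for m in range(-3, 4) for n in range(-2, 3)
        if norm(m, n) in (2, 3) ])                        # []

# (a + b*sqrt(-5))(c + d*sqrt(-5)) = (ac - 5bd) + (ad + bc)*sqrt(-5):
a, b, c, d = 1, 1, 1, -1
print((a * c - 5 * b * d, a * d + b * c))                 # (6, 0), i.e. 6 = 2 * 3
```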
If all else fails, look in a book on abstract algebra or ask the internet. # Calculating gcd redux So, after all that, how should you actually calculate the gcd of two integers? The answer is, as it almost always is, "it depends". If the numbers are small enough to be factorized easily, using the prime factorization is probably the quicker of the two. If they are large enough that factorization is a pain, then Euclid's algorithm will do the job with less pain. And I think it's worth calculating a few both ways just to see how the same result is arrived at by such very different routes. 1. How come that r−qb=a ? 2. Incompetence, sir or madam, sheer incompetence. Fixed now, and thanks for picking it up. 3. No problem, thanks for nice article. ;) BTW, I think here is another typo: whenever ab is a multiple of p then at least one of a and b must be a multiple of b. Cheers. Tomas 4. Right again, of course, and also fixed. Glad you enjoyed it.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026583433151245, "perplexity": 110.08233327554154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578624217.55/warc/CC-MAIN-20190424014705-20190424040705-00321.warc.gz"}
http://mathhelpforum.com/algebra/273268-prove-9-divides-n-6-17-when-gcd-9-n-1-a.html
# Thread: prove that 9 divides n^6+17 when gcd(9,n)=1 2. ## Re: prove that 9 divides n^6+17 when gcd(9,n)=1 Well you should be able to show what you did for the FIRST step of an induction proof. Hard to tell if you are on the right path if you don't show even your first step. EDIT: Are you required to use induction? 3. ## Re: prove that 9 divides n^6+17 when gcd(9,n)=1 The divisors of 9 are 1, 3, and 9. Saying that "gcd(9, n)= 1" means that n is not a multiple of 3. So n= 3k+ 1 or 3k+ 2. If n= 3k+ 1, you can write this as $(3k+ 1)^6+ 17$ and if n= 3k+ 2 as $(3k+ 2)^6+ 17$ and then do two inductions on k. 4. ## Re: prove that 9 divides n^6+17 when gcd(9,n)=1 Originally Posted by HallsofIvy The divisors of 9 are 1, 3, and 9. Saying that "gcd(9, n)= 1" means that n is not a multiple of 3. So n= 3k+ 1 or 3k+ 2. If n= 3k+ 1, you can write this as $(3k+ 1)^6+ 17$ and if n= 3k+ 2 as $(3k+ 2)^6+ 17$ And this. 5. ## Re: prove that 9 divides n^6+17 when gcd(9,n)=1 And here I did that by hand! (Whenever I see the instructions "using technology", I can't help but think "isn't a pencil technology?" They don't grow on trees!) 6. ## Re: prove that 9 divides n^6+17 when gcd(9,n)=1 Do you really need the inductions? \displaystyle \begin{align*} \left( 3\,n + 1 \right) ^6 + 17 &= \left( 3\,n \right) ^6 + 6\,\left( 3\,n \right) ^5 \left( 1 \right) ^1 + 15\,\left( 3\,n \right) ^4 \left( 1 \right) ^2 + 20\,\left( 3\,n \right) ^3\left( 1 \right) ^3 + 15\,\left( 3\,n \right) ^2\left( 1 \right) ^4 + 6\,\left( 3\,n \right) ^1\,\left( 1 \right) ^5 + 1^6 + 17 \end{align*} Upon simplification you can clearly see the factor of 9... 7. ## Re: prove that 9 divides n^6+17 when gcd(9,n)=1 Originally Posted by HallsofIvy And here I did that by hand! (Whenever I see the instructions "using technology", I can't help but think "isn't a pencil technology?" They don't grow on trees!) That is known as the cellulose graphite method. 8. ## Re: prove that 9 divides n^6+17 when gcd(9,n)=1 Originally Posted by plato that is known as the cellulose graphite method. rofl
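For the record (this note is an editorial addition, not part of the thread), the claim also follows from Euler's theorem: $\varphi(9)=6$, so $n^6 \equiv 1 \pmod 9$ whenever $\gcd(9,n)=1$, and hence $n^6+17 \equiv 18 \equiv 0 \pmod 9$. It is also easy to sanity-check by machine, since $n^6+17 \bmod 9$ depends only on $n \bmod 9$:

```python
# Exhaustive check over residues mod 9: (n + 9k)^6 + 17 is congruent
# to n^6 + 17 mod 9, so checking n = 0..8 covers all integers.
from math import gcd

assert all((n**6 + 17) % 9 == 0 for n in range(9) if gcd(9, n) == 1)
print("verified: 9 | n^6 + 17 whenever gcd(9, n) = 1")
```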
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9938370585441589, "perplexity": 2301.273960943246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188550.58/warc/CC-MAIN-20170322212948-00337-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/dont-understand-the-taylor-expansion.581845/
# Don't understand the Taylor expansion?

1. Feb 27, 2012

### Lengalicious

Not sure which forum this should have gone under; anyway, can someone who really understands it explain it to me in as simple terms as they can? From what I'm getting, it approximates something for a function or something? No idea.

2. Feb 27, 2012

### zhermes

3. Feb 27, 2012

### mathman

It belongs in either general math or calculus. To understand Taylor series you need to understand calculus at least on an elementary level. It is very hard to explain otherwise.

4. Mar 1, 2012

### Claude Bile

A Taylor series basically calculates a bunch of derivatives at some point in a parameter space and then extrapolates them to other points close to the initial point in the space. The more derivatives you figure out, the better you can predict what some function (of which said derivatives are taken) will be at some distance from the initial point.

Claude.
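Claude's description can be made concrete in a few lines of Python (an editorial illustration, not from the thread): build a polynomial from the derivatives of sin at the point 0 and watch the prediction improve as more derivatives are used:

```python
import math

# Taylor polynomial of sin about x0 = 0. The k-th derivative of sin at 0
# cycles through 0, 1, 0, -1 (sin, cos, -sin, -cos evaluated at 0).
def sin_taylor(x, degree):
    total = 0.0
    for k in range(degree + 1):
        deriv = [0.0, 1.0, 0.0, -1.0][k % 4]
        total += deriv * x**k / math.factorial(k)
    return total

for d in (1, 3, 5, 7):
    print(d, sin_taylor(1.0, d), math.sin(1.0))   # approximation vs true value
```

Each extra derivative adds one more term, and the approximation at x = 1 closes in on sin(1), which is exactly the "more derivatives, better prediction further from the initial point" behaviour described above.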
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9440546631813049, "perplexity": 1131.9452646890838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687324.6/warc/CC-MAIN-20170920142244-20170920162244-00442.warc.gz"}
https://astarmathsandphysics.com/a-level-maths-notes/s2/3772-the-poisson-distribution.html
## The Poisson Distribution

The Poisson distribution models a situation in which events happen at a certain rate: so many accidents on this stretch of road per month, or so many misprints in this book per page. It is written $Po(\lambda)$, where $\lambda$ is the mean number of events in a certain time period. The Poisson distribution has the very useful feature that it is scalable, so that if you double the time period, you double the expected number of events. Since the Poisson distribution has only one parameter, the expected or mean number of events $\lambda$, this is a very useful feature.

The Poisson distribution is defined by $P(X=x)=\frac{e^{-\lambda}\lambda^x}{x!}$. It may be used as in the following examples.

Example: On a stretch of motorway, accidents occur at a rate of 0.9 per month.

a) Show that the probability of no accidents in the next month is 0.407, to 3 significant figures.

b) Find the probability of exactly 2 accidents occurring in the next 6-month period.

c) Find the probability of at least two accidents in the next six months.

a) $P(X=0)=e^{-0.9}=0.407$ to 3 sf.

b) In 1 month we expect 0.9 accidents, so in 6 months we expect 6 × 0.9 = 5.4 accidents. The distribution becomes $Po(5.4)$. Using this distribution we find $P(X=2)=\frac{e^{-5.4}\times 5.4^2}{2!}=0.06585$ to 4 sf.

c) $P(X \ge 2)=1-P(X=0)-P(X=1)=1-e^{-5.4}-5.4e^{-5.4}=0.9711$ to 4 dp.

Sometimes two distributions are combined. The probability of no accidents in a month is 0.407. Suppose then we need to find the probability of having exactly 3 months in the next year with no accidents. The probability 0.407 is fixed, and the number of months, n, is 12. Of course now it is a binomial distribution, $B(12, 0.407)$. For a binomial distribution, $P(X=3)=\binom{12}{3}\times 0.407^3 \times 0.593^9 = 0.1345$ to 4 dp.

This sort of thing is actually quite common, and means that every situation should be analysed carefully. It is not always the case that a single distribution should be used throughout for each question.
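The worked values above are easy to confirm with scipy, if it is available (this check is an editorial addition, not part of the original notes):

```python
from scipy.stats import poisson, binom

print(poisson.pmf(0, 0.9))       # ~0.4066, part (a)
print(poisson.pmf(2, 5.4))       # ~0.0659, part (b)
print(1 - poisson.cdf(1, 5.4))   # ~0.9711, part (c)
print(binom.pmf(3, 12, 0.407))   # ~0.1345, combined Poisson/binomial example
```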
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8906919360160828, "perplexity": 1176.216482162445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00430.warc.gz"}
https://www.physicsforums.com/threads/1st-order-differential-equation.164490/
# 1st order differential equation

• #1 59 0

## Homework Statement

(Can someone please check my work? Bear with me, this is my first time using LaTeX on this forum...)

Find the general solution to the first order ODE $$y'-y=e^x$$ by substituting a series $$y= \sum_{n=0}^\infty a_n x^n$$ about $$x_0=0$$, finding the recurrence relation for a_n, and solving to find an expression for the general term $$a_n$$ in terms of $$a_0$$. What is the radius of convergence of the solution?

## Homework Equations

$$e^x = \sum_{n=0}^\infty \frac{x^n}{n!}$$

## The Attempt at a Solution

I plugged y into the DE: $$\sum_{n=1}^\infty a_n n x^{n-1} - \sum_{n=0}^\infty a_n x^n = \sum_{n=0}^\infty \frac{x^n}{n!}$$ Then I made all series start at n=0: $$\sum_{n=0}^\infty a_{n+1} (n+1) x^n - \sum_{n=0}^\infty a_n x^n - \sum_{n=0}^\infty \frac{x^n}{n!} = 0$$ Bringing together like terms: $$\sum_{n=0}^\infty x^n (a_{n+1} (n+1) - a_n - \frac{1}{n!}) = 0$$ Set coefficients equal to zero: $$a_{n+1} (n+1) - a_n - \frac{1}{n!} = 0$$ Solving for the recurrence relation: $$a_{n+1} = \frac{n!a_n +1}{(n+1)!}$$ After plugging in n=0,1,2,3,4... the pattern is: $$a_n = \frac{a_0 + n}{n!}$$ Therefore, the solution is: $$y = a_0 + (a_0 + 1)x+(\frac{a_0 + 2}{2})x^2 + (\frac{a_0 + 3}{6})x^3 +...$$

Last edited:

• #2 HallsofIvy Homework Helper 41,808 933

You've done remarkably well so far. The standard way to determine the radius of convergence of a power series is to use the "ratio" test. If $a_n= (a_0+ n)/n!$, then $a_{n+1}= (a_0+ n+1)/(n+1)!$. The ratio $a_{n+1}x^{n+1}/a_nx^n$ is $$\frac{a_0+ n+ 1}{(n+1)!}\frac{n!}{a_0+ n}x= \frac{a_0+ n+ 1}{(n+1)(a_0+ n)}x$$ What is the limit of that as n goes to infinity? What values of x make it less than 1?

• #3 Dick Homework Helper 26,258 618

You can easily check your result by realizing that the differential equation is pretty easy to solve. It's y=x*e^x+a0*e^x. Expand this and compare with your solution. For radius of convergence, how about trying a ratio test?

• #4 59 0

HallsofIvy, the limit of the ratio test as n goes to infinity is zero, so x can be infinite; the radius of convergence is infinite. Could you say the radius of convergence is infinite, as all parts of the DE have infinite radii of convergence?
(i.e., 1*y'-1*y=e^x; 1 and e^x have infinite radii of convergence)?

Dick, what method did you use to solve the differential equation? By looking at the equation, I could've guessed y=x*e^x + c, but I can't figure out how to do it with any of the methods we've learned in my intro to DE class.

• #5 Dick Homework Helper 26,258 618

You can't say y has an infinite radius of convergence until you know what y is. x*y'+y=0 has the solution 1/x, which is not going to have infinite radius of convergence solutions - even though all coefficients do. And yes, you caught me. I guessed. The homogeneous part is easy enough (it's linear with constant coefficients) - and I always try guessing the inhomogeneous part before going to more general solutions like variation of parameters.

• #6 HallsofIvy Homework Helper 41,808 933

I'm sorry, Dick. Where did you get xy'+ y = 0? The given equation is y'- y= e^x, which has solution, as you said, y(x)= Ce^x+ xe^x. Written as a power series, that has, of course, radius of convergence infinity - which was what smithg86 said.

• #7 59 0

I think Dick was using xy' + y = 0 as a counterexample to what I said about guessing about the radius of convergence of the solution based on the coefficients of the original equation. I looked in my textbook, and I found something about the radius of convergence for a 2nd order DE. I was trying to do something similar with this 1st order DE. Here's what my textbook says:

"If $$x_0$$ is an ordinary point of the differential equation: $$P(x)y'' + Q(x)y' + R(x)y = 0$$ that is, if p=Q/P and q = R/P are analytic at $$x_0$$, then the general solution of the differential equation is y = (summation that I don't want to write out) = $$a_0 y_1 (x) + a_1 y_2 (x)$$. The radius of convergence for each of the series solutions $$y_1$$ and $$y_2$$ is at least as large as the minimum of the radii of convergence of the series for p and q."

Now that I think about it, does the above property not hold for 1st order DEs because P(x)=0, and therefore p and q are not analytic at $$x_0$$?

Last edited:

• #8 Dick Homework Helper 26,258 618

Yes, it was just intended as a counterexample. I don't think it's special because it's first order. It's just because the leading coefficient is vanishing. It does make good sense that if the leading function is nonzero and the subleading functions don't have singularities in some radius R then you can trust the solution out to that radius. Just think about integrating it numerically. So you don't necessarily have to know y before you can conclude something about its radius of convergence. I stand corrected.
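The closed form the thread converges on is easy to confirm with sympy (this snippet is an editorial addition, assuming sympy is installed); its series expansion reproduces the coefficients $a_n = (a_0+n)/n!$ found above:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y' - y = e^x exactly: sympy returns y(x) = (C1 + x)*exp(x).
print(sp.dsolve(sp.Eq(y(x).diff(x) - y(x), sp.exp(x)), y(x)))

# Expand (a0 + x)*exp(x) to compare with the hand-computed series:
a0 = sp.symbols('a0')
print(sp.series((a0 + x) * sp.exp(x), x, 0, 4))
# a0 + (a0 + 1)*x + (a0 + 2)/2*x**2 + (a0 + 3)/6*x**3 + O(x**4)
```

Since the solution is entire (a polynomial times $e^x$), the infinite radius of convergence found by the ratio test is exactly what one should expect.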
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.942146897315979, "perplexity": 523.8772254887102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370507738.45/warc/CC-MAIN-20200402173940-20200402203940-00089.warc.gz"}
https://www.physicsforums.com/threads/cross-product-proof-1.584454/
# Cross Product Proof #1

• #1 49 0

The cross product of $\vec{A}\times\vec{B}$ with $\vec{C}$ is given by the following equation: $(\vec{A}\times\vec{B})\times\vec{C}=(\vec{A}\cdot\vec{C})\vec{B}-(\vec{B}\cdot\vec{C})\vec{A}$

Well, as I'm sure you know, proving something is true is different from proving how something is true. In this proof, I will not only prove that it holds up but also demonstrate how it comes about.

What is given is $\vec{A}$, $\vec{B}$ and $\vec{C}$, which are vectors belonging to $\mathbb{R}^3$, or mathematically: $\vec{A},\vec{B},\vec{C}\in\mathbb{R}^3$.

Let's define the coordinates of $\vec{A}$, $\vec{B}$ and $\vec{C}$ as the following: $\vec{A}=(a_1,a_2,a_3)$, $\vec{B}=(b_1,b_2,b_3)$, $\vec{C}=(c_1,c_2,c_3)$.

Note that by the definition of the cross product, $\vec{A}\times\vec{B}$ is simultaneously perpendicular to $\vec{A}$ and $\vec{B}$, or: $(\vec{A}\times\vec{B})\cdot\vec{A}=0$ and $(\vec{A}\times\vec{B})\cdot\vec{B}=0$.

Let $\vec{A}$ and $\vec{B}$ be linearly independent, so that together they define a certain plane $\Omega$. Using the same chain of thought, $(\vec{A}\times\vec{B})\times\vec{C}$ is simultaneously perpendicular to $\vec{A}\times\vec{B}$ and $\vec{C}$, or: $((\vec{A}\times\vec{B})\times\vec{C})\cdot(\vec{A}\times\vec{B})=0$ and $((\vec{A}\times\vec{B})\times\vec{C})\cdot\vec{C}=0$.

Hence, we can conclude that $(\vec{A}\times\vec{B})\times\vec{C}$ is a linear combination of $\vec{A}$ and $\vec{B}$, and so: $(\vec{A}\times\vec{B})\times\vec{C}=\lambda\vec{A}+\mu\vec{B}\in\Omega$

We can start by noting the resemblance of this equation with the one given, in which $\lambda$ is a scalar given by $-\vec{B}\cdot\vec{C}$ and $\mu$ by $\vec{A}\cdot\vec{C}$, as we'll soon show.

For the next step, let's write the coordinates of $(\vec{A}\times\vec{B})\times\vec{C}$ with respect to $\lambda$ and $\mu$ on the LHS and with respect to the coordinates of $\vec{A}$, $\vec{B}$ and $\vec{C}$ on the RHS, like so (skipping intermediary steps):

$(a_1\lambda+b_1\mu,\ a_2\lambda+b_2\mu,\ a_3\lambda+b_3\mu)=(c_3(a_3b_1-a_1b_3)-c_2(a_1b_2-a_2b_1),\ c_1(a_1b_2-a_2b_1)-c_3(a_2b_3-a_3b_2),\ c_2(a_2b_3-a_3b_2)-c_1(a_3b_1-a_1b_3))$

Now we have three equations with only two unknowns ($\lambda$, $\mu$), which algebraically means that there's no degree of freedom in the system. These equations may not be easy on the eyes, but with strong motivation they're feasible. They go as follows (from the x and y coordinates):

$\lambda=c_3(a_3b_1/a_1-b_3)-c_2(b_2-a_2b_1/a_1)-\mu(b_1/a_1)$

$\Rightarrow c_1(a_1b_2-a_2b_1)-c_3(a_2b_3-a_3b_2)=a_2c_3(a_3b_1/a_1-b_3)-a_2c_2(b_2-a_2b_1/a_1)-\mu(a_2b_1/a_1)+\mu b_2$

$\Leftrightarrow \mu(b_2-a_2b_1/a_1)=-c_3(a_2a_3b_1/a_1-b_3a_2+a_2b_3-a_3b_2)+a_2c_2(b_2-a_2b_1/a_1)+c_1(a_1b_2-a_2b_1)$

$\Rightarrow \mu=a_1c_1+a_2c_2+a_3c_3=\vec{A}\cdot\vec{C}$

Replacing $\mu$ in the first line, we get: $\lambda=c_3(a_3b_1/a_1-b_3)-c_2(b_2-a_2b_1/a_1)-(a_1c_1+a_2c_2+a_3c_3)(b_1/a_1)$, which computed gives: $\lambda=-(b_1c_1+b_2c_2+b_3c_3)=-\vec{B}\cdot\vec{C}$

Note that initially we took $\vec{A}$ and $\vec{B}$ to be linearly independent. If instead $\vec{A}=\kappa\vec{B}$, then $(\vec{A}\times\vec{B})\times\vec{C}=\kappa(\vec{B}\times\vec{B})\times\vec{C}=(0,0,0)$, and the right-hand side vanishes as well, since $(\vec{A}\cdot\vec{C})\vec{B}-(\vec{B}\cdot\vec{C})\vec{A}=\kappa(\vec{B}\cdot\vec{C})\vec{B}-\kappa(\vec{B}\cdot\vec{C})\vec{B}=\vec{0}$, so the identity still holds.

If you find any incongruence in this resolution (or any doubt), let me know.

Last edited:

## Answers and Replies

• #2 tiny-tim Homework Helper 25,832 251

Welcome to PF! Hi Mathoholic!! Welcome to PF!

Yes, that's fine … you're saying that it has to be in the plane of A and B, and (if they're not parallel) that means it has to be a linear combination of A and B. But it also has to be perpendicular to C, so (aA + bB).C = 0, so a/b = -(B.C)/(A.C).

• #3 49 0

Yes, I understood your simplification.
But there's a reason I chose the first equation and not the second one, mainly because the second equation doesn't quite tell you what $\lambda$ and $\mu$ are; it only shows that:

$$\lambda/\mu=-(\vec{B}\cdot\vec{C})/(\vec{A}\cdot\vec{C})$$

The factor $(-1)$ could come either from $\vec{B}\cdot\vec{C}$ or from $\vec{A}\cdot\vec{C}$, and that's significant for the result. I only assumed that the vector belongs to the plane $\Omega$ because it is equipollent to one that does. Finally, when you have:

$$\alpha/\beta=4$$

you cannot assume that $\alpha$ is 4 and that $\beta$ is 1. There are $\infty^1$ possibilities for that ratio to occur (degree of freedom: 1). So, in the first case there is the possibility that:

$$\lambda=\pm\vec{A}\cdot\vec{C}\pm1\ \wedge\ \mu=\pm\vec{B}\cdot\vec{C}\pm1\quad\text{(alternately)}$$
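As a quick numerical spot-check of the identity derived above, here is a small sketch; it is not part of the original thread, the test vectors are arbitrary, and `numpy` is assumed to be available.

```python
import numpy as np

# Arbitrary test vectors (any non-degenerate choice works).
A = np.array([1.0, -2.0, 3.0])
B = np.array([4.0, 0.5, -1.0])
C = np.array([-2.0, 1.0, 5.0])

# Left-hand side: (A x B) x C.
lhs = np.cross(np.cross(A, B), C)

# Right-hand side: (A . C) B - (B . C) A.
rhs = np.dot(A, C) * B - np.dot(B, C) * A

print(lhs, rhs)              # identical up to floating-point error
assert np.allclose(lhs, rhs)
```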
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9139704704284668, "perplexity": 1349.0802399337497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400209999.57/warc/CC-MAIN-20200923050545-20200923080545-00428.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-roots-real-and-imaginary-of-y-2x-2-7x-2-using-the-quadratic-
 Algebra Topics # How do you find the roots, real and imaginary, of y= 2x^2 - 7x + 2 using the quadratic formula? Aug 29, 2017 #### Answer: See a solution process below: #### Explanation: The quadratic formula states: For $\textcolor{red}{a} {x}^{2} + \textcolor{blue}{b} x + \textcolor{green}{c} = 0$, the values of $x$ which are the solutions to the equation are given by: $x = \frac{- \textcolor{blue}{b} \pm \sqrt{{\textcolor{blue}{b}}^{2} - \left(4 \textcolor{red}{a} \textcolor{green}{c}\right)}}{2 \cdot \textcolor{red}{a}}$ Substituting: $\textcolor{red}{2}$ for $\textcolor{red}{a}$, $\textcolor{blue}{- 7}$ for $\textcolor{blue}{b}$, and $\textcolor{green}{2}$ for $\textcolor{green}{c}$ gives: $x = \frac{- \textcolor{blue}{\left(- 7\right)} \pm \sqrt{{\textcolor{blue}{\left(- 7\right)}}^{2} - \left(4 \cdot \textcolor{red}{2} \cdot \textcolor{green}{2}\right)}}{2 \cdot \textcolor{red}{2}}$ $x = \frac{7 \pm \sqrt{49 - 16}}{4}$ $x = \frac{7 \pm \sqrt{33}}{4}$
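To double-check the arithmetic, here is a small sketch (added here, not part of the original answer) that evaluates both roots and substitutes them back into the quadratic:

```python
import math

a, b, c = 2, -7, 2

disc = b**2 - 4*a*c                  # 49 - 16 = 33
roots = [(-b + s * math.sqrt(disc)) / (2*a) for s in (+1, -1)]

for x in roots:
    residual = a*x**2 + b*x + c      # should vanish up to rounding
    print(f"x = {x:.6f}, 2x^2 - 7x + 2 = {residual:.2e}")
```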
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8351008892059326, "perplexity": 1727.6729709949827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999740.32/warc/CC-MAIN-20190624211359-20190624233359-00156.warc.gz"}
https://www.arxiv-vanity.com/papers/1909.12852/
# Fermionic neural-network states for ab-initio electronic structure

Kenny Choo Department of Physics, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland    Antonio Mezzacapo IBM T.J. Watson Research Center, Yorktown Heights, NY, USA    Giuseppe Carleo Center for Computational Quantum Physics, Flatiron Institute, 162 5th Avenue, New York, NY 10010, USA

###### Abstract

Neural-network quantum states have been successfully used to study a variety of lattice and continuous-space problems. Despite a great deal of general methodological development, representing fermionic matter is however still an early research activity. Here we present an extension of neural-network quantum states to model interacting fermionic problems. Borrowing techniques from quantum simulation, we directly map fermionic degrees of freedom to spin ones, and then use neural-network quantum states to perform electronic structure calculations. For several diatomic molecules in a minimal basis set, we benchmark our approach against widely used coupled cluster methods, as well as many-body variational states. On the test molecules, we recover almost the entirety of the correlation energy. We systematically improve upon coupled cluster methods and Jastrow wave functions, reaching levels of chemical accuracy or better. Finally, we discuss routes for future developments and improvements of the methods presented.

###### pacs: 03.65.Aa, 03.67.-a, 03.67.Ac

### Introduction.-

Predicting the physical and chemical properties of matter from the fundamental principles of quantum mechanics is a central problem in modern electronic structure theory. In the context of ab-initio Quantum Chemistry (QC), a commonly adopted strategy to solve for the electronic wave-function is to discretize the problem on finite basis functions, expanding the full many-body state in a basis of anti-symmetric Slater determinants. Because of the factorial scaling of the determinant space, exact approaches that systematically consider all electronic configurations, such as the full configuration interaction (FCI) method, are typically restricted to small molecules and basis sets. A solution routinely adopted in the field is to consider systematic corrections over mean-field states. For example, in the framework of the coupled cluster (CC) method Coester and Kümmel (1960); Čížek (1966), higher levels of accuracy can be obtained by considering electronic excitations up to doublets, in CCSD, and triplets in CCSD(T). CC techniques are routinely adopted in QC electronic calculations, and they are often considered the "gold standard" in ab-initio electronic structure. Despite this success, the accuracy of CC is intrinsically limited in the presence of strong quantum correlations, in turn restricting the applicability of the method to regimes of relatively weak correlation. For strongly correlated molecules and materials, alternative, non-perturbative approaches have been introduced. Most notably, both stochastic and non-stochastic methods based on variational representations of many-body wave-functions have been developed and constantly improved over the past decades of research. Notable variational classes for QC are Jastrow-Slater wave-functions Jastrow (1955), correlated geminal wave-functions Casula and Sorella (2003), and matrix product states White (1992); White and Martin (1999); Chan and Sharma (2011).
Stochastic projection methods systematically improving upon variational starting points are for example the fixed-node Green's function Monte Carlo Anderson (1975) and constrained-path auxiliary field Monte Carlo Zhang and Krakauer (2003). The main limitations of these methods stem, directly or indirectly, from the choice of the variational form. For example, matrix-product states are extremely efficient in quasi one-dimensional systems, but suffer from exponential scaling when applied to larger dimensions. On the other hand, variational forms considered so far for higher dimensional systems typically rely on rigid variational classes and do not provide a systematic and computationally efficient way to increase their expressive power. To help overcome some of the limitations of existing variational representations, ideas leveraging the power of artificial neural networks (ANN) have recently emerged in the more general context of interacting many-body quantum matter. These approaches are typically based on compact, variational parameterizations of the many-body wave-function in terms of ANN Carleo and Troyer (2017). These approaches to fermionic problems are however comparatively less explored than for lattice spin systems. Two conceptually different implementations have been put forward. In the first, fermionic symmetry is encoded directly at the mean-field level, and ANNs are used as a positive-definite correlator function Nomura et al. (2017). The main limitation of this ansatz is that the nodal structure of the wave function is fixed, and the exact ground state cannot, in principle, be achieved, even in the limit of an infinitely large ANN. The second method is to use ANNs to parametrize permutation-symmetric many-body fermionic orbitals Ruggeri et al. (2018); Luo and Clark (2019), in the spirit of "backflow" many-body variational wave functions Feynman and Cohen (1956); Tocchio et al. (2008), and only very recently applied to electronic structure Pfau et al. (2019); Hermann et al. (2019). In this Article we provide an alternative representation of fermionic many-body quantum systems based on a direct encoding of electronic configurations. This task is achieved by mapping the fermionic problem onto an equivalent spin problem, and then solving the latter with spin-based neural-network quantum states. Using techniques from quantum information, we analyze different model-agnostic fermion-to-spin mappings. We show results for several diatomic molecules in minimal Gaussian basis sets, where our approach reaches chemical accuracy or better. The current challenges in extending the method to larger basis sets and molecules are also discussed.

### Electronic structure on spin systems.-

We consider many-body molecular fermionic Hamiltonians in the second quantization formalism,

$$H=\sum_{i,j}t_{ij}c^\dagger_i c_j+\sum_{i,j,k,m}u_{ijkm}c^\dagger_i c^\dagger_k c_m c_j, \qquad (1)$$

where we have defined fermionic annihilation and creation operators satisfying the anticommutation relation $\{c_i, c^\dagger_j\}=\delta_{ij}$ on the fermionic modes, and one- and two-body integrals $t_{ij}$ and $u_{ijkm}$. The Hamiltonian (1) can be mapped to interacting spin models via the Jordan-Wigner Wigner and Jordan (1928) mapping, or the more recent parity or Bravyi-Kitaev Bravyi and Kitaev (2002) encodings, which have been developed in the context of quantum simulations.
These three encodings can all be expressed in the compact form

$$c_j\to\frac{1}{2}\prod_{i\in U(j)}\sigma^x_i\times\left(\sigma^x_j\prod_{i\in P(j)}\sigma^z_i-i\,\sigma^y_j\prod_{i\in R(j)}\sigma^z_i\right),\qquad c^\dagger_j\to\frac{1}{2}\prod_{i\in U(j)}\sigma^x_i\times\left(\sigma^x_j\prod_{i\in P(j)}\sigma^z_i+i\,\sigma^y_j\prod_{i\in R(j)}\sigma^z_i\right), \qquad (2)$$

where we have defined an update $U(j)$, parity $P(j)$ and remainder $R(j)$ set of spins, which depend on the particular mapping considered Seeley et al. (2012); Tranter et al. (2015), and $\sigma^{x,y,z}_i$ denote Pauli matrices acting on site $i$. In the familiar case of the Jordan-Wigner transformation, the update set is empty while the parity and remainder sets coincide and contain all preceding spins, and the mapping takes the simple form

$$c_j\to\left(\prod_{i=0}^{j-1}\sigma^z_i\right)\sigma^-_j,\qquad c^\dagger_j\to\left(\prod_{i=0}^{j-1}\sigma^z_i\right)\sigma^+_j, \qquad (3)$$

where $\sigma^\pm_j=(\sigma^x_j\pm i\sigma^y_j)/2$. For all the spin encodings considered, the final outcome is a spin Hamiltonian with the general form

$$H_q=\sum_{j=1}^{r}h_j\,\sigma_j, \qquad (4)$$

defined as a linear combination, with real coefficients $h_j$, of $r$ operators $\sigma_j$, each an $N$-fold tensor product of single-qubit Pauli operators. Additionally, under such mappings, there is a one-to-one correspondence between spin configurations and the original particle occupations. In the following, we will consider the interacting spin Hamiltonian (4) as a starting point for our variational treatment.

### Neural-network quantum states.-

Once the mapping is performed, we use neural-network quantum states (NQS) introduced in Carleo and Troyer (2017) to parametrize the ground state of the Hamiltonian (4). One conceptual interest of NQS is that, because of the flexibility of the underlying non-linear parameterization, they can be adopted to study both equilibrium Choo et al. (2018); Ferrari et al. (2019) and out-of-equilibrium Czischek et al. (2018); Fabiani and Mentink (2019); Hartmann and Carleo (2019); Nagy and Savona (2019); Vicentini et al. (2019); Yoshioka and Hamazaki (2019) properties of diverse many-body quantum systems. In this work we adopt a simple neural-network parameterization in terms of a complex-valued, shallow restricted Boltzmann machine (RBM) Smolensky (1986); Carleo and Troyer (2017). For a system of $N$ spins, the many-body amplitudes take the compact form

$$\Psi_M(\vec{\sigma};\mathcal{W})=e^{\sum_i a_i\sigma^z_i}\prod_{j=1}^{M}2\cosh\theta_j(\vec{\sigma}),\quad\text{where} \qquad (5)$$
$$\theta_j(\vec{\sigma})=b_j+\sum_i^N W_{ij}\sigma^z_i. \qquad (6)$$

Here, $\mathcal{W}=\{a_i,b_j,W_{ij}\}$ are complex-valued network parameters, and the expressivity of the network is determined by the hidden-unit density $\alpha=M/N$, where $M$ is the number of hidden units. The simple RBM ansatz can efficiently support volume-law entanglement Deng et al. (2017); Huang and Moore (2017); Chen et al. (2018); Levine et al. (2019), and it has been recently used in several applications Melko et al. (2019). One can then train the ansatz Eq. (5) with a variational learning approach known as Variational Monte Carlo (VMC), by minimizing the energy expectation value

$$E(\mathcal{W})=\frac{\langle\Psi_M|H_q|\Psi_M\rangle}{\langle\Psi_M|\Psi_M\rangle}. \qquad (7)$$

This expectation value can be evaluated using Monte Carlo sampling, using the fact that the energy (and, analogously, any other observable) can be written as

$$E(\mathcal{W})=\frac{\sum_{\vec{\sigma}}E_{\rm loc}(\vec{\sigma})\,|\Psi_M(\vec{\sigma})|^2}{\sum_{\vec{\sigma}}|\Psi_M(\vec{\sigma})|^2}, \qquad (8)$$

where we have defined the local energy

$$E_{\rm loc}(\vec{\sigma})=\sum_{\vec{\sigma}'}\frac{\Psi_M(\vec{\sigma}')}{\Psi_M(\vec{\sigma})}\,\langle\vec{\sigma}'|H_q|\vec{\sigma}\rangle. \qquad (9)$$

Given samples drawn from the distribution $|\Psi_M(\vec{\sigma})|^2$, the average over the samples gives an unbiased estimator of the energy. Note that the computational cost of evaluating the local energy depends largely on the sparsity of the Hamiltonian $H_q$. In generic QC problems, this cost scales in the worst case with the number of Hamiltonian terms, as compared to the linear scaling in typical condensed matter systems with local interactions. Sampling from $|\Psi_M(\vec{\sigma})|^2$ is performed using Markov chain Monte Carlo (MCMC), with a Markov chain constructed using the Metropolis-Hastings algorithm Hastings (1970).
Specifically, at each iteration, a configuration $\vec{\sigma}_{\rm prop}$ is proposed and accepted with probability

$$P(\vec{\sigma}_{k+1}=\vec{\sigma}_{\rm prop})=\min\left(1,\left|\frac{\Psi_M(\vec{\sigma}_{\rm prop})}{\Psi_M(\vec{\sigma}_k)}\right|^2\right). \qquad (10)$$

The samples then correspond to the configurations of the Markov chain, downsampled at a fixed interval. For the simulations done in this work, we typically use a large number of samples at each optimization step. Since the Hamiltonians we are interested in have an underlying particle conservation law, it is helpful to perform this sampling in the particle basis rather than the corresponding spin basis. The proposed configuration at each iteration then corresponds to a particle hopping between orbitals. Once a stochastic estimate of the expectation value is available, as well as its derivatives with respect to the parameters $\mathcal{W}$, the ansatz can be optimized using the stochastic reconfiguration method Sorella (1998); Sorella et al. (2007), closely related to the natural-gradient method used in machine learning applications Amari (1998); Carleo and Troyer (2017).

### Potential Energy surfaces.-

We first consider small molecules in a minimal basis set (STO-3G). We show in Fig. 1 the dissociation curves of the molecules studied, compared to CCSD and CCSD(T). It can be seen that on these small molecules in their minimal basis, the RBM is able to generate accurate representations of the ground states, and remarkably achieves an accuracy better than standard QC methods. To further illustrate the expressiveness of the RBM, we show in Fig. 2 the probability distribution of the most relevant configurations in the wavefunction. We contrast the RBM with configuration interaction limited to single and double excitations (CISD). In CISD, the Hilbert space is truncated to include only states which are at most two excitations away from the Hartree-Fock configuration. It is clear from the histogram that the RBM is able to capture correlations beyond double excitations.

### Alternative encodings.-

The above computations were done using the Jordan-Wigner mapping. To investigate the effect of the mapping choice on the performance of the RBM, we also performed select calculations using the parity and Bravyi-Kitaev mappings. All the aforementioned transformations require a number of spins equal to the number of fermionic modes in the model. However, the support of the Pauli operators in (4), i.e. the number of single-qubit Pauli operators in $\sigma_j$ that are different from the identity, depends on the specific mapping used. Jordan-Wigner and parity mappings have linear scaling in the number of modes, while the Bravyi-Kitaev encoding has a more favorable logarithmic scaling, due to the logarithmic spin support of the update, parity and remainder sets in (2). Note that one could in principle use generalized superfast mappings Setia et al. (2018), which have a support scaling logarithmically in the maximum degree of the fermionic interaction graph defined by (1). However, such a mapping is not practical for the models considered here, because the typically large degree of molecular interaction graphs makes the number of spins required for the simulation too large compared to the other model-agnostic mappings. While these encodings are routinely used as tools to study fermionic problems on quantum hardware Kandala et al. (2017), their use in classical computing has not been systematically explored so far. Since they yield differently structured many-body wave functions, it is then worth analyzing whether more local mappings can be beneficial for specific NQS representations.
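Before turning to the encoding comparison, it may help to see Eqs. (5)-(6) and (10) spelled out in code. The sketch below is an illustration written for this text (random parameters, single-spin-flip proposals), not the NetKet implementation actually used for the results:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 16                             # visible spins, hidden units

# Complex RBM parameters W = {a, b, W}, randomly initialized for the demo.
a = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
b = 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
W = 0.01 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))

def log_psi(sigma):
    """log Psi_M(sigma) for sigma in {-1,+1}^N, Eqs. (5)-(6)."""
    theta = b + sigma @ W
    return a @ sigma + np.sum(np.log(2.0 * np.cosh(theta)))

def metropolis_step(sigma):
    """Single-spin-flip Metropolis-Hastings update, Eq. (10)."""
    prop = sigma.copy()
    prop[rng.integers(N)] *= -1
    # Acceptance ratio |Psi(prop)/Psi(sigma)|^2 via log-amplitudes.
    ratio = np.exp(2.0 * np.real(log_psi(prop) - log_psi(sigma)))
    return prop if rng.random() < min(1.0, ratio) else sigma

sigma = rng.choice([-1, 1], size=N)
for _ in range(1000):
    sigma = metropolis_step(sigma)
print(sigma)
```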
In Fig. 3, we analyze the effect of the different encodings on the accuracy of the variational ground-state energy for a few representative diatomic molecules. At fixed computational resources and network expressivity, we typically find that the RBM ansatz can achieve consistent levels of accuracy, independent of the nature of the mapping type. While the Jordan-Wigner mapping allows us to achieve the lowest energies in those examples, the RBM is nonetheless able to efficiently learn the ground state also in the other representations, and chemical accuracy is achieved in all cases reported in Fig. 3.

### Sampling larger basis sets.-

The spin-based simulations of the QC problems studied here show a distinctive MCMC sampling behavior that is not usually found in lattice-model simulations of pure spin models. Specifically, the ground-state wave function of the diatomic molecules considered is typically sharply peaked around the Hartree-Fock state and neighboring excited states. This behavior is prominently shown also in Fig. 2, where the largest peaks are several orders of magnitude larger than the distribution tail. As a result of this structure, any uniform sampling scheme drawing states from the VMC distribution is bound to repeatedly draw the most dominant states, while only rarely sampling less likely configurations. To exemplify this peculiarity, we study the behavior of the ground-state energy as a function of the number of MCMC samples used at each step of the VMC optimization. We concentrate on the water molecule in the larger 6-31g basis. In this case, the Metropolis sampling scheme exhibits very low acceptance rates, as a consequence of the presence of the dominating states previously discussed. In Fig. 4, we vary the sample size and also compare MCMC sampling with exact sampling. We can see that the accuracy of the simulation depends quite significantly on the sample size, and that chemical accuracy is reached only for a relatively large number of samples. The large number of samples needed in this case, together with a very low acceptance probability for the Metropolis-Hastings algorithm, directly points to the inefficiency of uniform sampling from the Born distribution. At present, this represents the most significant bottleneck in the application of our approach to larger molecules and basis sets. This issue however is not a fundamental limitation, and alternatives to the standard VMC uniform sampling can be envisioned to efficiently sample less likely (yet important for chemical accuracy) states.

### Outlook.-

In this work we have shown that relatively simple shallow neural networks can be used to compactly encode, with high precision, the electronic wave function of model molecular problems in quantum chemistry. Our approach is based on the mapping between the fermionic quantum chemistry molecular Hamiltonian and corresponding spin Hamiltonians. In turn, the ground state of the spin models can be conveniently modeled with standard variational neural-network quantum states. On model diatomic molecules, we show that an RBM state is able to capture almost the entirety of the electronic excitations, improving on routinely used approaches such as CCSD(T) and the Jastrow ansatz. Several future directions can be envisioned. The distinctive peaked structure of the molecular wave function calls for the development of alternatives to uniform sampling from the Born probability. These developments will allow us to efficiently handle larger basis sets than the ones considered here.
Second, our study has explored only a very limited subset of possible neural-network architectures. Most notably, the use of deeper networks might prove beneficial for complex molecular complexes. Another very interesting matter for future research is the comparison of different neural-network-based approaches to quantum chemistry. Contemporary to this work, approaches based on antisymmetric wave-functions in continuous space have been presented Pfau et al. (2019); Hermann et al. (2019). These have the advantage that they already feature a full basis-set limit. However, the discrete basis approach has the advantage that boundary conditions and fermionic symmetry are much more easily enforced. As a consequence, simple-minded shallow networks can already achieve comparatively higher accuracy than the deeper and substantially more complex networks so far adopted in the continuum case.

###### Acknowledgements.

The Flatiron Institute is supported by the Simons Foundation. A.M. acknowledges support from the IBM Research Frontiers Institute. Neural-network quantum states simulations are based on the open-source software NetKet Carleo et al. (2019). Coupled cluster and configuration interaction calculations are performed using the PySCF package Sun et al. (2017). The mappings from fermions to spins are done using Qiskit Aqua Abraham et al. (2019). The authors acknowledge discussions with G. Booth, T. Berkelbach, M. Holtzmann, J. E. T. Smith, S. Sorella, J. Stokes, and S. Zhang.

## References

• H. Abraham, I. Y. Akhalwaya, G. Aleksandrowicz, T. Alexander, G. Alexandrowics, E. Arbel, A. Asfaw, C. Azaustre, P. Barkoutsos, G. Barron, L. Bello, Y. Ben-Haim, L. S. Bishop, S. Bosch, D. Bucher, et al. (2019) Qiskit: an open-source framework for quantum computing.
• S. Amari (1998) Natural Gradient Works Efficiently in Learning. Neural Computation 10 (2), pp. 251–276.
• J. B. Anderson (1975) A random-walk simulation of the Schrödinger equation: H3+. The Journal of Chemical Physics 63 (4), pp. 1499–1503.
• S. Bravyi and A. Kitaev (2002) Fermionic quantum computation. Annals of Physics 298 (1), pp. 210–226.
• G. Carleo, K. Choo, D. Hofmann, J. E. T. Smith, T. Westerhout, F. Alet, E. J. Davis, S. Efthymiou, I. Glasser, S. Lin, M. Mauri, G. Mazzola, C. B. Mendl, E. van Nieuwenburg, O. O'Reilly, H. Théveniaut, G. Torlai, F. Vicentini, and A. Wietek (2019) NetKet: A machine learning toolkit for many-body quantum systems. SoftwareX 10, pp. 100311.
• G. Carleo and M. Troyer (2017) Solving the quantum many-body problem with artificial neural networks. Science 355 (6325), pp. 602–606.
• M. Casula and S. Sorella (2003) Geminal wave functions with Jastrow correlation: A first application to atoms. The Journal of Chemical Physics 119 (13), pp. 6500–6511.
• G. K. Chan and S. Sharma (2011) The Density Matrix Renormalization Group in Quantum Chemistry. Annual Review of Physical Chemistry 62 (1), pp. 465–481.
• J. Chen, S. Cheng, H. Xie, L. Wang, and T. Xiang (2018) Equivalence of restricted Boltzmann machines and tensor network states.
Physical Review B 97 (8), pp. 085104.
• K. Choo, G. Carleo, N. Regnault, and T. Neupert (2018) Symmetries and Many-Body Excitations with Neural-Network Quantum States. Physical Review Letters 121 (16), pp. 167204.
• J. Čížek (1966) On the correlation problem in atomic and molecular systems. Calculation of wavefunction components in Ursell-type expansion using quantum-field theoretical methods. The Journal of Chemical Physics 45 (11), pp. 4256–4266.
• F. Coester and H. Kümmel (1960) Short-range correlations in nuclear wave functions. Nuclear Physics 17, pp. 477–485.
• S. Czischek, M. Gärttner, and T. Gasenzer (2018) Quenches near Ising quantum criticality as a challenge for artificial neural networks. Physical Review B 98 (2), pp. 024311.
• D. Deng, X. Li, and S. Das Sarma (2017) Quantum Entanglement in Neural Network States. Physical Review X 7 (2), pp. 021021.
• G. Fabiani and J. Mentink (2019) Investigating ultrafast quantum magnetism with machine learning. SciPost Physics 7 (1), pp. 004.
• F. Ferrari, F. Becca, and J. Carrasquilla (2019) Neural Gutzwiller-projected variational wave functions. arXiv:1906.00463.
• R. P. Feynman and M. Cohen (1956) Energy Spectrum of the Excitations in Liquid Helium. Physical Review 102 (5), pp. 1189–1204.
• M. J. Hartmann and G. Carleo (2019) Neural-Network Approach to Dissipative Quantum Many-Body Dynamics. Physical Review Letters 122 (25), pp. 250502.
• W. K. Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 (1), pp. 97–109.
• J. Hermann, Z. Schätzle, and F. Noé (2019) Deep neural network solution of the electronic Schrödinger equation. arXiv:1909.08423.
• Y. Huang and J. E. Moore (2017) Neural network representation of tensor network and chiral states. arXiv:1701.06246.
• R. Jastrow (1955) Many-Body Problem with Strong Forces. Physical Review 98 (5), pp. 1479–1484.
• R. Johnson. Computational chemistry comparison and benchmark database (CCCBDB). NIST Standard Reference Database (101).
• A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta (2017) Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature 549 (7671), pp. 242.
• Y. Levine, O. Sharir, N. Cohen, and A. Shashua (2019) Quantum Entanglement in Deep Learning Architectures. Physical Review Letters 122 (6), pp. 065301.
• D.
Luo and B. K. Clark (2019) Backflow Transformations via Neural Networks for Quantum Many-Body Wave Functions. Physical Review Letters 122 (22), pp. 226401.
• R. G. Melko, G. Carleo, J. Carrasquilla, and J. I. Cirac (2019) Restricted Boltzmann machines in quantum physics. Nature Physics 15 (9), pp. 887–892.
• A. Nagy and V. Savona (2019) Variational Quantum Monte Carlo Method with a Neural-Network Ansatz for Open Quantum Systems. Physical Review Letters 122 (25), pp. 250501.
• Y. Nomura, A. S. Darmawan, Y. Yamaji, and M. Imada (2017) Restricted Boltzmann machine learning for solving strongly correlated quantum systems. Physical Review B 96, pp. 205152.
• D. Pfau, J. S. Spencer, A. G. d. G. Matthews, and W. M. C. Foulkes (2019) Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks. arXiv:1909.02487.
• M. Ruggeri, S. Moroni, and M. Holzmann (2018) Nonlinear Network Description for Many-Body Quantum Systems in Continuous Space. Physical Review Letters 120 (20), pp. 205302.
• J. Seeley, M. Richard, and P. Love (2012) The Bravyi-Kitaev transformation for quantum computation of electronic structure. The Journal of Chemical Physics 137 (22), pp. 224109.
• K. Setia, S. Bravyi, A. Mezzacapo, and J. D. Whitfield (2018) Superfast encodings for fermionic quantum simulation. arXiv:1810.05274.
• P. Smolensky (1986) Parallel distributed processing: explorations in the microstructure of cognition, vol. 1. D. E. Rumelhart, J. L. McClelland, and the PDP Research Group (Eds.), pp. 194–281.
• S. Sorella, M. Casula, and D. Rocca (2007) Weak binding between two aromatic rings: Feeling the van der Waals attraction by quantum Monte Carlo methods. The Journal of Chemical Physics 127 (1), pp. 014105.
• S. Sorella (1998) Green Function Monte Carlo with Stochastic Reconfiguration. Physical Review Letters 80 (20), pp. 4558–4561.
• Q. Sun, T. C. Berkelbach, N. S. Blunt, G. H. Booth, S. Guo, Z. Li, J. Liu, J. D. McClain, E. R. Sayfutyarova, S. Sharma, S. Wouters, and G. K. Chan (2017) PySCF: the Python-based simulations of chemistry framework. Vol. 8.
• L. F. Tocchio, F. Becca, A. Parola, and S. Sorella (2008) Role of backflow correlations for the nonmagnetic phase of the t–t′ Hubbard model. Physical Review B 78 (4), pp. 041101.
• A. Tranter, S. Sofia, J. Seeley, M. Kaicher, J. McClean, R. Babbush, P. Coveney, F. Mintert, F. Wilhelm, and P. Love (2015) The Bravyi–Kitaev transformation: properties and applications. International Journal of Quantum Chemistry 115 (19), pp. 1431–1441.
• F. Vicentini, A. Biella, N. Regnault, and C. Ciuti (2019) Variational Neural-Network Ansatz for Steady States in Open Quantum Systems.
Physical Review Letters 122 (25), pp. 250503.
• S. R. White and R. L. Martin (1999) Ab initio quantum chemistry using the density matrix renormalization group. The Journal of Chemical Physics 110 (9), pp. 4127–4130.
• S. R. White (1992) Density matrix formulation for quantum renormalization groups. Physical Review Letters 69 (19), pp. 2863–2866.
• E. Wigner and P. Jordan (1928) Über das Paulische Äquivalenzverbot. Zeitschrift für Physik 47, pp. 631.
• N. Yoshioka and R. Hamazaki (2019) Constructing neural stationary states for open quantum many-body systems. Physical Review B 99 (21), pp. 214306.
• S. Zhang and H. Krakauer (2003) Quantum Monte Carlo Method using Phase-Free Random Walks with Slater Determinants. Physical Review Letters 90 (13), pp. 136401.

## Appendix A Geometries for diatomic molecules

The equilibrium geometries for the molecules presented in this work were obtained from the CCCBDB database Johnson. For convenience, we present them in Table 2.

## Appendix B Computing matrix elements

A crucial requirement for the efficient implementation of the stochastic variational Monte Carlo procedure to minimize the ground-state energy is the ability to efficiently compute the matrix elements of the spin Hamiltonian $H_q$ appearing in the local energy, Eq. 9. Since $H_q$ is a sum of products of Pauli operators, the goal is to efficiently compute matrix elements of the form

$$M(\vec{\sigma},\vec{\sigma}')=\langle\vec{\sigma}'|\sigma^{\nu_1}_1\sigma^{\nu_2}_2\dots\sigma^{\nu_N}_N|\vec{\sigma}\rangle, \qquad (11)$$

where $\sigma^{\nu_i}_i$ denotes a Pauli matrix with $\nu_i\in\{I,X,Y,Z\}$ acting on site $i$. Because of the structure of the Pauli operators, these matrix elements are non-zero only for a specific $\vec{\sigma}'$ such that

$$\begin{cases}\sigma'_i=\sigma_i & \nu_i\in(I,Z)\\ \sigma'_i=-\sigma_i & \nu_i\in(X,Y)\end{cases} \qquad (12)$$

and the matrix element is readily computed as

$$M(\vec{\sigma},\vec{\sigma}')=(i)^{n_y}\prod_{k:\,\nu_k\in(Y,Z)}\sigma_k, \qquad (13)$$

where $n_y$ is the total number of $\sigma^y$ operators in the string of Pauli matrices.
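Eqs. (11)-(13) translate almost line for line into code. The following sketch (Python written for this rewrite, not taken from the paper's codebase) returns the single connected configuration and its matrix element for a given Pauli string:

```python
import numpy as np

def pauli_matrix_element(sigma, paulis):
    """Return (sigma_prime, M) for <sigma'| prod_k sigma_k^{nu_k} |sigma>,
    with sigma in {-1,+1}^N and paulis a string over 'IXYZ' (Eqs. 11-13)."""
    sigma = np.asarray(sigma)
    sigma_prime = sigma.copy()
    m = 1.0 + 0.0j
    n_y = 0
    for k, nu in enumerate(paulis):
        if nu in "XY":
            sigma_prime[k] = -sigma[k]   # X and Y flip the spin (Eq. 12)
        if nu == "Y":
            n_y += 1
        if nu in "YZ":
            m *= sigma[k]                # Z and Y pick up the eigenvalue
    return sigma_prime, (1j ** n_y) * m  # overall phase i^{n_y} (Eq. 13)

# Example: <sigma'| Z X Y I |sigma> on 4 spins.
sp, M = pauli_matrix_element([1, -1, 1, 1], "ZXYI")
print(sp, M)   # [ 1  1 -1  1] and (0+1j)
```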
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9009783267974854, "perplexity": 2389.7828430742215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487637721.34/warc/CC-MAIN-20210618134943-20210618164943-00225.warc.gz"}
https://economics.stackexchange.com/questions/18518/homogeneous-of-degree-two-utility-functions-and-homothetic-preferences/18519
# Homogeneous of Degree Two Utility Functions and Homothetic Preferences.

What I am not clear on is when homothetic preferences admit a utility function of a given kind and vice versa. My solution is posted below the problem:

A consumer's preferences are described by a utility function that is homogeneous of degree two: For all $\alpha > 0$ and $x \in \mathbb{R}^{L}_{+}$, $u(\alpha x) = \alpha^2 u(x)$

The part of the problem that I am not getting is:

Q) "Are this consumer's preferences homothetic? Show that they are or give a counterexample."

My solution: According to Mas-Colell et al., "Microeconomic Theory" (chapter 3, page 50), a continuous preference relation $\succsim$ is homothetic if and only if it admits a utility function that is homogeneous of degree one.

Therefore, this given consumer's preferences are not homothetic, as the utility function is not homogeneous of degree 1 (HOD(1)). A counterexample would be a utility function that is HOD(1), like the Cobb-Douglas utility function $U(x_1, x_2) = x_{1}^{\alpha} x_{2}^{1-\alpha}$.

To conclude, this consumer's preferences are not homothetic, as they are represented by a utility function that is HOD(2), while, according to Mas-Colell et al., a preference $\pmb{\succsim}$ is homothetic $\textbf{if and only if}$ it admits a utility function that is HOD(1).

Could you please help me understand where I am going wrong with what Mas-Colell calls a "necessary and sufficient condition", and how a utility function that is HOD(2) can imply that $\pmb{\succsim}$ is homothetic. Thanks.

First of all, in order to provide a counterexample, you would need to construct a utility function that is homogeneous of degree two but not homothetic. Therefore, the counterexample you gave in your solution doesn't work. To prove the statement directly, let $u(x)$ be a utility representation that is homogeneous of degree two. That is, $u(\alpha x)=\alpha^2 u(x)$. Therefore, if $x\sim y$, which means $u(x)=u(y)$, we have $$u(\alpha x)= \alpha^2 u(x)=\alpha^2 u(y)=u(\alpha y).$$ This means $\alpha x\sim \alpha y$, and hence the preferences are homothetic. We can also use the proposition in MWG: A continuous $\succeq$ is homothetic if and only if it admits a utility function $u(x)$ that is homogeneous of degree one. One caveat is that the utility representation is unique only up to monotone transformations, so even if one representation $u(x)$ is not homogeneous of degree one, the preferences could still be homothetic if a monotone transformation of the representation, $\phi(u(x))$, is. In this question, if we consider the monotone transformation $\hat{u}(x)=(u(x))^{\frac{1}{2}}$, this $\hat{u}(x)$ still represents the preferences $\succeq$. Notice that $$\hat{u}(\alpha x)=(u(\alpha x))^{\frac{1}{2}}=(\alpha^2 u(x))^{\frac{1}{2}}=\alpha (u(x))^{\frac{1}{2}}=\alpha\hat{u}(x),$$ meaning that this new representation is homogeneous of degree one. Therefore, by the proposition above, the preferences are homothetic.

Actually, if a utility function is HOD(2) then it is not HOD(1); therefore, you can conclude (as Mas-Colell states) that this particular representation does not certify a homothetic preference relation. As a supporting example, consider an economy with just one good and a consumer whose preferences can be represented by the utility function $u(x)=x^2$. It is easy to see that $u(\cdot)$ is homogeneous of degree 2 (for $\alpha>0$): $$u(\alpha x)=(\alpha x)^2=\alpha^2x^2=\alpha^2 u(x)$$ However, you also know that $\forall \alpha\neq1$ you have $\alpha^2u(x)\neq \alpha u(x)$, and therefore you conclude that $u(\cdot)$ is not HOD(1).
At this point you can use Mas-Colell's proposition and conclude only that this kind of preference is not necessarily homothetic. What Mas-Colell means by "necessary and sufficient" is that as long as $\succeq$ is continuous, homothetic and rational, you can always rationalize it with a HOD(1) utility function; furthermore, as long as the utility representation of a preference relation is HOD(1), you can always prove that $U(a)>U(b)\Rightarrow a\succeq b$ where $\succeq$ is rational, continuous and homothetic (cf. Mas-Colell p. 96, exercise 3.C.5).

• The symmetric Cobb-Douglas preference is homothetic. Here is a HOD(1) representation: $$U(x,y) = \left(xy\right)^{1/2}$$ Here is a HOD(2) representation: $$U(x,y) = xy.$$ – Giskard Sep 30 '17 at 20:06
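The monotone-transformation step at the heart of the accepted answer can be verified symbolically. The sketch below (added here; `sympy` assumed available) checks both homogeneity degrees for the Cobb-Douglas example from the comment:

```python
import sympy as sp

x, y, alpha = sp.symbols("x y alpha", positive=True)

u = x * y                  # HOD(2) representation of Cobb-Douglas
u_hat = sp.sqrt(u)         # monotone transform, candidate HOD(1) representation

# u(alpha*x, alpha*y) == alpha**2 * u(x, y)  -> degree two
print(sp.simplify(u.subs({x: alpha*x, y: alpha*y}) - alpha**2 * u))       # 0

# u_hat(alpha*x, alpha*y) == alpha * u_hat(x, y)  -> degree one
print(sp.simplify(u_hat.subs({x: alpha*x, y: alpha*y}) - alpha * u_hat))  # 0
```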
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595775008201599, "perplexity": 485.8436976359687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668525.62/warc/CC-MAIN-20191114131434-20191114155434-00088.warc.gz"}
https://math.stackexchange.com/questions/1976250/what-is-the-maximum-distance-of-k-points-in-an-n-dimensional-hypercube
# What is the maximum distance of k points in an n-dimensional hypercube? For this question, I'm thinking only about the Euclidean distance: Let $p_1 = (x_1^{(1)}, \dots, x_n^{(1)})$ and $p_2 = (x_1^{(2)}, \dots, x_n^{(2)})$ be $n$-dimensional points. The Euclidean distance of $p_1$ and $p_2$ is $$d(p_1, p_2) = \sqrt{\sum_{i=1}^n {\left (x_i^{(1)} - x_i^{(2)} \right )}^2}$$ Let's say $\alpha(n, k)$ is the maximum achievable minimum pairwise distance for $k$ points in the unit hypercube of $\mathbb{R}^n$: $$\alpha(n, k) = \max\left( \left \{\min(d(p_i, p_j))\,\middle|\, (p_1, \dots, p_k) \in [0, 1]^n,\ i, j \in \{1, \dots, k\},\ i \neq j \right \}\right)$$ ## $n = 1$ • $\alpha(1, k = 2 = 2^n) = 1$ • $\alpha(1, k = 3)= 0.5$ • $\alpha(1, k) = \frac{1}{k-1}$ ## $n = 2$ • $\alpha(2, k = 2) = \sqrt{2}$: The maximum distance is the diagonal and hence $\sqrt{1+1}$ • $\alpha(2, k = 3)=?$ • $\alpha(2, k = 4 = 2^n) = 1$: Putting each point at the corners of the square. • $\alpha(2, k = 5)$: I guess like 4 but with one point in the center? (hence $\frac{\sqrt 2}{2}$?) ## $n = 3$ • $\alpha(3, k = 2) = \sqrt{3}$: The diagonal again and hence $\sqrt{1+1+1}$ • $\alpha(3, k = 2^n)$: The corners again and hence 1 ## Arbitrary $n$ • $\alpha(n, k=2) = \sqrt{n}$ • $\alpha(n, 2^n) = 1$ What is $\alpha(n, k)$? • If $k\leq n$, isn't it sufficient to consider the graph made up by the corners of the hypercube? – tired Oct 19 '16 at 20:33 • @tired: I'm not sure. This would mean $\alpha(2, 3) = 1$, but I'm relatively certain that you could place the points on the edges (not the corners) and get a bigger distance. – Martin Thoma Oct 19 '16 at 20:40 • @tired: No. This would mean $\alpha(2, 3) = 1$. But $p_1 = (0, 0.5)$, $p_2 = (0, 0.75)$, $p_3 = (1, 1)$ has a bigger distance than 1. – Martin Thoma Oct 19 '16 at 20:49 • Am I the only one not grasping how the (maximum) distance between $k>2$ points is defined here? – Jack D'Aurizio Oct 19 '16 at 20:51 • @JackD'Aurizio I've added a definition of the distance. I hope that helps. – Martin Thoma Oct 19 '16 at 21:01 (This should be a comment, but I'll post it as an answer since I don't have enough reputation to comment.) I'd like to answer the case with $n = 2$ and $k = 3$. The proof is really simple and can be found geometrically, if you assume two facts: • the first point is on a vertex of the 2D hypercube, and the two others are on the opposite edges. It makes sense that all 3 points should be on the boundary to maximize distance. • the resulting triangle is equilateral. This also makes sense, as in a non-equilateral triangle, one of the sides would have a smaller length and thus penalize the minimal distance between the points. The triangle will look like this: Equilateral triangle inside unit square Solving for $x$ (using the fact that the triangle is equilateral), we find $x = 2-\sqrt{3} \approx 0.2679$, and finally: $$a = \alpha(2, k=3) = \sqrt{6} - \sqrt{2} \approx 1.035276\dots$$ But solving for $\alpha(n, k)$ in the general case seems challenging, and I did not find a solution on the internet. Interesting problem!
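The $\alpha(2, k=3)$ value can be corroborated numerically with a maximin search. The sketch below is a rough check written for this page (not from the original answer); it applies `scipy`'s L-BFGS-B to a non-smooth objective, so it relies on random restarts rather than any convergence guarantee:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

n, k = 2, 3
rng = np.random.default_rng(1)

def neg_min_dist(flat):
    # Negative of the minimum pairwise distance (we minimize this).
    pts = flat.reshape(k, n)
    return -min(np.linalg.norm(pts[i] - pts[j]) for i, j in combinations(range(k), 2))

best = 0.0
for _ in range(50):  # restarts, since the objective is non-smooth and non-convex
    res = minimize(neg_min_dist, rng.random(n * k),
                   bounds=[(0.0, 1.0)] * (n * k), method="L-BFGS-B")
    best = max(best, -res.fun)

print(best, np.sqrt(6) - np.sqrt(2))   # both approximately 1.0353
```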
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9254686832427979, "perplexity": 410.65859577513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00201.warc.gz"}
http://slideplayer.com/slide/2466983/
# M.1 U.1 Complex Numbers.

## Presentation on theme: "M.1 U.1 Complex Numbers."— Presentation transcript:

M.1 U.1 Complex Numbers

What are imaginary numbers? They are viewed the same way negative numbers once were ("How can you have less than zero?"): numbers which square to give negative real numbers. "I dislike the term 'imaginary number': it was considered an insult, a slur, designed to hurt i's feelings. The number i is just as normal as other numbers, but the name 'imaginary' stuck, so we'll use it." Imaginary numbers deal with rotations; complex numbers deal with scaling and rotations simultaneously (we'll discuss this further later in the week).

Imaginary Numbers. What is the square root of 9? The constant, $i$, is defined as the square root of negative 1: $i = \sqrt{-1}$.

The square root of $-9$ is an imaginary number: $\sqrt{-9} = \sqrt{9}\,\sqrt{-1} = 3i$.

Simplify these radicals (answers): $6xi$ and $2y\sqrt{5y}\,i$.

Multiples of i. Consider multiplying two imaginary numbers: since $i \cdot i = i^2 = -1$, the product is a negative real number. Powers of i cycle with period 4: $i^1 = i$, $i^2 = -1$, $i^3 = -i$, $i^4 = 1$.

Powers of i - Practice: $i^{28}$, $i^{75}$, $i^{113}$, $i^{86}$, $i^{1089}$. Answers: $1$, $-i$, $i$, $-1$, $i$.

Solutions Involving i.

Complex Numbers have a real and an imaginary part. Write complex numbers as $a + bi$. Examples: $3 - 7i$, $i$, $-4i$, $5 + 2i$. Real part $= a$, imaginary part $= bi$.

Add & Subtract Like Terms. Example: $(3 + 4i) + (-5 - 2i) = -2 + 2i$

Practice. Add these complex numbers: $(4 + 7i) - (2 - 3i) = 2 + 10i$; $(3 - i) + (7i) = 3 + 6i$; $(-3 + 2i) - (-3 + i) = i$

Multiplying: FOIL and replace $i^2$ with $-1$. Practice. Multiply: $5i(3 - 4i) = 20 + 15i$; $(7 - 4i)(7 + 4i) = 65$

Division/Standard Form. A complex number is in standard form when there is no $i$ in the denominator. Rationalize any fraction with $i$ in the denominator. Monomial denominator: multiply the top and bottom by $i$. Binomial denominator: multiply the numerator and denominator by the conjugate of the denominator; the conjugate is formed by negating the imaginary term of the binomial.

Absolute Value of Complex Numbers. Absolute value is a number's distance from zero on the coordinate plane, with $a$ on the x-axis and $b$ on the y-axis. The distance from the origin $(0,0)$ is the modulus: $|z| = \sqrt{a^2+b^2}$.

Graphing Complex Numbers.

Exit Ticket. Simplify and write the following in standard form: $(-2+4i) - (3+9i)$. Write the following in standard form: $\dfrac{8+7i}{3+4i}$. Find the absolute value of $4-5i$.
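Python's built-in complex type reproduces every practice problem on these slides; the short sketch below (not part of the original presentation) checks the stated answers and the exit-ticket items:

```python
# Addition / subtraction practice.
print((4 + 7j) - (2 - 3j))        # (2+10j)
print((3 - 1j) + 7j)              # (3+6j)
print((-3 + 2j) - (-3 + 1j))      # 1j

# Multiplication practice (FOIL with i^2 = -1 happens internally).
print(5j * (3 - 4j))              # (20+15j)
print((7 - 4j) * (7 + 4j))        # (65+0j)

# Standard form of a quotient: multiply by the conjugate, or just divide.
print((8 + 7j) / (3 + 4j))        # (2.08-0.44j)

# Modulus: |4-5i| = sqrt(16 + 25) = sqrt(41).
print(abs(4 - 5j))                # 6.403...
```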
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196171760559082, "perplexity": 2646.5974167417053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517181.32/warc/CC-MAIN-20171212134318-20171212154318-00182.warc.gz"}
http://www.math.psu.edu/ccma/cmal/laboratories/cmal/lecture/2015/0326/1452.html
#### Discontinuous Petrov Galerkin Method with Optimal Test Functions

• Speaker(s) Leszek Demkowicz (University of Texas at Austin)
• Date From 2014-06-26 To 2014-06-26
• Venue Room 77201 at #78 courtyard, Beijing International Center for Mathematical Research

Speaker: Leszek Demkowicz (University of Texas at Austin) Time: Thu, 06/26/2014 - 09:00 Place: Room 77201 at #78 courtyard, Beijing International Center for Mathematical Research

Abstract: The concept of a variational formulation is usually attributed to Johann Bernoulli, as it is directly linked to the classical Calculus of Variations started by Johann and Jacob Bernoulli and later developed by Euler and Lagrange. Indeed, mid-way between the minimization problem and the Euler-Lagrange equations, we arrive at an integral identity that has to be satisfied for all admissible variations, the Principle of Virtual Work. The essence of the principle is the fact that the solution is characterized through its action on test functions. In mathematical language, we are dealing with an operator that takes values in the dual of the test space. Each of the three formulations, the minimization problem, the Euler-Lagrange equations, and the variational formulation, may provide a starting point for a numerical approximation. If the minimized functional is represented by a quadratic form, the corresponding variational problem is linear. If the quadratic form is positive definite (the functional is strictly convex), the minimization and variational problems are fully equivalent. This equivalence carries over to the discrete level and represents the essence of the Ritz method: solution of the discrete variational problem is equivalent to the minimization of the discrete energy. This guarantees the stability of the Finite Element (FE) discretization regardless of the mesh being used. The Ritz method always delivers the best approximation in the sense of the energy norm. If we focus on the equivalence of the variational formulation and the Euler-Lagrange equations (based on integration by parts and Fourier's lemma), we realize that variational (weak) formulations can be developed for arbitrary problems described by Partial Differential Equations (PDEs). The essence of the Galerkin method is then to discretize the variational formulation rather than the PDEs. The critical question is whether the Galerkin method will converge beyond the "safe scenarios" provided by positive definite self-adjoint operators (the Ritz setting). A partial answer has been provided by Mikhlin's theory of asymptotic stability. If a positive definite self-adjoint operator is perturbed with a lower-order term (a compact operator), the Galerkin method is asymptotically stable and in fact optimal: for fine enough meshes, it will again deliver the best approximation error. To this class of problems belong, for instance, standard vibration and wave propagation problems. The delicate issue is how to determine whether the mesh is fine enough to guarantee stability... A more fundamental idea was proposed by Petrov in 1959, who suggested using different test and trial spaces in the Galerkin formulation. While the trial space is used to guarantee the approximability of the solution, the main role of the test space is to provide stability.
We arrive at the fundamental Babuska Theorem (1971) and the concept of the inf-sup condition, rooted in the Banach Closed Range Theorem: if the test space can be selected in such a way that the discrete inf-sup condition is satisfied, the method will be stable and converge. The famous phrase states: "discrete stability and approximability imply convergence". The practical issue of how to select the test space remains and, in essence, has been the main focus of all FE developments in the last four decades, including mixed methods, stabilized methods, bubble methods, exact sequences, etc. Jay Gopalakrishnan and I presented a new FE method that automatically guarantees discrete stability by means of a Petrov-Galerkin scheme with optimal test functions computed on the fly. The main idea is very simple: compute (approximately) and use test functions that realize the supremum in the inf-sup condition, the best test functions you can have. Surprise or not, we arrive at a minimum residual method (generalized least squares) in which the approximate solution again delivers the best approximation error in a special "energy" (residual) norm. The circle has been closed: we are back in the Ritz setting, but now for any class of linear problems. Critical to the practicality of the method is the use of discontinuous test functions ("broken" test spaces) and the so-called ultra-weak variational formulation. In collaboration with several colleagues, we managed to develop a general theory for linear Partial Differential Equations (PDEs) including singular perturbation problems. The methodology has been applied to a variety of usual model problems: Poisson equation, convection-diffusion, elasticity, wave propagation (acoustics, electromagnetics, elastodynamics), Stokes, beams and shells. It has also been formally extended to nonlinear problems and applied to both incompressible and compressible Navier-Stokes equations. I will conclude my presentation by flashing a few representative numerical results.
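The equivalence the abstract describes, that Petrov-Galerkin with optimal test functions equals a minimum-residual (generalized least-squares) method, can be illustrated with a toy linear-algebra analogue. The sketch below is written for this page, with random placeholder matrices standing in for the discretized operator; it is not code from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete analogue: B maps n_trial coefficients into an n_test-dim
# test space equipped with inner-product (Riesz) matrix G.
n_test, n_trial = 12, 4
B = rng.standard_normal((n_test, n_trial))
G = np.eye(n_test)                      # test-space inner product (identity here)
f = rng.standard_normal(n_test)         # load vector

# Optimal test functions: Riesz representatives of the trial-basis images.
T = np.linalg.solve(G, B)               # columns realize the sup in the inf-sup

# Petrov-Galerkin with these test functions gives the normal equations of
# the minimum-residual (generalized least-squares) problem.
u = np.linalg.solve(T.T @ B, T.T @ f)

# Same answer as directly minimizing ||B u - f|| in the G-norm:
u_ls, *_ = np.linalg.lstsq(B, f, rcond=None)
print(np.allclose(u, u_ls))             # True
```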
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8176809549331665, "perplexity": 466.0291374442788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937780.9/warc/CC-MAIN-20180420120351-20180420140351-00095.warc.gz"}
https://lucadifino.wordpress.com/2010/10/13/how-to-calculate-speed-of-cosmic-ray-particles-from-kinetic-energy/
## How to calculate speed of cosmic ray particles from kinetic energy

In high-energy and astroparticle physics, energies of cosmic ray particles are given in GeV/n (GeV per nucleon). But how much is that in terms of speed? First of all, we must remember that these energies are kinetic energies ($k$). The total energy $E$ of a particle is the sum of its kinetic energy $k$ and its mass $m$:

$$E = m + k$$

with $c = 1$ and energy and mass measured in the same unit. In special relativity:

$$E = \gamma m$$

where $\gamma$ is the Lorentz factor,

$$\gamma = \frac{1}{\sqrt{1-\beta^2}}$$

and $\beta = v/c$ is the ratio between the speed of the particle and the speed of light $c$. Remember that the mass of a proton is $m = 0.938$ GeV. The Lorentz factor is then given by:

$$\gamma = \frac{E}{m} = 1 + \frac{k}{m}$$

and $\beta$ is then

$$\beta = \sqrt{1 - \frac{1}{\gamma^2}}$$

Assuming that mass is linear in the number of nucleons in the nucleus, the same calculation applies to any ion using the kinetic energy per nucleon. We see that for energies bigger than 2 GeV/n, particles travel at almost the speed of light (> 95%).

| k | γ | β |
|---|---|---|
| 100 keV | 1.000107 | 0.0146 |
| 1 MeV | 1.001066 | 0.04614 |
| 10 MeV | 1.010661 | 0.14486 |
| 100 MeV | 1.106610 | 0.42825 |
| 1 GeV | 2.066098 | 0.87507 |
| 2 GeV | 3.132196 | 0.94767 |
| 5 GeV | 6.330490 | 0.98744 |
| 10 GeV | 11.66098 | 0.99632 |
| 100 GeV | 107.6098 | 0.99996 |

This post was inspired by Protoni quasi veloci come la luce and Protoni quasi veloci come la luce: soluzione. It was very useful for tomorrow's challenge.
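The whole recipe fits in a few lines of code; the sketch below (written from this post's formulas, with the same proton mass value) reproduces the table:

```python
import math

M_PROTON = 0.938  # GeV, per-nucleon mass as used in the post

def beta_from_kinetic(k_gev_per_n, m=M_PROTON):
    """Speed (in units of c) from kinetic energy per nucleon, natural units."""
    gamma = 1.0 + k_gev_per_n / m          # gamma = E/m = 1 + k/m
    return math.sqrt(1.0 - 1.0 / gamma**2)

for k in [100e-6, 1e-3, 10e-3, 100e-3, 1.0, 2.0, 5.0, 10.0, 100.0]:
    gamma = 1.0 + k / M_PROTON
    print(f"k = {k:>8.4g} GeV/n  gamma = {gamma:10.6f}  beta = {beta_from_kinetic(k):.5f}")
```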
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9529613852500916, "perplexity": 1975.7151824771854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592387.80/warc/CC-MAIN-20180721051500-20180721071500-00334.warc.gz"}
https://www.physicsforums.com/threads/easy-tank-model-outflow-of-tank-proportional-to-volume-of-tank.531534/
# Homework Help: Easy Tank Model? Outflow of tank proportional to volume of tank.

1. Sep 18, 2011

### takbq2

1. Suppose we have a tank partially filled with water. There is a pipe feeding water to the tank at a variable flow rate and there is also a drain pipe with a computer controlled variable valve hooked to a sensor in the tank. The valve opens exactly enough to let water drain from the tank at a rate proportional to the volume of the tank. The program allows for us to set one number: the constant of proportionality. Write a model for this physical problem. Be sure to define all the variables in your model. (b) Suppose the inflow rate is constant. How should the proportionality constant in the control mechanism be set to keep the tank near a constant desired volume? (c) Suppose the inflow rate is periodic. To be definite let's say the flow rate is sinusoidal and known exactly; how should the constant of proportionality be set for the controller to best keep the tank at a constant desired volume?

2. Relevant equations

flow in = flow out (if desired in this case)

3. The attempt at a solution

I call f0 the flow out and fi the flow in. fi varies with, say, t. f0 is proportional to V, the volume of the tank. The volume of the tank is: V = the volume initially in the tank, Vi, + fi(t) - f0. f0 is proportional to V by c, but in my statement about V, f0 is on that side so it can't really be in the model. If I could get help figuring out the model, I could answer parts (b) and (c) pretty easily it seems. My first proportion was f0 = Vc, thus f0 = (Vi + fi(t))c. But I know this can't be right because in answering part b, fi would need to be as close as possible to f0, but any amount for c would mean that the amount out was equal to the entire amount in the tank. Help?

2. Sep 18, 2011

### uart

Use the fact that the "inflow minus outflow" is equal to the rate of change in volume to write a simple DE (differential equation) for the system. The DE is the system model.

3. Sep 19, 2011

### takbq2

dV/dT = c(fi(t)-fo) ? On second thought, dV/dt = fi(t) - c*fo(t) ?? seems better, can someone verify this or otherwise please? Thanks!

Last edited: Sep 19, 2011

4. Sep 19, 2011

### rude man

What does "constant of proportionality" mean? Is it h, the target height of water?

5. Sep 19, 2011

### uart

That's on the right track, but use the fact, in the problem statement, that the outflow is proportional to V so as to write your DE with just one input variable (f_i) and one state variable (V). The state variable V also happens to be the output variable in this case, which is nice.

6. Sep 20, 2011

### takbq2

I'm sorry, I'm confused. They are proportional so f_out = kV, thus, dV/dt = fin(t) - k*V(t) which is not right? >=\

7. Sep 20, 2011

### takbq2

Well I know it's not right. I'm just coming to the same answers over and over again because I've done it so much I can't think outside my current train of thought :S It must be q_in(t) = dV/dt + kV ... one final check on this please??

Last edited: Sep 20, 2011

8. Sep 20, 2011

### uart

No, that's the correct DE for the system. Now for part b) you can take F_in as a constant and look at what value of "k" you require to keep V constant, say V_desired. Note that V = const means dV/dt = 0.

9. Sep 20, 2011

### takbq2

Thanks a lot for your help, uart. I got part b, K would = Qin/V. Working on last part..
If it is a sinusoid nothing changes, you still want amount in to equal amount out, so just sinusoid(t)_in/V = k for part (c), I would think.

Last edited: Sep 20, 2011
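For what it's worth, here is a quick numerical sketch — mine, not from the thread — of the model the posters settled on, dV/dt = f_in − kV, showing part (b)'s answer k = f_in/V_desired in action. The numbers are illustrative, not from the problem statement:

```python
# Forward-Euler simulation of the tank model dV/dt = f_in - k*V.
f_in = 2.0          # constant inflow rate (volume per unit time), illustrative
k = 0.5             # controller's constant of proportionality, chosen as f_in/V_desired
V, dt = 1.0, 0.01   # initial volume and time step
for _ in range(2000):          # simulate 20 time units
    V += (f_in - k*V)*dt
print(V, f_in/k)    # V has settled near the equilibrium V* = f_in/k = 4.0
```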
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8920243382453918, "perplexity": 1848.2881964014375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158320.19/warc/CC-MAIN-20180922103644-20180922124044-00415.warc.gz"}
https://www.lessonplanet.com/teachers/multiply-up-to-12-by-0-b
# Multiply Up To 12 By 0 (B)

In this multiplication worksheet, students find the product by multiplying a variety of numbers by zero. Students solve 36 problems.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9748695492744446, "perplexity": 2375.292858920173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215222.74/warc/CC-MAIN-20180819145405-20180819165405-00011.warc.gz"}
https://phys.libretexts.org/Bookshelves/Relativity/General_Relativity_(Crowell)/05%3A_Curvature/5.10%3A_From_Metric_to_Curvature
# 5.10: From Metric to Curvature

## Finding the Christoffel Symbol from the Metric

We've already found the Christoffel symbol in terms of the metric in one dimension. Expressing it in tensor notation, we have $\Gamma^{d}_{ba} = \frac{1}{2} g^{cd} (\partial_{?} g_{??}),$ where inversion of the one-component matrix $G$ has been replaced by matrix inversion, and, more importantly, the question marks indicate that there would be more than one way to place the subscripts so that the result would be a grammatical tensor equation. The most general form for the Christoffel symbol would be $\Gamma^{b}_{ac} = \frac{1}{2} g^{db} (L \partial_{c} g_{ab} + M \partial_{a} g_{cb} + N \partial_{b} g_{ca}),$ where $$L$$, $$M$$, and $$N$$ are constants. Consistency with the one-dimensional expression requires $L + M + N = 1$ and vanishing torsion gives $$L = M$$. The $$L$$ and $$M$$ terms have a different physical significance than the $$N$$ term.

Suppose an observer uses coordinates such that all objects are described as lengthening over time, and the change of scale accumulated over one day is a factor of k > 1. This is described by the derivative $$\partial_{t} g_{xx} < 0$$, which affects the $$M$$ term. Since the metric is used to calculate squared distances, the $$g_{xx}$$ matrix element scales down by $$\frac{1}{\sqrt{k}}$$. To compensate for $$\partial_{t} v^{x} < 0$$, we need to add a positive correction term, $$M > 0$$, to the covariant derivative. When the same observer measures the rate of change of a vector $$v^t$$ with respect to space, the rate of change comes out to be too small, because the variable she differentiates with respect to is too big. This requires $$N < 0$$, and the correction is of the same size as the $$M$$ correction, so $$|M| = |N|$$. We find $$L = M = −N = 1$$.

Exercise $$\PageIndex{1}$$

Does the above argument depend on the use of space for one coordinate and time for the other?

The resulting general expression for the Christoffel symbol in terms of the metric is $\Gamma^{c}_{ab} = \frac{1}{2} g^{cd} (\partial_{a} g_{bd} + \partial_{b} g_{ad} - \partial_{d} g_{ab}) \ldotp$ One can readily go back and check that this gives $\nabla_{c} g_{ab} = 0. \label{eq10}$ Confirming Equation \ref{eq10} is a bit tedious. For that matter, tensor calculations in general can be infamously time-consuming and error-prone. Any reasonable person living in the 21st century will therefore resort to a computer algebra system.
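As a quick illustration — a minimal sketch of my own, not part of the original page — the boxed formula for $\Gamma^{c}_{ab}$ can be implemented directly in sympy and checked against the sphere metric used in Example 10 below:

```python
import sympy as sp

R, theta, phi = sp.symbols('R theta phi', positive=True)
x = [theta, phi]
g = sp.Matrix([[R**2, 0], [0, R**2*sp.sin(theta)**2]])  # metric on a sphere of radius R
ginv = g.inv()

def christoffel(c, a, b):
    """Gamma^c_ab = (1/2) g^{cd} (d_a g_bd + d_b g_ad - d_d g_ab)."""
    return sp.simplify(sp.Rational(1, 2)*sum(
        ginv[c, d]*(sp.diff(g[b, d], x[a]) + sp.diff(g[a, d], x[b]) - sp.diff(g[a, b], x[d]))
        for d in range(2)))

print(christoffel(0, 1, 1))  # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
```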
The most widely used computer algebra system is Mathematica, but it's expensive and proprietary, and it doesn't have extensive built-in facilities for handling tensors. It turns out that there is quite a bit of free and open-source tensor software, and it falls into two classes: coordinate-based and coordinate-independent. The best open-source coordinate-independent facility available appears to be Cadabra, and in fact the verification of $$\nabla_{c} g_{ab}$$ = 0 is the first example given in Leo Brewin's handy guide to applications of Cadabra to general relativity.13

Exercise $$\PageIndex{2}$$

In the case of 1 dimension, show that this reduces to the earlier result of $$−(\frac{1}{2}) \frac{dG}{dX}$$.

Since $$\Gamma$$ is not a tensor, it is not obvious that the covariant derivative, which is constructed from it, is a tensor. But if it isn't obvious, neither is it surprising – the goal of the above derivation was to get results that would be coordinate-independent.

Example 10: Christoffel symbols on the globe, quantitatively

In example 9, we inferred the following properties for the Christoffel symbol $$\Gamma^{\theta}_{\phi \phi}$$ on a sphere of radius R: $$\Gamma^{\theta}_{\phi \phi}$$ is independent of $$\phi$$ and R, $$\Gamma^{\theta}_{\phi \phi}$$ < 0 in the northern hemisphere (colatitude θ less than π/2), $$\Gamma^{\theta}_{\phi \phi}$$ = 0 on the equator, and $$\Gamma^{\theta}_{\phi \phi}$$ > 0 in the southern hemisphere.

The metric on a sphere is $ds^2 = R^2 d\theta^{2} + R^2 \sin^2 \theta d\phi^{2}.$ The only nonvanishing term in the expression for $$\Gamma^{\theta}_{\phi \phi}$$ is the one involving $$\partial_{\theta} g_{\phi \phi} = 2R^{2} \sin \theta \cos \theta$$. The result is $\Gamma^{\theta}_{\phi \phi} = − \sin \theta \cos \theta,$ which can be verified to have the properties claimed above.

## Numerical Solution of the Geodesic Equation

In Section 5.7, I gave an algorithm that demonstrated the uniqueness of the solutions to the geodesic equation. This algorithm can also be used to find geodesics in cases where the metric is known. A very simple calculation of this kind, written in the computer language Python, can be carried out in a case where we know what the answer should be; even without any previous familiarity with Python, it shouldn't be difficult to see the correspondence between the abstract algorithm presented in Section 5.7 and its concrete realization in such a program (the original listing is not reproduced in this extract; a sketch along the same lines follows this passage). For polar coordinates in a Euclidean plane, one can compute $$\Gamma^{r}_{\phi \phi}$$ = −r and $$\Gamma^{\phi}_{r \phi} = \frac{1}{r}$$ (problem 2). Here we compute the geodesic that starts out tangent to the unit circle at $$\phi=0$$. It is not necessary to worry about all the technical details of the language (e.g., the initial import, which makes available such conveniences as math.pi for $$\pi$$). Comments are set off by pound signs. The body of the main loop is executed repeatedly, until it is no longer true that $$\lambda < \lambda_{max}$$.

Exercise $$\PageIndex{3}$$

By inspecting the lines of the program that compute the accelerations, find the signs of $$\ddot{r}$$ and $$\ddot{\phi}$$ at $$\lambda$$ = 0. Convince yourself that these signs are what we expect geometrically.

Inspecting the output, we can see that $$\phi$$ → 90 deg. as $$\lambda \rightarrow \infty$$, which makes sense, because the geodesic is a straight line parallel to the y axis.
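Here is that sketch — a minimal reconstruction under stated assumptions (simple fixed-step Euler updates, illustrative variable names), not the book's original listing:

```python
import math

# Geodesics in polar coordinates on the Euclidean plane, where
# Gamma^r_{phi phi} = -r and Gamma^phi_{r phi} = 1/r.
lam, lam_max, dlam = 0.0, 100.0, 0.001   # affine parameter, endpoint, step size
r, phi = 1.0, 0.0                        # start on the unit circle at phi = 0
rdot, phidot = 0.0, 1.0                  # initial velocity tangent to the circle
while lam < lam_max:
    # geodesic equation: xddot^a = -Gamma^a_{bc} xdot^b xdot^c
    rddot = r*phidot**2                  # from Gamma^r_{phi phi} = -r
    phiddot = -2.0*rdot*phidot/r         # from Gamma^phi_{r phi} = 1/r (two cross terms)
    rdot, phidot = rdot + rddot*dlam, phidot + phiddot*dlam
    r, phi = r + rdot*dlam, phi + phidot*dlam
    lam += dlam
print(r, phi*180.0/math.pi)              # phi approaches 90 degrees as lam grows
```

Run to $$\lambda_{max} = 100$$, it prints a large $$r$$ and an angle close to 90°, consistent with the straight-line geodesic x = 1.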
A less trivial use of the technique is demonstrated in Section 6.2, where we calculate the deflection of light rays in a gravitational field, one of the classic observational tests of general relativity.

## The Riemann Tensor in Terms of the Christoffel Symbols

The covariant derivative of a vector can be interpreted as the rate of change of a vector in a certain direction, relative to the result of parallel-transporting the original vector in the same direction. We can therefore see that the definition of the Riemann curvature tensor in Section 5.4 is a measure of the failure of covariant derivatives to commute: $(\nabla_{a} \nabla_{b} - \nabla_{b} \nabla_{a}) A^{c} = A^{d} R^{c}_{dab}$ A tedious calculation now gives $$R$$ in terms of the $$\Gamma$$s: $R^{a}_{bcd} = \partial_{c} \Gamma^{a}_{db} - \partial_{d} \Gamma^{a}_{cb} + \Gamma^{a}_{ce} \Gamma^{e}_{db} - \Gamma^{a}_{de} \Gamma^{e}_{cb}$ This is given as another example later in Brewin's manual for applying Cadabra to general relativity.14 (Brewin writes the upper index in the second slot of R.)

## Some General Ideas about Gauge

Let's step back now for a moment and try to gain some physical insight by looking at the features that the electromagnetic and relativistic gauge transformations have in common. We have the following analogies:

| | electromagnetism | differential geometry |
|---|---|---|
| global symmetry | A constant phase shift $$\alpha$$ has no observable effects. | Adding a constant onto a coordinate has no observable effects. |
| local symmetry | A phase shift $$\alpha$$ that varies from point to point has no observable effects. | An arbitrary coordinate transformation has no observable effects. |
| The gauge is described by . . . | $$\alpha$$ | $$g_{\mu \nu}$$ |
| . . . and differentiation of this gives the gauge field . . . | $$A_b$$ | $$\Gamma^{c}_{ab}$$ |
| A second differentiation gives the directly observable field(s) . . . | E and B | $$R^c_{dab}$$ |

The interesting thing here is that the directly observable fields do not carry all of the necessary information, but the gauge fields are not directly observable. In electromagnetism, we can see this from the Aharonov-Bohm effect, shown in Figure $$\PageIndex{1}$$.15 The solenoid has B = 0 externally, and the electron beams only ever move through the external region, so they never experience any magnetic field. Experiments show, however, that turning the solenoid on and off does change the interference between the two beams. This is because the vector potential does not vanish outside the solenoid, and as we've seen in section 4.2, the phase of the beams varies according to the path integral of $$A_b$$.

We are therefore left with an uncomfortable, but unavoidable, situation. The concept of a field is supposed to eliminate the need for instantaneous action at a distance, which is forbidden by relativity; that is, (1) we want our fields to have only local effects. On the other hand, (2) we would like our fields to be directly observable quantities. We cannot have both 1 and 2. The gauge field satisfies 1 but not 2, and the electromagnetic fields give 2 but not 1.

Note

We describe the effect here in terms of an idealized, impractical experiment. For the actual empirical status of the Aharonov-Bohm effect, see Batelaan and Tonomura, Physics Today 62 (2009) 38.

Figure 5.9.2 shows an analog of the Aharonov-Bohm experiment in differential geometry. Everywhere but at the tip, the cone has zero curvature, as we can see by cutting it and laying it out flat.
But even an observer who never visits the tightly curved region at the tip can detect its existence, because parallel-transporting a vector around a closed loop can change the vector's direction, provided that the loop surrounds the tip. In the electromagnetic example, integrating A around a closed loop reveals, via Stokes' theorem, the existence of a magnetic flux through the loop, even though the magnetic field is zero at every location where A has to be sampled. In the relativistic example, integrating $$\Gamma$$ around a closed loop shows that there is curvature inside the loop, even though the curvature is zero at all the places where $$\Gamma$$ has to be sampled.

The fact that $$\Gamma$$ is a gauge field, and therefore not locally observable, is simply a fancy way of expressing the ideas introduced in section 5.6, that due to the equivalence principle, the gravitational field in general relativity is not locally observable. This nonobservability is local because the equivalence principle is a statement about local Lorentz frames. The example in Figure 5.9.2 is non-local.

Example 11: Geodetic effect and structure of the source

• In Section 5.5, we estimated the geodetic effect on Gravity Probe B and found a result that was only off by a factor of 3$$\pi$$. The mathematically pure form of the 3$$\pi$$ suggests that the geodetic effect is insensitive to the distribution of mass inside the earth. Why should this be so?

• The change in a vector upon parallel transporting it around a closed loop can be expressed in terms of either (1) the area integral of the curvature within the loop or (2) the line integral of the Christoffel symbol (essentially the gravitational field) on the loop itself. Although I expressed the estimate as 1, it would have been equally valid to use 2. By Newton's shell theorem, the gravitational field is not sensitive to anything about its mass distribution other than its nearly spherical symmetry. The earth spins, and this does affect the stress-energy tensor, but since the velocity with which it spins is everywhere much smaller than c, the resulting effect, called frame dragging, is much smaller.

This page titled 5.10: From Metric to Curvature is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Benjamin Crowell via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9364777207374573, "perplexity": 271.2605158290041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00046.warc.gz"}
https://cryptohack.gitbook.io/cryptobook/lattices/lll-reduction/gaussian-reduction
# Overview

Lagrange's algorithm, often incorrectly called Gaussian reduction, is the 2D analogue to the Euclidean algorithm and is used for lattice reduction. Intuitively, lattice reduction is the idea of finding a new basis that consists of shorter vectors. Before going into Lagrange's algorithm, we first recap the Euclidean algorithm:

```python
def euclid(m, n):
    # nearest-integer variant of the Euclidean algorithm
    while n != 0:
        q = round(m / n)   # closest integer quotient
        m -= q * n         # reduction step
        if abs(n) > abs(m):
            m, n = n, m    # swap so |m| stays the larger
    return abs(m)
```

The algorithm primarily consists of two steps, a reduction step where the size of $m$ is brought down by a multiple of $n$, and a swapping step that ensures $m$ is always the largest number. We can adapt this idea for lattices:

```python
def lagrange(b1, b2):
    # b1, b2 are lattice basis vectors; '*' here is the dot product
    # (as for Sage vectors; with plain arrays use an explicit dot product)
    mu = 1
    while mu != 0:
        mu = round((b1 * b2) / (b1 * b1))  # Gram-Schmidt coefficient mu_{2,1}, rounded
        b2 -= mu * b1                      # reduction step
        if b1 * b1 > b2 * b2:
            b1, b2 = b2, b1                # keep b1 the shorter vector
    return b1, b2
```

Here $\mu$ is actually the Gram-Schmidt coefficient $\mu_{2,1}$ and it turns out that this algorithm will always find the shortest possible basis! Using the basis $\begin{matrix} b_1&=&(-1.8,1.2)\\ b_2&=&(-3.6,2.3) \end{matrix}$ the Lagrange reduction produces the vectors shown in the accompanying figure (not reproduced here), and we see it clearly gives the shortest vectors.

# Optimality proof

Let $L$ be a lattice. The basis $b_1,b_2$ is defined to be the shortest if for any other basis $b_1',b_2'$ with $\left\lVert b_1'\right\rVert\leq\left\lVert b_2'\right\rVert$, we have $\left\lVert b_1\right\rVert\leq\left\lVert b_1'\right\rVert$ and $\left\lVert b_2\right\rVert\leq\left\lVert b_2'\right\rVert$. Note that such a basis generally does not exist in other dimensions; however in dimension 2 it does, and it is given by Lagrange's algorithm. The proof is a somewhat messy sequence of inequalities that eventually leads to the conclusion we want.

Let $b_1,b_2$ be the output of the Lagrange reduction for some lattice $L$. To prove that Lagrange reduction gives the shortest basis, we first show that $\left\lVert b_1\right\rVert$ is the shortest vector in $L$. We know that $\frac{\left|\langle b_1,b_2\rangle\right|}{\left\lVert b_1\right\rVert^2}\le\frac12$ from the algorithm directly. Let $v=mb_1+nb_2\in L$ be any element in $L$. We first show that $\left\lVert b_1\right\rVert\leq\left\lVert v\right\rVert$:

$\left\lVert v\right\rVert^2=m^2\left\lVert b_1\right\rVert^2+2mn\langle b_1,b_2\rangle+n^2\left\lVert b_2\right\rVert^2\geq\left(m^2-|mn|+n^2\right)\left\lVert b_1\right\rVert^2.$

Since $m^2-|mn|+n^2=\left(|m|-\frac{|n|}2\right)^2+\frac34n^2$, this quantity is only $0$ when $m=n=0$ and is a positive integer for all other cases, hence $\left\lVert v\right\rVert\geq\left\lVert b_1\right\rVert$ and $\left\lVert b_1\right\rVert$ is a shortest vector of $L$. Note that we can have multiple vectors with the same norm as $b_1$, for instance $-b_1$. So this is not a unique shortest vector.

Suppose there exists some basis $b'_1,b'_2$ for $L$ such that $\left\lVert b_1'\right\rVert\leq\left\lVert b_2'\right\rVert$. We show that $\left\lVert b_2\right\rVert\leq\left\lVert b_2'\right\rVert$. Let $b_2'=mb_1+nb_2$. If $n=0$, then $b_2'=\pm b_1$ as $b_1',b_2'$ must form a basis. This means that $\left\lVert b_1\right\rVert=\left\lVert b_1'\right\rVert=\left\lVert b_2'\right\rVert$ and by the inequality above, we must have $\pm b_1'=b_2$ or $\pm b_1'=b_1+b_2$. The first case tells us that $\left\lVert b'_1\right\rVert=\left\lVert b_2\right\rVert$. By squaring the second case, we get

$\left\lVert b_1'\right\rVert^2=\left\lVert b_1\right\rVert^2+2\langle b_1,b_2\rangle+\left\lVert b_2\right\rVert^2\geq\left\lVert b_2\right\rVert^2,$

but since $\left\lVert b_1\right\rVert$ is the shortest vector, $\left\lVert b_1\right\rVert=\left\lVert b_2\right\rVert$. Otherwise, we have $m,n\neq0$ and $m^2-|mn|+n^2\geq1$, so

$\left\lVert b_2'\right\rVert^2\geq\left(m^2-|mn|\right)\left\lVert b_1\right\rVert^2+n^2\left\lVert b_2\right\rVert^2\geq\left(1-n^2\right)\left\lVert b_1\right\rVert^2+n^2\left\lVert b_2\right\rVert^2\geq\left\lVert b_2\right\rVert^2,$

where the last step uses $n^2\geq1$ and $\left\lVert b_1\right\rVert\leq\left\lVert b_2\right\rVert$. Hence Lagrange's algorithm indeed gives us the shortest basis vectors.

# Exercises

1) Show that the output of Lagrange's algorithm generates the same lattice as the input.
2) Find a case where $\left\lVert b_1\right\rVert=\left\lVert b_2\right\rVert=\left\lVert b_1+b_2\right\rVert$. Notice that the vectors here are the equality case for the bound given in Exercise 4 of the introduction; this actually tells us that the optimal lattice circle packing in 2D is given by this precise lattice! It turns out that this is actually the optimal circle packing in 2D, but the proof is significantly more involved. (See https://arxiv.org/abs/1009.4322 for the details)

3*) Let $\mu_{2,1}=\lfloor\mu_{2,1}\rceil+\varepsilon=\mu+\varepsilon$. Show that

$\left\lVert b_2\right\rVert^2\geq\left(\left(|\mu|-\frac12\right)^2-\varepsilon^2\right)\left\lVert b_1\right\rVert^2+\left\lVert b_2-\mu b_1\right\rVert^2$

and show that $|\mu|\geq2$ for all steps in the algorithm except the first and last, hence $\left\lVert b_1\right\rVert\left\lVert b_2\right\rVert$ decreases by a factor of at least $\sqrt3$ at each loop and the algorithm runs in polynomial time.
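As a quick check of the worked example in the overview — a runnable sketch using numpy arrays, with explicit dot products in place of the vector `*` assumed above:

```python
import numpy as np

def lagrange(b1, b2):
    # same algorithm as above, written with explicit dot products
    mu = 1
    while mu != 0:
        mu = round(np.dot(b1, b2) / np.dot(b1, b1))
        b2 = b2 - mu*b1
        if np.dot(b1, b1) > np.dot(b2, b2):
            b1, b2 = b2, b1
    return b1, b2

b1 = np.array([-1.8, 1.2])
b2 = np.array([-3.6, 2.3])
print(lagrange(b1, b2))   # (array([ 0. , -0.1]), array([-1.8,  0. ])) -- visibly shorter
```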
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 49, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682937264442444, "perplexity": 423.5139998799332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057558.23/warc/CC-MAIN-20210924140738-20210924170738-00181.warc.gz"}
https://asmedigitalcollection.asme.org/ESDA/proceedings-abstract/ESDA2004/41758/497/302261
For uniform deformation, two types of hemispherical electrostatic micro deformable focusing mirror are designed, based on bulk microfabrication with isotropic etching. One of the focusing mirrors is center-anchored, and the other is circular clamped. Using the theory of shells, a theoretical solution for the deformation under uniform electrostatic force is derived. For a more detailed analysis of the coupled electrostatic-elastic problem, the finite element method is used to analyze the deformation of the mirror structure. When an electrostatic force is applied, the profile of the micro focusing mirror is no longer spherical and deforms into a curve resembling a parabolic surface. Using the least squares method, the curve is fitted as a parabola and the focal lengths of the focusing micro mirror are obtained. The results show that the focal length without applied electrostatic force is determined by the micro mirror radius and the isotropic etching depth. When electrostatic forces are applied, the deformation and the focal length change differently between the two types of focusing mirror. For the circular clamped micro mirror, the deformation is larger near the clamped region and uniform in the central region. Therefore, the relation between focal length and applied voltage is a concave curve with a minimum value: the focal length decreases as the applied voltage increases and reaches a limiting value. When the applied voltage continues to increase past the minimum, the focal length increases rapidly. A thicker structure layer requires a larger applied voltage, but the focal length changes over a larger stroke. The pull-in voltage is about 100 volts when the gap and structure layer are both 2 μm. However, the pull-in voltage increases nonlinearly with increasing gap; when the gap increases to 4 μm, the pull-in voltage is about 300 volts. The results show that the center-anchored micro mirror has better performance: the deformation is more uniform and the focal length increases nonlinearly with increasing applied voltage. The stroke of the focal length is larger and the required voltage is lower. Even when the gap and structure layer are 4 and 2 μm respectively, the pull-in voltage is about 62 volts, while the focal length changes from 990 to about 1320 μm as the applied voltage goes from 0 to 60 volts. Therefore, with low applied voltage and a large focal length stroke, the center-anchored micro mirror has good performance. This content is only available via PDF.
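To illustrate the fitting step described in the abstract — a rough sketch with invented numbers, not data or code from the paper — one can least-squares fit a computed deflection profile with a parabola z = r²/(4f) and read off the focal length f:

```python
import numpy as np

# Hypothetical radial deflection profile w(r); in the paper these values
# would come from the finite element solution of the mirror surface.
rng = np.random.default_rng(0)
r = np.linspace(0.0, 100.0, 50)                       # radius, micrometres
w = r**2/(4.0*990.0) + 1e-3*rng.normal(size=r.size)   # sag of a mirror with f = 990 um, plus noise

# Least-squares fit w ~ a*r^2; for a parabola z = r^2/(4f) the focal length is f = 1/(4a)
a = np.linalg.lstsq(r[:, None]**2, w, rcond=None)[0][0]
print("fitted focal length:", 1.0/(4.0*a), "micrometres")
```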
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.930061399936676, "perplexity": 1154.4090665290046}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00401.warc.gz"}
https://alanrendall.wordpress.com/category/dynamical-systems/
## Archive for the 'dynamical systems' Category

### Feinberg's proof of the deficiency zero theorem

November 23, 2015

I discussed the deficiency zero theorem of chemical reaction network theory (CRNT) in a previous post. (Some further comments on this can be found here and here.) This semester I am giving a lecture course on chemical reaction network theory. Lecture notes are growing with the course and can be found in German and English versions on the web page of the course. The English version can also be found here. Apart from introductory material the first main part of the course was a proof of the Deficiency Zero Theorem. There are different related proofs in the literature and I have followed the approach in the classic lecture notes of Feinberg on the subject closely. The proof in those notes is essentially self-contained apart from one major input from a paper of Feinberg and Horn (Arch. Rat. Mech. Anal. 66, 83). In this post I want to give a high-level overview of the proof.

The starting point of CRNT is a reaction network. It can be represented by a directed graph where the nodes are the complexes (left or right hand sides of reactions) and the directed edges correspond to the reactions themselves. The connected components of this graph are called the linkage classes of the network and their number is usually denoted by $l$. If two nodes can be connected by oriented paths in both directions they are said to be strongly equivalent. The corresponding equivalence classes are called strong linkage classes. A strong linkage class is called terminal if there is no directed edge leaving it. The number of terminal strong linkage classes is usually denoted by $t$.

From the starting point of the network making the assumption of mass action kinetics allows a system of ODE $\dot c=f(c)$ to be obtained in an algorithmic way. The quantity $c(t)$ is a vector of concentrations as a function of time. Basic mathematical objects involved in the definition of the network are the set $\cal S$ of chemical species, the set $\cal C$ of complexes and the set $\cal R$ of reactions. An important role is also played by the vector spaces of real-valued functions on these finite sets which I will denote by $F({\cal S})$, $F({\cal C})$ and $F({\cal R})$, respectively. Using natural bases they can be identified with $R^m$, $R^n$ and $R^r$. The vector $c(t)$ is an element of $F({\cal S})$. The mapping $f$ from $F({\cal S})$ to itself can be written as a composition of three mappings, two of them linear, $f=YA_k\Psi$. Here $Y$, the complex matrix, is a linear mapping from $F({\cal C})$ to $F({\cal S})$. $A_k$ is a linear mapping from $F({\cal C})$ to itself. The subscript $k$ is there because this matrix is dependent on the reaction constants, which are typically denoted by $k$. It is also possible to write $f$ in the form $Nv$ where $v$ describes the reaction rates and $N$ is the stoichiometric matrix. The image of $N$ is called the stoichiometric subspace and its dimension, the rank of the network, is usually denoted by $s$. The additive cosets of the stoichiometric subspace are called stoichiometric compatibility classes and are clearly invariant under the time evolution. Finally, $\Psi$ is a nonlinear mapping from $F({\cal S})$ to $F({\cal C})$. The mapping $\Psi$ is a generalized polynomial mapping in the sense that its components are products of powers of the components of $c$. This means that $\log\Psi$ depends linearly on the logarithms of the components of $c$.
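As an illustration — a minimal sketch of my own, not from the lecture notes — here is the decomposition $f=YA_k\Psi$ written out for the toy network $2A \rightleftharpoons B$ with mass action kinetics:

```python
import numpy as np

# Toy network: 2A <-> B, species (A, B), complexes (2A, B)
Y = np.array([[2.0, 0.0],    # complex matrix: column j = stoichiometry of complex j
              [0.0, 1.0]])
k1, k2 = 1.0, 2.0            # illustrative rate constants for 2A -> B and B -> 2A
A_k = np.array([[-k1,  k2],  # Laplacian-like kinetic matrix; columns sum to zero
                [ k1, -k2]])

def Psi(c):
    # mass-action monomials, one per complex: Psi_y(c) = prod_s c_s^{y_s}
    return np.array([np.prod(c**Y[:, j]) for j in range(Y.shape[1])])

def f(c):
    return Y @ A_k @ Psi(c)  # right hand side of c' = f(c)

print(f(np.array([1.0, 1.0])))  # [2*(k2 - k1), k1 - k2] = [2., -1.]
```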
The condition for a stationary solution can be written as $\Psi(c)\in {\rm ker} (YA_k)$. The image of $\Psi$ is obtained by exponentiating the image of a linear mapping. The matrix of this linear mapping in natural bases is $Y^T$. Thus in looking for stationary solutions we are interested in finding the intersection of the manifold which is the image of $\Psi$ with the kernel of $YA_k$.

The simplest way to define the deficiency of the network is to declare it to be $\delta=n-l-s$. A fact which is not evident from this definition is that $\delta$ is always non-negative. In fact $\delta$ is the dimension of the vector space ${\rm ker} Y\cap {\rm span}(\Delta)$ where $\Delta$ is the set of reaction vectors of the network, i.e. the differences of the complexes on the two sides of each reaction. An alternative concept of deficiency, which can be found in lecture notes of Gunawardena, is the dimension $\delta'$ of the space ${\rm ker} Y\cap{\rm im} A_k$. Since this vector space is a subspace of the other we have the inequality $\delta'\le\delta$. The two spaces are equal precisely when each linkage class contains exactly one terminal strong linkage class. This is, in particular, true for weakly reversible networks. The distinction between the two definitions is often not mentioned since they are equal for most networks usually considered.

If $c^*$ is a stationary solution then $A_k\Psi (c^*)$ belongs to ${\rm ker} Y\cap{\rm im} A_k$. If $\delta'=0$ (and in particular if $\delta=0$) then this means that $A_k\Psi (c^*)=0$. In other words $\Psi (c^*)$ belongs to the kernel of $A_k$. Stationary solutions of this type are called complex balanced. It turns out that if $c^*$ is a complex balanced stationary solution the stationary solutions are precisely those points $c$ for which $\log c-\log c^*$ lies in the orthogonal complement of the stoichiometric subspace. It follows that whenever we have one solution we get a whole manifold of them of dimension $m-s$. It can be shown that each manifold of this type meets each stoichiometric compatibility class in precisely one point. This is proved using a variational argument and a little convex analysis.

It is clear from what has been said up to now that it is important to understand the positive elements of the kernel of $A_k$. This kernel has dimension $t$ and a basis each of whose elements is positive on a terminal strong linkage class and zero otherwise. Weak reversibility is equivalent to the condition that the union of the terminal strong linkage classes is the set of all complexes. It can be concluded that when the network is not weakly reversible there exists no positive element of the kernel of $A_k$. Thus for a network which is not weakly reversible and has deficiency zero there exist no positive stationary solutions. This is part of the Deficiency Zero Theorem.

Now consider the weakly reversible case. There a key statement of the Deficiency Zero Theorem is that there exists a complex balanced stationary solution $c^*$. Where does this $c^*$ come from? We sum the vectors in the basis of ${\rm ker} A_k$ and due to weak reversibility this gives something which is positive. Then we take the logarithm of the result. When $\delta=0$ this can be represented as a sum of two contributions where one is of the form $Y^T z$. Then $c^*=e^z$. A further part of the deficiency zero theorem is that the stationary solution $c^*$ in the weakly reversible case is asymptotically stable.
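To make the two notions of deficiency concrete — again my own toy computation, continuing the sketch above rather than anything from Feinberg's notes — both can be checked numerically for the network $2A \rightleftharpoons B$:

```python
import numpy as np
from scipy.linalg import null_space, orth

# Deficiency checks for the toy network 2A <-> B from the previous sketch.
Y = np.array([[2.0, 0.0], [0.0, 1.0]])        # complexes 2A and B
A_k = np.array([[-1.0, 2.0], [1.0, -2.0]])    # k1 = 1, k2 = 2 (illustrative)
n, l = 2, 1                                   # complexes, linkage classes
N = Y @ np.array([[-1.0], [1.0]])             # stoichiometric matrix of 2A -> B
s = np.linalg.matrix_rank(N)                  # rank of the network
delta = n - l - s                             # Feinberg's deficiency

# delta' = dim(ker Y ∩ im A_k), via dim(U ∩ V) = dim U + dim V - dim(U + V)
kerY, imAk = null_space(Y), orth(A_k)
delta_prime = (kerY.shape[1] + imAk.shape[1]
               - np.linalg.matrix_rank(np.hstack([kerY, imAk])))
print(delta, delta_prime)  # both 0; the network is weakly reversible, so they agree
```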
This is proved using the fact that for a complex balanced stationary solution the function $h(c)=\sum_{s\in\cal S}[c(s)(\log c(s)-\log c^*(s)-1)+c^*(s)]$ is a Lyapunov function which vanishes for $c=c^*$.

### Siphons in reaction networks

October 8, 2015

The concept of a siphon is one which I have been more or less aware of for quite a long time. Unfortunately I never had the impression that I had understood it completely. Given the fact that it came up a lot in discussions I was involved in and talks I heard last week I thought that the time had come to make the effort to do so. It is of relevance for demonstrating the property of persistence in reaction networks. This is the property that the $\omega$-limit points of a positive solution are themselves positive. For a bounded solution this is the same as saying that the infima of all concentrations at late times are positive. The most helpful reference I have found for these topics is a paper of Angeli, de Leenheer and Sontag in a proceedings volume edited by Queinnec et al.

There are two ways of formulating the definition of a siphon. The first is more algebraic, the second more geometric. In the first the siphon is defined to be a set $Z$ of species with the property that whenever one of the species in $Z$ occurs on the right hand side of a reaction one of the species in $Z$ occurs on the left hand side. Geometrically we replace $Z$ by the set $L_Z$ of points of the non-negative orthant which are common zeroes of the elements of $Z$, thought of as linear functions on the species space. The defining property of a siphon is that $L_Z$ is invariant under the (forward in time) flow of the dynamical system describing the evolution of the concentrations. Another way of looking at the situation is as follows. Consider a point of $L_Z$. The right hand side of the evolution equations of one of the concentrations belonging to $Z$ is a sum of positive and negative terms. The negative terms automatically vanish on $L_Z$ and the siphon condition is what is needed to ensure that the positive terms also vanish there.

Sometimes minimal siphons are considered. It is important to realize that in this case $Z$ is minimal. Correspondingly $L_Z$ is maximal. The convention is that the empty set is excluded as a choice for $Z$ and correspondingly the whole non-negative orthant as a choice for $L_Z$. What is allowed is to choose $Z$ to be the whole of the species space which means that $L_Z$ is the origin. Of course whether this choice actually defines a siphon depends on the particular dynamical system being considered. If $x_*$ is an $\omega$-limit point of a positive solution but is not itself positive then the set of concentrations which are zero at that point is a siphon. In particular stationary solutions on the boundary are contained in siphons. It is remarked by Shiu and Sturmfels (Bull. Math. Biol. 72, 1448) that for a network with only one linkage class, if a siphon contains one stationary solution, it consists entirely of stationary solutions. To see this let $x_*$ be a stationary solution in the siphon $Z$. There must be some complex $y$ belonging to the network which contains an element of $Z$. If $y'$ is another complex then there is a directed path from $y'$ to $y$. We can follow this path backwards from $y$ and conclude successively that each complex encountered contains an element of $Z$. Thus $y'$ contains an element of $Z$ and since $y'$ was arbitrary all complexes have this property.
This means that the monomials corresponding to all complexes vanish at $x_*$, so that $x_*$ is a stationary solution.

Siphons can sometimes be used to prove persistence. Suppose that $Z$ is a siphon for a certain network so that the points of $L_Z$ are potential $\omega$-limit points of solutions of the ODE system corresponding to this network. Suppose further that $A$ is a conserved quantity for the system which is a linear combination of the coordinates with positive coefficients. For a positive solution the quantity $A$ has a positive constant value along the solution and hence also has the same value at any of its $\omega$-limit points. It follows that if $A$ vanishes on $L_Z$ then no $\omega$-limit point of that solution belongs to $L_Z$. If it is possible to find a conserved quantity $A$ of this type for each siphon of a given system (possibly different conserved quantities for different siphons) then persistence is proved. For example this strategy is used in the paper of Angeli et al. to prove persistence for the dual futile cycle.

The concept of persistence is an important one when thinking about the general properties of reaction networks. The persistence conjecture says that any weakly reversible reaction network with mass action kinetics is persistent (possibly with the additional assumption that all solutions are bounded). In his talk last week Craciun mentioned that he is working on proving this conjecture. If true it implies the global attractor conjecture. It also implies a statement claimed in a preprint of Deng et al. (arXiv:1111.2386) that a weakly reversible network has a positive stationary solution in any stoichiometric compatibility class. This result has never been published and there seems to be some doubt as to whether the proof is correct.

### Trip to the US

October 5, 2015

Last week I visited a few places in the US. My first stop was Morgantown, West Virginia where my host was Casian Pantea. There I had a lot of discussions with Casian and Carsten Conradi on chemical reaction network theory. This synergized well with the work I have recently been doing preparing a lecture course on that subject which I will be giving in the next semester. I gave a talk on MAPK and got some feedback on that. It rained a lot and there was not much opportunity to do anything except work. One day on the way to dinner while it was relatively dry I saw a Cardinal and I fortunately did have my binoculars with me. On Wednesday afternoon I travelled to New Brunswick and spent most of Thursday talking to Eduardo Sontag at Rutgers. It was a great pleasure to talk to an excellent mathematician who also knows a lot about immunology. He and I have a lot of common interests which is in part due to the fact that I was inspired by several of his papers during the time I was getting into mathematical biology. I also had the opportunity to meet Evgeni Nikolaev who told me a variety of interesting things. They concerned bifurcation theory in general, its applications to the kinds of biological models I am interested in and his successes in applying mathematical models to understanding concrete problems in biomedical research such as the processes taking place in tuberculosis. My personal dream is to see a real coming together of mathematics and immunology and that I have the chance to make a contribution to that process. On Friday I flew to Chicago in order to attend an AMS sectional meeting. I had been in Chicago once before but that is many years ago now.
I do remember being impressed by how much Lake Michigan looks like the sea, I suppose due to the structure of the waves. This impression was even stronger this time since there were strong winds whipping up the waves. Loyola University, the site of the meeting, is right beside the lake and it felt like home for me due to the combination of wind, waves and gulls. The majority of those were Ring-Billed Gulls which made it clear which side of the Atlantic I was on. There were also some Herring Gulls and although they might have been split from those on the other side of the Atlantic by the taxonomists I did not notice any difference. It was the first time I had been at an AMS sectional meeting and my impression was that the parallel sessions were very parallel, in other words in no danger of meeting. Most of the people in our session were people I knew from the conferences I attended in Charlotte and in Copenhagen although I did make a couple of new acquaintances, improving my coverage of the reaction network community.

In a previous post I mentioned Gheorghe Craciun's ideas about giving the deficiency of a reaction network a geometric interpretation, following a talk of his in Copenhagen. Although I asked him questions about this on that occasion I did not completely understand the idea. Correspondingly my discussion of the point here in my blog was quite incomplete. Now I talked to him again and I believe I have finally got the point. Consider first a network with a single linkage class. The complexes of the network define points in the species space whose coordinates are the stoichiometric coefficients. The reactions define oriented segments joining the educt complex to the product complex of each reaction. The stoichiometric subspace is the vector space spanned by the differences of the complexes. It can also be considered as a translate of the affine subspace spanned by the complexes themselves. This makes it clear that its dimension $s$ is at most $n-1$, where $n$ is the number of complexes. The number $s$ is the rank of the stoichiometric matrix. The deficiency is $n-1-s$. At the same time $s\le m$. If there are several linkage classes then the whole space has dimension at most $n-l$, where $l$ is the number of linkage classes. The deficiency is $n-l-s$. If the spaces corresponding to the individual linkage classes have the maximal dimension allowed by the number of complexes in that class and these spaces are linearly independent then the deficiency is zero. Thus we see that the deficiency is the extent to which the complexes fail to be in general position. If the species and the number of complexes have been fixed then deficiency zero is seen to be a generic condition. On the other hand fixing the species and adding more complexes will destroy the deficiency zero condition since then we are in the case $n-l>m$ so that the possibility of general position is excluded. The advantage of having this geometric picture is that it can often be used to read off the deficiency directly from the network. It might also be used to aid in constructing networks with a desired deficiency.

### Oscillations in the MAP kinase cascade

September 10, 2015

In a recent post I mentioned my work with Juliette Hell on the existence of oscillations in the Huang-Ferrell model for the MAP kinase cascade. We recently put our paper on the subject on ArXiv. The starting point of this project was the numerical and heuristic work of Qiao et al., PLoS Comp. Biol. 3, 1819.
Within their framework these authors did an extensive search of parameter space and found Hopf bifurcations and periodic solutions for many parameters. The size of the system is sufficiently large that it represents a significant obstacle to analytical investigations. One way of improving this situation is to pass to a limiting system (MM system) by a Michaelis-Menten reduction. In addition it turns out that the periodic solutions already occur in a truncated system consisting of the first two layers of the cascade. This leaves one layer with a single phosphorylation and one with a double phosphorylation. In a previous paper we had shown how to do Michaelis-Menten reduction for the truncated system. Now we have generalized this to the full cascade. In the truncated system the MM system is of dimension three, which is still quite convenient for doing bifurcation theory. Without truncation the MM system is of dimension five, which is already much more difficult. It is however possible to represent the system for the truncated cascade as a (singular) limit of that for the full cascade and thus transport information from the truncated to the full cascade.

Consider the MM system for the truncated cascade. The aim is then to find a Hopf bifurcation in a three-dimensional dynamical system with a lot of free parameters. Because of the many parameters it is not difficult to find a large class of stationary solutions. The strategy is then to linearize the right hand side of the equations about these stationary solutions and try to show that there are parameter values where a suitable bifurcation takes place. To do this we would like to control the eigenvalues of the linearization, showing that it can happen that at some point one pair of complex conjugate eigenvalues passes through the imaginary axis with non-zero velocity as a parameter is varied, while the remaining eigenvalue has non-zero real part. The behaviour of the eigenvalues can largely be controlled by the trace, the determinant and an additional Hurwitz determinant. It suffices to arrange that there is a point where the trace is negative, the determinant is negative and the Hurwitz quantity passes through zero with non-zero velocity. This we did.

A superficially similar situation is obtained by modelling an in vitro model for the MAPK cascade due to Prabakaran, Gunawardena and Sontag mentioned in a previous post in a way strictly analogous to that done in the Huang-Ferrell model. In that case the layers are in the opposite order and a crucial sign is changed. Up to now we have not been able to show the existence of a Hopf bifurcation in that system and our attempts up to now suggest that there may be a real obstruction to doing so. It should be mentioned that the known necessary condition for a stable hyperbolic periodic solution, the existence of a negative feedback loop, is satisfied by this system.

Now I will say some more about the model of Prabakaran et al. Its purpose is to obtain insights on the issue of network reconstruction. Here is a summary of some things I understood. The in vitro biological system considered in the paper is a kind of simplification of the Raf-MEK-ERK MAPK cascade. By the use of certain mutations a situation is obtained where Raf is constitutively active and where ERK can only be phosphorylated once, instead of twice as in vivo. This comes down to a system containing only the second and third layers of the MAPK cascade with the length of the third layer reduced from three to two phosphorylation states.
The second layer is modelled using simple mass action (MA) kinetics with the two phosphorylation steps being treated as one while in the third layer the enzyme concentrations are included explicitly in the dynamics in a standard Michaelis-Menten way (MM-MA). The resulting mathematical model is a system of seven ODEs with three conservation laws. In the paper it is shown that for given values of the conserved quantities the system has a unique steady state. This is an application of a theorem of Angeli and Sontag. Note that this is not the same system of equations as the system analogous to that of Huang-Ferrell mentioned above. The idea now is to vary one of the conserved quantities and monitor the behaviour of two functions $x$ and $y$ of the unknowns of the system at steady state. It is shown that for one choice of the conserved quantity $x$ and $y$ change in the same direction while for a different choice of the conserved quantity they change in opposite directions when the conserved quantity is varied. From a mathematical point of view this is not very surprising since there is no obvious reason forbidding behaviour of this kind. The significance of the result is that apparently biologists often use this type of variation in experiments to reach conclusions about causal relationships between the concentrations of different substances (activation and inhibition), which can be represented by certain signed oriented graphs. In this context 'network reconstruction' is the process of determining a graph of this type. The main conclusion of the paper, as I understand it, is that doing different experiments can lead to inconsistent results for this graph. Note that there is perfect agreement between the experimental results in the paper and the results obtained from the mathematical model. In a biological system if two experiments give conflicting results it is always possible to offer the explanation that some additional substance which was not included in the model is responsible for the difference. The advantage of the in vitro model is that there are no other substances which could play that role.

### Models for photosynthesis, part 3

September 8, 2015

Here I continue the discussion of models for photosynthesis in two previous posts. There I described the Pettersson and Poolman models and indicated the possibility of introducing variants of these which use exclusively mass action kinetics. I call these the Pettersson-MA and Poolman-MA models. I was interested in obtaining information about the qualitative behaviour of solutions of these ODE systems. This gave rise to the MSc project of Dorothea Möhring which she recently completed successfully. Now we have extended this work a little further and have written up the results in a paper which has just been uploaded to ArXiv. The central issue is that of overload breakdown which is related to the mathematical notion of persistence. We would like to know under what circumstances a positive solution can have $\omega$-limit points where some concentrations vanish and, if so, which concentrations vanish in that case. It seems that there was almost no information on the latter point in the literature so that the question of what exactly overload breakdown is remained a bit nebulous. The general idea is that the Pettersson model should have a stronger tendency to undergo overload breakdown while the Poolman model should have a stronger tendency to avoid it. The Pettersson-MA and Poolman-MA models represent a simpler context to work in to start with.
For the Pettersson-MA model we were able to identify a regime in which overload breakdown takes place. This is where the initial concentrations of all sugar phosphates and inorganic phosphate in the chloroplast are sufficiently small. In that case the concentrations of all sugar phosphates tend to zero at late times with two exceptions. The concentrations of xylulose-5-phosphate and sedoheptulose-7-phosphate do not tend to zero. These results are obtained by linearizing the system around a simple stationary solution on the boundary and applying the centre manifold theorem. Another result is that if the reaction constants satisfy a certain inequality a positive solution can have no positive $\omega$-limit points. In particular, there are no positive stationary solutions in that case. This is proved using a Lyapunov function related to the total number of carbon atoms. In the case of the Poolman-MA model it was shown that the stationary point which was stable in the Pettersson case becomes unstable. Moreover, a quantitative lower bound for the concentration of sugar phosphates at late times is obtained. These results fit well with the intuitive picture of what should happen. Some of the results on the Poolman-MA model can be extended to analogous ones for the original Poolman model. On the other hand the task of giving a full rigorous definition of the Pettersson model was postponed for later work. The direction in which this could go has been sketched in a previous post.

There remains a lot to be done. It is possible to define a kind of hybrid model by setting $k_{32}=0$ in the Poolman model. It would be desirable to completely clarify the definition of the Pettersson model and then, perhaps, to show that it can be obtained as a well-behaved limiting system of the hybrid system in the sense of geometric singular perturbation theory. This might allow the dynamical properties of solutions of the different systems to be related to each other. The only result on stationary solutions obtained so far is a non-existence theorem. It would be of great interest to have positive results on the existence, multiplicity and stability of stationary solutions. A related question is that of classifying possible $\omega$-limit points of positive solutions where some of the concentrations are zero. This was done in part in the paper but what was not settled is whether potential $\omega$-limit points with positive concentrations of the hexose phosphates can actually occur. Finally, there are a lot of other models for the Calvin cycle on the market and it would be interesting to see to what extent they are susceptible to methods similar to those used in our paper.

### Phosphorylation systems

September 1, 2015

In order to react to their environment living cells use signalling networks to propagate information from receptors, which are often on the surface of the cell, to the nucleus where transcription factors can change the behaviour of the cell by changing the rate of production of different proteins. Signalling networks often make use of phosphorylation systems. These are networks of proteins whose enzymatic activity is switched on or off by phosphorylation or dephosphorylation. When switched on they catalyse the (de-)phosphorylation of other proteins. The information passing through the network is encoded in the phosphate groups attached to specific amino acids in the proteins concerned. A frequently occurring example of this type of system is the MAPK cascade discussed in a previous post.
There the phosphate groups are attached to the amino acids serine, threonine and tyrosine. Another type of system, which is common in bacteria, is the two-component system, where the phosphate groups are attached to histidine and aspartic acid. There is a standard mathematical model for the MAPK cascade due to Huang and Ferrell. It consists of three layers, each of which is a simple or dual futile cycle. Numerical and heuristic investigations indicate that the Huang-Ferrell model admits periodic solutions for certain values of the parameters. Together with Juliette Hell we set out to find a rigorous proof of this fact. In the beginning we pursued the strategy of showing that there are relaxation oscillations. An important element of this is to prove that the dual futile cycle exhibits bistability, a fact which is interesting in its own right, and we were able to prove this, as has been discussed here. In the end we shifted to a different strategy in order to prove the existence of periodic solutions. The bistability proof used a quasistationary (Michaelis-Menten) reduction of the Huang-Ferrell system. It applied bifurcation theory to the Michaelis-Menten system and geometric singular perturbation theory to lift this result to the original system. To prove the existence of periodic solutions we used a similar strategy. This time we showed the presence of Hopf bifurcations in a Michaelis-Menten system and lifted those. The details are contained in a paper which is close to being finished. In the meantime we wrote a review article on phosphorylation systems. Here I want to mention some of the topics covered there. In our paper there is also an introduction to two-component systems. A general conclusion of the paper is that phosphorylation systems give rise to a variety of interesting mathematical problems which are waiting to be investigated. It may also be hoped that a better mathematical understanding of this subject can lead to new insights concerning the biological systems being modelled. Biological questions of interest in this context include the following. Are dynamical features of the MAPK cascade such as oscillations desirable for the encoding of information or are they undesirable side effects? To what extent do feedback loops tend to encourage the occurrence of features of this type and to what extent do they tend to suppress them? What are their practical uses, if any? If the function of the system is damaged by mutations how can it be repaired? The last question is of special interest due to the fact that many cancer cells have mutations in the Raf-MEK-ERK cascade and there have already been many attempts to overcome their negative effects using kinase inhibitors, some of them successful. A prominent example is the Raf inhibitor Vemurafenib, which has been used to treat metastatic melanoma.

### Reaction networks in Copenhagen

July 9, 2015

Last week I attended a workshop on reaction network theory organized by Elisenda Feliu and Carsten Wiuf. It took place in Copenhagen from 1st to 3rd July. I flew in late on the Tuesday evening and on arrival I had a pleasant feeling of being in the north just due to the amount and quality of the light. Looking at the weather information for Mainz I was glad I had got a reduction in temperature of several degrees by making this trip. A lot of comments and extra information on the talks at this conference can be found on the blog of John Baez and that of Matteo Polettini.
Now, on my own slower time scale, I will write a bit about things I heard at the conference which I found particularly interesting. The topic of different time scales is very relevant to the theme of the meeting and the first talk, by Sebastian Walcher, was concerned with it. Often a dynamical system of interest can be thought of as containing a small parameter, and letting this parameter tend to zero leads to a smaller system which may be easier to analyse. Information obtained in this way may be transported back to the original system. If the parameter is a ratio of time scales then the limit will be singular. The issue discussed in the talk is that of finding a suitable small parameter in a system when one is suspected. It is probably unreasonable to expect to find a completely general method but the talk presented algorithms which can contribute to solving this type of problem. In the second talk Gheorghe Craciun presented his proof of the global attractor conjecture, which I have mentioned in a previous post. I was intrigued by one comment he made relating the concept of deficiency zero to systems in general position. Later he explained this to me and I will say something about the direction in which this goes. The concept of deficiency is central in chemical reaction network theory but I never found it very intuitive, and I feel safe in claiming that I am in good company as far as that is concerned. Gheorghe’s idea is intended to improve this state of affairs by giving the deficiency a geometric interpretation. In this context it is worth mentioning that there are two definitions of deficiency on the market. I had heard this before but never looked at the details. I was reminded of it by the talk of Jeanne Marie Onana Eloundou-Mbebi in Copenhagen, where it played an important role. She was talking about absolute concentration robustness. The latter concept was also the subject of the talk of Dave Anderson, who was looking at the issue of whether the known results on ACR for deterministic reaction networks hold in some reasonable sense in the stochastic case. The answer seems to be that they do not. But now I return to the question of how the deficiency is defined. Here I use the notation $\delta$ for the deficiency as originally defined by Feinberg. The alternative, which can be found in Jeremy Gunawardena’s text with the title ‘Chemical reaction network theory for in silico biologists’, will be denoted by $\delta'$. Gunawardena, who seems to find the second definition more natural, proves that the two quantities are equal provided a certain condition holds (each linkage class contains precisely one terminal strong linkage class). This condition is, in particular, satisfied for weakly reversible networks and this is perhaps the reason that the difference in definitions is not often mentioned in the literature. In general $\delta\ge\delta'$, so that deficiency zero in the sense of the common definition implies deficiency zero in the sense of the other definition. For a long time I knew very little about control theory. The desire to change this motivated me to give a course on the subject in the last winter semester, using the excellent textbook of Eduardo Sontag as my main source. Since then I had not taken the time to look back on what I learned in the course of doing this, and it became clearer to me only now. In Copenhagen Nicolette Meshkat gave a talk on identifiability in reaction networks.
I had heard her give a talk on a similar subject at the SIAM life science conference last summer and not understood much. I am sure that this was not her fault but mine. This time around things were suddenly clear. The reason is that this subject involves ideas coming from control theory, and through giving the course I had learned to think in some new directions. The basic idea of identifiability is to extract information on the parameters of a dynamical system from the input-output relation. There was another talk with a lot of control theory content by Mustafa Khammash. He had brought some machines with him to illustrate some of the ideas. These were made of Lego, driven by computers and communicating with each other via Bluetooth devices. One of these was a physical realization of one of the favourite simple examples in control theory, stabilization of the inverted pendulum. Another was a robot programmed to come to rest 30 cm in front of a barrier facing it. Next he talked about an experiment coupling living cells to a computer to form a control system. The output from a population of cells was read by a combination of GFP labeling and a FACS machine. After processing of the signal, the resulting input was applied by stimulating the cells with light. This got a lot of media attention under the name ‘cyborg yeast’. After that he talked about a project in which programmes can be incorporated into the cells themselves using plasmids. In one of the last remarks in his talk he mentioned how cows use integral feedback to control the calcium concentration in their bodies. I think it would be nice to incorporate this into popular talks or calculus lectures in the form ‘cows can do integrals’ or ‘cows can solve differential equations’. The idea would be to have a striking example of what the abstract things done in calculus courses have to do with the real world. My talk at the conference was on phosphorylation systems and interestingly there was another talk there, by Andreas Weber, which had a possibly very significant relation to this. I only became aware of the existence of the corresponding paper (Errami et al., J. Comp. Phys. 291, 279) a few weeks ago, and since it involves a lot of techniques I am not too familiar with and has a strong computer science component I have only had a limited opportunity to understand it. I hope to get deeper into it soon. It concerns a method of finding Hopf bifurcations. This conference was a great chance to maintain and extend my contacts in the community working on reaction networks and get various types of inside information on the field.

### Models for photosynthesis, part 2

May 22, 2015

In my previous post on this subject I discussed the question of the status of the variables in the Poolman model of photosynthesis, and in the end I was convinced that I had understood which concentrations are to be considered as dynamical unknowns and which as constants. The Poolman model is a modified version of the Pettersson model and the corresponding questions about the nature of the variables have the same answers in both cases. What I am calling the Pettersson model was introduced in a paper of Pettersson and Ryde-Pettersson (Eur. J. Biochem. 175, 661) and there the description of the variables and the equations is rather complete and comprehensible. Now I will go on to consider the second question raised in the previous post, namely what the evolution equations are.
The evolution equations in the Poolman model are modifications of those in the Pettersson model and are described relative to those in the original paper on the former model. For this reason I will start by describing the equations for the Pettersson model. As a preparation for that I will treat a side issue. In a reaction network a reaction whose rate depends only on the concentrations of the substances consumed in the reaction is sometimes called NAC (for non-autocatalytic). For instance this terminology is used in the paper of Kaltenbach quoted in the previous post. The opposite of NAC is the case where the reaction rate is modulated by the concentrations of other substances, such as activators or inhibitors. The unknowns in the Pettersson model are concentrations of substances in the stroma of the chloroplast. The substances involved are 15 carbohydrates bearing one or more phosphate groups, inorganic phosphate and ATP, thus 17 variables in total. In addition to ordinary reactions between these substances there are transport processes in which sugar phosphates are moved from the stroma to the cytosol in exchange for inorganic phosphate. For brevity I will also refer to these as reactions. The total amount of phosphate in the stroma is conserved and this leads to a conservation law for the system of equations, a fact explicitly mentioned in the paper. On the basis of experimental data some of the reactions are classified as fast and it is assumed that they are already at equilibrium. They are also assumed to be NAC and to have mass-action kinetics. This defines a set of algebraic equations (a schematic mass-action example of such an equilibrium relation is sketched at the end of this summary). These are to be used to reduce the 17 evolution equations, which are in principle there, to five equations for certain linear combinations of the variables. The details of how this is done are described in the paper. I will now summarize how this works. The time derivatives of the 16 variables other than inorganic phosphate are given in terms of linear combinations of 17 reaction rates. Nine of these reaction rates, which are not NAC, are given explicitly. The others have to be treated using the 11 algebraic equations coming from the fast reactions. The right hand sides $F_i$ of the five evolution equations mentioned already are linear combinations of those reaction rates which are given explicitly. These must be expressed in terms of the quantities whose time derivatives are on the left hand side of these equations, using the algebraic equations coming from the fast reactions and the conservation equation for the total amount of phosphate. In fact all unknowns can be expressed in terms of the concentrations of RuBP, DHAP, F6P, Ru5P and ATP. Call these quantities $s_i$. Thus if the time derivatives of the $s_i$ can be expressed in terms of the $F_i$ we are done. It is shown in the appendix to the paper how a linear combination of the time derivatives of the $s_i$ with coefficients only depending on the $s_i$ is equal to $F_i$. Moreover it is stated that the time derivatives of the $s_i$ can be expressed in terms of these linear combinations. Consider now the Poolman model. One way in which it differs from the Pettersson model is that starch degradation is included. The other is that while the kinetics for the ‘slow reactions’ (i.e. those which are not classified as fast in the Pettersson model) are left unchanged, the equilibrium assumption for the fast reactions is dropped. Instead the fast reactions are treated as reversible with mass action kinetics.
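To make the fast-equilibrium reduction concrete, here is a minimal schematic example in my own notation (not the notation of the paper): if a fast reversible reaction $A+B \rightleftharpoons C$ has mass action kinetics with rate constants $k_+$ and $k_-$, the equilibrium assumption replaces the differential equation for this step by the algebraic relation $k_+ab=k_-c$, i.e. $c=Kab$ with $K=k_+/k_-$, where $a$, $b$, $c$ denote the concentrations. Each fast reaction contributes one relation of this type, and in the Pettersson model the 11 relations of this kind, together with the phosphate conservation law, are what cut the 17 unknowns down to the five quantities $s_i$ ($17-11-1=5$).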
In the thesis of Sergio Grimbs (Towards structure and dynamics of metabolic networks, Potsdam 2009) there is some discussion of the models of Poolman and Pettersson. It is investigated whether information about multistability in these models can be obtained using ideas coming from chemical reaction network theory. Since the results from CRNT considered require mass action kinetics, it is implicit in the thesis that the systems being considered are those obtained by applying mass action kinetics to all reactions in the networks for the Poolman and Pettersson models. These systems are therefore strictly speaking different from those of Pettersson and Poolman. In any case it turned out that these tools were not useful in this example since the simplest results did not apply and for the more complicated computer-assisted ones the systems were too large. In the Pettersson paper the results of computations of steady states are presented and a comparison with published experimental results looks good in a graph presented there. So why can we not conclude that the problem of modelling the dynamics of the Calvin cycle was pretty well solved in 1988? The paper contains no details on how the simulations were done and so it is problematic to repeat them. Jablonsky et al. set up simulations of this model on their own and found results very different from those reported in the original paper. In this context the advantage of the Poolman model is that it has been put into the BioModels database, so that the basic data is available to anyone with the necessary experience in doing simulations for biochemical models. Forgetting the issue of the reliability of their simulations, what did Pettersson and Ryde-Pettersson find? They saw that depending on the external concentration of inorganic phosphate there is either no positive stationary solution (for high values of this parameter) or two (for low values) with a bifurcation in between. When there are two stationary solutions one is stable and one unstable. It looks like there is a fold bifurcation. There is a trivial stationary solution with all sugar concentrations zero for all values of the parameter. When the external phosphate concentration tends to zero the two positive stationary solutions coalesce with the trivial one. The absence of positive stationary solutions for high phosphate concentrations is suggested to be related to the concept of ‘overload breakdown’. This means that sugars are being exported so fast that the production from the Calvin cycle cannot keep up and the whole system breaks down. It would be nice to have an analytical proof of the existence of a fold bifurcation for the Pettersson model but that is probably very difficult to get.

### Models for photosynthesis

May 15, 2015

Photosynthesis is a process of central importance in biology. There is a large literature on modelling this process. One step is to identify networks of chemical reactions involved. Another is to derive mathematical models (usually systems of ODEs) from these networks. Here when I say ‘model’ I mean ‘mathematical model’ and not the underlying network. In a paper by Jablonsky et al. (BMC Systems Biology 5: 185) existing models are surveyed and a number of errors and inconsistencies in the literature are pointed out. This reminded me of the fact that a widespread problem in the biological literature is that the huge amount of data being generated these days contains very many errors.
Here I want to discuss some issues related to this, concentrating on models for the Calvin cycle of photosynthesis and, in particular, on what I will call the Poolman model. A point which might seem obvious and trivial to the mathematician is that a description of a mathematical model (I consider here only ODE models) should contain a clear answer to the following two questions. 1) What are the unknowns? 2) What are the evolution equations? One source of ambiguity involved in the first question is the impossibility of modelling everything. It is usually unreasonable to model a whole organism, although this has been tried for some simple ones. Even if it were possible, the organism is in interaction with other organisms and its environment and these things cannot also be included. In practice it is necessary to fix a boundary of the system we want to consider and cut there. One way of handling the substances outside the cut in a mathematical model is to set their concentrations to constant values, thus implicitly assuming that to a good approximation these are not affected by the dynamics within the system. Let us call these external species and the substances whose dynamics is included in the model internal species. Thus part of answering question 1) is to decide which species are to be treated as internal. In this post I will confine myself to discussing question 1), leaving question 2) for a later date. Suppose we want to answer question 1) for a model in the literature. What are potential difficulties? In biological papers the equations (and even the full list of unknowns) are often banished to the supplementary material. In addition to being less easy to access and often less easy to read (due to typographical inferiority) than the main text, I have the feeling that this supplementary material is often subjected to less scrutiny by the referees and by the authors, so that errors or incompleteness can occur more easily. Sometimes this information is only contained in some files intended to be read by a computer rather than a human being and it may be necessary to have, or be able to use, special software in order to read them in any reasonable way. Most of these difficulties are not absolute in nature. It is just that the mathematician embarking on such a process should ideally be aware of some of the challenges awaiting him in advance. How does this look in the case of the Poolman model? It was first published in a journal in a paper of Poolman, Fell and Thomas (J. Exp. Botany, 51, 319). The reaction network is specified by Fig. 1 of the paper. This makes most of the unknowns clear but leaves the following cases where something more needs to be said. Firstly, it is natural to take the concentration of ADP to be defined implicitly through the concentration of ATP and the conservation of the total amount of adenosine phosphates. Secondly, it is explicitly stated that the concentrations of NADP and NADPH are taken to be constant, so that these are clearly external species. Presumably the concentration of inorganic phosphate in the stroma is also taken to be constant, so that this is also an external variable, although I did not find an explicit statement to this effect in the paper. The one remaining possible ambiguity involves starch – is it an internal or an external species in this model? I was not able to find anything directly addressing this point in the paper. On the other hand the paper does refer to the thesis of Poolman and some internet resources for further information.
In the main body of the thesis I found no explicit resolution of the question of external phosphate, but there it does seem that this quantity is treated as an external parameter. The question of starch is particularly important since this is a major change in the Poolman model compared to the earlier Pettersson model on which it is based, and since Jablonsky et al. claim that there is an error in the equation describing this step. It is stated in the thesis that ‘a meaningful concentration cannot be assigned to’ … ‘the starch substrate’, which seems to support my impression that starch is treated as an external species. Finally, a clear answer confirming my suppositions above can be found in Appendix A of the thesis, which describes the computer implementation. There we find a list of variables and constants, and the latter are distinguished by being preceded by a dollar sign. So is there an error in the equation for starch degradation used in the Poolman model? My impression is that there is not, in the sense that the desired assumptions have been implemented successfully. The fact that Jablonsky et al. get the absurd result of negative starch concentrations is because they compute an evolution for starch, which is an external variable in the Poolman model. What could be criticised in the Poolman model is that the amount of starch in the chloroplast varies a lot over the course of the day. Thus a model with starch as an external variable could only be expected to give a good approximation to reality on timescales much shorter than one day.

### The species-reaction graph

May 14, 2015

In the study of chemical reaction networks important features of the networks are often summarised in certain graphs. Probably the most frequently used is the species graph (or interaction graph), which I discussed in a previous post. The vertices are the species taking part in the network and the edges are related to the non-zero entries of the Jacobian matrix of the vector field defining the dynamics. Since the Jacobian matrix depends in general on the concentrations at which it is evaluated, there is a graph (the local graph) for each set of values of the concentrations. Sometimes a global graph is defined as the union of the local graphs. A sign can be attached to each edge of the local graph according to the sign of the corresponding partial derivative. In the case, which does occur quite frequently in practice, that the signs of the partial derivatives are independent of the concentrations, the distinction between local and global graphs is not necessary. In the general case a variant of the species graph has been defined by Kaltenbach (arXiv:1210.0320). In that case there is a directed edge from vertex $i$ to vertex $j$ if there is any set of concentrations for which the corresponding partial derivative is non-zero, and instead of being labelled with a sign the edge is labelled with a function, namely the partial derivative itself. Another more complicated graph is the species-reaction graph or directed species-reaction graph (DSR graph). As explained in detail by Kaltenbach the definition (and the name of the object) are not consistent in the literature. The species-reaction graph was introduced in a paper of Craciun and Feinberg (SIAM J. Appl. Math. 66, 1321). In a parallel development which started earlier Mincheva and Roussel (J. Math. Biol.
55, 61) developed results using this type of graph based on ideas of Ivanova which were little known in the West and for which published proofs were not available. In the sense used by Kaltenbach the DSR graph is an object closely related to his version of the interaction graph. It is a bipartite graph (i.e. there are two different types of vertices and each edge connects a vertex of one type with a vertex of the other). In the DSR graph the species define vertices of one type and the reactions the vertices of the other type. There is a directed edge from species $i$ to reaction $j$ if species $i$ is present on the LHS of reaction $j$. There is a directed edge from reaction $i$ to species $j$ if the net production of species $j$ in reaction $i$ is non-zero. The first type of edge is labelled by the partial derivative of flux $j$ with respect to species $i$. The second type is labelled by the corresponding stoichiometric coefficient. The DSR graph determines the interaction graph. The paper of Soliman I mentioned in a recent post uses the DSR graph in the sense of Kaltenbach. A type of species-reaction graph has been used in quite a different way by Angeli, de Leenheer and Sontag to obtain conditions for the monotonicity of the equations for a network written in reaction coordinates.
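As a minimal illustration of these definitions (my own example, not taken from the papers cited): for the single reaction $r: A+B\to 2B$ with rate $v(a,b)$, the DSR graph has directed edges $A\to r$ and $B\to r$ labelled by $\partial v/\partial a$ and $\partial v/\partial b$ (both species appear on the LHS), an edge $r\to A$ labelled $-1$ (the net production of $A$) and an edge $r\to B$ labelled $1$ (two molecules produced, one consumed, so the net production is non-zero).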
http://math.stackexchange.com/questions/157685/characterization-of-ideals-in-rings-of-fractions
# Characterization of ideals in rings of fractions

Let $R$ be a commutative unital ring. Let $S$ be a multiplicative subset. Is there a characterisation of the ideals in the ring of fractions $S^{-1}R$ in terms of ideals $I$ in $R$ and the set $S$?

- I should think that the ideals of $S^{-1}R$ should somehow depend on both $R$ and $S$. If $R$ is an integral domain and $S = R - \{0\}$, then $S^{-1}R$ is a field and has no nontrivial ideals. However if $S = \{1\}$, then the ideals of $S^{-1}R$ correspond to the ideals of $R$. – William Jun 13 '12 at 8:10
- @William Good point, thank you! – Matt N. Jun 13 '12 at 8:13
- @William We usually don't downvote without explanation here on SE. – Matt N. Jun 13 '12 at 8:25
- +1 just because of that downvote that was not ok – Belgi Jun 13 '12 at 8:27
- @Matt N. I didn't downvote. – William Jun 13 '12 at 8:29

I don't believe in general you can say much about how ideals in $R$ are related to those in the localization. I will state two instances where we do have some correspondence:

Prime ideals: The prime ideals of $S^{-1}R$ are in one to one correspondence with prime ideals of $R$ that don't meet $S$. That is to say, prime ideals $\mathfrak{p} \subset R$ such that $\mathfrak{p} \cap S = \emptyset$. The correspondence is given by $\mathfrak{p} \leftrightarrow S^{-1}\mathfrak{p}$. The proof of this fact is not hard: one direction is easy, and for the other you just need to use the isomorphism $\overline{S}^{-1}(R/I) \cong S^{-1}R/S^{-1}I$. This isomorphism comes from applying $S^{-1}$ to the exact sequence $$0 \longrightarrow I \longrightarrow R \longrightarrow R/I \longrightarrow 0$$ of $R$-modules. By $\overline{S}$ I mean the image of $S$ in the quotient, which is still a multiplicative set.

Ideals $I\subset R$ such that $xs \in I$ implies that $x \in I$ for all $s \in S$: It is not hard to show that for any ideal $J$ of $S^{-1}R$, we have that $(J^c)^e = J$, where $()^c$ denotes contraction and $()^e$ denotes extension. Now if we are given an ideal $I$ of $R$ instead, it is not true that $(I^e)^c = I$. What is true though (which is not hard to prove) is that $$(I^e)^c = \{r \in R: rs \in I \hspace{2mm} \text{for some} \hspace{2mm} s \in S\}.$$ It is easy to see from here that the processes of extension and contraction define the following bijective correspondence: $$\{\text{ideals } I \text{ of } R \text{ such that } xs \in I \implies x \in I \text{ for all } s \in S\} \leftrightarrow \{\text{ideals of } S^{-1}R\}.$$

- Thank you! And can we also say something about non-prime ideals? – Matt N. Jun 13 '12 at 8:31
- @MattN. Please see my edit. – fpqc Jun 13 '12 at 8:33
- Ooh, this is the stuff about contractions and extensions I'm about to read in the next chapter! I should be more patient : ) You are awesome. Thanks! – Matt N. Jun 13 '12 at 8:34
- @MattN. No problem. You have flooded math.se with a lot of commutative algebra posts recently! By the way for your tensor products question, I have posted an answer showing how to show the isomorphism using the universal property of the tensor product. – fpqc Jun 13 '12 at 8:36
- So the down vote is from you as a punishment for flooding SE with CA questions? I'll have a look at it later, I'm in quite a rush to get through the book. – Matt N. Jun 13 '12 at 8:38

Proposition: Let $R$ be a commutative ring with unity. Proper ideals of the ring of fractions $D^{-1}R$ are of the form $\displaystyle D^{-1}I = \bigg\{ \frac{i}{d} : i \in I,\ d \in D\bigg\}$ with $I$ an ideal of $R$ and $I \cap D = \emptyset$.

Proof: Let $J$ be a proper ideal of $D^{-1}R$.
Let $I = J \cap R$ and observe that $I$ is an ideal of $R$. Suppose to the contrary that $I \cap D \neq \emptyset$. Let $d \in I \cap D$; then $d \in I$. Observe that $\displaystyle \frac{d}{1} \in J$. Moreover, since $J$ is an ideal, it must absorb any element from $D^{-1}R$. Observe that $\displaystyle \frac1d \in D^{-1}R$. Hence it must follow that $\displaystyle \frac1d \cdot \frac{d}{1} = 1 \in J$, and thus $J$ contains a unit, which implies $J = D^{-1}R$, a contradiction to $J$ being proper. Thus $I \cap D = \emptyset$.

Let $j \in J$. Observe that $\displaystyle j = \frac{i}{d} = \frac{1}{d}\frac{i}{1}$ for some $i \in R$, $d \in D$. Since $J$ is an ideal of $D^{-1}R$ it must absorb $\displaystyle \frac{d}{1}$, and thus $\displaystyle \frac{d}{1}\bigg(\frac{1}{d}\frac{i}{1}\bigg) = \frac{i}{1} \in J$. Now since $I = J \cap R$ and $\frac{i}{1} = i \in J \cap R$, it follows that $i \in I$. Hence $J \subseteq D^{-1}I$.

Let $x \in D^{-1}I$ where $\displaystyle x = \frac{i}{d}$ for some $i \in I$ and some $d \in D$. Since $i \in I = J \cap R$, then $i \in J$, and since $J$ absorbs $\frac1d$ it follows that $x \in J$. Hence $D^{-1}I \subseteq J$. Thus we can conclude that $J = D^{-1}I$.

As an example you might like to consider $R = \mathbb Z$ and $D = \{12^i : i = 0, 1, 2, \ldots\}$. You can see that $I = D^{-1}2\mathbb Z$ is not a proper ideal because you will get a unit with $r = 6$, $i = \frac{2}{12}$.

Proposition: Let $R$ be a commutative ring with unity. Prime ideals in $D^{-1}R$ are of the form $D^{-1}P$ where $P$ is a prime ideal of $R$ and $P \cap D = \emptyset$.

Proof: Suppose $Q$ is a prime ideal of $\displaystyle D^{-1}R = \bigg\{ \frac{r}{d} : r \in R,\ d \in D \bigg\}$. Set $P = Q \cap R$. Suppose to the contrary that $P \cap D \neq \emptyset$. Choose $d \in P \cap D$. Observe that $\frac{d}{1} \in Q$. Moreover $\frac1d \in D^{-1}R$. Since $Q$ is an ideal, it follows that $\frac1d \frac{d}{1} = 1 \in Q$, which implies that $Q = D^{-1}R$, a contradiction to $Q$ being a prime ideal, which by definition is proper. Hence $P \cap D = \emptyset$.

Let $q \in Q$. Then $q = \frac{r_1}{d_1}$ for $r_1 \in R$ and $d_1 \in D$. Observe that $\frac{d_1}{1} \in D^{-1}R$. So, by the property of $Q$ being an ideal, $\frac{d_1}{1} \frac{r_1}{d_1} = \frac{r_1}{1} \in Q$, and since $P = Q \cap R$, then $r_1 \in P$. Hence we have $q = \frac{r_1}{d_1} \in D^{-1}P$. Thus $Q \subseteq D^{-1}P$.

Let $x \in D^{-1}P$. Then $x = \frac{p_1}{d_1}$ where $p_1 \in P$ and $d_1 \in D$. Since $P = Q \cap R$ and $p_1 \in P$ it follows that $p_1 \in Q$, so $x \in Q$. Hence $D^{-1}P \subseteq Q$. Conclude that $Q = D^{-1}P$.

- Using a similar argument as I did for proper ideals, I showed the case for prime ideals which @BenjaLim showed. – Robert Feb 7 '13 at 20:11
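To see the correspondence at work in the example already used above (my own elaboration): with $R=\mathbb Z$ and $D=\{12^i\}$ we have $D^{-1}R=\mathbb Z[1/12]$, and the saturated ideals $n\mathbb Z$ (those with $xs\in n\mathbb Z \Rightarrow x\in n\mathbb Z$ for all $s\in D$) are exactly those with $\gcd(n,6)=1$. So the ideals of $\mathbb Z[1/12]$ are the $D^{-1}(n\mathbb Z)$ with $\gcd(n,6)=1$, and the prime ideals are the $D^{-1}(p\mathbb Z)$ with $p$ prime, $p\neq 2,3$. Note also that different ideals of $\mathbb Z$ can extend to the same ideal: $10\mathbb Z$ and $5\mathbb Z$ are both disjoint from $D$, and since $2$ becomes a unit, $D^{-1}(10\mathbb Z)=D^{-1}(5\mathbb Z)$; the saturated representative is $5\mathbb Z$.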
https://stats.stackexchange.com/questions/253407/how-to-calculate-information-included-in-rs-confusion-matrix
# How to calculate information included in R's confusion matrix

I want to know the formulas for the information highlighted in the confusion matrix in the picture below:

• Although the code & output depicted are from R, this isn't a programming question. The question asks for the formulas used to produce the highlighted information. Those formulas are mathematical &/or statistical in nature. This should be considered on topic here, IMO. Dec 27, 2016 at 21:22

library(caret)
mat = matrix(c(55,34,56,255), ncol=2, byrow=TRUE)
mat  # output omitted
confusionMatrix(as.table(mat), positive="B")
# Confusion Matrix and Statistics
#
#      A   B
#   A 55  34
#   B 56 255
#
#                Accuracy : 0.775
#                  95% CI : (0.7309, 0.815)
#     No Information Rate : 0.7225
#     P-Value [Acc > NIR] : 0.00994
#
#                   Kappa : 0.4024
#  Mcnemar's Test P-Value : 0.02686
#
#             Sensitivity : 0.8824
#             Specificity : 0.4955
#          Pos Pred Value : 0.8199
#          Neg Pred Value : 0.6180
#              Prevalence : 0.7225
#          Detection Rate : 0.6375
#    Detection Prevalence : 0.7775
#       Balanced Accuracy : 0.6889
#
#        'Positive' Class : B

This comes from the caret package in R. It presents a confusion matrix, which is a contingency table of the predicted and actual classes from some classifier, with some information about the confusion matrix that can help you interpret different aspects of the quality of the classifier.

The Accuracy is the proportion of the cases that were classified correctly. You can see that there were 400 cases, and the classifier correctly said A and B (in your case 0 and 1) 55 and 255 times, respectively. Thus, there were 310 correct classifications out of 400 for 77.5% correct. Because each classification is either correct or incorrect, that value is a binomial and we can form a confidence interval for it, just as we can for any other binomial. There are lots of ways to calculate confidence intervals for a binomial.

Because 289 of the 400 cases were in the 'positive' class, you could just say "B" for every case without fitting any classifier and still get 72.25% correct. That doesn't sound so bad, but in fact, it means you don't know anything relevant to the situation at all, so we want to take that into account. That is called the "no information rate". Before you think your model is accurate, or provides valuable information, you want your accuracy to be greater than that. You might further want to know if your accuracy doesn't just happen to be above that level, but if it is significantly greater than the no information rate. A test of whether an observed proportion is greater than some specified value is a simple one-tailed binomial test. The p-value from that test is what P-Value [Acc > NIR] displays.

That p-value only tells you that your accuracy is better; you may also want to know if your model is biased. That is, does your classifier tend to say B / 1 more often than is really the case? You could test the proportion the model says B against the observed proportion B, but that wouldn't really take the structure of the confusion matrix into account. It wouldn't recognize that the underlying data are non-independent, and therefore doesn't deal with the non-independence appropriately (from a statistical point of view). The test that does recognize these things is McNemar's test. I discuss McNemar's test here and here, which you may want to read to get a fuller overview. Your last highlighted line is the p-value from McNemar's test of the confusion matrix.

• +1 nice to see you answer these types of questions.
Aug 21, 2017 at 16:48

Wonderful answers given above up to McNemar's test. Below are notes for the rest of the calculations.

====================
Sensitivity : 0.8824
Specificity : 0.4955
Pos Pred Value : 0.8199
Neg Pred Value : 0.6180
Prevalence : 0.7225
Detection Rate : 0.6375
Detection Prevalence : 0.7775
Balanced Accuracy : 0.6889
'Positive' Class : B
====================

caret's confusionMatrix function prints the above type of output. The predicted classes are rows and the actual cases are columns. So the ACTUAL Negative cases are (55+56) 111, and the ACTUAL Positive cases are (34+255) 289. The PREDICTED Negative cases are (55+34) 89, and the PREDICTED Positive cases are (56+255) 311.

Sensitivity = the rate of True Positives captured by the algorithm, (255/289) = 0.882353 (0.8824).
Specificity = the rate of True Negatives captured by the algorithm, (55/111) = 0.495495 (0.4955).

In the above we used the actual marginal totals for calculating the rates. In the below we will use the predicted marginal totals for calculating the rates, for Positive Predictive Value (Pos Pred Value) and Negative Predictive Value (Neg Pred Value).

Pos Pred Value = the rate of Positives captured among the total Pos Predicted, 255/311 = 0.8199.
Neg Pred Value = 55/89 = 0.6180.

Prevalence is the rate of "All ACTUAL Positives" in the whole population = 289/GrandTotal, 289/400 = 0.7225.
Detection Rate is how much is detected among the total population = 255/400 = 0.6375.
Detection Prevalence is the rate of "All PREDICTED Positives" in the whole population = 311/400 = 0.7775.
Balanced Accuracy is the average of Sensitivity and Specificity = (0.882352+0.495495)/2 = 0.6889.

Additional notes, which are not printed by caret's confusionMatrix function but are commonly used:

Precision is useful in an actual targeting plan; we can only act on predictions, so this is useful: same as Pos Pred Value = 0.8199.
Recall is useful for making judgements about the model, since we want to see how much the model is able to recall correctly: same as Sensitivity = 0.8824.
F1: the equally weighted average of Precision and Recall; because these are rates, we have to use the harmonic mean of Precision and Recall, 2*[(Precision*Recall)/(Precision+Recall)] = 2[(0.8199 x 0.8824)/(0.8199 + 0.8824)] = 0.8500.
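Tying the two answers together, the headline numbers can be reproduced by hand with stock base-R functions. A minimal sketch, assuming (as the output suggests) that caret uses the exact binomial (Clopper-Pearson) interval and McNemar's test with the default continuity correction:

mat <- matrix(c(55, 34, 56, 255), ncol = 2, byrow = TRUE)
correct <- sum(diag(mat))            # 55 + 255 = 310 correct classifications
total <- sum(mat)                    # 400 cases in all
nir <- max(colSums(mat)) / total     # 289/400 = 0.7225, the no information rate

# Accuracy with its exact binomial CI, and the one-sided test of Acc > NIR
binom.test(correct, total)$conf.int                 # approx (0.7309, 0.8150)
binom.test(correct, total, p = nir,
           alternative = "greater")$p.value         # approx 0.00994

# McNemar's test uses only the off-diagonal cells: (|34-56|-1)^2/(34+56)
mcnemar.test(mat)$p.value                           # approx 0.02686

These match the printed Accuracy CI, P-Value [Acc > NIR] and Mcnemar's Test P-Value above.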
https://www.physicsforums.com/threads/electrons-vs-quarks.325732/
# Electrons vs Quarks

1. Jul 18, 2009

### sidmontu

Hi, I am currently a student, still grasping some basic concepts of quantum mechanics. I've been reading some books, and the model on quarks intrigues me. There's something I'd like to clarify though.

Mass
Up Quark - 1.5 to 3.3 MeV/c²
Down Quark - 3.5 to 6.0 MeV/c²
Electron - 0.511 MeV/c²
Proton radius - 1.0 x 10^-15 m (3 times smaller than an electron)
-----------------------

So electrons have a charge that is 3 times stronger than a down quark, have a radius that is at least 6 times bigger than a down quark, yet they weigh about 6 to 12 times less than a down quark.

1) Am I right in saying that? Or did I get some values wrong? Because it seems quite absurd to me the way an electron's mass, size and charge compare to a down quark.

2) Also, why are there differing masses of each quark (e.g. 1.5 to 3.3 MeV/c²) whereas electrons have a fixed known mass value of 0.511 MeV/c²? Is this due to experimentation error due to the difficulty of measuring the mass of a quark?

Thanks.

2. Jul 18, 2009

### clem

The "classical radius" of the electron is just a dimensional construct and has nothing to do with the radius of the electron (although some classical physicists may have thought it did). As far as it can be measured, and in current theory, the electron is a point particle.

3. Jul 18, 2009

### Staff: Mentor

The experimental upper limit for the electron radius, from scattering experiments, is something like $10^{-20}$ m. (This means that we haven't detected an effect that would be caused by a nonzero radius, but because of experimental uncertainty we wouldn't have been able to detect anything smaller.) It's difficult to measure properties of individual quarks because we can't isolate them.

4. Jul 18, 2009

### sidmontu

Hi, thanks for the replies.

clem: I understand that the electron is regarded as a point particle for simplicity's sake in models. It makes it easier to do standard mathematical calculations if you consider it as a singularity. Am I right?

jtbell: I did notice the 10^-20 upper limit for the radius due to the scattering experiments as pointed out in a Wikipedia article on electrons. But if that's the case, that makes an electron at least 5000 times smaller than a proton, and in turn at least about 2500 times smaller than a quark. Still a pretty huge number when you consider its mass is only 6 to 12 times smaller than a down quark. That makes an electron very dense?

So theoretically, two up quarks (of first generation) should have identical masses, and the current value of mass (1.5 to 3.3 MeV/c²) is due to experimental limitations?

Thanks again.

5. Jul 18, 2009

### HallsofIvy

Staff Emeritus

I think it is more correct to say that at sub-atomic levels, the whole notion of "radius" or "size" in general becomes ambiguous.

6. Jul 18, 2009

### clem

It is not just for simplicity. Most quantum theories of the electron really mean it is a point particle. The problem with the teaching of physics is that classical physics is covered for the first two years, which makes it very hard to think like a quantum mechanic. You have to consciously disregard much of your classical training. The upper limit is just an upper limit, related to experimental precision. There is no good estimate of the size of a quark, other than it is consistent with also being a point particle. The classical concept of "density" is meaningless for a quantum point particle. All quarks of the same flavor have the same mass.
The "mass" of a quark cannot be measured as directly as the electron or proton mass. The quark mass appears as a parameter in theoretical models, and its value can be different for different models.

Last edited: Jul 18, 2009

7. Jul 18, 2009

### humanino

The current quark mass is renormalization scheme dependent at next-to-next-to-leading order (IIRC); the scheme usually chosen is MS-bar, and the above quoted mass is the one for instance given on the PDG web site. For light quarks, we use chiral perturbation theory, which as usual requires an absolute scale to be determined otherwise. Its uncertainty is experimental. In principle one can go from one scheme to another to relate different values in different schemes. See the review on quark masses.
https://mathspace.co/textbooks/syllabuses/Syllabus-453/topics/Topic-8404/subtopics/Subtopic-111172/?activeTab=theory
# Calculating Trigonometric Expressions

Lesson

Image source: Wikipedia

## Trigonometry: A definition

Trigonometry is the study of angles and the edges that they are made up from. It forms a very important part of a number of professional fields including science, engineering and other areas of applied maths. However, trigonometry is also used often by builders, carpenters, surveyors, architects, town planners and many other professions.

### So what's the big deal?

Trigonometry allows us to work with angles and lines that form triangles, helping us to figure out properties of certain objects. Using trigonometry we can find how tall something is, how far away it is, or what its angle of elevation is. On top of this, we can extend these ideas to countless other areas such as analysing projectiles and their velocities, wave motion, complex numbers and into 3 dimensions (like in the image above, where the angles of the robotic arm and their positions are found using trigonometry). As an old area of maths, dating back to early centuries, trigonometry has proven to be a fundamental element in our understanding of both the Earth and the space we live in.

The interactive below shows how the trigonometric ratios change with different angles. Take a look at what happens to the tangent function when the angle is 90 degrees (you can change the scale to zoom in and out). What does this really mean? What are the opposite and adjacent lengths? Similarly look at the cosine of 0 or the sine of 90. If you want to see how this interactive works you can watch this short video.

#### Worked Examples

##### QUESTION 1

Round $62^\circ19'$ to the nearest degree.

##### QUESTION 2

Convert $13^\circ36'35''$ to degrees correct to 2 decimal places.

##### QUESTION 3

Evaluate $12\cos25^\circ10'$ correct to 2 decimal places.

##### QUESTION 4

Evaluate the following expression, writing your answer correct to two decimal places.

$\frac{\cos16^\circ}{\tan30^\circ+\cos18^\circ}$

## Degrees minutes and seconds in angles

Angles as we know are measured in degrees. But did you know that if we break an angle up into its smaller parts we get minutes and seconds?

1 degree is made up of 60 minutes, and every 1 minute is made up of 60 seconds. This means that this is a base $60$ system. Instead of a base $10$ system as for our decimal numbers, to move between the 'columns' we multiply or divide by $60$, and to move 2 columns we move by $3600$ ($60\times60$).

### Conversions between notations

A base 60 system means that we need to be extra careful when converting between the decimal version of an angle and the DMS (degree, minute, second) notation.

#### DMS to decimal

To convert from DMS notation to decimal we convert each part separately.

#### Decimal to DMS

• Firstly remove the whole number, this is the whole degrees.
• Then take the decimal and multiply by $60$.
• Remove that whole number, this is the number of minutes.
• Take the decimal component of that number and multiply by $60$. This is the number of seconds. Round this to the required decimal places as necessary.

#### Examples

##### Question 1

Convert $35.5$ degrees into degrees, minutes and seconds.

Think: A common misconception is to think that the decimal point means that $35.5$ degrees is the same as $35^\circ5'$. It's not! I do know that $0.5$ of anything means half. This is half of a degree, which is $0^\circ30'$, because there are $60'$ in a degree.
Do: $35.5^\circ$ is $35$ whole degrees and $0.5$ of a degree. So, $35.5^\circ$ is $35^\circ30'$.

##### Question 2

Convert $28.42^\circ$ into DMS notation.

$28^\circ$ — firstly remove the whole number, this is the whole degrees.
$0.42\times60 = 25.2'$ — then take the decimal and multiply by $60$.
$0^\circ25'$ — remove that whole number, this is the number of minutes.
$0.2\times60 = 12''$ — take the decimal component of that number and multiply by $60$. This is the number of seconds. Round this to the required decimal places as necessary.
$28.42^\circ = 28^\circ25'12''$ — final answer!

Often though you will use a scientific calculator to perform a lot of these conversions. Make sure you know how to use your calculator correctly.

##### Question 3

Round $58^\circ28'$ to the nearest degree.

Think: when rounding I need to identify halfway. For minutes and seconds halfway is at $30$. So if the value is more than $30$, I will round up. If it is less than $30$ I will round down.

Do: $0^\circ28'$ is less than $0^\circ30'$, so I will round down. $58^\circ28'$ to the nearest degree is $58^\circ$.

##### Question 4

Round $23^\circ16'35''$ to the nearest minute.

Think: when rounding I need to identify halfway. For minutes and seconds halfway is at $30$. So if the value is more than $30$, I will round up. If it is less than $30$ I will round down.

Do: $35$ seconds is more than $30$, so I will round up. $23^\circ16'35''$ to the nearest minute is $23^\circ17'$.

##### Question 5

Convert $22^\circ8'$ to seconds.

$22$ degrees is $22\times60$ minutes $= 1320$ minutes. The total number of minutes is then $1320+8=1328$. $1328$ minutes is $1328\times60$ seconds $= 79680$ seconds.

Make sure you know how to use your calculator effectively. You can use the DMS converter below as well.

## Calculating trigonometric expressions

As with every topic in mathematics that you study there is a conceptual side (the what you need to know and understand) and the practical side (the what you need to do and answer). To calculate trigonometric expressions you will need to use a scientific calculator. Make sure your calculator has the sine (sin), cosine (cos) and tangent (tan) buttons and that you can enter in degrees, minutes and seconds as angle values.

#### Examples

##### Question 1

Evaluate $\tan55^\circ$.

This question is asking us to work out the ratio of the opposite and adjacent sides of a triangle with angle $55$ degrees. Because we don't have the side lengths though, we use the trigonometric function of tangent. Using your calculator type in $\tan55^\circ=$ and you will get $1.43$ to two decimal places.

##### Question 2

Find $\theta$ if $\sin\theta=0.65$, answering to 2 decimal places.

This question is asking us what the angle is if the ratio of the opposite and hypotenuse is $0.65$. To answer this question you use the inverse sin button on your scientific calculator.
Often it looks a bit like this: $\sin^{-1}$. Applying it to both sides of $\sin\theta=0.65$ gives $\theta=\sin^{-1}(0.65)=40.54^\circ$.

##### Question 3

Evaluate $\frac{\tan35^\circ}{\cos42^\circ-\sin28^\circ}$.

Think: I will need to remember my order of operations; in this case the subtraction on the denominator comes first, then the division with tan.

Do: $\frac{\tan35^\circ}{\cos42^\circ-\sin28^\circ}=2.56$.

Try to complete the whole question on your calculator in one step, as this will reduce rounding errors.
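Since the lesson assumes a scientific calculator, here is a hedged sketch of the same conversions and evaluations in R (my own code, not part of the lesson; note that R's trigonometric functions work in radians, so degrees must be converted first):

deg2dms <- function(x) {
  d <- floor(x)                      # whole degrees
  m <- floor((x - d) * 60)           # whole minutes
  s <- ((x - d) * 60 - m) * 60       # remaining seconds
  c(degrees = d, minutes = m, seconds = round(s, 2))
}
dms2deg <- function(d, m = 0, s = 0) d + m/60 + s/3600

deg2dms(28.42)                       # 28 25 12, as in Question 2 above
round(dms2deg(13, 36, 35), 2)        # 13.61, the DMS-to-decimal direction
12 * cos(dms2deg(25, 10) * pi/180)   # approx 10.86, evaluating 12 cos 25 deg 10'
asin(0.65) * 180/pi                  # approx 40.54, the inverse sine example

The base-60 arithmetic of the lesson appears directly in the /60 and /3600 divisions.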
https://www.computer.org/csdl/trans/tp/2009/05/ttp2009050919-abs.html
Issue No. 05 - May (2009 vol. 31)
ISSN: 0162-8828
pp: 919-930

Ying Zhu, Siemens Corporate Research, Princeton
Dorin Comaniciu, Siemens Corporate Research, Princeton
Bohyung Han, Mobileye Vision Technologies, Princeton
Larry S. Davis, University of Maryland - College Park, College Park

ABSTRACT
Particle filtering is frequently used for visual tracking problems since it provides a general framework for estimating and propagating probability density functions for nonlinear and non-Gaussian dynamic systems. However, this algorithm is based on a Monte Carlo approach and the cost of sampling and measurement is a problematic issue, especially for high-dimensional problems. We describe an alternative to the classical particle filter in which the underlying density function has an analytic representation for better approximation and effective propagation. The techniques of density interpolation and density approximation are introduced to represent the likelihood and the posterior densities with Gaussian mixtures, where all relevant parameters are automatically determined. The proposed analytic approach is shown to perform more efficiently in sampling in high-dimensional space. We apply the algorithm to real-time tracking problems and demonstrate its performance on real video sequences as well as synthetic examples.

INDEX TERMS
Bayesian filtering, density interpolation, density approximation, mean shift, density propagation, visual tracking, particle filter.

CITATION
Ying Zhu, Dorin Comaniciu, Bohyung Han, Larry S. Davis, "Visual Tracking by Continuous Density Propagation in Sequential Bayesian Filtering Framework", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 31, no. , pp. 919-930, May 2009, doi:10.1109/TPAMI.2008.134
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9143933653831482, "perplexity": 2222.6102591444974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424876.76/warc/CC-MAIN-20170724122255-20170724142255-00485.warc.gz"}
http://mathhelpforum.com/geometry/156277-classify-beam.html
## Classify the beam

Hello, I have to classify these beams of lines (characteristics, types); I'd like to ask for your help. Thank you.

a) y+3k-k=0 b) c) k(2x-1)=y+3k d) e) f)
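No solution was preserved with this post. Assuming "beam" is meant in the sense of a pencil (one-parameter family) of lines in the parameter $k$, a sketch for the two legible items: for (a), $y + 3k - k = 0$ reduces to $y = -2k$, so each value of $k$ gives a horizontal line and the family is a pencil of parallel lines. For (c), $k(2x-1) = y + 3k$ rearranges to $k(2x-4) = y$; putting $x = 2$ gives $y = 0$ for every $k$, so all the lines pass through the fixed point $(2, 0)$ and the family is a pencil of concurrent lines (with slope $2k$) through that point.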
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369637966156006, "perplexity": 4331.678959760968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119642.3/warc/CC-MAIN-20170423031159-00623-ip-10-145-167-34.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/38406/clarification-on-central-charge-equals-number-of-degrees-of-freedom/38967
# Clarification on “central charge equals number of degrees of freedom”

It's often stated that the central charge c of a CFT counts the degrees of freedom: it adds up when stacking different fields, decreases as you integrate out UV dof from one fixed point to another, etc... But now I am puzzled by the fact that certain fields have negative central charge, for example a b/c system has $c=-26$. How can they be seen as negative degrees of freedom? Is it because they are fictional dof, remnants of a gauge symmetry? On their own, they would describe a non-unitary theory, incoherent at the quantum level.

- Yes, unitary CFTs have a positive central charge. The $bc$ system has $c=-26$ but the theory is only unitary after switching to BRST cohomologies, and then only the light-cone degrees of freedom with $c=24$ survive. So the bosonic string has $24$ degrees of freedom. –  Luboš Motl Sep 26 '12 at 18:43
–  Qmechanic Sep 26 '12 at 19:20
Thanks for both comments. I read the two question pages you are referring to, Qmechanic, but I don't feel like it answers my question; maybe it wasn't clearly formulated. My point was only asking why those fields (the b/c system) give an algebraically negative contribution to the "number of degrees of freedom of your theory" number. Is it because in that case the intuitive association central charge = number of dof is a bit abusive? Is it because the b/c are fictitious dof (accounting for a gauge symmetry)? Maybe something else? –  Just_a_wannabe Sep 26 '12 at 19:39
I already went through the light-cone quantization of bosonic string theory, and the path integral formalism, but the answer did not appear (clear) to me... –  Just_a_wannabe Sep 26 '12 at 19:41
Right, your question is different. I was merely linking to the (so far) only other Phys.SE post that discusses the central charge $c=-26$ for the $bc$ ghost system. –  Qmechanic Sep 26 '12 at 20:24

The central charge counts the number of degrees of freedom only for matter fields living on a flat manifold (or supermanifold in the case of superstrings). An example where this counting argument fails for matter fields is the case of strings moving on a group manifold $G$, whose central charge is given by the Gepner-Witten formula:

$c = \frac{k\,\mathrm{dim}(G)}{k+\kappa(G)}$

where $k$ is the level and $\kappa$ is the dual Coxeter number. Please see the following article by Jouko Mickelsson. One of the best ways to understand this fact (and in addition the ghost-sector central extension) is to follow the Bowick-Rajeev approach described in a series of papers; please see for example the following scanned preprint. I'll try to explain their approach in a few words. Bowick and Rajeev use the geometric quantization approach. They show that the Virasoro central charges are curvatures of line bundles over $Diff(S^1)/S^1$ called the vacuum bundles. Bowick and Rajeev quantize the space of loops living on the matter field manifold. This is an infinite-dimensional Kaehler manifold. One way to think about it is as a collection of the Fourier modes of the string: the Fourier modes corresponding to positive frequencies are the holomorphic coordinates, and vice versa. In addition, in order to define an energy operator (Laplacian) on this manifold one needs a metric (this is the source of the distinction between the flat and curved cases, where the dimension counting is valid or not; the counting argument works in the flat case because the Laplacian then has constant coefficients).
The quantization of a given loop results in a Fock space in which all the negative-frequency modes are under the Dirac sea. However, this Fock space is not invariant under a reparametrization of the loop. One can imagine that over each point of $Diff(S^1)/S^1$ there is a Fock space labeled by this point. This is the Fock bundle, whose collection of vacuum vectors is a line bundle called the vacuum bundle. Bowick and Rajeev proved that the central charge is exactly the curvature of this line bundle. The situation for the ghosts is different. Please see the Bowick-Rajeev reference above. Their contribution to the central charge is equal to the curvature of the canonical bundle. This bundle appears in geometric quantization due to the noninvariance of the path integral measure on $Diff(S^1)/S^1$.
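As a quick numerical illustration of the Gepner-Witten formula quoted above (a snippet of mine, not from the thread): for $SU(2)$ one has $\mathrm{dim}(G)=3$ and dual Coxeter number $\kappa=2$, and $c$ approaches the naive degree-of-freedom count $\mathrm{dim}(G)$ only as $k\to\infty$.

```python
def central_charge(k, dim_g, coxeter):
    """Gepner-Witten central charge of a WZW model at level k."""
    return k * dim_g / (k + coxeter)

# SU(2): dim G = 3, dual Coxeter number = 2
for k in (1, 2, 10, 1000):
    print(k, central_charge(k, 3, 2))
# c -> dim G = 3 only in the k -> infinity (flat) limit, illustrating why
# "central charge = number of degrees of freedom" fails on a curved
# group manifold.
```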
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9289481043815613, "perplexity": 293.38985505321637}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657116650.48/warc/CC-MAIN-20140914011156-00303-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://stacks.math.columbia.edu/tag/031V
Lemma 9.28.2. Let $K$ be a field of characteristic $p > 0$. Let $K \subset L$ be a separable algebraic extension. Let $\alpha \in L$.

1. If the coefficients of the minimal polynomial of $\alpha$ over $K$ are $p$th powers in $K$ then $\alpha$ is a $p$th power in $L$.

2. More generally, if $P \in K[T]$ is a polynomial such that (a) $\alpha$ is a root of $P$, (b) $P$ has pairwise distinct roots in an algebraic closure, and (c) all coefficients of $P$ are $p$th powers, then $\alpha$ is a $p$th power in $L$.

Proof. It follows from the definitions that (2) implies (1). Assume $P$ is as in (2). Write $P(T) = \sum_{i = 0}^d a_i T^{d - i}$ and $a_i = b_i^p$. The polynomial $Q(T) = \sum_{i = 0}^d b_i T^{d - i}$ has distinct roots in an algebraic closure as well, because the roots of $Q$ are the $p$th roots of the roots of $P$. If $\alpha$ is not a $p$th power, then $T^p - \alpha$ is an irreducible polynomial over $L$ (Lemma 9.14.2). Moreover $Q$ and $T^p - \alpha$ have a root in common in an algebraic closure $\overline{L}$. Thus $Q$ and $T^p - \alpha$ are not relatively prime, which implies $T^p - \alpha \mid Q$ in $L[T]$. This contradicts the fact that the roots of $Q$ are pairwise distinct. $\square$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9959734678268433, "perplexity": 55.96290902931718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00079.warc.gz"}
https://irmar.univ-rennes1.fr/seminaire/seminaire-pampers/thi-ngoc-anh-nguyen
# Gromov-Witten invariants in Fano threefolds of index 2: some examples

In this talk, I explain an explicit formula for counting the number of rational curves passing through a certain number of points in the 3-dimensional projective space $\mathbb{C}P^3$ (these counts are also called Gromov-Witten invariants of $\mathbb{C}P^3$), based on a paper of E. Brugallé and P. Georgieva. I also give the idea for computing GW invariants in two other types of Fano threefolds of index 2 and claim a general formula.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9053500890731812, "perplexity": 327.3525131509782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039601956.95/warc/CC-MAIN-20210423041014-20210423071014-00502.warc.gz"}
https://drserendipity.com/notes/notes_by_subjects/artificial_intelligence/computer-vision/3-advanced-computer-vision-deep-learning/3-6-optional-attention-mechanisms/10-08-multiplicative-attention-v2/
# 10 – 08 Multiplicative Attention V2

Earlier in this lesson, we looked at how the key concept of attention is to calculate an attention weight vector, which is used to amplify the signal from the most relevant parts of the input sequence and at the same time drown out the irrelevant parts. In this video, we'll begin to look at the scoring functions that produce these attention weights. An attention scoring function tends to be a function that takes in the hidden state of the decoder and the set of hidden states of the encoder. Since this is something we'll do at each timestep on the decoder side, we only use the hidden state of the decoder at that timestep, or the previous timestep in some scoring methods. Given these two inputs, this vector and this matrix, it produces a vector that scores each of these columns.

Before looking at the matrix version, which calculates the scores for all the encoder hidden states in one step, let's simplify it by looking at how to score a single encoder hidden state. The first scoring method, and the simplest, is to just calculate the dot product of the two input vectors. The dot product of two vectors produces a single number, so that's good. But the important thing is the significance of this number. Geometrically, the dot product of two vectors is equal to multiplying the lengths of the two vectors by the cosine of the angle between them, and we know that cosine has the convenient property that it equals one if the angle is zero and decreases the wider the angle becomes. What this means is that if we have two vectors with the same length, the smaller the angle between them, the larger the dot product becomes. This dot product is a similarity measure between vectors: it produces a larger number the smaller the angle between the vectors is.

In practice, however, we want to speed up the calculation by scoring all the encoder hidden states at once, which leads us to the formal mathematical definition of dot product attention. That's what we have here. It is the hidden state of the current timestep, transposed, times the matrix of the encoder hidden states, and that will produce the vector of the scores. With the simplicity of this method comes the drawback of assuming the encoder and decoder have the same embedding space. So this might work for text summarization, for example, where the encoder and decoder use the same language and the same embedding space. For machine translation, however, you might find that each language tends to have its own embedding space. This is a case where we might want to use the second scoring method, which is a slight variation on the first. It simply introduces a weight matrix between the multiplication of the decoder hidden state and the encoder hidden states. This weight matrix is a linear transformation that allows the inputs and outputs to use different embeddings, and the result of this multiplication would be the weights vector.

Let us now look back at this animation and incorporate everything that we know about attention. The first timestep in the attention decoder starts by taking an initial hidden state as well as the embedding for the end symbol. It does its calculation and generates the hidden state at that timestep, and here we are ignoring the actual outputs of the RNN; we're just using the hidden states. Then we do our attention step. We do that by taking in the matrix of the hidden states of the encoder. We produce a scoring as we've mentioned.
So, if we're doing multiplicative attention, we'll use the dot product. In general, we produce the scores, we do a softmax, we multiply the softmax scores by each corresponding hidden state from the encoder, and we sum them up, producing our attention context vector. Then what we do next is this: we concatenate the attention context vector with the hidden state of the decoder at that timestep, so h4. So this would be c4 concatenated with h4; that's what we will do here. We basically glue them together as one vector and then pass them through a fully connected neural network, which is basically multiplying by the weights matrix WC and applying a tanh activation. The output of this fully connected layer would be our first outputted word in the output sequence.

We can now proceed to the second step, passing the hidden state to it and taking the output from the first decoder timestep. We produce h5, we start our attention at this step as well, we score, we produce a weights vector, we do softmax, we multiply, and we add them up, producing c5, the attention context vector at step five. We glue it together with the hidden state, we pass it through the same fully connected network with tanh activation, producing the second word in our output, and this goes on until we have completed outputting the output sequence. This is pretty much the full view of how attention works in sequence-to-sequence models. In the next video, we'll touch on additive attention.
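Here is a minimal NumPy sketch of what the video describes: dot-product scoring, the optional weight matrix of the second ("general" multiplicative) method, and the softmax-weighted context vector. All shapes and names are illustrative, not taken from the course code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(h_dec, H_enc, W=None):
    """One decoder timestep of multiplicative attention.

    h_dec: (d,)   current decoder hidden state
    H_enc: (T, d) encoder hidden states, one row per source timestep
    W:     optional (d, d) matrix for the 'general' variant, used when
           encoder and decoder embedding spaces differ
    """
    if W is None:
        scores = H_enc @ h_dec          # dot-product scoring
    else:
        scores = H_enc @ (W @ h_dec)    # multiplicative (general) scoring
    weights = softmax(scores)           # attention weights, sum to 1
    context = weights @ H_enc           # weighted sum of encoder states
    return context, weights

# toy sizes: 5 source timesteps, hidden size 4
rng = np.random.default_rng(0)
H_enc = rng.standard_normal((5, 4))
h_dec = rng.standard_normal(4)

context, weights = attention_step(h_dec, H_enc)
# concatenate context with the decoder state and apply tanh(Wc [c; h])
Wc = rng.standard_normal((4, 8))
output = np.tanh(Wc @ np.concatenate([context, h_dec]))
print(weights, output.shape)
```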
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164841771125793, "perplexity": 271.7640853405926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00536.warc.gz"}
https://www.physicsforums.com/threads/steel-ball-collision-problem.193988/#post-1481689
# Steel Ball Collision problem

• Start date

• #1 bulldog23 120 0

[SOLVED] collision problem

## Homework Statement

Two identical steel balls, each of mass 3.4 kg, are suspended from strings of length 27 cm so that they touch when in their equilibrium position. We pull one of the balls back until its string makes an angle theta = 74° with the vertical and let it go. It collides elastically with the other ball. A) How high will the other ball rise above its starting point? B) Suppose that instead of steel balls we use putty balls. They will collide inelastically and remain stuck together after the collision. How high will the balls rise after the collision?

P = m*v, m1v1 = m2v2

## The Attempt at a Solution

I am unsure how to start this problem. I know that the total initial momentum must equal the total final momentum, because the collision is elastic. Can someone help me out by explaining part A and part B?

• #2 saket 169 0

"I know that the total initial momentum must equal the total final momentum, because the collision is elastic." I think the reason you are giving for your assertion is, simply put, WRONG! Initial momentum = final momentum if the net force in the direction of the momentum is zero. Collision is elastic => no energy loss. In part (A), as the collision is elastic, you can apply conservation of energy. (Of course, you have to apply conservation of linear momentum once as well.) Doing part (A) must give you enough hints to do the second part.

• #3 bulldog23 120 0

Can you explain part A more? How do I apply conservation of energy and conservation of momentum?

• #4 bulldog23 120 0

How do I find the height of the first ball?

• #5 robbondo 90 0

Use your trig skills... so the angle is theta, and the height would possibly involve the adjacent side? You are given the hypotenuse. I think you can take it from there.

• #6 saket 169 0

Initially, the height of the first ball is R[1 - cos(theta)], where R is the length of the string and theta is the angle mentioned in the problem. Find the speed of the first ball when it is just about to strike the second ball. When the first ball strikes the second ball, it has speed in the horizontal direction. Apply collision principles to find the final speeds of the two balls just after the collision, and then get the height attained by each of the balls. Note that tension is always perpendicular to the displacement of either ball, so the work done by it will be zero!

• #7 bulldog23 120 0

So I set 1/2mv^2 equal to mgh, and solved for the velocity. I got 1.95 m/s. Am I doing this right?

• #8 saket 169 0

Solve the problem completely, and then seek help. If you ask after every step, it's like being dependent on us and not believing in yourself.

• #9 bulldog23 120 0

Alright, so I did do it right. The initial height of ball 1 equals the final height of ball 2 because the collision is elastic. So how do I approach part B?

• #10 saket 169 0

Do it exactly the same way. Just be careful while applying the principles of collision. Here, after the collision the two balls are going to stick together. Take this into account for conservation of linear momentum, and you will be able to do it.

• #11 bulldog23 120 0

I tried using 1/2(m1+m2)v^2 = (m1+m2)gy. I tried solving for y, but I am not getting the right answer. Can you explain where I am going wrong?

• #12 the time 1 0

What characteristic of momentum makes it SO useful to engineers and scientists when analyzing explosions and collisions? Before something explodes the object is often not moving, but after it explodes pieces are moving in all directions with varying speeds.
How can it be said that the net momentum after the explosion is zero?
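The thread never finishes part (B) numerically. As a sketch of both parts under the thread's own numbers (g, rounding, and the variable names are mine): the key point for (B) is that momentum conservation gives the stuck-together pair a speed of v/2, so it rises to only a quarter of the original height.

```python
import math

g = 9.8            # m/s^2
L = 0.27           # string length, m
theta = math.radians(74)

h0 = L * (1 - math.cos(theta))       # release height of ball 1
v = math.sqrt(2 * g * h0)            # speed just before impact, about 1.96 m/s

# (A) elastic, equal masses: all momentum and energy transfer to ball 2
h_elastic = v**2 / (2 * g)           # equals h0, about 0.196 m

# (B) perfectly inelastic: momentum conservation gives joint speed v/2
v_joint = v / 2
h_inelastic = v_joint**2 / (2 * g)   # h0 / 4, about 0.049 m

print(h0, h_elastic, h_inelastic)
```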
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8653296232223511, "perplexity": 718.1911520808676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943483.86/warc/CC-MAIN-20230320114206-20230320144206-00566.warc.gz"}
https://dsp.stackexchange.com/questions/29658/why-is-x-not-considered-a-primitive-polynomial-while-being-considered-an-irred
# Why is $x$ not considered a primitive polynomial while being considered an irreducible polynomial?

In "Shu Lin, Daniel J. Costello, Error Control Coding (2nd Edition), Prentice Hall 2004" it is given that over $GF(2)$, if $f(x)$ is an irreducible polynomial of degree $m$, it divides $x^n+1$ where $n = 2^m - 1$. And as per the Mathematica documentation, $x$ is an irreducible polynomial; here is the conflict: $x$ does not divide $x+1$. Also, $x$ is declared non-primitive! Just want clarification.

• Your definition of $m$ does not make sense: in general, $m \neq 2^m-1$. – MBaz Mar 23 '16 at 15:06
• @MBaz: Thanks for pointing out the error in typing. I corrected it. I actually meant $n = 2^m - 1$. – Seetha Rama Raju Sanapala Mar 23 '16 at 17:06

From the Wikipedia page on irreducible polynomials: "a non-constant polynomial $f$ in $F[x]$ is said to be irreducible over [the field] $F$ if it is not the product of two polynomials of positive degree." From this definition, it should be clear that $x$ cannot be viewed as the product of two polynomials of positive degree: if one factor is taken to be $x$, the other factor must have degree zero. Irreducible polynomials act like prime numbers in the field, in that other polynomials can be factored into (powers of) the irreducible polynomials.

A primitive polynomial has a different definition. "In field theory, a branch of mathematics, a primitive polynomial is the minimal polynomial of a primitive element of the finite extension field $GF(p^m)$." Notice that irreducible polynomials can be defined over any field, but primitive polynomials only apply to the extension of a finite field. These polynomials are minimal polynomials that generate all of the elements of the extension field. From this definition, we require that $x$ generates all of the (nonzero) elements of the field. It does in this case. We also require that it is a minimal polynomial, which means it has a root which is a primitive element (one that generates all of the nonzero elements of the extension field). It fails this test, as the root of $x$ is $0$, and $0$ is not a primitive element of any field since $0^n = 0$.
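As a quick sanity check of the divisibility statement from Lin and Costello (my own snippet, with polynomials over $GF(2)$ encoded as integer bitmasks): the degree-3 irreducible $x^3+x+1$ divides $x^7+1$ (here $n=2^3-1=7$), while $f(x)=x$ fails the corresponding check for $m=1$, which is exactly the asker's observation.

```python
def gf2_mod(a, b):
    """Remainder of polynomial a modulo b over GF(2); polynomials are
    encoded as integers whose bits are the coefficients."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

# x^3 + x + 1 -> 0b1011;  x^7 + 1 -> 0b10000001
print(gf2_mod(0b10000001, 0b1011))   # 0: it divides, as the theorem says

# f(x) = x -> 0b10 does NOT divide x^1 + 1 = 0b11  (n = 2^1 - 1 = 1)
print(gf2_mod(0b11, 0b10))           # 1: nonzero remainder
```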
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886539876461029, "perplexity": 211.53297150882887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00409.warc.gz"}
https://www.math.auckland.ac.nz/~waldron/Preprints/Lifting/lifting.html
# Integral error formulae for the scale of mean value interpolations which includes Kergin and Hakopian interpolation ## Abstract: In this paper, we provide an integral error formula for a certain scale of mean value interpolations which includes the multivariate polynomial interpolation schemes of Kergin and Hakopian. This formula involves only derivatives of order one higher than the degree of the interpolating polynomial space, and from it we can obtain sharp $L_\infty$-estimates. These $L_\infty$-estimates are precisely those that numerical analysts want, to guarantee that a scheme based on such an interpolation has the maximum possible order. Keywords: scale of mean value interpolations, Kergin interpolation, Hakopian interpolation, Lagrange interpolation, Hermite interpolation, Hermite-Genocchi formula, multivariate divided difference, plane wave, lifting, Radon transform Math Review Classification: 41A05, 41A63, 41A80 (primary), 41A10, 41A44, 44A12 (secondary) Length: 18 pages Comment: Written in TeX. This paper is the basis of Chapter 1 of Shayne Waldron's dissertation. Last updated: 24 October 1997 Status: Appeared in Numer. Math. 77 (1997), no. 1, 105--122.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9158496856689453, "perplexity": 1616.8635398281715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201922.85/warc/CC-MAIN-20190319073140-20190319095140-00175.warc.gz"}
https://www.physicsforums.com/threads/weinbergs-gauge-fixed-quantum-gravity.1006651/
# Weinberg's gauge-fixed quantum gravity

• A
• Start date

• #1 433 68

Summary: Anyone completed the derivation of Einstein's Equation?

In this 1965 paper by Weinberg, https://journals.aps.org/pr/abstract/10.1103/PhysRev.138.B988, he describes a quantum field theory of the graviton in a Coulomb-like fixed gauge, where the free graviton has only space-space components and is traceless. This of course makes the field dynamics non-covariant; he then shows that to get back covariance, you need to add a nonlocal "Newtonian" term to the Hamiltonian and also have the graviton couple to a conserved tensor. After a long calculation he gets back the linear form of Einstein's equations, and argues that the tensor on the right-hand side will include a gravitational energy term that is equivalent to the nonlinear parts of the left-hand side in Einstein's equations. But he does not prove this. He also does not prove that certain noncovariant "gradient terms" in his graviton propagator will not contribute to physical amplitudes; he conjectures that this requirement will in fact fix the form of the gravitational energy term. Has this approach been taken up by others? Have these conjectures ever been proven?

• #3 433 68

Hey, good to see you're still around! I've been away from PF for a while, but when I come back I get my first response from an old friend! DeWitt's work is certainly very central and powerful, but I'm specifically interested in the "Coulomb-gauge" approach developed by Weinberg in that paper. I like it because it gives the field operators a fully explicit interpretation, in terms of creating and annihilating (on-shell) gravitons. OTOH the explicitly nonlocal Hamiltonian is a bit of a steep price... though maybe not for a Bohmian like you!

• #4 Demystifier Gold Member 12,274 4,590

Not an expert, but perhaps the gravitational Coulomb gauge is studied more in the classical gravitational-wave literature.

• #5 433 68

It is actually quite similar to the standard Transverse Traceless (TT) gauge. It might even be identical; I'm not sure about that. Kind of ironic that Weinberg calls his gauge "too ugly to deserve a name"! Was TT gauge in use for gravitational waves back in 1965? Anyhow, none of the classical GR literature will address the issue of the gradient terms in the propagator. They are also unlikely to have used Weinberg's Hamiltonian much, because of the nonlocality. But it would be interesting if someone did a detailed comparison between this Hamiltonian and the ADM version. As for the gravitational energy pseudotensor, it's obvious that moving the nonlinear terms in ##G_{\mu\nu}## to the RHS of the Einstein Equation does give a noncovariantly-conserved (and symmetric) form for the total SEM pseudotensor - assuming Einstein's Equation holds. The interesting question is finding some set of assumptions that make this form unique, beyond the also-obvious point that it serves as the source for the linear part of ##G_{\mu\nu}##.
(Linear here means first-order in ##h_{\mu\nu} = g_{\mu\nu}-\eta_{\mu\nu}##.)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8935430645942688, "perplexity": 1268.0335502861678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588246.79/warc/CC-MAIN-20211028003812-20211028033812-00214.warc.gz"}
https://www.physicsforums.com/threads/moment-of-inertia-for-infinite-density-mass-per-area.276968/
# Moment of Inertia for Infinite Density (mass per area)

1. Dec 3, 2008

### Hells_Kitchen

1. The problem statement, all variables and given/known data

Suppose you have the density of a disk given by σ = σ0*r^n. As n approaches infinity, find the limit of the moment of inertia. Interpret your result, which should be physical, reasonable, and intuitively clear.

2. Relevant equations

Now I found the moment of inertia of the object to be I = 2πσ0*R^(n+4)/(n+4) = MR^2(n+2)/(n+4). I am having trouble with the limit though, because if I try to do L'Hopital's Rule on R^(n+4)/(n+4) it doesn't help, and I am not sure if I can use just I = MR^2(n+2)/(n+4) and treat M as a constant, but I don't think that would be possible since M is a function of n itself. When I try to figure out the limit I get undefined solutions and I am not sure what that means physically. Can someone help please? Thanks, HK

Last edited by a moderator: Apr 24, 2017 at 8:53 AM

2. Dec 3, 2008

### alphysicist

Hi Hells_Kitchen,

It's true that the value of M depends on n; however, the goal of a moment of inertia calculation is to get an expression in the form I = (number) M (length)^2 (or perhaps sums of those types of terms). So for purposes of taking the limit of large n, I think you can consider M to be a "given" and find out what happens to the numerical factor in front of the MR^2 as n gets larger.

3. Dec 3, 2008

### Hells_Kitchen

In that case, by L'Hopital's rule, lim n --> infinity of (n+2)/(n+4) = 1. So I = MR^2, right? That seems like the moment of inertia of a point mass. Does that mean that if the mass per area of the disk is infinite it will act as if it were a point mass instead of a disk, physically speaking?

4. Dec 3, 2008

### alphysicist

That looks right to me. I'm not sure if that would be true. For a point mass the R is the distance away from some origin--and where the origin is placed is entirely up to you. For the disk, the R is set by the physical size of the disk itself. I would think more about comparing this to a thin ring.

5. Dec 3, 2008

### Hells_Kitchen

Oh, you mean a thin-walled hollow ring, which has a moment of inertia of MR^2? That does make sense to me as well. So when the mass per area is infinite the disk will act as if its mass were concentrated a distance R away, uniformly, and hollow in the middle. Thanks a bunch for the help!

6. Dec 4, 2008

### alphysicist

Glad to help! What I was actually thinking was not that the mass per area is infinite, but about how the density goes as r^n. If you plot a series of curves, say r^2, r^7, r^70, etc. over a range from 0->2 (for example), at the very high n values the mass is overwhelmingly located in a smaller and smaller ring at the largest r value. For example, at n=2 it looks like:

http://img185.imageshack.us/img185/687/r2newor7.jpg

at n=7,

http://img185.imageshack.us/img185/3636/r7newnx8.jpg

and at n=70,

http://img185.imageshack.us/img185/852/r70newft7.jpg

so it effectively becomes a ring for large n (because the inner portions are so much less dense).

Last edited by a moderator: Apr 24, 2017 at 8:53 AM
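A quick numerical check of the limit discussed in this thread (my own snippet): integrating the density σ = σ0*r^n directly shows the ratio I/(MR^2) = (n+2)/(n+4) tending to 1, the thin-ring value, as n grows.

```python
import math

def ratio(n, R=2.0, steps=100_000):
    """Numerically compare I/(M R^2) with (n+2)/(n+4) for sigma = sigma0 r^n
    (sigma0 cancels, so it is set to 1 here)."""
    dr = R / steps
    rs = [(i + 0.5) * dr for i in range(steps)]           # midpoint rule
    mass = sum(2 * math.pi * r * r**n for r in rs) * dr
    inertia = sum(2 * math.pi * r * r**n * r**2 for r in rs) * dr
    return inertia / (mass * R**2)

for n in (2, 7, 70):
    print(n, round(ratio(n), 4), round((n + 2) / (n + 4), 4))
# the ratio tends to 1 as n grows: the disk's moment of inertia
# approaches the thin-ring value I = M R^2
```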
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475023150444031, "perplexity": 672.9544675415872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123590.89/warc/CC-MAIN-20170423031203-00068-ip-10-145-167-34.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/7717/difference-between-axioms-theorems-postulates-corollaries-and-hypotheses
# Difference between axioms, theorems, postulates, corollaries, and hypotheses

I've heard all these terms thrown about in proofs and in geometry, but what are the differences and relationships between them? Examples would be awesome! :)

- Go read this Wikipedia article and the articles it links to. –  kahen Oct 24 '10 at 20:22
One difficulty is that, for historical reasons, various results have a specific term attached (Parallel postulate, Zorn's lemma, Riemann hypothesis, Collatz conjecture, Axiom of determinacy). These do not always agree with the usual usage of the words. Also, some theorems have unique names, for example Hilbert's Nullstellensatz. Since the German word there incorporates "satz", which means "theorem", it is not typical to call this the "Nullstellensatz theorem". These things make it harder to pick up the general usage. –  Carl Mummert Oct 24 '10 at 23:15

In Geometry, "Axiom" and "Postulate" are essentially interchangeable. In antiquity, they referred to propositions that were "obviously true" and only had to be stated, and not proven. In modern mathematics there is no longer an assumption that axioms are "obviously true". Axioms are merely 'background' assumptions we make. The best analogy I know is that axioms are the "rules of the game". In Euclid's Geometry, the main axioms/postulates are:

(I) Given any two distinct points, there is a line that contains them.

(II) Any line segment can be extended to an infinite line.

(III) Given a point and a radius, there is a circle with center in that point and that radius.

(IV) All right angles are equal to one another.

(V) If a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles. (The parallel postulate).

A theorem is a logical consequence of the axioms. In Geometry, the "propositions" are all theorems: they are derived using the axioms and the valid rules. A "Corollary" is a theorem that is usually considered an "easy consequence" of another theorem. What is or is not a corollary is entirely subjective. Sometimes what an author thinks is a 'corollary' is deemed more important than the corresponding theorem. (The same goes for "Lemma"s, which are theorems that are considered auxiliary to proving some other, more important in the view of the author, theorem.)

A "hypothesis" is an assumption made. For example, in "If $x$ is an even integer, then $x^2$ is an even integer", I am not asserting that $x^2$ is even or odd; I am asserting that if something happens (namely, if $x$ happens to be an even integer) then something else will also happen. Here, "$x$ is an even integer" is the hypothesis being made.

See the Wikipedia pages on axiom, theorem, and corollary. The first two have many examples.

- Arturo, I hope you don't mind if I edged your already excellent answer a little bit nearer to perfection. –  Guess who it is. Oct 25 '10 at 0:26
@J.M.: Heh. Not at all; thanks for the corrections! You did miss the single quotation mark after "propositions" in the second paragraph, though. (-: –  Arturo Magidin Oct 25 '10 at 0:33
Great answer. Clear and informal, while still accurate. Better than Wikipedia's, in my opinion. –  7hi4g0 Feb 8 '14 at 0:47
Why is Bertrand's postulate considered a postulate? I don't think it would be obvious to anybody except extraordinary geniuses like Euler, Gauss or Ramanujan...
–  AvZ Feb 14 at 5:54

Based on logic, an axiom or postulate is a statement that is considered to be self-evident. Both axioms and postulates are assumed to be true without any proof or demonstration. Basically, something that is obvious or declared to be true and accepted, but for which there is no proof, is called an axiom or a postulate. Axioms and postulates serve as a basis for deducing other truths. The ancient Greeks recognized a difference between these two concepts: axioms are self-evident assumptions common to all branches of science, while postulates are related to a particular science.

Axioms

Aristotle himself used the term “axiom”, which comes from the Greek “axioma”, meaning “to deem worthy” but also “to require”. Aristotle had some other names for axioms: he called them “the common things” or “common opinions”. In mathematics, axioms can be categorized as “logical axioms” and “non-logical axioms”. Logical axioms are propositions or statements that are considered universally true. Non-logical axioms, sometimes called postulates, define properties for the domain of a specific mathematical theory and are used in deduction to build mathematical theories. “Things which are equal to the same thing are equal to one another” is an example of a well-known axiom laid down by Euclid.

Postulates

The term “postulate” is from the Latin “postulare”, a verb which means “to demand”. The master demanded that his pupils agree to certain statements upon which he could build. Unlike axioms, postulates aim to capture what is special about a particular structure. “It is possible to draw a straight line from any point to any other point”, “It is possible to produce a finite straight line continuously in a straight line”, and “It is possible to describe a circle with any center and any radius” are a few examples of postulates illustrated by Euclid.

What is the difference between axioms and postulates?

• An axiom generally is true for any field in science, while a postulate can be specific to a particular field.
• An axiom cannot be proved from other axioms, while postulates are provable from axioms.
A good litmus test for knowing the difference between a Proposition and a Theorem, as somebody once remarked here, is that if you are proud of a proof you call it a Theorem; otherwise you call it a Proposition. Think of theorems as the end goals we would like to reach: deep connections that are also very beautiful results.

6. Sometimes in proving a Proposition or a Theorem we need some technical facts. Those are called Lemmas. Lemmas are usually not useful by themselves. They are only used to prove a Proposition/Theorem, and then we forget about them.

7. The net collection of definitions, propositions, and theorems forms a mathematical theory.

-

Technically, axioms are self-evident or self-proving, while postulates are simply taken as given. However, really only Euclid, high-end theorists, and some polymaths make such a distinction. See http://www.friesian.com/space.htm. Theorems are then derived from the "first principles", i.e. the axioms and postulates.

- No, that "technical" division really leads nowhere, and nowadays no one follows it. –  Andres Caicedo Nov 23 '14 at 17:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8166478276252747, "perplexity": 647.1341846652241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064919.24/warc/CC-MAIN-20150827025424-00050-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/term-wise-differentiation-of-power-series.225074/
Term-wise Differentiation of Power Series

1. Hootenanny 9,677 Staff Emeritus

For those who don't know, I'm writing a tutorial (Introduction / Summary of Differentiation) in the tutorials forum. I have come to the point of introducing transcendental functions. I would like to introduce the exponential function first (via the Taylor series) and then present the natural logarithm as its inverse. Although not entirely necessary, I would like to present a concise proof of term-wise differentiation of power series in the tutorial. If anyone knows of a concise online proof, or even better, would be willing to contribute a proof directly, please let me know, either in this thread or via PM.

2. flebbyman 28

I would say that it follows from the linearity of differentiation

3. HallsofIvy 40,795 Staff Emeritus

Because you are talking about an infinite series, you also need the fact that a power series converges uniformly inside its radius of convergence.

4. Hootenanny 9,677 Staff Emeritus

Is it necessary that both the original and differentiated series converge uniformly? I thought that the original series need only converge.

Last edited: Apr 2, 2008
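As a one-line worked instance of the result under discussion (my own illustration, using the exponential series the tutorial plans to start from): differentiating term by term inside the radius of convergence gives

```latex
\frac{d}{dx}\sum_{n=0}^{\infty}\frac{x^n}{n!}
  = \sum_{n=1}^{\infty}\frac{n\,x^{n-1}}{n!}
  = \sum_{n=1}^{\infty}\frac{x^{n-1}}{(n-1)!}
  = \sum_{m=0}^{\infty}\frac{x^m}{m!},
```

which recovers the defining property $\frac{d}{dx}e^x = e^x$ of the exponential function.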
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9702142477035522, "perplexity": 546.9480793301872}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988860.4/warc/CC-MAIN-20150728002308-00105-ip-10-236-191-2.ec2.internal.warc.gz"}
http://www.maa.org/programs/faculty-and-departments/course-communities/conditional-probability-0
# Conditional Probability This is one in a series of lessons on probability from the Math Goodies site. The definition of conditional probability is given along with several examples from a variety of settings. A Venn diagram is also used for illustration. Several multiple choice exercises are provided. Identifier: http://www.mathgoodies.com/lessons/vol6/conditional.html Rating: Creator(s): Math Goodies Cataloger: Kyle Siegrist Publisher: Math Goodies Rights: Copyright 1998-2012, Mrs. Glosser's Math Goodies. No information on use given.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9110502600669861, "perplexity": 2122.9655777551193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159031.19/warc/CC-MAIN-20160205193919-00293-ip-10-236-182-209.ec2.internal.warc.gz"}
https://iwaponline.com/wst/article-abstract/30/8/281/28007/Sediment-accumulation-in-a-series-of-four-pilot?redirectedFrom=fulltext
Settling and accumulation of sediments have been measured on the bottom of the facultative pond and three maturation ponds of a series of pilot-scale stabilization ponds. The mean deposition rate in the facultative pond showed that the attempt to establish short-term sedimentation rates by in situ measurements failed: the rates were largely overestimated, and values calculated from the sediment accumulated over the long term are closer to reality. The sediment depth increase rates are 5 cm/year for the facultative pond and 1.3 cm/year for the second and third maturation ponds. In the last maturation pond it is 1.6 cm/year. The rate of sediment deposition (volatile solids) in the first pond can be estimated by an equation of the form $at^{-b}$.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8319051861763, "perplexity": 4037.551956733358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104660626.98/warc/CC-MAIN-20220706030209-20220706060209-00034.warc.gz"}
https://www.math-only-math.com/pythagorean-theorem.html
# Pythagorean Theorem

The Pythagorean Theorem is also known as ‘Pythagoras theorem’ and is related to the sides of a right-angled triangle.

Statement of ‘Pythagoras theorem’: In a right triangle, the area of the square on the hypotenuse is equal to the sum of the areas of the squares on its remaining two sides.

$(\text{Length of the hypotenuse})^2 = (\text{one side})^2 + (\text{2nd side})^2$

In the given figure, ∆PQR is right-angled at Q; PR is the hypotenuse and PQ, QR are the remaining two sides. Then

$PR^2 = PQ^2 + QR^2$

$h^2 = p^2 + b^2$ [Here h → hypotenuse, p → perpendicular, b → base]

Verification of Pythagoras theorem by the method of dissection:

In the adjoining figure, ∆PQR is a right-angled triangle where QR is its hypotenuse and PR > PQ. The square on QR is QRBA, the square on PQ is PQST and the square on PR is PRUV. The point of intersection of the diagonals of the square PRUV is O. The straight line through the point O parallel to QR intersects PV and RU at the points J and K respectively. Again, the straight line through the point O perpendicular to JK intersects PR and VU at the points L and M respectively. As a result, the square PRUV is divided into four parts, marked 1, 2, 3, 4, and the square PQST is marked 5. You can draw the same figure on a thick paper and cut it accordingly, and now cut out the squares from this figure. Cut the square PRUV along JK and LM, dividing it into four parts. Now, place the parts 1, 2, 3, 4 and 5 properly on the square QRBA.

Note: (i) These parts together exactly fit the square. Thus, we find that $QR^2 = PQ^2 + PR^2$.

(ii) The square drawn on side PQ means the area of a square of side PQ, which is denoted by $PQ^2$.

1. Find the value of x using the Pythagorean theorem:

Solution: Identify the sides and the hypotenuse of the right-angled triangle. One side's length = 8 m and the other side's length = 15 m. x is the length of the hypotenuse because it is the side opposite the right angle.

Substitute the values into the Pythagorean formula (here x is the hypotenuse):

$h^2 = p^2 + b^2$ [Here h → hypotenuse, p → perpendicular, b → base]

$x^2 = 8^2 + 15^2$

Solve to find the unknown value x:

$x^2 = 64 + 225$

$x^2 = 289$

$x = \sqrt{289}$

$x = 17$

Therefore, the value of x (hypotenuse) = 17 m.

2. Use the formula of the Pythagorean theorem to determine the length of a.

Solution: Identify the perpendicular, base and hypotenuse of the right-angled triangle. Length of perpendicular = 24 cm and length of base = a. Length of hypotenuse = 25 cm, since the hypotenuse is the side opposite the right angle.

Substitute the values into the Pythagorean formula (here a is the base):

$h^2 = p^2 + b^2$ [Here h → hypotenuse, p → perpendicular, b → base]

$25^2 = 24^2 + a^2$

Solve to find the unknown value a:

$625 = 576 + a^2$

$625 - 576 = 576 - 576 + a^2$

$49 = a^2$

$a^2 = 49$

$a = \sqrt{49}$

$a = 7$

Therefore, the length of a (base) = 7 cm.

3. Solve to find the missing value of the triangle using the formula of the Pythagorean Theorem:

Solution: Identify the perpendicular, base and hypotenuse of the right-angled triangle. Perpendicular = k and base = 7.5. Hypotenuse = 8.5, since the hypotenuse is the side opposite the right angle.
Substitute the values into the Pythagorean formula (here k is the perpendicular):

$h^2 = p^2 + b^2$ [Here h → hypotenuse, p → perpendicular, b → base]

$8.5^2 = k^2 + 7.5^2$

Solve to find the unknown value k:

$72.25 = k^2 + 56.25$

$72.25 - 56.25 = k^2 + 56.25 - 56.25$

$16 = k^2$

$k^2 = 16$

$k = \sqrt{16}$

$k = 4$

Therefore, the missing value of the triangle, k (perpendicular), = 4.
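All three worked examples apply the same rearrangement of $h^2 = p^2 + b^2$; here is a small Python sketch of that pattern (the function name is mine, not from the lesson):

```python
import math

def missing_side(hypotenuse=None, leg_a=None, leg_b=None):
    """Solve h^2 = a^2 + b^2 for whichever side is left as None."""
    if hypotenuse is None:
        return math.sqrt(leg_a**2 + leg_b**2)
    known = leg_a if leg_a is not None else leg_b
    return math.sqrt(hypotenuse**2 - known**2)

print(missing_side(leg_a=8, leg_b=15))         # 17.0  (example 1)
print(missing_side(hypotenuse=25, leg_a=24))   # 7.0   (example 2)
print(missing_side(hypotenuse=8.5, leg_a=7.5)) # 4.0   (example 3)
```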
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184452056884766, "perplexity": 1708.8762245581643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314959.58/warc/CC-MAIN-20190819201207-20190819223207-00095.warc.gz"}
https://www.physicsforums.com/threads/quotient-spaces.170306/
# Quotient Spaces

1. May 14, 2007

### matheinste

Help needed to eventually understand quotient spaces.

#### Attached Files:

• ###### QuotientSpaces.pdf File size: 20.2 KB Views: 85

2. May 14, 2007

### StatusX

The notation v+W refers to what is called the coset of W containing v, which is just the set: $$v + W = \{ v + w \mid w \in W \}$$ So, for example, 0+W is just W, as is w+W for any w in W, and v+W = v'+W (as sets) if and only if v-v' is in W (you should try proving these facts). So another way of thinking of the quotient space is as the set of cosets v+W. Scalar multiplication is defined by a(v+W)=(av)+W, and similarly for vector addition, and it's not hard to check this makes V/W into a vector space with W as the zero vector.

3. May 15, 2007

### matheinste

Thank you, StatusX. I have looked at cosets and now understand (or at least I think I do) what is going on. My interest is in physics, and so I will now go on to learn why we would construct such an entity as a quotient set in real situations. I will probably need more help in future. Thanks again for your straightforward explanation. Matheinste.

4. May 15, 2007

### matt grime

Quotient spaces occur when you want to set some things equal to zero. That is where they occur throughout maths in various contexts. In physics, you might want to quotient out some space of operators by those vanishing on certain initial conditions, for instance. Every time you use vector spaces, you will use quotient spaces. There is nothing hard or mysterious about them. They are very good to think about because they force you to stop thinking about bases, which are a hindrance to understanding anything.

5. May 15, 2007

### matheinste

Hello Matt Grime. I think I am heading in the right direction. I am getting the idea that by quotienting or factoring out the (infinitely many) combinations of components, in the (infinitely many) bases that can be used to represent a geometrical vector, you are somehow getting to the intrinsic nature of the vector itself. Is this somewhere close? Matheinste.

6. May 15, 2007

### mathwonk

Cosets also arise from linear maps. A linear map from V to W defines cosets as follows: if K is the subspace of vectors mapped to zero, then the cosets of K are the sets of vectors mapped to the same point, i.e. the inverse image of each point in the image is a coset of K.

7. Feb 11, 2008

### sp_math_stud

Quotient spaces

Hi! I'm new to this, and I was looking for some people to help me with these things. I'm studying maths, and I have been breaking my head over this proof. For quotient spaces, we show that all the vectors are independent and we study whether they generate the quotient space. My problem is how to prove that one vector of this space (x) generates the whole space. Thanks, Fatima

8. Feb 12, 2008

### HallsofIvy

Staff Emeritus

An example you may be familiar with is "modulo arithmetic". The set of all integers is a group and the set of all multiples of 3 is a subgroup. Its cosets are {0, 3, -3, 6, -6, ...} (multiples of 3), {1, -2, 4, -5, ...} (numbers one larger than a multiple of 3) and {2, -1, 5, -4, ...} (numbers one less than a multiple of 3). Treating those cosets as objects themselves, we get a 3-member quotient group: the integers modulo 3.

9. Feb 12, 2008

### mrandersdk

You can also use quotients to describe periodic boundary conditions in a very mathematical way (maybe not the way it is usually done).
For example, $$\mathbb{R}/\mathbb{Z}$$ makes all numbers on the real axis that differ by an integer equal, so if you have a potential with a periodicity of 1, then it is actually a function $$f: \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{R}$$, as when using the first Brillouin zone. I guess you can construct more complicated zones in 3D if you take some clever quotient of R^3.
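HallsofIvy's modulo-3 example and the $$\mathbb{R}/\mathbb{Z}$$ point above are easy to experiment with. A minimal Python sketch (my own, not from the thread) checking the coset facts StatusX suggested proving:

```python
import math

n = 3  # the subgroup W = nZ of multiples of 3 inside the integers

def same_coset(v, v_prime):
    """v + W equals v' + W exactly when v - v' lies in W = nZ."""
    return (v - v_prime) % n == 0

print(same_coset(1, 4))   # True:  1 - 4 = -3 is a multiple of 3
print(same_coset(1, 2))   # False: 1 - 2 = -1 is not
print(same_coset(0, 3))   # True:  0 + W and 3 + W are both W itself

# The quotient Z/3Z has exactly three elements, the cosets of 0, 1 and 2:
print(sorted({v % n for v in range(-10, 10)}))  # [0, 1, 2]

# mrandersdk's point: a function on R/Z is just a function with period 1
f = lambda x: math.cos(2 * math.pi * x)  # well defined on R/Z
print(abs(f(0.3) - f(7.3)) < 1e-12)      # True: 0.3 and 7.3 are equal in R/Z
```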
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8026354312896729, "perplexity": 711.2232523486842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105291.88/warc/CC-MAIN-20170819012514-20170819032514-00177.warc.gz"}
http://zeiny.net/Proposals/UnanchTankProposal/node11.html
## Variational Principles of the Liquid-Structure Interaction Problem

The dynamic response of liquids has a significant influence on the response of their containers. An inappropriate approximation of the liquid motion may lead to major errors in estimating the seismic response of the container. The liquid pressures and the impact forces are the measurable expression of the energy transferred to the tank shell. In addition, the motion of the tank wall is a primary source of the liquid's energy. Since this energy transfer occurs simultaneously throughout the liquid boundary, it is essential in the finite element analysis of such problems to use a model that effectively deals with the coupling between the liquid and the tank wall.

The equations of motion of a liquid may be formulated by two different approaches. The Eulerian formulation is obtained by tracking the velocity, pressure and density at all points of the space occupied by the liquid at all instants. The Lagrangian formulation is obtained by considering the history of each particle. In the current investigation, a Lagrangian description of the structure's motion is utilized, which makes it necessary to use a Lagrangian description of the liquid-structure interface in order to enforce compatibility between the structure and the liquid elements. The continuity equation in the Eulerian form is utilized inside the liquid domain to describe the liquid motion inside the tank mathematically. The liquid in this analysis is considered to be inviscid, irrotational and incompressible. Such simplifying assumptions allow displacements, pressures or velocity potentials to be the variables in the liquid domain. Displacement-based liquid elements may be easy to incorporate in many finite element programs for structural analysis, and they may simplify the enforcement of the liquid-structure interface constraints. However, such elements require two or three degrees of freedom per node. In addition, this approach is not well suited for problems with large liquid displacements and requires special care to prevent zero-energy rotational modes. Alternatively, using pressures or velocity potentials as the unknown degrees of freedom requires only one degree of freedom per node inside the liquid domain, which significantly reduces the computational cost of the analysis, yet adequately represents the physical behavior of the liquid. The latter approach is used in this investigation.

Structure Domain

The virtual work statement of the structural domain may be written as

$$\int_{\Omega_s} \delta\{\epsilon\}^T [D] \{\epsilon\}\, d\Omega = \int_{\Omega_s} \delta\{u\}^T \left( \{b\} - \rho_s \{\ddot{u}\} \right) d\Omega - \int_{S_w} \delta\{u\}^T \{P\}\, dS \qquad (1)$$

where $[D]$ is the stress-strain matrix, $\{\epsilon\}$ is the strain vector, $\{u\}$ is the displacement vector, $\Omega_s$ is the structural domain, $S_w$ is the wet surface of the structure, $\{P\}$ is the liquid pressure vector, $\{b\}$ is the body force vector and $\rho_s$ is the mass density of the structure.

Figure 1: Boundary conditions at a structure node in contact with a liquid element

Liquid Domain

Following the work done by Kock and Olson (1991), the variational indicator of an incompressible liquid flowing under a gravity field is obtained as

$$\Pi_f = \int_{\Omega_f} \left[ P_o - \rho\left( \frac{\partial \phi}{\partial t} + \frac{1}{2}\nabla\phi\cdot\nabla\phi + g\,y \right) \right] d\Omega \qquad (2)$$

or, concisely,

$$\Pi_f = \int_{\Omega_f} P\, d\Omega \qquad (3)$$

where $P$ is the total pressure, which may also be written as

$$P = P_o - \rho\left( \frac{\partial \phi}{\partial t} + \frac{1}{2}\nabla\phi\cdot\nabla\phi + g\,y \right) \qquad (4)$$

where $P_o$ is the hydrostatic pressure at the point, $\rho$ is the mass density of the liquid, $y$ is the Cartesian coordinate measured in a direction opposite to that of the gravitational acceleration $g$, and $\phi$ is the velocity potential.

Coupled Liquid-Structure System

In order to apply the variational principle to the liquid-structure interaction problem, the liquid and the structure functionals, given by Eqs. (1) and (3), are added.
The two statements are coupled at the liquid-structure interfaces by

$$\frac{\partial \phi}{\partial n} = \{\dot{u}\}^T \{n\} \qquad (5)$$

$$\{P\} = P\,\{n\} \qquad (6)$$

where $\{n\}$ is the outward normal unit vector from the liquid towards the structure. Figure 1 illustrates the interaction between a structure node and a liquid element, where $T_a$ is the tributary area of the structure node.

A. Zeiny 2000-05-12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837901949882507, "perplexity": 472.6443905828679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820556.7/warc/CC-MAIN-20171017013608-20171017033608-00101.warc.gz"}
https://arxiv.org/abs/1805.05784v1
hep-ex

# Title: Search for top squarks decaying via four-body or chargino-mediated modes in single-lepton final states in proton-proton collisions at $\sqrt{s} =$ 13 TeV

Abstract: A search for the pair production of the lightest supersymmetric partner of the top quark ($\widetilde{\mathrm{t}}_1$) is presented. The search focuses on a compressed scenario where the mass difference between the top squark and the lightest supersymmetric particle, often considered to be the lightest neutralino ($\widetilde{\chi}^0_1$), is smaller than the mass of the W boson. The proton-proton collision data were recorded by the CMS experiment at a centre-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 35.9 fb$^{-1}$. In this search, two decay modes of the top squark are considered: a four-body decay into a bottom quark, two additional fermions, and a $\widetilde{\chi}^0_1$; and a decay via an intermediate chargino. Events are selected using the presence of a high-momentum jet, significant missing transverse momentum, and a low transverse momentum electron or muon. Two analysis techniques are used, targeting different decay modes of the $\widetilde{\mathrm{t}}_1$: a sequential selection and a multivariate technique. No evidence for the production of top squarks is found, and mass limits at 95% confidence level are set that reach up to 560 GeV, depending on the $m(\widetilde{\mathrm{t}}_1) - m(\widetilde{\chi}^0_1)$ mass difference and the decay mode. Comments: Submitted to JHEP. All figures and tables can be found at this http URL (CMS Public Pages) Subjects: High Energy Physics - Experiment (hep-ex) Report number: CMS-SUS-17-005, CERN-EP-2018-079 Cite as: arXiv:1805.05784 [hep-ex] (or arXiv:1805.05784v1 [hep-ex] for this version)

## Submission history

From: The CMS Collaboration [v1] Tue, 15 May 2018 14:09:53 GMT (609kb,D) [v2] Wed, 19 Sep 2018 13:10:42 GMT (609kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9464730620384216, "perplexity": 2033.0216368993642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516892.84/warc/CC-MAIN-20181023174507-20181023200007-00441.warc.gz"}
https://testbook.com/blog/network-theory-gate-ec-quiz-5/
# Network Theory for GATE EC Quiz 5

Here is a quiz to help you prepare for your upcoming GATE 2016 exam. The GATE EC paper has several subjects, each one as important as the last. However, one of the most important subjects in GATE EC is Network Theory. The subject is vast, but practice makes tackling it easy. This quiz contains important questions which match the pattern of the GATE exam. Check your preparation level in every chapter of Network Theory for GATE by taking the quiz and comparing your rank. Learn about nodal and mesh analysis, Thevenin's and Norton's theorems, time domain analysis, frequency domain analysis and more.

Network Theory for GATE EC Quiz 5

Que. 1 For the two-port network shown below, the short-circuit admittance parameter matrix is

1. $$\begin{bmatrix} 4 & -2 \\ -2 & 4 \end{bmatrix} s$$
2. $$\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix} s$$
3. $$\begin{bmatrix} 1 & -0.5 \\ 0.5 & 1 \end{bmatrix} s$$
4. $$\begin{bmatrix} 4 & 2 \\ 2 & 4 \end{bmatrix} s$$

Que. 2 For a parallel RLC circuit, which one of the following statements is not correct?

1. The bandwidth of the circuit decreases as R is increased.
2. The bandwidth of the circuit remains the same if L is increased.
3. At resonance, the input impedance is a real quantity.
4. At resonance, the magnitude of the input impedance attains its minimum value.

Que. 3 The current $$I$$ in the circuit shown is

1. $$-j\ A$$
2. $$j\ A$$
3. $$0\ A$$
4. $$20\ A$$

Que. 4 A fully charged mobile phone with a 12 V battery is good for a 10 minute talk time. Assume that, during the talk time, the battery delivers a constant current of 2 A and its voltage drops linearly from 12 V to 10 V as shown in the figure. How much energy does the battery deliver during this talk time?

1. 220 J
2. 12 kJ
3. 13.2 kJ
4. 14.4 J

Que. 5 In the interconnection of ideal sources shown in the figure, it is known that the 60 V source is absorbing power. Which of the following can be the value of the current source I?

1. 10 A
2. 13 A
3. 15 A
4. 18 A
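Que. 4 is the kind of question that can be sanity-checked numerically. A minimal Python sketch (my own, not from the quiz) integrating p(t) = v(t) i(t) over the talk time, using the linear voltage ramp the question describes:

```python
# Energy delivered: E = integral of v(t) * i(t) dt over the talk time.
# v(t) falls linearly from 12 V to 10 V over 600 s; i(t) = 2 A throughout.
T = 10 * 60                    # talk time in seconds
I = 2.0                        # constant current in amperes

def v(t):
    return 12.0 - 2.0 * t / T  # linear ramp from 12 V down to 10 V

# midpoint Riemann sum; exact for a linear ramp
N = 100_000
dt = T / N
E = sum(v((k + 0.5) * dt) * I * dt for k in range(N))
print(E)  # ~13200 J = 13.2 kJ, i.e. option 3
```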
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813424110412598, "perplexity": 3524.2291731926334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103642979.38/warc/CC-MAIN-20220629180939-20220629210939-00728.warc.gz"}
http://mathhelpforum.com/statistics/20797-i-cant-figure-out-help-plz.html
# Math Help - i can't figure this out, help plz

1. ## i can't figure this out, help plz

A box contains 1 red, 2 white and 4 blue balls. Two balls are drawn without replacement, noting the color of each. If the first two balls drawn are white and blue, in either order, then one more ball is drawn and its color noted. Determine the number of elements in the sample space for this experiment.

2. Hello, dogfsh722! Just make a list . . .

A box contains 1 red, 2 white and 4 blue balls. Two balls are drawn without replacement, noting the color of each. If the first two balls drawn are white and blue in either order, then one more ball is drawn, and its color noted. Determine the number of elements in the sample space for this experiment.

Two balls are drawn. There are eight possible outcomes: . . $\{RW,\,RB,\,WR,\,WW,\,BR,\,BB\}\;\;\{WB,\,BW\}$ In the first six outcomes, the experiment is over. In the last two outcomes, a third ball is drawn. . . It could be $R,\,W\text{, or }B.$ Hence, there are: . $2 \times 3 \:=\:6$ three-draw outcomes. The number of outcomes is: . $\text{(6 two-draws) + (6 three-draws)} \; = \;12$
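Soroban's count of 12 is easy to confirm by brute force. A small Python enumeration (my own sketch) over all ordered draws from the box:

```python
from itertools import permutations

balls = ["R", "W", "W", "B", "B", "B", "B"]  # 1 red, 2 white, 4 blue

outcomes = set()
for i, j, k in permutations(range(len(balls)), 3):  # draws without replacement
    first, second = balls[i], balls[j]
    if {first, second} == {"W", "B"}:
        outcomes.add((first, second, balls[k]))  # white and blue: draw a third
    else:
        outcomes.add((first, second))            # otherwise the experiment ends

print(len(outcomes))     # 12
print(sorted(outcomes))  # 6 two-draw and 6 three-draw outcomes
```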
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.810507595539093, "perplexity": 413.5941961180774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676381.33/warc/CC-MAIN-20151001215756-00117-ip-10-137-6-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/149328/lesson-solving-system-of-equations-using-matrices
# Lesson : Solving System of Equations using matrices

I have a matrix $$A = \begin{pmatrix} a & 0 & 0 \\ 2 & b & 5 \\ -3 & 1 & b \end{pmatrix}$$ In my try, I came up with $$b x_1 = 0,\quad x_2 + \frac{5}{a} x_3 = 0,\quad x_2 + a x_3 = 0$$ The question is to find all possible choices of $a$ and $b$ that would make the matrix singular. -

Are you familiar with determinants? Specifically, the fact that a matrix is singular if and only if its determinant is zero? – anon May 24 '12 at 18:56

The determinant is $a(b^2-5)$, so the matrix is singular if $a=0$ or if $b=\pm\sqrt 5$. You can also see this without referring to the determinant. If $a=0$, then the top row is all zeros, so the matrix is singular. If $a\neq 0$, then the first column is clearly not in the space spanned by columns 2 and 3. Therefore, the only way you'll get a singular matrix is when those two columns are linearly dependent (scalar multiples of one another). This happens when the ratios of coordinates are the same: $b/1=5/b$, i.e. $b^2=5$. -

Thanks. I had been trying to solve the system using row echelon form, and kept getting some weird-looking results. – Rac Main May 24 '12 at 19:21

The matrix $A$ is singular if and only if its determinant is $0$. The determinant of $A$ isn't hard to calculate; it turns out to be a very simple function of $a$ and $b$. If you set that expression to $0$ and factor it, you should be able to determine quite easily what values of $a$ and $b$ make it $0$.

Added: $$\det A = \left| \begin{array}{ccc} a & 0 & 0 \\ 2 & b & 5 \\ -3 & 1 & b \end{array}\right| = a\left|\begin{array}{cc} b & 5 \\ 1 & b \end{array}\right| = a(b^2-5)\;.$$ (There is also a shortcut for calculating the determinant of a $3\times 3$ matrix that you can find here; it gives $ab^2-5a$, which is then readily factored to $a(b^2-5)$.) -

@RacMain It's $a \times {\rm det}\pmatrix{b & 5 \\ 1 & b}.$ Can you compute that? – user2468 May 24 '12 at 19:09

You can reduce your matrix by rows (or columns: whatever). It will be singular iff at least one of the rows (columns) becomes all zeroes at some point. -

I tried to do just that and had a hard time figuring it out, so I decided to ask for help with it instead. – Rac Main May 24 '12 at 19:21
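Both answers can be checked mechanically with sympy. A short sketch (my own, not from the thread):

```python
from sympy import symbols, Matrix, solve, factor

a, b = symbols("a b")
A = Matrix([[a, 0, 0],
            [2, b, 5],
            [-3, 1, b]])

d = A.det()
print(factor(d))    # a*(b**2 - 5)
print(solve(d, a))  # [0]: singular when a = 0
print(solve(d, b))  # [-sqrt(5), sqrt(5)]: or when b = +/- sqrt(5)
```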
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040241241455078, "perplexity": 159.3161258275205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894865.50/warc/CC-MAIN-20140722025814-00131-ip-10-33-131-23.ec2.internal.warc.gz"}
http://www1.chapman.edu/~jipsen/calc/LimReciprocalLaw.html
Theorem: If $\lim_{x\to a}\ g(x)=M\ne 0$ then $\lim_{x\to a}\ \frac{1}{g(x)}=\frac{1}{M}$. Proof: Suppose $\lim_{x\to a}\ g(x)=M\ne 0$, and consider any $\epsilon >0$. We must find a $\delta>0$ such that if $0< |x-a|< \delta$ then $|\frac{1}{g(x)}-\frac{1}{M}|< \epsilon$. First, observe that $|\frac{1}{g(x)}-\frac{1}{M}|=\frac{|M-g(x)|}{|Mg(x)|}$. The numerator can be made small, but we also have to show that the denominator is not small when $x$ is near $a$. Since $\lim_{x\to a}\ g(x)=M\ne 0$ we can find a $\delta_1>0$ such that if $0<|x-a|< \delta_1$ then $|g(x)-M|< \frac{|M|}{2}$. Therefore $|M|=|M-g(x)+g(x)|\le|M-g(x)|+|g(x)|< \frac{|M|}{2}+|g(x)|$. It follows that $|g(x)|>\frac{|M|}{2}$, and so $\frac{1}{|Mg(x)|}=\frac{1}{|M||g(x)|}< \frac{1}{|M|}\cdot\frac{2}{|M|}= \frac{2}{M^2}$. Also, there exists a $\delta_2>0$ such that if $0< |x-a|< \delta_2$ then $|g(x)-M|< \epsilon(\frac{M^2}{2})$. Take $\delta=\min\{\delta_1,\delta_2\}$ and assume $0< |x-a|< \delta$. Then $0< |x-a|< \delta_1$ and $0<|x-a|< \delta_2$, so $|\frac{1}{g(x)}-\frac{1}{M}|=\frac{|M-g(x)|}{|Mg(x)|}< (\frac{2}{M^2})\cdot\epsilon\cdot(\frac{M^2}{2})=\epsilon$. This proves that $\lim_{x\to a}\ \frac{1}{g(x)}=\frac{1}{M}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997569918632507, "perplexity": 29.83670535803623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812665.41/warc/CC-MAIN-20180219131951-20180219151951-00337.warc.gz"}
https://brilliant.org/problems/electric-pendulum/
# Electric pendulum

As shown above, a charged particle is hanging on the top of two parallel plates. A voltage of $$V$$ generates a uniform electric field inside the plates. If we lift up the charged particle to the point $$x$$ and then let it go, the charged particle oscillates between $$x$$ and $$z$$ with $$y$$ the center point of the oscillation. Which of the following statements is correct? (Ignore gravity, air resistance and the generation of electromagnetic waves.)

a) The electric potential at the point $$x$$ is greater than that at the point $$y.$$

b) The period of the oscillation decreases as the voltage $$V$$ increases.

c) The magnitude of the electric force acting on the charged particle at the point $$x$$ is greater than that at the point $$y.$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9541634917259216, "perplexity": 137.00185245779065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690591.29/warc/CC-MAIN-20170925092813-20170925112813-00190.warc.gz"}
https://undergroundmathematics.org/calculus-meets-functions/r7998/solution
Review question

# If $y=f(x)$ has a turning point when $x=\frac{1}{4}$, can we find $\lambda$?

Ref: R7998

## Solution

It is given that $f(x)=(x-2)^2-\lambda(x+1)(x+2).$

1. Find the values of $\lambda$ for which the equation $f(x)=0$ has two equal roots.

By expanding $f(x)$ we obtain \begin{align*} f(x) &= x^2-4x+4-\lambda(x^2+3x+2) \\ &= (1-\lambda)x^2-(4+3\lambda)x+4-2\lambda. \end{align*} For the equation $f(x)=0$ to have two equal roots, the discriminant must be zero. So we want to solve \begin{align*} 0 &= [-(4+3\lambda)]^2-4(1-\lambda)(4-2\lambda) \\ &= 16+24\lambda+9\lambda^2-4(4-6\lambda+2\lambda^2) \\ &= \lambda^2+48\lambda \\ &= \lambda(\lambda+48), \end{align*} and so $\lambda = 0$ or $\lambda=-48$.

2. Show that, when $\lambda=2$, $f(x)$ has a maximum value of $25$.

When $\lambda=2$, we have $f(x)=-x^2-10x$. We find the maximum of $f(x)$ by completing the square. We can write $f(x)=-(x+5)^2+25$, and since $-(x+5)^2$ is strictly negative, we see that the maximum value of $f(x)$ is $25$, which occurs when $x=-5$. Alternatively we could find the derivative $f'(x)$ and set this equal to zero to find the turning point. Or we could find the roots of $f(x)=0$, and use the symmetry of the parabola.

3. Given that the curve $y=f(x)$ has a turning point when $x=\dfrac{1}{4}$, find the value of $\lambda$ and sketch the curve for this value of $\lambda$.

We have $f'(x)=2(1-\lambda)x-(4+3\lambda).$ We know that $f'\left(\dfrac{1}{4}\right)=0$, since there is a turning point at $x=\dfrac{1}{4}$. So $2(1-\lambda)\left(\frac{1}{4}\right)-4-3\lambda=0,$ and so $1-\lambda-8-6\lambda=0,$ on multiplying the equation by $2$. This simplifies to $\lambda = -1$. For this value of $\lambda$, we have $f(x)=2x^2-x+6$. To sketch this graph we observe that the $y$-intercept is $6$, and that $x=\dfrac{1}{4}$ gives a minimum, since the coefficient of the $x^2$ term is positive. We calculate that $f\left(\frac{1}{4}\right)=\frac{2}{16}-\frac{1}{4}+6=\frac{2-4+96}{16}=\frac{94}{16}=\frac{47}{8}.$
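All three parts reduce to routine algebra, so they can be machine-checked. A short sympy sketch (my own, not part of the resource):

```python
from sympy import symbols, solve, diff, discriminant, Rational

x, lam = symbols("x lambda")
f = (x - 2)**2 - lam*(x + 1)*(x + 2)

# Part 1: equal roots exactly when the discriminant in x vanishes
print(solve(discriminant(f, x), lam))  # [-48, 0]

# Part 2: with lambda = 2, the value at the turning point should be 25
f2 = f.subs(lam, 2)
xc = solve(diff(f2, x), x)[0]          # x = -5
print(f2.subs(x, xc))                  # 25

# Part 3: a turning point at x = 1/4 pins down lambda
print(solve(diff(f, x).subs(x, Rational(1, 4)), lam))  # [-1]
```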
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999910593032837, "perplexity": 150.69804550777238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00123.warc.gz"}
http://www.purplemath.com/learning/viewtopic.php?p=5400
## simplifying complex fraction: (3/8y + 5/6x) / (5/12xy) Quadratic equations and inequalities, variation equations, function notation, systems of equations, etc. Lila Posts: 4 Joined: Mon Feb 07, 2011 6:01 am Contact: ### simplifying complex fraction: (3/8y + 5/6x) / (5/12xy) Please don't freak when you read this question. It's easier than it looks (I think!). I am just dumb and not sure how to answer it. My textbook doesn't explain this method and I don't have a teacher so I'm stuck. (3/8y + 5/6x) / (5/12xy) (9x + 20y) / 10 I am not sure how to add two fractions that have completely different denominators. How did they do this? Thanks! stapel_eliz Posts: 1687 Joined: Mon Dec 08, 2008 4:22 pm Contact: Lila wrote:My textbook doesn't explain this method and I don't have a teacher so I'm stuck. (3/8y + 5/6x) / (5/12xy) (9x + 20y) / 10 I am not sure how to add two fractions that have completely different denominators. How did they do this? To learn how to add rational expressions (that is, polynomial fractions), try this lesson. (If you need to review how to work with numerical fractions, try this review.) To learn how to simplify "complex" fractions (which is the term for this sort of expression), try here. Lila Posts: 4 Joined: Mon Feb 07, 2011 6:01 am Contact: ### Re: simplifying complex fraction: (3/8y + 5/6x) / (5/12xy) Hi, that was helpful but not what I was looking for I'm afraid :( How do I add two fractions that have different pronumerals in the denominator? So if I had: 3/a + 4/b How would I simplify that? Hope there is a way to do this, otherwise I'll have to call the publishers of the textbook and ask them! stapel_eliz Posts: 1687 Joined: Mon Dec 08, 2008 4:22 pm Contact: Lila wrote:How do I add two fractions that have different pronumerals in the denominator? Fractions with variables in the denominators are called "rational expressions". The method for adding rational expressions (that is, polynomial fractions) is exactly the same as for adding numerical fractions. Since the lessons on fractions and adding rational expressions do not relate to what you are attempting, you must mean something other than just adding things like 3/a and 4/b, something other than simplifying 3/a + 4/b. However, I'm afraid I can't figure out what this "something other" might be...? Lila Posts: 4 Joined: Mon Feb 07, 2011 6:01 am Contact: ### Re: simplifying complex fraction: (3/8y + 5/6x) / (5/12xy) okay, I don't know what's going on either! How do I simplify 3/a + 4/b? Would the LCM be 1? stapel_eliz Posts: 1687 Joined: Mon Dec 08, 2008 4:22 pm Contact: Lila wrote:How do I simplify 3/a + 4/b? Would the LCM be 1? Since 1 is not a multiple of $a$ and $b$, no, this cannot be the Least Common Multiple. To learn how to find LCMs of variable expressions, please try here. To see worked examples of this process, please review the lesson (provided earlier) on adding rational expressions. Lila Posts: 4 Joined: Mon Feb 07, 2011 6:01 am Contact: ### Re: simplifying complex fraction: (3/8y + 5/6x) / (5/12xy) oh okay, I'll check it out. Thanks so much!
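For what it's worth, the whole thread can be checked with a short sympy sketch (my own, not from the lessons linked above): it confirms the textbook's answer for the complex fraction, and it shows the common denominator for Lila's 3/a + 4/b question.

```python
from sympy import symbols, cancel, together

x, y, a, b = symbols("x y a b", positive=True)

# The textbook's complex fraction: (3/8y + 5/6x) / (5/12xy)
expr = (3/(8*y) + 5/(6*x)) / (5/(12*x*y))
print(cancel(expr))         # (9*x + 20*y)/10, matching the book

# Lila's question: the least common denominator of 3/a + 4/b is a*b, not 1
print(together(3/a + 4/b))  # (4*a + 3*b)/(a*b)
```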
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791409730911255, "perplexity": 2474.0375013814573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00092-ip-10-164-35-72.ec2.internal.warc.gz"}
http://adereth.github.io/blog/2014/10/12/silvermans-mode-detection-method-explained/
# Silverman's Mode Estimation Method Explained

I started digging into the history of mode detection after watching Aysylu Greenberg’s Strange Loop talk on benchmarking. She pointed out that the usual benchmarking statistics fail to capture that our timings may actually be samples from multiple distributions, commonly caused by the fact that our systems are composed of hierarchical caches. I thought it would be useful to add the detection of this to my favorite benchmarking tool, Hugo Duncan’s Criterium. Not surprisingly, Hugo had already considered this and there’s a note under the TODO section:

I hadn’t heard of using kernel density estimation for multimodal distribution detection so I found the original paper, Using Kernel Density Estimates to Investigate Multimodality (Silverman, 1981). The original paper is a dense 3 pages and my goal with this post is to restate Silverman’s method in a more accessible way. Please excuse anything that seems overly obvious or pedantic and feel encouraged to suggest any modifications that would make it clearer.

## What is a mode?

The mode of a distribution is the value that has the highest probability of being observed. Many of us were first exposed to the concept of a mode in a discrete setting. We have a bunch of observations and the mode is just the observation value that occurs most frequently. It’s an elementary exercise in counting. Unfortunately, this method of counting doesn’t transfer well to observations sampled from a continuous distribution because we don’t expect to ever observe the exact same value twice.

What we’re really doing when we count the observations in the discrete case is estimating the probability density function (PDF) of the underlying distribution. The value that has the highest probability of being observed is the one that is the global maximum of the PDF. Looking at it this way, we can see that a necessary step for determining the mode in the continuous case is to first estimate the PDF of the underlying distribution. We’ll come back to how Silverman does this with a technique called kernel density estimation later.

## What does it mean to be multimodal?

In the discrete case, we can see that there might be undeniable multiple modes because the counts for two elements might be the same. For instance, if we observe: $$1,2,2,2,3,4,4,4,5$$ Both 2 and 4 occur thrice, so we have no choice but to say they are both modes. But perhaps we observe something like this: $$1,1,1,2,2,2,2,3,3,3,4,9,10,10$$ The value 2 occurs more than anything else, so it’s the mode. But let’s look at the histogram:

That pair of 10’s are out there looking awfully interesting. If these were benchmark timings, we might suspect there’s a significant fraction of calls that go down some different execution path or fall back to a slower level of the cache hierarchy. Counting alone isn’t going to reveal the 10’s because there are even more 1’s and 3’s. Since they’re nestled up right next to the 2’s, we probably will assume that they are just part of the expected variance in performance of the same path that caused all those 2’s.

What we’re really interested in is the local maxima of the PDF because they are the ones that indicate that our underlying distribution may actually be a mixture of several distributions.

## Kernel density estimation

Imagine that we make 20 observations and see that they are distributed like this:

We can estimate the underlying PDF by using what is called a kernel density estimate.
We replace each observation with some distribution, called the “kernel,” centered at the point. Here’s what it would look like using a normal distribution with standard deviation 1:

If we sum up all these overlapping distributions, we get a reasonable estimate for the underlying continuous PDF:

Note that we made two interesting assumptions here:

1. We replaced each point with a normal distribution. Silverman’s approach actually relies on some of the nice mathematical properties of the normal distribution, so that’s what we use.
2. We used a standard deviation of 1. Each normal distribution is wholly specified by a mean and a standard deviation. The mean is the observation we are replacing, but we had to pick some arbitrary standard deviation which defined the width of the kernel.

In the case of the normal distribution, we could just vary the standard deviation to adjust the width, but there is a more general way of stretching the kernel for arbitrary distributions. The kernel density estimate for observations $X_1,X_2,…,X_n$ using a kernel function $K$ is:

$$\hat{f}(x)=\frac{1}{n}\sum\limits_{i=1}^n K(x-X_i)$$

In our case above, $K$ is the PDF for the normal distribution with standard deviation 1. We can stretch the kernel by a factor of $h$ like this:

$$\hat{f}(x, h)=\frac{1}{nh}\sum\limits_{i=1}^n K\left(\frac{x-X_i}{h}\right)$$

Note that changing $h$ has the exact same effect as changing the standard deviation: it makes the kernel wider and shorter while maintaining an area of 1 under the curve.

## Different kernel widths result in different mode counts

The width of the kernel is effectively a smoothing factor. If we choose too large of a width, we just end up with one giant mound that is almost a perfect normal distribution. Here’s what it looks like if we use $h=5$:

Clearly, this has a single maximum. If we choose too small of a width, we get a very spiky and over-fit estimate of the PDF. Here’s what it looks like with $h = 0.1$:

This PDF has a bunch of local maxima. If we shrink the width small enough, we’ll get $n$ maxima, where $n$ is the number of observations:

The neat thing about using the normal distribution as our kernel is that it has the property that shrinking the width will only introduce new local maxima. Silverman gives a proof of this at the end of Section 2 in the original paper. This means that for every integer $k$, where $1\le k<n$, we can find the minimum width $h_k$ such that the kernel density estimate has at most $k$ maxima. Silverman calls these $h_k$ values “critical widths.”

## Finding the critical widths

To actually find the critical widths, we need to look at the formula for the kernel density estimate. The PDF for a plain old normal distribution with mean $\mu$ and standard deviation $\sigma$ is:

$$f(x)=\frac{1}{\sigma\sqrt{2\pi}}\mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

The kernel density estimate with standard deviation $\sigma=1$ for observations $X_1,X_2,…,X_n$ and width $h$ is:

$$\hat{f}(x,h)=\frac{1}{nh}\sum\limits_{i=1}^n \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-\frac{(x-X_i)^2}{2h^2}}$$

For a given $h$, you can find all the local maxima of $\hat{f}$ using your favorite numerical methods. Now we need to find the $h_k$ where new local maxima are introduced. Because of a result that Silverman proved at the end of section 2 in the paper, we know we can use a binary search over a range of $h$ values to find the critical widths at which new maxima show up.
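The post itself has no code, but both steps, evaluating $\hat{f}(x,h)$ on a grid and binary-searching for a critical width, fit in a short sketch. A minimal numpy version (the function names, grid resolution, and search bounds are my own choices, not Silverman's):

```python
import numpy as np

def kde(x, obs, h):
    """Gaussian kernel density estimate at the points x, with width h."""
    u = (x[:, None] - obs[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(obs) * h * np.sqrt(2 * np.pi))

def count_modes(obs, h, grid_size=2000):
    """Count local maxima of the KDE on a dense grid."""
    pad = 3 * h
    x = np.linspace(obs.min() - pad, obs.max() + pad, grid_size)
    f = kde(x, obs, h)
    # interior grid points higher than both neighbours are local maxima
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

def critical_width(obs, k, lo=1e-3, hi=None, iters=40):
    """Binary search for the smallest h with at most k modes; valid because
    the mode count only grows as h shrinks (Silverman, section 2)."""
    if hi is None:
        hi = obs.max() - obs.min()  # wide enough to be unimodal in practice
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if count_modes(obs, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi

obs = np.array([1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 9, 10, 10], dtype=float)
print(count_modes(obs, 1.0))   # mode count at h = 1 (2 for this data)
print(critical_width(obs, 1))  # h_1: smallest width giving a unimodal KDE
```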
## Picking which kernel width to use

This is the part of the original paper that I found to be the least clear. It’s pretty dense and makes a number of vague references to the application of techniques from other papers.

We now have a kernel density estimate of the PDF for each number of modes between $1$ and $n$. For each estimate, we’re going to use a statistical test to determine the significance. We want to be parsimonious in our claims that there are additional modes, so we pick the smallest $k$ such that the significance measure of $h_k$ meets some threshold.

Bootstrapping is used to evaluate the accuracy of a statistical measure by computing that statistic on observations that are resampled from the original set of observations. Silverman used a smoothed bootstrap procedure to evaluate the significance. Smoothed bootstrapping is bootstrapping with some noise added to the resampled observations. First, we sample from the original set of observations, with replacement, to get $X_{I(i)}$. Then we add noise to get our smoothed $y_i$ values:

$$y_i=\frac{1}{\sqrt{1+h_k^2/\sigma^2}}(X_{I(i)}+h_k \epsilon_i)$$

where $\sigma$ is the standard deviation of $X_1,X_2,…,X_n$, $h_k$ is the critical width we are testing, and $\epsilon_i$ is a random value sampled from a normal distribution with mean 0 and standard deviation 1.

Once we have these smoothed values, we compute the kernel density estimate of them using $h_k$ and count the modes. If this kernel density estimate doesn’t have more than $k$ modes, we take that as a sign that we have a good critical width. We repeat this many times and use the fraction of simulations where we didn’t find more than $k$ modes as the p-value. In the paper, Silverman does 100 rounds of simulation.

## Conclusion

Silverman’s technique was a really important early step in multimodality detection and it has been thoroughly investigated and improved upon since 1981. Google Scholar lists about 670 citations of this paper. If you’re interested in learning more, one paper I found particularly helpful was On the Calibration of Silverman’s Test for Multimodality (Hall & York, 2001).

One of the biggest weaknesses in Silverman’s technique is that the critical width is a global parameter, so it may run into trouble if our underlying distribution is a mixture of low and high variance component distributions. For an actual implementation of mode detection in a benchmarking package, I’d consider using something that doesn’t have this issue, like the technique described in Nonparametric Testing of the Existence of Modes (Minnotte, 1997).

I hope this is correct and helpful. If I misinterpreted anything in the original paper, please let me know. Thanks!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9338826537132263, "perplexity": 288.63479900798876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00651-ip-10-171-10-108.ec2.internal.warc.gz"}
https://datacadamia.com/data/type/number/system/imaginary
Number System - Imaginary Number

An imaginary number i is one solution to $x^2 = -1$ and is a part of the definition of a complex number:

$$i = \sqrt { -1 }$$

It was invented because formulas sometimes required the manipulation of square roots of negative numbers. Numbers such as i, -i, 3i and 2.17i are called imaginary numbers.

Example

• Problem: $x^2 = -9$
• Solution: $x = \pm 3i$, since $(\pm 3i)^2 = 9i^2 = -9$
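Python's built-in complex type makes the example directly checkable (Python writes the imaginary unit as j rather than i):

```python
import cmath

i = 1j                    # the imaginary unit
print(i * i)              # (-1+0j): i squared is -1
print((3j)**2, (-3j)**2)  # (-9+0j) (-9+0j): both 3i and -3i solve x^2 = -9
print(cmath.sqrt(-9))     # 3j, the principal square root of -9
```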
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595574140548706, "perplexity": 4554.662340808541}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992514.37/warc/CC-MAIN-20210513204127-20210513234127-00131.warc.gz"}
http://mathhelpforum.com/calculus/47526-determining-infinite-limits.html
# Math Help - Determining infinite limits

1. ## Determining infinite limits

$\frac{2x^2(x-1)}{(x+1)^3}$, limit as x approaches $-1^+$ (right hand). I know the limit approaches negative infinity, but am having a hard time proving my answer by manipulating this function. Thanks in advance.

2. Hello, Originally Posted by NotEinstein $\frac{2x^2(x-1)}{(x+1)^3}$, limit as x approaches $-1^+$ (right hand). I know the limit approaches negative infinity, but am having a hard time proving my answer by manipulating this function. Thanks in advance. $\lim_{x\to-1^+}2x^2(x-1) = ?$ and $\lim_{x\to-1^+}(x+1)^3 = ?$ hence what is the limit of the ratio $\frac{2x^2(x-1)}{(x+1)^3}$ when $x$ tends to $-1^+$?

3. Originally Posted by NotEinstein $\frac{2x^2(x-1)}{(x+1)^3}$, limit as x approaches $-1^+$ (right hand). I know the limit approaches negative infinity, but am having a hard time proving my answer by manipulating this function. Thanks in advance. The top tends to -4, the bottom tends to 0. As a denominator gets very small, the number itself gets very large. So the limit is infinity.

4. Some editing (in red): Originally Posted by Prove It The top tends to -4, the bottom tends to $0{\color{red}^{+}}$. As a denominator gets very small and positive, the number itself gets very large and negative (since the top is negative). So the limit is negative infinity.
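A quick numerical check (my own sketch, not from the thread) makes the "-4 over 0-plus" argument concrete:

```python
def f(x):
    return 2 * x**2 * (x - 1) / (x + 1)**3

# approach -1 from the right: the top tends to -4, the bottom to 0+
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    x = -1 + h
    print(f"x = {x}: f(x) = {f(x):.4g}")
# the outputs are negative and grow without bound, consistent with -infinity
```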
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.967207133769989, "perplexity": 515.1309155399315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430455166292.61/warc/CC-MAIN-20150501043926-00036-ip-10-235-10-82.ec2.internal.warc.gz"}
http://mathoverflow.net/revisions/38904/list
$$\lim_{x\to\infty}\frac{\ln\ln x}{\ln x}$$ where you can cancel the $\ln x$ and wind up with $\ln 1=0$, which turns out correct.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8980748653411865, "perplexity": 220.8207685075411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700014987/warc/CC-MAIN-20130516102654-00045-ip-10-60-113-184.ec2.internal.warc.gz"}
https://ham.stackexchange.com/questions/12140/does-a-vertically-or-horizontally-polarized-electromagnetic-wavefront-have-a-thi
# Does a vertically or horizontally polarized electromagnetic wavefront have a thickness? Does a vertically or horizontally polarized electromagnetic wavefront have a thickness ? Or is it really thin and just the height or width of it's wavelength ? • I'm not quite sure what you mean with "thickness"? in an X, Y, Z coordinate system, lets say your wave propagates in Z-direction. So, along which of the three axes do you measure "thickness"? Oct 29 '18 at 10:35 • The obvious assumption is that thickness would be the measurement perpendicular to the linear polarization. This, again, assumes that polarization is purely linear, which it rarely is, and rarely states that way. Oct 29 '18 at 11:28 • A wavefront is a surface created by joining all the points in space which correspond to (say) the crest of the wave. This surface moves at the speed of light. It's a *concept" not a physical thing, it makes no sense to talk about its thickness. Its just a way of tagging one wave so we can think about how it moves and changes shape. As a counter-example, imagine setting the trigger on a 'scope to 50%, seeing it trigger on a sine wave, and asking how thick the trigger point is. Oct 30 '18 at 4:24 That image illustrates the electric field just along one line for "clarity". The line is shown in dark grey, running perpendicular to the antenna, from the lower-left to upper-right of the image. I suppose the author thought trying to show an animated electric field in three dimensional space when mapped onto a two-dimensional image would just be too much information to be clear. The length of the green arrows should not be interpreted as any kind of "width". Rather, the length of the arrow is proportional to the electric field intensity at the single point at the base of the arrow. This kind of visual analogy is a common way to represent a vector field. Here's another visualization of an electric field around an oscillating dipole (source: https://gfycat.com/MelodicThornyHairstreak): Here we can see the oscillating electric field exists all around the dipole in three dimensions. In this animation it also appears that the field changes everywhere at once, which implies that the distance visible from one side of the image to the other is much less than one wavelength. If we could "zoom out", we would see the field doesn't actually change everywhere at once, but that changes begin at the origin and then propagate out at the speed of light. In antenna theory, a plane wave is the spherical propagating wave that, after a short distance, appears essentially as a flat plane wave to the observer's antenna. This is due to the rapid expansion of the spherical wave and the relatively small size of a receiving antenna compared to the sphere. This can be likened to the sensation that the earth is flat from an individual's point of observation. The "thickness" of the plane wave is determined by how long the originating transmitter continues to transmit. As an example, imagine sending a single dit in morse code at 13 WPM. At this speed, a dit lasts approximately 100 milliseconds. Since an RF signal travels at 300,000 kilometers/second in free space, the "thickness" of this plane wave is 30,000 kilometers. If nothing else is sent from the transmitter, the plane wave will have passed the receiving antenna in ~100 milliseconds. In deference to comments made to this post, a harmonic plane wave is technically a flat surface (i.e. two dimensions) of identical phase from the emitting device. 
A true harmonic plane wave does not exist, as it would require an infinite amount of energy to generate. So we often take the liberty of analyzing EM energy as a plane wave when in fact it is not. It is in this same context of inexactitude that I consistently referenced thickness in quotes, since in truth this would comprise an infinite number of non-existent two-dimensional plane waves. That would be quite pedantic but not necessarily helpful. • To say a dit has a particular thickness is fine in a sense, but to say a wavefront has such a thickness is an entirely different thing. A wavefront is a locus of points with identical phase: to say this locus has a thickness would imply the transmitter's phase remained constant for some amount of time, which isn't how it works. Oct 31 '18 at 1:03 • Interesting interpretation, Glenn. By thickness, I (mistakenly?) thought the questioner was asking about the wavefront's dimensions perpendicular to the direction of propagation, i.e., in proportion to the cross-section of the antenna element(s). Oct 31 '18 at 1:21 • Note that Glenn said "...to the observer's antenna". Oct 31 '18 at 1:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8715013861656189, "perplexity": 387.65883404466314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305141.20/warc/CC-MAIN-20220127042833-20220127072833-00025.warc.gz"}
http://gamingmouseunder30.club/how-to-highlight-blank-cells-in-excel/
# How To Highlight Blank Cells In Excel

[Image gallery; only the image alt-text survived extraction. Recoverable topics: selecting blank or non-blank cells with Go To Special > Blanks, highlighting blank cells with conditional formatting rules (including a fill color via the Format Cells dialog), filling blank cells with the value above, replacing zeros with blank cells, and deleting rows with empty cells via a VBA macro, with variants for Excel 2003, 2010 and 2016.]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8234886527061462, "perplexity": 2530.4116981110615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202658.65/warc/CC-MAIN-20190322115048-20190322141048-00280.warc.gz"}
http://physics.stackexchange.com/questions/44188/what-is-the-relativistic-particle-in-a-box?answertab=oldest
# What is the relativistic particle in a box?

I know people try to solve the Dirac equation in a box. Some claim it cannot be done. Some claim to have found the solution; I have seen three, and they are all different and bizarre. But my main issue is what would make the particle behave differently (in the same box), and how useful it is, aside from satisfying the physicist's curiosity.

To improve the question I am adding some clarification. In this paper Alhaidari solves the problem, but he states in the paper that "In fact, the subtleties are so exasperating to the extent that Coulter and Adler ruled out this problem altogether from relativistic physics: 'This rules out any consideration of an infinite square well in the relativistic theory' [4]." Ref [4]: B. L. Coulter and C. G. Adler, Am. J. Phys. 39, 305 (1971).

But there are also attempts in this paper. And I quote from this paper (page 2): "A particular solution may be obtained by considering the Dirac equation with a Lorentz scalar potential [7]; here the rest mass can be thought of as an x-dependent mass. This permits us to solve the infinite square well problem as if it were a particle with a changing mass that becomes infinite out of the box, so avoiding the Klein paradox [8]."

So my question is: why all these discrepancies?

- I'm definitely not an expert in this subject, but one obvious difficulty with any 'relativistic particle-in-a-box' problem is that in QFT the number of particles is not fixed. The smaller the box, the higher the ground-state energy and the higher the probability that quantum fluctuations can create another Dirac fermion. Once the size of the box becomes of the order of the Compton wavelength of the particle, the single-particle approximation completely breaks down. David Tong gives a lovely exposition of this point in his first lecture here –  Mark Mitchison Nov 14 '12 at 11:53 Heuristically, potential gradients imply pair creation. In this case you have an infinite gradient. If you had a Dirac field in such a potential, pairs would spontaneously form out of nowhere. This violates the applicability of the wave equation, which is a single-particle equation. –  Prathyush Nov 15 '12 at 0:41 @Prathyush, you're right, but there are quite a few physicists, like the ones I have listed, who think otherwise. They seem to say that there is a solution as long as you don't go below a certain distance or potential; I'm not sure, really. –  QSA Nov 15 '12 at 1:00 The edits help, but what discrepancies are you talking about? I think the question could still use some clarification. –  David Z Nov 15 '12 at 4:38 @DavidZaslavsky, well, as you can see, Adler claims that there is no solution, Alhaidari finds a particular solution, and Vidal Alonso finds a solution where the mass is proportional to the width of the box, and there are others. So it is not clear to me what is going on. I usually try to ask questions related to fundamental issues that seem to be either glossed over or not well explained in the textbooks and literature. –  QSA Nov 15 '12 at 15:08

For several reasons explained in textbooks, the Dirac equation is not a valid wavefunction equation. You can solve it and find solutions, but those solutions cannot be interpreted as wavefunctions for a particle [1]. I have checked the three articles linked by you and I do not find any discussion of this. For instance, if $\psi(x)$ is a solution to the Dirac equation then $|\psi(x)|^2$ is not the probability density of finding the particle at $x$, because $x$ in Dirac theory is not an observable [2].
Moreover, their treatment is far from being completely relativistic. They are working in a pseudo-relativistic approach, as in the Coulomb-Dirac approach.

[1] This is the reason why the solutions to the Dirac equation are re-interpreted as operators in QFT.

[2] This is the reason why $x$ is downgraded from operator status to parameter in QFT.

- Thanks for going through the papers. What you are saying is of course textbook stuff. But my question (let me reformulate) is why they are going about it, i.e. how useful it is, and what is wrong with what has been proposed. Something along those lines. –  QSA Nov 15 '12 at 23:19 I cannot know "why they are going about it", much as I cannot know why some people still believe that the Earth is flat. I already said in my answer what is wrong with their work, e.g. their solutions to the Dirac equation have no physical meaning, and I avoided objectionable statements about particles with changing masses that become infinite out of the box. What is then the utility of their work? Well, it is useless for me. –  juanrga Nov 16 '12 at 18:15 Maybe it is useless to you, but the authors publish in well-respected journals, and there are others. For example, this paper is published in IOP: iopscience.iop.org/0143-0807/17/1/004, and this generalization: adsabs.harvard.edu/abs/2011PhLA..375.1436A, and this one: cdsweb.cern.ch/record/1272071 –  QSA Nov 18 '12 at 2:52 Yes, that is why I wrote "it is useless for me". Physics Letters A has published some of the most famous wrong articles about relativity --a recent but not famous retraction is here-- and has published the scam papers on 'quantum health' and 'IEwatter'. I am not saying that those articles about the Dirac equation are at the same low level, merely informing you that being published is not a synonym for correct, nor for useful. If you love them, that is fine for me. –  juanrga Nov 18 '12 at 12:53 @QSA: Actually, it's all the excuse you need in Mathematics. If an equation is challenging, bears some elegance or some kind of mystery, finding its solution is interesting. It doesn't matter to the mathematician that it doesn't work with the physics of our universe. While it's useless to a physicist, it does push the frontier of Mathematics a little, and besides, is it entirely wrong or just inaccurate? Sometimes an order of approximation is all an engineer needs too, mr. Physicist :D –  SF. Jan 14 '13 at 23:26
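For reference, the "Lorentz scalar potential" construction quoted in the question can be written down schematically. This is a standard textbook form, not a transcription from the linked papers, so treat the notation (the symbol $S$ and the box on $[0,L]$) as our assumption:

```latex
% Dirac equation with a Lorentz scalar potential S(x), which enters like a
% position-dependent mass (sketch; notation is ours, not the linked papers').
\left[ i\gamma^\mu \partial_\mu - \big( m + S(x) \big) \right] \psi(x) = 0,
\qquad
S(x) =
\begin{cases}
0 & 0 \le x \le L \\
S_0 \to \infty & \text{otherwise}
\end{cases}
```

Because $S$ is added to the mass rather than to the energy (as the time component of a vector potential would be), the walls do not create the steep electrostatic step responsible for the Klein paradox, which is the point of the construction quoted in the question.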
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8400370478630066, "perplexity": 508.11188581028216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802773061.155/warc/CC-MAIN-20141217075253-00009-ip-10-231-17-201.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/100239/problem-with-continuous-functions
# Problem with continuous functions

Let $f,g:\mathbb{R}\longrightarrow\mathbb{R}$ be continuous functions with $f\left(q\right)\leq g\left(q\right)$ for all $q\in\mathbb{Q}$. I need to prove that $f\left(x\right)\leq g\left(x\right)$ for all $x\in\mathbb{R}$.

- Pick $x\in\mathbb{R}\setminus\mathbb{Q}$. Suppose $f(x)>g(x)$. Let $\varepsilon>0$ be less than the difference $f(x)-g(x)$. Then, by continuity of $f-g$, there exists $\delta>0$ such that if $w$ differs from $x$ by less than $\delta$, then $f(w)-g(w)$ differs from $f(x)-g(x)$ by less than $\varepsilon$. But some rational number $q$ lies in the interval $(x-\delta,x+\delta)$, and for it $f(q)-g(q) > f(x)-g(x)-\varepsilon > 0$, contradicting $f(q)\leq g(q)$.

Hint: Note that if $(x_n)_{n\in\mathbb{N}}$ is a convergent sequence of real numbers and $f,g$ are continuous functions, then $\lim\limits_{n\to\infty} f(x_n)=f(\lim\limits_{n\to\infty}x_n)$ and $\lim\limits_{n\to\infty} g(x_n)=g(\lim\limits_{n\to\infty}x_n)$, and that for any real number $x$ there is some sequence $(x_n)_{n\in\mathbb{N}}$ of rational numbers that converges to it. How can we manipulate limits to show that $f(x)=f(\lim\limits_{n\to\infty}x_n)\leq g(\lim\limits_{n\to\infty}x_n)=g(x)$?
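To spell out the limit manipulation the hint asks for (a short completion; the key fact is that weak inequalities are preserved under limits):

```latex
% Take rationals x_n -> x; each f(x_n) <= g(x_n), and pass to the limit.
f(x) = f\Big(\lim_{n\to\infty} x_n\Big)
     = \lim_{n\to\infty} f(x_n)
     \le \lim_{n\to\infty} g(x_n)
     = g\Big(\lim_{n\to\infty} x_n\Big)
     = g(x)
```

The middle inequality holds because $f(x_n)\le g(x_n)$ for every $n$ (each $x_n$ is rational), and taking limits preserves $\le$ (though not $<$).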
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.996324360370636, "perplexity": 37.18885035537586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500820886.32/warc/CC-MAIN-20140820021340-00292-ip-10-180-136-8.ec2.internal.warc.gz"}
http://libros.duhnnae.com/2017/aug6/150293273625-Synchronization-of-oscillators-with-long-range-power-law-interactions-Condensed-Matter-Statistical-Mechanics.php
# Synchronization of oscillators with long range power law interactions - Condensed Matter > Statistical Mechanics

Abstract: We present analytical calculations and numerical simulations for the synchronization of oscillators interacting via a long range power law interaction on a one dimensional lattice. We have identified the critical value of the power law exponent $\alpha_c$ across which a transition from a synchronized to an unsynchronized state takes place for a sufficiently strong but finite coupling strength in the large system limit. We find $\alpha_c = 3/2$. Frequency entrainment and phase ordering are discussed as a function of $\alpha \geq 1$. The calculations are performed using an expansion about the aligned phase state (spin-wave approximation) and a coarse graining approach. We also generalize the spin-wave results to the $d$-dimensional problem.

Authors: Debanjan Chowdhury, M. C. Cross. Source: https://arxiv.org/
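The model in the abstract is straightforward to simulate. Below is a minimal Python sketch of phase oscillators on a 1D ring with a $1/r^\alpha$ coupling; the coupling normalization, parameter values, and Euler integration are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def kuramoto_power_law_step(theta, omega, K, alpha, dt):
    """One Euler step for N phase oscillators on a 1D ring whose pairwise
    coupling decays as 1/r^alpha with lattice distance r (a sketch, not the
    paper's exact normalization)."""
    n = len(theta)
    idx = np.arange(n)
    # Ring distances between all pairs of sites, shape (n, n).
    r = np.abs(idx[:, None] - idx[None, :])
    r = np.minimum(r, n - r)
    with np.errstate(divide='ignore'):
        w = np.where(r > 0, r.astype(float) ** (-alpha), 0.0)
    # Kuramoto interaction: sum over j of w_ij * sin(theta_j - theta_i).
    coupling = (w * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + K * coupling / w.sum(axis=1))

rng = np.random.default_rng(0)
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
omega = rng.normal(0, 0.1, n)          # quenched natural frequencies
for _ in range(2000):
    theta = kuramoto_power_law_step(theta, omega, K=1.0, alpha=1.2, dt=0.05)
# Order parameter |<e^{i theta}>|: near 1 when synchronized, near 0 otherwise.
print(abs(np.exp(1j * theta).mean()))
```

Sweeping `alpha` across the critical value while watching the order parameter is the simplest way to see the transition the abstract describes.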
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336062669754028, "perplexity": 3661.0075745210343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584350539.86/warc/CC-MAIN-20190123193004-20190123215004-00209.warc.gz"}
http://mathhelpforum.com/differential-equations/181847-simple-first-order-diff-eq.html
# Thread: Simple first order diff. eq

1. ## Simple first order diff. eq

Hi! I have a simple question that I can't get through at the moment. I have a differential equation of the form: y' = C - y, where y' = dy/dt. How can I solve this? I can't do separation of variables here. Help, please!

2. $y'=C-y\Leftrightarrow \dfrac{dy}{C-y}=dt$ (separated variables).

3. Do you mean you aren't allowed to do separation of variables? Or that you don't think separation of variables will work for this problem? Because you can do separation of variables. You could also use an integrating factor if you wanted, and probably a few other techniques as well.

4. Originally Posted by Zogru11: "Hi! I have a simple question that I can't get through at the moment. I have a differential equation of the form: y' = C - y, where y' = dy/dt. How can I solve this? I can't do separation of variables here. Help, please!"

\displaystyle \begin{align*}\frac{dy}{dt} &= C - y \\ \frac{dy}{dt} + y &= C \\ e^{\int{1\,dt}}\,\frac{dy}{dt} + e^{\int{1\,dt}}\,y &= C\,e^{\int{1\,dt}} \\ e^t\,\frac{dy}{dt} + e^t\,y &= C\,e^t \\ \frac{d}{dt}\left(e^t\,y\right) &= C\,e^t \end{align*}

Can you go from here?

5. No, I just didn't realise I was able to do separation of variables :P Thanks for all the answers
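As a quick symbolic check of the thread's answer, SymPy solves this ODE directly (a minimal sketch; the symbol names are our own):

```python
import sympy as sp

t, C = sp.symbols('t C')
y = sp.Function('y')

# Solve y'(t) = C - y(t). Expect y(t) = C + C1*exp(-t),
# where the constant C1 is fixed by the initial condition.
solution = sp.dsolve(sp.Eq(y(t).diff(t), C - y(t)), y(t))
print(solution)
```

Both the separation-of-variables route and the integrating-factor route in the thread lead to this same general solution.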
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999421834945679, "perplexity": 949.6095549194986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00061-ip-10-171-10-108.ec2.internal.warc.gz"}
http://physics.stackexchange.com/tags/lightning/new
# Tag Info

0 I know this is a little more than you asked for, but lightning is very interesting. A lightning event is usually called a flash and lasts about 0.5 seconds. It consists of a near-invisible stepped leader followed by a very bright return stroke backwards along the path of the stepped leader. Following the first stroke, there may be additional strokes in the ...

10 The point of the point is to increase the electric field near the point. Small-radius curves will have a higher local electric field, eventually creating a localized area where the field is greater than the dielectric strength of the air. This results in what I refer to as "micro-lightning." This micro-lightning discharges the air (or cloud) before the ...

9 Suppose that you have a negatively charged cloud floating over your conductor. Making your lightning conductor pointy at the tip facilitates discharge, because the electric field set up there would be high. ${\sigma}=\frac{q}{4\pi r^2}$. We will take a spherical approximation of the pointed end. It will have a very small radius and thus a high surface ...

1 It is safer to sit inside a car than under a tree, because trees attract lightning and a strike can be fatal. The car, on the other hand, protects the person inside it: it shields the person from external electric fields.

3 The electric charge difference between the earth and the atmosphere grows with altitude, at around 88 DC volts per meter. This electric potential may be shorted out when a thermonuclear explosion releases radiation which ionizes the atmosphere. About 5% of a nuclear explosion's energy is in the form of ionizing radiation. A study of lightning flashes ...
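The surface-charge argument in the third answer breaks off mid-sentence; the standard way to finish it is the two-spheres model (our notation, a textbook sketch rather than the answerer's own continuation):

```latex
% Two conducting spheres of radii r_1 < r_2 held at the same potential V
% (connected by a wire) model the sharp tip and the blunt body of a rod.
V = \frac{q_1}{4\pi\varepsilon_0 r_1} = \frac{q_2}{4\pi\varepsilon_0 r_2}
\quad\Longrightarrow\quad
\frac{\sigma_1}{\sigma_2}
  = \frac{q_1 / 4\pi r_1^2}{q_2 / 4\pi r_2^2}
  = \frac{r_2}{r_1},
\qquad
E_{\text{surface}} = \frac{\sigma}{\varepsilon_0} \propto \frac{1}{r}.
```

So the smaller the radius of curvature, the larger the surface charge density and the local field, which is why discharge starts at the point.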
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8904033899307251, "perplexity": 791.0539967828322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928817.87/warc/CC-MAIN-20150521113208-00174-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/noncommotative-structural-numbers.7315/
# Noncommotative structural numbers

1. Oct 16, 2003 ### Organic

Noncommutative structural numbers

Hi,

In the attached address (at the end of the web page) there is a short paper (a pdf file) on noncommutative structural numbers: http://www.geocities.com/complementarytheory/CATpage.html

Thank you (and special thanks to Hurkyl, who gave the formal definition, which is written in the first 7 sentences of the paper).

Organic

Last edited: Oct 17, 2003

2. Oct 17, 2003 ### HallsofIvy

I don't see any formal definition - certainly not in the first 7 sentences. I don't actually see any definition at all. I see a lot of general vagueness and use of undefined symbols. You say "A single-simultaneous-connection is any single real number included in p, q ( = D = Discreteness = a localized element = {.} )." What do you mean by "included in p, q"? In particular, what do you mean by "p, q"? I would tend to assume you mean any of [p,q], [p,q), (p,q], (p,q), which you had given above. I take it then that "a single-simultaneous-connection" is a singleton set?

"Double-simultaneous-connection is a connection between any two different real numbers included in p, q , where any connection has exactly 1 D as a common element with some other connection ( = C = Continuum = a non-localized element = {.___.} )." Okay, so a "double-simultaneous-connection" is a pair of numbers? "where any connection has exactly 1 D as a common element with some other connection" is not clear. You appear to be saying that two "connections" (I take you to mean "double-simultaneous-connections") that have both elements the same are not considered to be different. That's actually part of the definition of a set. I have absolutely no idea what " = C = Continuum = a non-localized element = {.___.} )" could possibly mean.

"Therefore, x is . XOR .___." This makes no sense. The only use of "x" before this was as a bound variable in the (standard) definition of [a,b], [a,b), etc. In any case, you have been told repeatedly that your use of "XOR" has no relation to the standard use. Please don't use a standard notation for a non-standard use.

You seem to be still agonizing over the difference between the discrete integers and the continuous real numbers. I can only suggest again that you take a good course in basic mathematical analysis. (And it might be a good idea to learn what a "definition" really is.)

By the way, what does "non-commotative" mean? Did you mean "non-commutative"? I didn't see any reference to that in your post.

3. Oct 17, 2003 ### Organic

Hi HallsofIvy,

First, thank you for the correction; it is noncommutative. Please, after you open the web page, go to the end of it (as I wrote above) and then open the pdf file, which is under the title: Noncommutative structural numbers
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9379134178161621, "perplexity": 1127.469661289685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660529.12/warc/CC-MAIN-20190118193139-20190118215139-00465.warc.gz"}
https://www.dummies.com/test-prep/asvab-test/multiplying-and-reducing-fractions-on-the-asvab-math-knowledge-test/
Multiplying and Reducing Fractions on the ASVAB Math Knowledge Test

Some questions on the ASVAB Mathematics Knowledge subtest may involve multiplying fractions. As you solve an equation, you may need to perform the extra step of reducing the fraction to make it easier to find the right answer.

Multiplying fractions is easy. You just multiply the numerators and then multiply the denominators. Look at the following equation:

1/2 × 3/4 × 3/5

You multiply 1 × 3 × 3 = 9 (the numerators) and then 2 × 4 × 5 = 40 (the denominators) to get the result 9/40.

Occasionally, when you multiply fractions, you end up with an extremely large fraction that can be simplified or reduced. To express a fraction in its lowest terms means to put it in such a way that you can't evenly divide the numerator and the denominator by the same number (other than 1).

A number that you can divide into both the numerator and the denominator is called a common factor. If you have the fraction 6/10, both the numerator (6) and the denominator (10) can be divided by the same number, 2. If you do the division, 6 ÷ 2 = 3 and 10 ÷ 2 = 5, and you find that 6/10 can be expressed in the simpler terms of 3/5. You can't reduce (simplify) 3/5 any further; the only other number that both the numerator and denominator can be divided by is 1, so the result would be the same, 3/5.

Remember, you can't use a calculator on the ASVAB, so multiplying large numbers can take extra steps and valuable time. You can make your work easier by canceling out common factors before multiplying. For example, suppose you have the following problem:

20/21 × 14/25

Multiplying the numerators (20 × 14 = 280), then multiplying the denominators (21 × 25 = 525), and finally reducing the fraction may require you to write out three or more separate multiplication/division problems. But you can save time if a numerator and denominator have common factors. Here, the numerator of the first fraction (20) and the denominator of the second (25) have a common factor of 5, so you can divide both of those numbers by 5. Your problem becomes:

4/21 × 14/5

The numerator of the second fraction (14) and the denominator of the first fraction (21) are both divisible by 7, so you can cancel out a 7. Divide 14 and 21 by 7, and the problem becomes:

4/3 × 2/5 = 8/15

This changes the equation to a much simpler math problem.
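A quick way to check this kind of arithmetic is with Python's fractions module; this is just an illustration (on the ASVAB itself no calculator is allowed):

```python
from fractions import Fraction

# Fraction reduces to lowest terms automatically, so both routes agree.
direct = Fraction(20, 21) * Fraction(14, 25)   # multiply first, then reduce
cancelled = Fraction(4, 3) * Fraction(2, 5)    # cancel the 5 and the 7 first
print(direct, cancelled)  # 8/15 8/15
```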
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.982858419418335, "perplexity": 247.6727462444114}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00235.warc.gz"}
https://www.physicsforums.com/threads/physics-dynamics-question-homework-help.589547/
# Physics Dynamics Question homework help

1. Mar 23, 2012 ### huzjm

1. The problem statement, all variables and given/known data

Find the acceleration in the system of a Fletcher's trolley, given m1 = 1.0 kg and m2 = 9.0 kg, where a 50 N force of friction exists and θ = 33°. See the image below for the diagram. This one has an angle of 33 degrees, which is why it is hard. http://i39.tinypic.com/3497zo9.png

2. Relevant equations

I am not sure.

3. The attempt at a solution

Okay, so first I found the horizontal component of m1 by the formula cos(33) * mass (which is 1.0 kg) and got the value 0.838. Then I used the formula Fnet = [(M2*A) - Ff]/(m1+m2), which means 9.0 kg * 9.81 m/s² - 50 N, divided by (0.838 + 9). The answer was 3.9 m/s², but this is the wrong answer. I asked my teacher; he said you are somewhere close, but this is not right. I cannot figure this out; I have been trying different things for hours.

2. Mar 23, 2012 ### tiny-tim

welcome to pf!

hi huzjm! welcome to pf! (try using the X2 button just above the Reply box)

sorry, but even if you mean a = Fnet/(m1+m2) = [(M2*A) - Ff]/(m1+m2), that's still completely wrong

try calling the tension "T", and doing two F = ma equations (one for each block)

alternatively, if you're treating the two blocks as a single system, your m in ma has to be the total (unadjusted) mass, and you have to use the component of m1g parallel to the string

try it both ways … what do you get?

3. Mar 27, 2012 ### huzjm

Thank you very much :) In fact, thank you really very much :) This is what I did:

Fnet = ma = 9 kg * a = mg - T = 9 kg * 9.8 m/s² - T = 88.2 N - T → T = 88.2 - 9a

For the lighter mass, Fnet = ma = 1 kg * a = T - mg sinΘ - Ff = T - 1 kg * 9.8 m/s² * sin 33º - 50 N

1 kg * a = T - 5.34 N - 50 N = T - 55.34 N → T = a + 55.34

Since T = T: 88.2 - 9a = a + 55.34, so 32.86 = 10a, giving a ≈ 3.3 m/s²

4. Mar 27, 2012 ### tiny-tim

excellent! (btw, you'll notice you could get the same result by treating it as a one-dimensional motion, with a single body with a mass of 10 kg, friction of 50 N and gravitational forces of 9g N and -g sin 33° N)

5. Mar 27, 2012 ### huzjm

Oh, so the formula should have been acceleration = [(9.0 kg * 9.81 m/s) - 50 N - (9.81 m/s * sin 33°)] / 10 kg. Thank you really very much :)

6. Mar 27, 2012 ### huzjm

I mean m/s (squared)
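The accepted numbers from post 3 can be reproduced with a few lines of arithmetic; this sketch just re-derives the thread's result using the single-system view tiny-tim suggested (variable names are ours):

```python
import math

m1, m2 = 1.0, 9.0        # kg: mass on the incline and the hanging mass
g = 9.8                  # m/s^2
Ff = 50.0                # N, friction opposing m1's motion
theta = math.radians(33)

# Single-system view: net force = m2*g - m1*g*sin(theta) - Ff on total mass m1 + m2.
a = (m2 * g - m1 * g * math.sin(theta) - Ff) / (m1 + m2)
T = m2 * (g - a)         # tension from the hanging mass's F = ma equation
print(round(a, 2), round(T, 1))  # ~3.29 m/s^2 and ~58.6 N, matching the thread
```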
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8873628973960876, "perplexity": 1759.5716735973954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824225.41/warc/CC-MAIN-20171020135519-20171020155519-00329.warc.gz"}
http://algassert.com/post/1620
31 Jul 2016

One of the annoying things about the Quantum Fourier transform is that it uses very small phase factors. An exact $n$-qubit QFT requires applying a $Z^{2^{-n}}$ gate. Of course, in practice you don't need your QFT to be exact. You can use multi-gate constructions that approximate small phase change gates, and past some reasonable cutoff (that depends on your error budget) you can just completely skip the tiny phase changes because they have so little effect on the measured result.

But still, it'd be nice to be able to apply an exact QFT without having to deal with all these finicky gates every time. In this post, I'm going to explain one way to do that, by using a re-usable gradient resource that can be prepared ahead of time.

# Shifting vs Phasing

The Fourier transform converts between the time domain and the frequency domain. One of the consequences of this fact is that, when you shift samples in the time domain, you end up phasing values in the frequency domain. Rotating indices gets turned into rotating values (and vice versa). Concretely speaking, this means that you can hop a QFT gate over an increment gate, without changing the function of the circuit, as long as you replace the increment with a phase gradient: [circuit diagram]

But keep in mind which direction you're hopping! If the QFT gate goes from left to right, instead of from right to left, the gradient is negated: [circuit diagram]

This "increment is Fourier-transformed phasing" doesn't just apply to incrementing, it also applies to conditional incrementing. That allows us to also apply this transformation to addition. If you add $a$ into $b$, that's equivalent to Fourier-transforming $b$, applying a controlled phase gradient for each bit of $a$, then un-Fourier-transforming: [circuit diagram]

Now let's consider what happens when you increment into a phase gradient. Specifically, the phase gradient you get when you Fourier-transform -1. Surprisingly, we end up phasing the control instead of affecting the gradient: [circuit diagram]

What about addition? Basically the same thing happens. A phase gradient is unaffected when you add $a$ into it... except that $a$ itself gets phased by a gradient!

# Applying to the Fourier Transform

Note: I used subtraction in the diagram, not addition, but it's exactly the same idea. [circuit diagram]

Why would we do this? Because addition doesn't require arbitrarily precise gates. We can implement it starting with a gate set containing just the Hadamard gate, the controlled-not gate, and the $Z^{1/4}$ gate (also known as the $T$ gate). By setting up the phase gradient ahead of time, we only need to pay the price of approximating exponentially precise gates once. We can then use that gradient again and again for as many QFTs, inverse QFTs, and various other things-that-need-precise-phase-gates as we want. Assuming we need to accumulate at most $\epsilon$ phase error, this approach requires only $O(n \lg \frac{n}{\epsilon})$ gates. That's pretty good!

We can also combine this adding-into-gradients idea with the combine-phases-where-possible-using-multiplication idea that I discussed in another post: [circuit diagram]

Compared to the linked post, MAC-into-gradient uses half as many multiply-accumulates and reduces the total number of more-precise-than-$T$ phase gates (including preparation) from $O(n \lg n)$ to $O(n)$. That's a big improvement!

(Side note: At first I was hoping that we could recursively apply these improvements to the multiply-accumulates. Fast multiplication algorithms start with a Fourier-transform of the inputs after all, and we're in the middle of doing that already.
Alas, it doesn't quite seem to work, because the QFT treats the amplitudes as the time-domain samples, whereas the multiplication algorithm needs to treat the individual qubits as the samples.)

# Polynomial Similarities

Let me mention one last interesting thing. You may or may not be aware that Fourier transforms can be applied in fields other than the complex numbers. Basically all you need is a big root of -1, and lots of fields and even rings have a big root of -1. Specifically, you can do a Fourier transform within the space of polynomials modulo $x^n + 1$, where $x^n \equiv -1$ and $x$ is a $2n$th principal root of unity.

If you map what that polynomial FFT is doing into a quantum circuit form, in the same way that the QFT circuit maps what the Cooley-Tukey algorithm is doing into a quantum circuit form, you end up with something that looks exactly like the adding-into-gradients circuit we just made... except that there's no phase gradient preparation at the start. Or, equivalently, the phase gradient has been Fourier transformed into a shift.

So, in a sense, the Fourier transform on negacyclic polynomials is a Fourier-transformed Fourier transform.
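For concreteness, the shift-to-phase correspondence the post relies on can be written out explicitly (standard QFT convention; the name GRAD for the gradient gate is our own shorthand):

```latex
% QFT on n qubits, with N = 2^n:
\mathrm{QFT}\,|x\rangle
  = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i\,xk/N}\,|k\rangle
% Incrementing before the QFT becomes a phase gradient after it:
\mathrm{QFT}\,|x + 1 \bmod N\rangle
  = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i\,(x+1)k/N}\,|k\rangle
  = \mathrm{GRAD}\cdot\mathrm{QFT}\,|x\rangle,
\qquad
\mathrm{GRAD}\,|k\rangle = e^{2\pi i\,k/N}\,|k\rangle
```

The extra factor $e^{2\pi i\,k/N}$ on each basis state $|k\rangle$ is exactly the phase gradient the circuits above hop over the QFT.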
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687389492988586, "perplexity": 660.4413701126452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944479.27/warc/CC-MAIN-20180420155332-20180420175332-00263.warc.gz"}
https://paperswithcode.com/paper/gradient-estimation-with-discrete-stein
# Gradient Estimation with Discrete Stein Operators

19 Feb 2022

Gradient estimation -- approximating the gradient of an expectation with respect to the parameters of a distribution -- is central to the solution of many machine learning problems. However, when the distribution is discrete, most common gradient estimators suffer from excessive variance. To improve the quality of gradient estimation, we introduce a variance reduction technique based on Stein operators for discrete distributions. We then use this technique to build flexible control variates for the REINFORCE leave-one-out estimator. Our control variates can be adapted online to minimize variance and do not require extra evaluations of the target function. In benchmark generative modeling tasks such as training binary variational autoencoders, our gradient estimator achieves substantially lower variance than state-of-the-art estimators with the same number of function evaluations.
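The REINFORCE leave-one-out (RLOO) baseline mentioned in the abstract is easy to state in code. Below is a minimal numpy sketch for a factorized Bernoulli distribution; the toy target f and the logit parameterization are our own choices, and the paper's Stein-operator control variates are not implemented here:

```python
import numpy as np

def rloo_grad(logits, f, K=8, rng=np.random.default_rng(0)):
    """REINFORCE leave-one-out gradient of E_{b ~ Bernoulli(sigmoid(logits))}[f(b)]
    with respect to the logits. Each sample's baseline is the mean of the
    other K-1 samples' scores, which keeps the estimator unbiased."""
    p = 1.0 / (1.0 + np.exp(-logits))
    b = (rng.random((K, logits.size)) < p).astype(float)   # K samples of b
    fb = np.array([f(bi) for bi in b])                      # scores, shape (K,)
    baseline = (fb.sum() - fb) / (K - 1)                    # leave-one-out means
    score = b - p   # d/dlogits of log Bernoulli(b; sigmoid(logits))
    return ((fb - baseline)[:, None] * score).mean(axis=0)

# Toy check: f counts ones, so the true gradient is p*(1-p) per coordinate.
logits = np.array([0.0, 1.0, -1.0])
g = rloo_grad(logits, f=lambda b: b.sum(), K=512)
print(g)  # should be close to p*(1-p) = [0.25, 0.197, 0.197]
```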
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9304563999176025, "perplexity": 585.4682617533472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945323.37/warc/CC-MAIN-20230325095252-20230325125252-00098.warc.gz"}
https://www.redcrab-software.com/en/Calculator/Vector/2/Distance-Square
# Square of the vector distance

Online calculator for calculating the square of the distance between two vectors.

## Description and formulas of the quadratic vector distance

To find the distance, or its square, between two vectors, use the distance formula. $$d^2=(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2$$ In the formula, $$x$$, $$y$$ and $$z$$ stand for the coordinates of the points in the vector space.

### Example

The following example calculates the square of the distance between the points $$(0, -2, 7)$$ and $$(8, 4, 3)$$.

$$d^2=(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2$$ $$d^2=(8-0)^2 + (4-(-2))^2 + (3-7)^2$$ $$d^2=(8)^2 + (6)^2 + (-4)^2$$ $$d^2=64 + 36 + 16$$ $$d^2=116$$

The square of the distance between the points $$(0, -2, 7)$$ and $$(8, 4, 3)$$ is $$116$$.
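The same computation is one line in code; a minimal sketch:

```python
def dist_squared(p, q):
    """Square of the Euclidean distance between two points of equal dimension."""
    return sum((b - a) ** 2 for a, b in zip(p, q))

print(dist_squared((0, -2, 7), (8, 4, 3)))  # 116, matching the worked example
```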
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9692522287368774, "perplexity": 723.3570597347542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363290.59/warc/CC-MAIN-20211206072825-20211206102825-00437.warc.gz"}
https://www.learndatasci.com/tutorials/python-finance-part-2-intro-quantitative-trading-strategies/
# Python for Finance, Part 2: Intro to Quantitative Trading Strategies

Looking more into quantitative trading strategies and determining returns

• Python fundamentals
• Pandas and Matplotlib
• Some linear algebra

In Python for Finance, Part I, we focused on using Python and Pandas to

1. retrieve financial time-series from free online sources (Yahoo),
2. format the data by filling missing observations and aligning them,
3. calculate some simple indicators such as rolling moving averages, and
4. visualise the final time-series.

As a reminder, the dataframe containing the three "cleaned" price time-series has the following format:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='darkgrid', context='talk', palette='Dark2')

data.head(10)

                AAPL       MSFT        ^GSPC
2000-01-03  3.625643  39.334630  1455.219971
2000-01-04  3.319964  38.005900  1399.420044
2000-01-05  3.368548  38.406628  1402.109985
2000-01-06  3.077039  37.120080  1403.449951
2000-01-07  3.222794  37.605172  1441.469971
2000-01-10  3.166112  37.879354  1457.599976
2000-01-11  3.004162  36.909170  1438.560059
2000-01-12  2.823993  35.706986  1432.250000
2000-01-13  3.133722  36.381897  1449.680054
2000-01-14  3.253159  37.879354  1465.150024

We have also calculated the rolling moving averages of these three time-series as follows. Note that when calculating the $M$-day moving average, the first $M-1$ observations are not valid, as $M$ prices are required for the first moving average data point.

# Calculating the short-window moving average
short_rolling = data.rolling(window=20).mean()
short_rolling.head()

            AAPL  MSFT  ^GSPC
2000-01-03   NaN   NaN    NaN
2000-01-04   NaN   NaN    NaN
2000-01-05   NaN   NaN    NaN
2000-01-06   NaN   NaN    NaN
2000-01-07   NaN   NaN    NaN

# Calculating the long-window moving average
long_rolling = data.rolling(window=100).mean()
long_rolling.tail()

                  AAPL       MSFT        ^GSPC
2016-12-26  110.958205  58.418182  2176.628791
2016-12-27  111.047874  58.476117  2177.500190
2016-12-28  111.140589  58.532936  2178.244490
2016-12-29  111.233698  58.586112  2178.879189
2016-12-30  111.315270  58.635267  2179.426990

Building on these results, our ultimate goal will be to design a simple yet realistic trading strategy. However, first we need to go through some of the basic concepts related to quantitative trading strategies, as well as the tools and techniques in the process.

There are several ways one can go about developing a trading strategy. One approach would be to use the price time-series directly and work with numbers that correspond to some monetary value. For example, a researcher could be working with time-series expressing the price of a given stock, like the time-series we used in the previous article. Similarly, if working with fixed income instruments, e.g. bonds, one could be using a time-series expressing the price of the bond as a percentage of a given reference value, in this case the par value of the bond.

Working with this type of time-series can be more intuitive, as people are used to thinking in terms of prices. However, price time-series have some drawbacks. Prices are usually only positive, which makes it harder to use models and approaches which require or produce negative numbers. In addition, price time-series are usually non-stationary, that is, their statistical properties are less stable over time. An alternative approach is to use time-series which correspond not to actual values but to changes in the monetary value of the asset.
These time-series can and do assume negative values, and their statistical properties are usually more stable than those of price time-series. The most frequently used forms are relative returns, defined as

$$r_{\text{relative}}\left(t\right) = \frac{p\left(t\right) - p\left(t-1\right)}{p\left(t-1\right)}$$

and log-returns, defined as

$$r\left(t\right) = \log\left( \frac{p\left(t\right)}{p\left(t-1\right)} \right)$$

where $p\left(t\right)$ is the price of the asset at time $t$. For example, if $p\left(t\right) = 101$ and $p\left(t-1\right) = 100$ then $r_{\text{relative}}\left(t\right) = \frac{101 - 100}{100} = 1\%$.

There are several reasons why log-returns are used in the industry, and some of them, related to long-standing assumptions about the behaviour of asset returns, are out of our scope. However, what we need to point out are two quite interesting properties. Log-returns are additive, which facilitates treatment of our time-series; relative returns are not. We can see the additivity of log-returns in the following equation:

$$r\left(t_1\right) + r\left(t_2\right) = \log\left( \frac{p\left(t_1\right)}{p\left(t_0\right)} \right) + \log\left( \frac{p\left(t_2\right)}{p\left(t_1\right)} \right) = \log\left( \frac{p\left(t_2\right)}{p\left(t_0\right)} \right)$$

which is simply the log-return from $t_0$ to $t_2$. Secondly, log-returns are approximately equal to the relative returns for values of $\frac{p\left(t\right)}{p\left(t-1\right)}$ sufficiently close to $1$. By taking the 1st order Taylor expansion of $\log\left( \frac{p\left(t\right)}{p\left(t-1\right)} \right)$ around $1$, we get

$$\log\left( \frac{p\left(t\right)}{p\left(t-1\right)} \right) \simeq \log\left(1\right) + \frac{p\left(t\right)}{p\left(t-1\right)} - 1 = r_{\text{relative}}\left(t\right)$$

Both of these are trivially calculated using Pandas:

# Relative returns
returns = data.pct_change(1)
returns.head()

                AAPL      MSFT     ^GSPC
2000-01-03       NaN       NaN       NaN
2000-01-04 -0.084310 -0.033780 -0.038345
2000-01-05  0.014634  0.010544  0.001922
2000-01-06 -0.086538 -0.033498  0.000956
2000-01-07  0.047369  0.013068  0.027090

# Log returns - first the logarithm of the prices is taken, and then the difference of consecutive (log) observations
log_returns = np.log(data).diff()
log_returns.head()

                AAPL      MSFT     ^GSPC
2000-01-03       NaN       NaN       NaN
2000-01-04 -0.088078 -0.034364 -0.039099
2000-01-05  0.014528  0.010489  0.001920
2000-01-06 -0.090514 -0.034072  0.000955
2000-01-07  0.046281  0.012984  0.026730

Since log-returns are additive, we can create the time-series of cumulative log-returns, defined as

$$c\left(t\right) = \sum_{k=1}^t r\left(k\right)$$

The cumulative log-returns and the total relative returns from 2000/01/01 for the three time-series can be seen below. Note that although log-returns are easy to manipulate, investors are accustomed to using relative returns. For example, a log-return of $1$ does not mean an investor has doubled the value of his portfolio. A relative return of $1 = 100\%$ does! Converting between the cumulative log-return $c\left(t\right)$ and the total relative return $c_{\text{relative}}\left(t\right) = \frac{p\left(t\right) - p\left(t_0\right)}{p\left(t_0\right)}$ is simple:

$$c_{\text{relative}}\left(t\right) = e^{c\left(t\right)} - 1$$

For those who are wondering if this is correct, yes it is. If someone had bought $\$1000$ worth of AAPL shares in January 2000, her/his portfolio would now be worth over $\$30{,}000$. If only we had a time machine...
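That claim can be sanity-checked directly from the log_returns dataframe defined above (a sketch; the exact number depends on the downloaded data sample):

```python
# Cumulative log-return of AAPL over the whole sample, converted to a final
# portfolio value for a $1000 initial investment via 1000 * exp(c).
c_aapl = log_returns['AAPL'].cumsum().iloc[-1]
final_value = 1000 * np.exp(c_aapl)   # = 1000 * (1 + total relative return)
print(round(final_value, 2))          # on this sample, roughly $30,000
```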
    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16, 12))

    for c in log_returns:
        ax1.plot(log_returns.index, log_returns[c].cumsum(), label=str(c))
    ax1.set_ylabel('Cumulative log returns')
    ax1.legend(loc='best')

    for c in log_returns:
        ax2.plot(log_returns.index, 100 * (np.exp(log_returns[c].cumsum()) - 1), label=str(c))
    ax2.set_ylabel('Total relative returns (%)')
    ax2.legend(loc='best')

    plt.show()

## What is a quantitative trading strategy?

Our goal is to develop a toy trading strategy, but what does the term "quantitative trading strategy" actually mean? In this section we will give a definition that will guide us in our long-term goal.

Assume we have at our disposal a certain amount of dollars, $N$, which we are interested in investing. We also have at our disposal a set of $K$ assets, any arbitrary amount of which we can buy and sell freely. Our goal is to derive weights $w_i\left(t\right), i = 1, \ldots, K$ such that

$$w_i\left(t\right) \in \mathbb{R} \ \text{and} \ \sum_{i=1}^K w_i\left(t\right) \leq 1$$

so that an amount of dollars equal to $w_i\left(t\right) N$ is invested at time $t$ in asset $i$. The inequality condition $\sum_{i=1}^K w_i\left(t\right) \leq 1$ signifies that the maximum amount we can invest is equal to the amount of dollars we have, that is $N$.

#### Example

For example, assume we can invest in $2$ instruments only and that $N = \$1000$. The goal is to derive two weights $w_1\left(t\right)$ and $w_2\left(t\right)$. If at some point $w_1\left(t\right) = 0.4$ and $w_2\left(t\right) = 0.6$, this means that we have invested $w_1\left(t\right)N = \$400$ in asset $1$ and $w_2\left(t\right)N = \$600$ in asset $2$. Since we only have $\$1000$, we can invest at most that much, which means that

$$w_1\left(t\right)N + w_2\left(t\right)N \leq N \Rightarrow w_1\left(t\right) + w_2\left(t\right) \leq 1$$

Note that since we have allowed $w_i\left(t\right)$ to be any real number, we are implying that we are allowed to have negative weights. Negative weights imply that we have sold a given asset short. Selling an asset short means selling an asset we do not currently hold and receiving its value in cash. Selling short is different from selling an asset we already own, which is called selling long. The mechanics behind this can be complicated and are usually subject to regulatory scrutiny. However, on a high level, it involves borrowing the asset from a third party and then selling it to the buyer. Since at some point the asset needs to be returned to the party from which it was borrowed, the short position needs to be closed. This is achieved by buying the asset back from the original buyer or any other willing seller. For the purpose of this article it will be assumed that selling an asset short can be accomplished at no added cost, an assumption which does not hold in practice.

#### Note

More on selling long vs. selling short here.

The assumption that the weights can be unbounded is not realistic. For example, based on the definition given above we could sell short an amount of AAPL shares of value equal to $N$. This means that, for now, we have at our disposal an additional $N$ dollars to invest from the short sale. Thus, together with our original $N$ dollars, we can then purchase shares of MSFT worth $2N$ dollars. In our framework, this translates to $w_{\text{AAPL}} = -1$ and $w_{\text{MSFT}} = 2$. In theory, the weights could even be $-999$ and $1000$ respectively.
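To make the bookkeeping concrete, here is a toy check of the two examples above (my own snippet, not from the article; it assumes numpy is imported as `np`, as at the top of the article):

    N = 1000.0

    w = np.array([0.4, 0.6])                 # the long-only example
    print(w * N, w.sum() <= 1)               # dollar positions [400, 600], constraint holds

    w_short = np.array([-1.0, 2.0])          # short asset 1, leverage into asset 2
    print(w_short * N, w_short.sum() <= 1)   # dollar positions [-1000, 2000], constraint still holds

Note that the budget constraint $\sum_i w_i \leq 1$ is satisfied in both cases, even though the second portfolio is far riskier; the constraint alone says nothing about leverage.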
However, an increase in the absolute values of the weights leads to an increase in the risk of our portfolio, for reasons we will see further down this series of tutorials. Therefore, when developing our trading strategy, appropriate thresholds will be imposed on the weights $w_i\left(t\right)$.

A final note has to do with cash. Any portfolio will at some point in time include cash. In the aforementioned setup, if at any point in time $W = \sum_{i=1}^K w_i\left(t\right) < 1$, then our portfolio includes $\left(1-W\right)N$ dollars in cash. Of course, if $W<0$, our net position is short, which means we are currently holding more than $N$ dollars, which is the initial value of the portfolio.

## Putting it all together

    # Last day returns. Make this a column vector
    r_t = log_returns.tail(1).transpose()
    r_t

|       | 2016-12-30 |
|-------|------------|
| AAPL  | -0.007826  |
| MSFT  | -0.012156  |
| ^GSPC | -0.004648  |

    # Weights as defined above
    weights_vector = pd.DataFrame(1 / 3, index=r_t.index, columns=r_t.columns)
    weights_vector

|       | 2016-12-30 |
|-------|------------|
| AAPL  | 0.333333   |
| MSFT  | 0.333333   |
| ^GSPC | 0.333333   |

    # Total log_return for the portfolio is:
    portfolio_log_return = weights_vector.transpose().dot(r_t)
    portfolio_log_return

|            | 2016-12-30 |
|------------|------------|
| 2016-12-30 | -0.00821   |

If computer memory is not an issue, a very fast way of computing the portfolio returns for all days, $t = 1, \ldots, T$, is the following. Assume that $\mathbf{R} \in \mathbb{R}^{T \times K}$ is a matrix whose $t$th row is the row vector $\vec{r}\left(t\right)^T$. Similarly, $\mathbf{W} \in \mathbb{R}^{T \times K}$ is a matrix whose $t$th row is the row vector $\vec{w}\left(t\right)^T$. Then if $\vec{r}_p = \left[ r_p\left(1\right), \ldots, r_p\left(T\right) \right]^T \in \mathbb{R}^{T \times 1}$ is the column vector of all portfolio returns, we have

$$\vec{r}_p = \text{diag}\left\{ \mathbf{W} \mathbf{R}^T \right\}$$

where $\text{diag}\left\{ \mathbf{A} \right\}$ is the diagonal of the matrix $\mathbf{A}$. The diagonal extraction is required because only on the diagonal are the weights and the log-returns vectors properly time-aligned.

## An example

To illustrate the concepts of the previous section, let us consider a very simple trading strategy, where the investor splits his investments equally among all three assets we have been looking at. That is:

$$w_{\text{AAPL}} = w_{\text{MSFT}} = w_{\text{^GSPC}} = \frac{1}{3}$$

In this case matrix $\mathbf{W}$ will be:

    weights_matrix = pd.DataFrame(1 / 3, index=data.index, columns=data.columns)
    weights_matrix.tail()

| Date       | AAPL     | MSFT     | ^GSPC    |
|------------|----------|----------|----------|
| 2016-12-26 | 0.333333 | 0.333333 | 0.333333 |
| 2016-12-27 | 0.333333 | 0.333333 | 0.333333 |
| 2016-12-28 | 0.333333 | 0.333333 | 0.333333 |
| 2016-12-29 | 0.333333 | 0.333333 | 0.333333 |
| 2016-12-30 | 0.333333 | 0.333333 | 0.333333 |

Matrix $\mathbf{R}$ is simply our log-returns dataframe defined before.

    log_returns.head()

| Date       | AAPL      | MSFT      | ^GSPC     |
|------------|-----------|-----------|-----------|
| 2000-01-03 | NaN       | NaN       | NaN       |
| 2000-01-04 | -0.088078 | -0.034364 | -0.039099 |
| 2000-01-05 | 0.014528  | 0.010489  | 0.001920  |
| 2000-01-06 | -0.090514 | -0.034072 | 0.000955  |
| 2000-01-07 | 0.046281  | 0.012984  | 0.026730  |

Thus, the portfolio returns are calculated as:

    # Initially the two matrices are multiplied. Note that we are only interested in the diagonal,
    # which is where the dates in the row-index and the column-index match.
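    # (Aside, not from the original article: the matrix product below builds a
    # full T x T table only to keep its diagonal, which is wasteful for long
    # histories. An equivalent, memory-friendly computation multiplies the two
    # frames element-wise and sums across columns:
    #
    #     portfolio_log_returns_alt = (weights_matrix * log_returns).sum(axis=1)
    #
    # One caveat: pandas' sum skips NaNs, so the first entry comes out as 0.0
    # rather than NaN.)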
    temp_var = weights_matrix.dot(log_returns.transpose())
    temp_var.head().iloc[:, 0:5]

|            | 2000-01-03 | 2000-01-04 | 2000-01-05 | 2000-01-06 | 2000-01-07 |
|------------|------------|------------|------------|------------|------------|
| 2000-01-03 | NaN        | -0.053847  | 0.008979   | -0.04121   | 0.028665   |
| 2000-01-04 | NaN        | -0.053847  | 0.008979   | -0.04121   | 0.028665   |
| 2000-01-05 | NaN        | -0.053847  | 0.008979   | -0.04121   | 0.028665   |
| 2000-01-06 | NaN        | -0.053847  | 0.008979   | -0.04121   | 0.028665   |
| 2000-01-07 | NaN        | -0.053847  | 0.008979   | -0.04121   | 0.028665   |

    # The numpy np.diag function is used to extract the diagonal and then
    # a Series is constructed using the time information from the log_returns index
    portfolio_log_returns = pd.Series(np.diag(temp_var), index=log_returns.index)
    portfolio_log_returns.tail()

    2016-12-26    0.000000
    2016-12-27    0.003070
    2016-12-28   -0.005753
    2016-12-29   -0.000660
    2016-12-30   -0.008210
    Freq: B, dtype: float64

Note that these returns are only estimates because of our use of log-returns instead of relative returns. However, for most practical purposes the difference is negligible. Let us see what the cumulative log returns and the total relative returns for this portfolio look like.

    total_relative_returns = (np.exp(portfolio_log_returns.cumsum()) - 1)

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16, 12))

    ax1.plot(portfolio_log_returns.index, portfolio_log_returns.cumsum())
    ax1.set_ylabel('Portfolio cumulative log returns')

    ax2.plot(total_relative_returns.index, 100 * total_relative_returns)
    ax2.set_ylabel('Portfolio total relative returns (%)')

    plt.show()

So this simple investing strategy would yield a total return of more than $325\%$ over the course of about $17$ years. How does this translate to a yearly performance? Since we have kept all weekdays in our portfolio, there are $52 \times 5 = 260$ weekdays each year. There are $4435$ days in our simulation, which corresponds to roughly $17.06$ years. We will be calculating the average geometric return, that is, an average return $\bar{r}$ which, when compounded for $17.06$ years, will produce the total relative return of $325.14\%$. So we need to solve:

$$\left(1 + \bar{r}\right)^{17.06} = 1 + 3.2514$$

    # Calculating the time-related parameters of the simulation
    days_per_year = 52 * 5
    total_days_in_simulation = data.shape[0]
    number_of_years = total_days_in_simulation / days_per_year

    # The last data point will give us the total portfolio return
    total_portfolio_return = total_relative_returns.iloc[-1]  # .iloc for positional access

    # Average portfolio return assuming compounding of returns
    average_yearly_return = (1 + total_portfolio_return)**(1 / number_of_years) - 1

    print('Total portfolio return is: ' + '{:5.2f}'.format(100 * total_portfolio_return) + '%')
    print('Average yearly return is: ' + '{:5.2f}'.format(100 * average_yearly_return) + '%')

    Total portfolio return is: 325.14%
    Average yearly return is:  8.85%

## What next?

Our strategy is a very simple example of a buy-and-hold strategy. The investor simply splits up the available funds among the three assets and keeps the same position throughout the period under investigation. Although simple, the strategy does produce a healthy $8.85\%$ per year.

However, the simulation is not completely accurate. Let us not forget that we have used ALL weekdays in our example, but we know that on some days the markets are not trading. This does not affect the strategy we presented, as the returns on the days the markets are closed are 0, but it may potentially affect other types of strategies. Furthermore, the weights here are constant over time. Ideally, we would like weights that change over time so that we can take advantage of price swings and other market events.
Also, we have said nothing at all about the risk of this strategy. Risk is the most important consideration in any investment strategy and is closely related to the expected returns. In what follows, we will start designing a more complex strategy, the weights of which will not be constant over time. At the same time we will start looking into the risk of the strategy and present appropriate metrics to measure it. Finally, we will look into the issue of optimizing the strategy parameters and how this can improve our return-to-risk profile.

See Part 3 of this series: Moving Average Trading Strategies.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956015110015869, "perplexity": 1045.8513654171215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00482.warc.gz"}
https://math.stackexchange.com/questions/608514/relationship-between-topological-and-quillens-k-theory
# Relationship between topological and Quillen's K-theory

Up until now, I've taken it for granted that the topological K-theory of a space $X$ is equal to the K-theory of vector bundles on $X$. The $K_0$'s of both coincide (Serre-Swan); however, is it the case that $K_i(\text{Vect}(X)) \cong K_{top}^i(X)$? ($\text{Vect}(X)$ being the category of (real/complex) vector bundles over $X$.) I'm doubting this now, however, and I can't seem to find anything written up explicitly on the relationship between the two. In what way, if at all, is topological K-theory a special case of Quillen's K-theory? What is the relationship?

• Topological $K$-theory is a special case of the $K$-theory of operator algebras. Algebraic $K$-theory doesn't have Bott periodicity, so the higher $K$-groups needn't agree. I don't know whether the algebraic and operator algebraic $K_1$s are the same. – Kevin Carlson Dec 16 '13 at 1:15
• Here's a reference: looks like the algebraic $K$-theory of $C^*$-algebras is not well-understood beyond $K_0$ or finite coefficients. math.uiuc.edu/K-theory/0128/Kregularity.pdf – Kevin Carlson Dec 16 '13 at 1:29
• Rephrased the question. – Joshua Seaton Dec 16 '13 at 1:43
• Thanks, Kevin. We wouldn't need a Bott periodicity for algebraic K-theory in general, just one for the class of rings of functions on spaces. I couldn't initially see why this couldn't be, a priori. In any case though, you're right that this doesn't work, as we can take $X$ to be a point. There's a theorem of Suslin (I think) that describes the algebraic K-theory of an algebraically closed field of char 0, and it has that $K_2$ is uniquely divisible, which can't be the case for $X$ = point, in which case we're comparing the algebraic K-theory of $\mathbb{C}$ with the topological K-theory of a point. – Joshua Seaton Dec 16 '13 at 1:47
• (Somewhat) related: Topological vs. Algebraic K-Theory – Grigory M Dec 26 '13 at 22:09

This is not true even for a point: $K_1^{alg}(\mathrm{Vect}(pt))=K_1^{alg}(\mathbb C)=\mathbb C ^\times$. (Of course, there is a map from $GL(\mathbb C)$ with discrete topology to 'ordinary' $GL(\mathbb C)$ inducing a map $K^{\text{alg}}(\mathbb C)\to K^{\text{top}}(pt)$; but this map is far from being an isomorphism. That's how I think about the difference in the general case: in algebraic K-theory we forget about non-trivial topology [on our group]…)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8397479057312012, "perplexity": 496.7724880881009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998943.53/warc/CC-MAIN-20190619083757-20190619105757-00237.warc.gz"}
https://support.google.com/docs/answer/9366258?hl=en&ref_topic=3105395
# IMCSCH function

The IMCSCH function returns the hyperbolic cosecant of the given complex number. For example, a given complex number "x+yi" returns "csch(x+yi)."

## Parts of an IMCSCH function

`IMCSCH(number)`

| Part | Description | Notes |
|------|-------------|-------|
| `number` | The complex number for which you want the hyperbolic cosecant. | This can be either the result of the COMPLEX function, a real number interpreted as a complex number with imaginary part equal to 0, or a string in the format "x+yi" where x and y are numeric. |

## Sample formulas

`IMCSCH(COMPLEX(4,6))`

`IMCSCH(4)`

`IMCSCH("2+3i")`

## Notes

The `IMCSCH` function returns an error if the given number isn't a valid complex number.

## Examples

| Formula | Result |
|---------|--------|
| `=IMCSCH(COMPLEX(4,1))` | 0.0197797995721927-0.0308258875766998i |
| `=IMCSCH(3.5)` | 0.0604498900091561 |
| `=IMCSCH("3+2i")` | -0.0412009862885741-0.0904732097532074i |

## Related functions

IMCSC: The IMCSC function returns the cosecant of the given complex number.

IMSINH: The IMSINH function returns the hyperbolic sine of the given complex number.

COMPLEX: The COMPLEX function creates a complex number, given real and imaginary coefficients.
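As a quick sanity check of the examples above (my own working, not part of Google's documentation): the hyperbolic cosecant is the reciprocal of the hyperbolic sine,

$$\operatorname{csch}(x+yi) = \frac{1}{\sinh(x+yi)} = \frac{1}{\sinh x \cos y + i\,\cosh x \sin y}$$

so for the real-argument row,

$$\operatorname{csch}(3.5) = \frac{2}{e^{3.5} - e^{-3.5}} \approx \frac{2}{33.1155 - 0.0302} \approx 0.06045$$

which matches the `=IMCSCH(3.5)` result in the examples table.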
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8737314343452454, "perplexity": 1725.6655218528626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00169.warc.gz"}
https://www.physicsforums.com/threads/fluids-question-syringe.276272/
Fluids question (Syringe)

1. Dec 1, 2008

RafaFutbol

1. The problem statement, all variables and given/known data

I'll just post the question given: You are at the clinic getting your flu shot. The syringe that is being used to deliver the vaccine has a volume of 2.0 mL, an inner diameter of 6.0 mm, and the needle has an inner diameter of 0.25 mm. The plunger on which the nurse has placed her finger has a diameter of 1.2 cm.

(a) What is the minimum force that the nurse needs to apply for serum to enter you? Take into consideration that you are a little stressed by the whole needle business, so your blood pressure is a bit high: 140-over-100 (be sure you know exactly what those numbers mean before trying to use them in a calculation!)

(b) The nurse empties the needle in 2 seconds. What is the flow speed of the serum through the needle?

I'm not sure exactly how to get this problem. I know you have to look into conservation, but what variables would represent what, and what equations could be used to relate all the terms? I'm really stuck here, any help is appreciated!

2. Dec 2, 2008

horatio89

a) The first part is on Pascal's Principle. We know that the pressure exerted by the nurse must be equal to the pressure required to push the serum into the blood. (Hint: Do check out what 140 - 100 means, it's very important)

b) Fluid flow rate must always be...?

3. Dec 2, 2008

RafaFutbol

Ok, for part A I converted it into Pascals:

120 mm * (1 atm)/(760 mm) * (101.3 kPa)/(1 atm) * (1000 Pa)/(1 kPa) = 15994.7 Pa

and found the force to be:

p = F/A
F = pA
F = p·π·r^2
F = (15994.7)·π·(0.003)^2
F = 0.452 N

But for part b, what variables would I use? Would I use Bernoulli's equation, p + (1/2)ρv^2 + ρgy? The ρgy term would go away because there would be no change in height. Am I right for this, or am I off?

*edit: I must be off, because I'm not even using time.... how would I include time in this relation, since I don't have the length of the syringe?

Last edited: Dec 3, 2008

4. Dec 2, 2008

horatio89

Fluid flow rate, or dV/dt, which is a product of the velocity and the area, must always be constant for any non-viscous fluid.

5. Dec 2, 2008

RafaFutbol

So I can get it by calculating:

dV/dt = flow speed
2.0 mL / 2 s = flow speed
flow speed = 1 mL/s

Is it that easy? :S

6. Dec 2, 2008

horatio89

dV/dt is the rate of volume flow, which is not the flow speed (which is in m/s). In a non-viscous fluid, the rate of volume flow must be constant, and it is proportional to the flow speed and the area of the flow path. Hence, dV/dt = A1·v1 = A2·v2 (V = volume, A = area, v = velocity).

7. Dec 2, 2008

RafaFutbol

Right, ok. So, since the syringe is 6 mm wide: (1 mL/s) / (pi * (3.0 mm)^2) = 0.0354 mm/s. Are those the proper units to use for the radius?

8. Dec 2, 2008

horatio89

You're looking for the flow speed in the needle (which differs from the speed in the syringe body), so the diameter should be 0.25 mm. Otherwise, it's correct.

9. Dec 2, 2008

RafaFutbol

Oh ok, I see. Well, thank you so much for the help ^_^ I really appreciate it!

10. Dec 3, 2008

RafaFutbol

Oh, and just to check... did I do part A correctly?
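For reference, here is the part (b) number carried through in consistent SI units (my own check, not a post from the thread):

$$v = \frac{Q}{A} = \frac{V/t}{\pi r^2} = \frac{(2.0\times10^{-6}\ \mathrm{m^3})/(2\ \mathrm{s})}{\pi\,(0.125\times10^{-3}\ \mathrm{m})^2} \approx 20\ \mathrm{m/s}$$

The $0.0354$ figure in post 7 comes from mixing millilitres with millimetres and from using the syringe-body radius; with the needle radius and consistent units, the serum moves through the needle at roughly $20\ \mathrm{m/s}$.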
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8394788503646851, "perplexity": 1825.5409642271056}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170940.45/warc/CC-MAIN-20170219104610-00525-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/orientation-preserving-and-determinants.83360/
# Orientation preserving and determinants

1. Jul 27, 2005

### JSG31883

Can someone help me prove two theorems? I know they both are true, but can't come up with proofs.

1) Prove that a 3x3 matrix A is orientation preserving iff det(A) > 0.
2) Prove that for A, B (both 3x3 matrices), det(AB) = det(A)·det(B). (A, B may or may not be invertible.)

THANK YOU!!!!

2. Jul 27, 2005

### AKG

I'm not entirely sure about this one. Let (v w x) be the 3x3 matrix with vectors v, w, and x as columns. A is orientation preserving

if and only if det (Av Aw Ax) > 0 iff det (v w x) > 0
if and only if det (A(v w x)) > 0 iff det (v w x) > 0
if and only if det(A)det(v w x) > 0 iff det (v w x) > 0 (using number 2, which you need to prove)
if and only if det(A) > 0

2. I can't think of an easy way to do it, but if you actually expand it out in full, you will be able to show it.

3. Jul 27, 2005

### JSG31883

For 2), how can I expand it out? You say if I expand it out I will be able to show it...

4. Jul 27, 2005

### AKG

Take two general matrices; for example, take A to be:

(a11 a12 a13)
(a21 a22 a23)
(a31 a32 a33)

and B to be something similar. Actually compute the product AB and then compute its determinant, and similarly compute the determinants |A| and |B|, then their product. You'll get some big, long, ugly expressions, but you'll be able to cancel them to show that they're equal.

5. Jul 30, 2005

### shmoe

Expanding out 2) will be disgusting (but would certainly work). Another way is to first prove it for elementary matrices, then show that any invertible matrix is the product of elementary matrices, and you've pretty much handled the invertible case. A or B non-invertible is easier, assuming you know non-invertible <=> determinant is zero (or can prove this).
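For intuition (my own addition, not a post from the thread), the brute-force expansion AKG suggests is painless in the 2x2 case and shows the pattern:

$$AB = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}$$

$$\det(AB) = (ae+bg)(cf+dh) - (af+bh)(ce+dg) = (ad-bc)(eh-fg) = \det A \,\det B$$

The 3x3 computation is the same idea with many more terms, which is why shmoe's route through elementary matrices is usually preferred.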
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9125759601593018, "perplexity": 1495.435634662612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103910.54/warc/CC-MAIN-20170817185948-20170817205948-00096.warc.gz"}
http://mathhelpforum.com/trigonometry/109544-finding-solutions-annoying-equation-print.html
# Finding solutions to this annoying equation .. • October 21st 2009, 03:52 PM ZaZu Finding solutions to this annoying equation .. Find solutions for Sin(2x)-Sin(x)-Cos(x)+1=0 I have been trying to come up with my own ways but I keep failing =\ I tried squaring both sides after moving cos(x) to the other side ... I tried making 1 = Sin^2(x)+Cos^2(x) and solving it out .. But I just cant get it right !!!! Really appreciated :) Thanks • October 21st 2009, 04:03 PM pickslides $\sin(2x) = 2\sin(x)\cos(x)$ • October 21st 2009, 04:08 PM ZaZu I did try that =\ This is all I got : http://img8.imageshack.us/img8/9008/22102009064.jpg • October 21st 2009, 07:26 PM mr fantastic Quote: Originally Posted by ZaZu Find solutions for Sin(2x)-Sin(x)-Cos(x)+1=0 I have been trying to come up with my own ways but I keep failing =\ I tried squaring both sides after moving cos(x) to the other side ... I tried making 1 = Sin^2(x)+Cos^2(x) and solving it out .. But I just cant get it right !!!! $2 \sin x \cos x + 1 - (\sin x + \cos x) = 0$ Now substitute $1 = \cos^2 x + \sin^2 x$ and simplify: $(\sin x + \cos x)^2 - (\sin x + \cos x) = 0$.
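Picking up where mr fantastic's hint leaves off (my own continuation, not part of the original thread): factor out $(\sin x + \cos x)$ to get

$$(\sin x + \cos x)\left(\sin x + \cos x - 1\right) = 0.$$

If $\sin x + \cos x = 0$, then $\tan x = -1$, so $x = \frac{3\pi}{4} + k\pi$. If $\sin x + \cos x = 1$, write it as $\sqrt{2}\,\sin\left(x + \frac{\pi}{4}\right) = 1$, giving $x = 2k\pi$ or $x = \frac{\pi}{2} + 2k\pi$. A quick check: $x = 0$ gives $\sin(0) - \sin 0 - \cos 0 + 1 = 0$, as required.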
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8627703785896301, "perplexity": 2229.285812304577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824230.71/warc/CC-MAIN-20160723071024-00257-ip-10-185-27-174.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/244401/writing-a-token-list-conditional-with-expl3
Writing a token list conditional with expl3

This is a follow-up question to LaTeX3 conditional with grouping fails to compile. I would like to write a token list conditional with expl3 that is fully expandable in order to use it as a predicate conditional. Motivated by Joseph Wright's comment I thought of a recursive implementation, but I cannot figure out how to put it into the expl3 syntax.

For the sake of this question let's implement a function which checks whether its argument is an integer, i.e. only composed of digits. It is straightforward to save the argument in a local token list and then run an appropriate test. However, the resulting code is not expandable due to the assignment. Without an assignment a possible solution might look as follows.

    \prg_new_conditional:Npnn \is_integer:n #1 { p, T, F, TF }
      {
        \tl_if_empty:nTF { #1 }
          { % We are done if the token list is empty
            \prg_return_true:
          }
          {
            \exp_args:NNx \tl_if_in:nnTF { 0123456789 } { \tl_head:n { #1 } }
              {
                % Call \is_integer:n with \tl_tail:n { #1 }
              }
              { \prg_return_false: }
          }
      }

Is there any way to call \is_integer:n again in the indicated line? From my understanding the desired implementation would be expandable, as for any given input the token replacement done by TeX would eventually terminate. Of course, alternative approaches with the same result are appreciated.

As a side remark, I am aware of already existing implementations, e.g. in the xstring package. However, I need to make adjustments for my own needs and cannot use such solutions.

• The number can only be positive? – egreg May 11 '15 at 20:08
• For simplicity, yes. Once this works, more general cases should be feasible. – ranguwud May 11 '15 at 20:10

The old trick for checking if a token list <tl> consists only of digits is to use \romannumeral-<tl>, which will return nothing in that case. As far as I know there's no public interface for the trick in expl3, only \__int_to_roman:w:

    \documentclass{article}
    \usepackage{xparse}
    \ExplSyntaxOn
    \prg_new_conditional:Npnn \is_integer:n #1 { p, T, F, TF }
      {
        \tl_if_blank:oTF { \__int_to_roman:w -0#1 }
          { \prg_return_true: }
          { \prg_return_false: }
      }
    \NewDocumentCommand{\isinteger}{m}
      {
        #1~is \bool_if:nF { \is_integer_p:n {#1} } {~not}~an~integer
      }
    \ExplSyntaxOff
    \begin{document}
    \isinteger{42}

    \isinteger{2ab1}
    \end{document}

• Thanks a lot for your suggestion. However, to my understanding this is not flexible enough to obtain more sophisticated checks. As I stated in my question, the check for integers is just a mere example. If I want to check for more complex patterns, I won't be able to rely on already implemented functions such as \__int_to_roman:w. – ranguwud May 11 '15 at 20:29
• @ranguwud More complex patterns require unexpandable functions, or LuaTeX – egreg May 11 '15 at 20:36
• Actually I don't think this is true. If I choose not to use \prg_new_conditional:Npnn but to directly define \is_integer_p:n, then I should be able to call \is_integer_p:n recursively and end up with flexible and fully expandable code. Tell me if I am mistaken in that point! So my question would rather be: Can I achieve the same using \prg_new_conditional:Npnn? If not, how can I properly return true and false in my self-defined macro \is_integer_p:n? – ranguwud May 11 '15 at 20:46
• @ranguwud \prg_new_conditional:Npnn isn't your problem; it is defining your test by expansion. For the example you gave, \romannumeral gives an expandable test; for other tests, it depends. In general it may not be possible.
– David Carlisle May 11 '15 at 21:13
• @Gaussler I fixed it; note that only positive integers can be tested this way. – egreg Aug 12 '15 at 17:01

To answer my own question: using only expandable macros and avoiding the use of \prg_new_conditional:Npnn, the following example works fine and is flexible in the sense that all the tests used can be adapted to specific problems and do not rely on predefined code like the \__int_to_roman:w suggested by egreg.

    \documentclass{article}
    \usepackage{expl3}
    \begin{document}
    \ExplSyntaxOn
    \bool_new:N \true_bool
    \bool_new:N \false_bool
    \bool_gset_true:N \true_bool
    \bool_gset_false:N \false_bool

    \prg_new_conditional:Npnn \is_digit:N #1 { TF }
      {
        \bool_if:nTF
          {
            \token_if_eq_charcode_p:NN 0 #1 ||
            \token_if_eq_charcode_p:NN 1 #1 ||
            \token_if_eq_charcode_p:NN 2 #1 ||
            \token_if_eq_charcode_p:NN 3 #1 ||
            \token_if_eq_charcode_p:NN 4 #1 ||
            \token_if_eq_charcode_p:NN 5 #1 ||
            \token_if_eq_charcode_p:NN 6 #1 ||
            \token_if_eq_charcode_p:NN 7 #1 ||
            \token_if_eq_charcode_p:NN 8 #1 ||
            \token_if_eq_charcode_p:NN 9 #1
          }
          { \prg_return_true: }
          { \prg_return_false: }
      }

    \cs_new:Npn \is_integer_p:n #1
      {
        \tl_if_empty:nTF { #1 }
          { % We are done if the token list is empty
            \bool_if_p:n { \true_bool }
          }
          {
            \exp_args:Nf \is_digit:NTF { \tl_head:n { #1 } }
              { \exp_args:Nf \is_integer_p:n { \tl_tail:n { #1 } } }
              { \bool_if_p:n { \false_bool } }
          }
      }

    \bool_if:nTF { \is_integer_p:n { 1234 } } { true } { false }
    \ExplSyntaxOff
    \end{document}

Interestingly, trying the same with \prg_new_conditional:Npnn does not seem to be possible. The following throws errors, although I don't understand why. I would assume that both implementations result in equivalent code, but apparently this isn't true.

    \prg_new_conditional:Npnn \is_integer_prg:n #1 { p }
      {
        \tl_if_empty:nTF { #1 }
          { % We are done if the token list is empty
            \prg_return_true:
          }
          {
            \exp_args:Nf \is_digit:NTF { \tl_head:n { #1 } }
              { \exp_args:Nf \is_integer_prg_p:n { \tl_tail:n { #1 } } }
              { \prg_return_false: }
          }
      }

Maybe somebody can clarify why the \prg_new_conditional:Npnn solution doesn't work.

Edit: Eventually I figured out how to do the trick with \prg_new_conditional:Npnn. The problem in my previous attempt was that a predicate conditional is no legal "return value" of \prg_new_conditional:Npnn. This means that

    \prg_new_conditional:Npnn \foo: { p }
      { \int_compare_p:n { 1 = 2 } }

fails, while

    \prg_new_conditional:Npnn \foo: { p }
      {
        \int_compare:nTF { 1 = 2 }
          { \prg_return_true: }
          { \prg_return_false: }
      }

works, which of course makes sense in hindsight.
So the above attempt can be corrected to the following working example:

    \documentclass{article}
    \usepackage{expl3}
    \begin{document}
    \ExplSyntaxOn
    \prg_new_conditional:Npnn \is_digit:N #1 { TF }
      {
        \bool_if:nTF
          {
            \token_if_eq_charcode_p:NN 0 #1 ||
            \token_if_eq_charcode_p:NN 1 #1 ||
            \token_if_eq_charcode_p:NN 2 #1 ||
            \token_if_eq_charcode_p:NN 3 #1 ||
            \token_if_eq_charcode_p:NN 4 #1 ||
            \token_if_eq_charcode_p:NN 5 #1 ||
            \token_if_eq_charcode_p:NN 6 #1 ||
            \token_if_eq_charcode_p:NN 7 #1 ||
            \token_if_eq_charcode_p:NN 8 #1 ||
            \token_if_eq_charcode_p:NN 9 #1
          }
          { \prg_return_true: }
          { \prg_return_false: }
      }

    \prg_new_conditional:Npnn \is_integer:n #1 { p, TF }
      {
        \tl_if_empty:nTF { #1 }
          { % We are done if the token list is empty
            \prg_return_true:
          }
          {
            \exp_args:Nf \is_digit:NTF { \tl_head:n { #1 } }
              {
                \exp_args:Nf \is_integer:nTF { \tl_tail:n { #1 } }
                  { \prg_return_true: }
                  { \prg_return_false: }
              }
              { \prg_return_false: }
          }
      }

    \bool_if:nTF { \is_integer_p:n { 1234 } } { true } { false },~
    \bool_if:nTF { \is_integer_p:n { 12ab } } { true } { false }
    \ExplSyntaxOff
    \end{document}

As expected, this results in the text true, false in the document. Of course, as pointed out by Manuel, the use of \exp_args:Nf can be avoided by defining appropriate variants of \is_digit:NTF and \is_integer:nTF. Marking as solved here.

• You should use \cs_generate_variant:Nn when possible. – Manuel May 12 '15 at 14:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8186020851135254, "perplexity": 3128.823823590738}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613380.12/warc/CC-MAIN-20210614170602-20210614200602-00320.warc.gz"}
http://mathhelpforum.com/discrete-math/139711-bijection-surjection-injection-increasing-function.html
# Thread: Bijection, Surjection, Injection or increasing function?

1. ## Bijection, Surjection, Injection or increasing function?

Let f : R to R be a function. The statement (for all y in R)(there exists x in R) such that (f(x) = y) means that f is . . . an increasing function? It might be one of the others, but I'm not sure; a bijection can also be an increasing function, and I'm just not sure how to distinguish what this is.

2. Originally Posted by treetheta
Let f : R to R be a function. The statement (for all y in R)(there exists x in R) such that (f(x) = y) means that f is . . . an increasing function? It might be one of the others, but I'm not sure; a bijection can also be an increasing function, and I'm just not sure how to distinguish what this is.
It is not increasing. I'll give you a hint: if $w$ is the word you wish to find and $\ell$ is the first letter of a word, then $i<\ell(w)$

Spoiler: If this is true then $f(\mathbb{R})=\mathbb{R}$

3. Originally Posted by Drexel28
It is not increasing. I'll give you a hint: if $w$ is the word you wish to find and $\ell$ is the first letter of a word, then $i<\ell(w)$

Spoiler: If this is true then $f(\mathbb{R})=\mathbb{R}$
Wait, what's i? Wait, I think I get it: it has to be a surjection then, right!!! =D

4. Originally Posted by treetheta
Wait, what's i? Wait, I think I get it: it has to be a surjection then, right!!! =D
Correct!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8730767965316772, "perplexity": 585.4830809013905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982291592.14/warc/CC-MAIN-20160823195811-00131-ip-10-153-172-175.ec2.internal.warc.gz"}
https://cstheory.stackexchange.com/questions/21866/ctl-and-ltl-logic-difference?noredirect=1
# CTL and LTL logic difference

I am trying to understand the subtle differences between LTL and CTL logic, and there is one thing I simply can't figure out: the formula AG AF p in CTL and the formula GF p in LTL. Why aren't they equivalent? Can you give me an example, please? I tried finding and drawing things, but I simply can't find a counterexample, and there is no explanation in the lecture materials I have.

• 'darxsys' asked about AGAFp vs GFp, and Klaus responded about a different pair of formulae, AFAGp vs FGp. I'd appreciate someone answering the original question, about AGAFp vs GFp, because it's on my mind today, too. – user40446 Jun 16 '16 at 14:40
• @RichardRaimi, the answer by Klaus Draeger clearly states that AG AF p is equivalent to G F p. – Radu GRIGore Jun 17 '16 at 4:26

There are already some rather good related answers regarding LTL versus CTL. In a nutshell, LTL is first and foremost a logic of traces, and an LTL formula is true for a transition system $S$ if and only if it is true for each trace of $S$. CTL, on the other hand, is a branching-time logic, which can in a sense talk about multiple paths at the same time. One standard example here (not the one you give, about which more below) is a labelled transition system $S=(Q,T,q_0,L)$ with set of locations $Q=\{q_0,q_1,q_2\}$, set of transitions $T=\{(q_0,q_0),(q_0,q_1),(q_1,q_2),(q_2,q_2)\}$, and labelling given by $L(q_0)=L(q_2)=\{p\}$, $L(q_1)=\emptyset$. This system satisfies $FGp$, but not $AFAGp$, which can be seen as follows.

$FGp$ means that for every path $\pi=s_1,s_2,\dots$ in a given system, there is some point after which $p$ is always satisfied, i.e. there is some $i$ such that for all $j\geq i$, $p\in L(s_j)$. This is satisfied by $S$ since every path in $S$ either remains in $q_0$ forever (so that $p$ is always satisfied) or eventually gets to $q_2$ (after which $p$ is always satisfied).

On the other hand, $AFAGp$ means that every path $\pi=s_1,s_2,\dots$ eventually reaches a state satisfying $AGp$, i.e. a state such that on every path $\pi'$ starting there, $p$ is always satisfied. Formally, this means that there is an $i$ such that for all $\pi'=s_1',s_2',\dots$ with $s_1'=s_i$ and all $j$, we have $p\in L(s_j')$. But in $S$, for the path which always remains in $q_0$, the transition to $q_1$, where $p$ is not satisfied, is always available, so that at no point of that path $AGp$ holds; therefore $AFAGp$ is not satisfied by $S$.

• You can do that, if all you want to do is express that every path satisfies the (unconditional) fairness condition. What you usually want to express, though, is that all fair paths (or some fair path) satisfy some other formula, i.e. something like $A(fair\Rightarrow\varphi)$. This is not allowed in CTL - you cannot have boolean combinations of path formulas. The lecture slides at react.uni-saarland.de/teaching/verification-11-12/downloads/… have some more information that you may find helpful (including the fairness issue, on slides 17,18). – Klaus Draeger Apr 2 '14 at 13:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414797425270081, "perplexity": 362.26265416242114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195532251.99/warc/CC-MAIN-20190724082321-20190724104321-00401.warc.gz"}
https://www.physicsforums.com/threads/impulse-question.854788/
# Impulse Question

1. Jan 30, 2016

### master_333

1. The problem statement, all variables and given/known data

A 75-g ball is dropped from rest from a height of 2.2 m. It bounces off the floor and rebounds to a maximum height of 1.7 m. If the ball is in contact with the floor for 0.024 s, what is the magnitude and direction of the average force exerted on the ball by the floor during the collision?

2. Relevant equations

p = Ft

3. The attempt at a solution

I found the speed of the ball right before the drop using vfinal^2 = vinitial^2 + 2ad. The answer I got was 6.56 m/s. Now I don't know what to do after that.

2. Jan 30, 2016

### Staff: Mentor

Show the details of your calculation. It's not clear which velocity you were calculating: if the ball was dropped, it should have zero velocity "right before the drop". What is your plan for calculating the average force? How is average force defined in terms of momentum?

3. Jan 30, 2016

### master_333

Okay, tell me if what I did is correct.

Velocity right before the ball hit the ground:
vfinal^2 = (0) + 2(9.8)(2.2 m)
vfinal = 6.57 m/s

If this was the velocity that the ball hit the ground with, then the force is:
F = p/t
F = (0.075 kg)(6.57 m/s)/0.024 s
F = 20.53 N

If 20.53 N was the force, then the ball would be sent up to 2.2 m. Now find the force required to send it to 1.7 m:
(20.53)(1.7)/2.2
F = 15.86 N

4. Jan 30, 2016

### Staff: Mentor

That's fine so far. Nope. You need to go to the definition of average force based on change in momentum. What's the formula for the average force based on change in momentum? Hint: You need to find the momentum of the ball when it first contacts the floor and when it just loses contact with the floor. Remember that momentum is a vector quantity.

5. Jan 30, 2016

### master_333

Okay, the momentum of the ball when it first contacts the floor is (0.075 kg)(6.57 m/s) = 0.49275 kg.m/s. I don't know how to find the momentum of the ball when it just loses contact with the floor. Please, can you show me how to do it, rather than giving me hints that I don't understand.

6. Jan 30, 2016

### Staff: Mentor

What speed must it have when it just leaves the floor? It's a projectile at that point, and reaches a certain maximum height....

7. Jan 30, 2016

### master_333

Okay, thanks for trying to help me. But the way you're trying to help me, I clearly do not understand anything. Whatever, I guess I will just skip this question.

8. Jan 30, 2016

### Staff: Mentor

Sorry master_333, but the forum rules are clear that helpers cannot simply provide answers or do your homework for you. This includes telling you step by step how to solve a problem. We can only offer hints or point out errors or suggest things to investigate so that you can gain the knowledge to solve the problem yourself. I have suggested that you look up the definition of average force in terms of change of momentum. Have you looked it up in your notes, text, or on the web?

9. Jan 31, 2016

### rude man

OK so far. Now relate the change in kinetic energy between 2.2 m and 1.7 m to the work done by the force over the distance of floor contact. Since you don't know the contact distance s, you might consider the chain rule: ds = ds/dt * dt.
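Since the thread never reaches a final number, here is a sketch of the standard impulse and momentum calculation (my own working, not a post from the thread). Taking up as positive:

$$v_{\text{down}} = \sqrt{2(9.8)(2.2)} \approx 6.57\ \mathrm{m/s}, \qquad v_{\text{up}} = \sqrt{2(9.8)(1.7)} \approx 5.77\ \mathrm{m/s}$$

$$\Delta p = m\left(v_{\text{up}} - (-v_{\text{down}})\right) = 0.075\,(5.77 + 6.57) \approx 0.926\ \mathrm{kg\,m/s}$$

$$F_{\text{net}} = \frac{\Delta p}{\Delta t} = \frac{0.926}{0.024} \approx 38.6\ \mathrm{N}$$

Since gravity also acts during the $0.024\ \mathrm{s}$ of contact, the floor itself must supply $F_{\text{floor}} = F_{\text{net}} + mg \approx 38.6 + 0.7 \approx 39.3\ \mathrm{N}$, directed upward.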
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8322460651397705, "perplexity": 673.8130617084976}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828411.81/warc/CC-MAIN-20171024105736-20171024125736-00175.warc.gz"}
https://aiuta.org/en/talking-about-the-newtons-law-which-interaction-does-not-take-place-due-to-field-forces-a-intera.972712.html
Physics

Talking about Newton's laws: which interaction does not take place due to field forces?

A) interaction between protons and electrons
B) interaction between a ball in flight and the earth below
C) interaction between Mars and the Sun
D) interaction between a pen and paper while you write

iza90290 2 years ago

All of these interactions are examples of field forces except option D, "interaction between a pen and paper while you write". The interaction between a pen and paper is a direct contact force, not a field force.

Peanut2 2 years ago

"Interaction between a pen and paper while you write" is the one interaction given in the question that does not take place due to field forces. The correct option among all the options given in the question is the last one, option "D". I hope that this answer has come to your help.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9090267419815063, "perplexity": 785.561366362055}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160641.81/warc/CC-MAIN-20180924185233-20180924205633-00124.warc.gz"}
https://arxiv.org/abs/1407.2033v1
# Computer Science > Data Structures and Algorithms

arXiv:1407.2033v1 (cs)

# Title: FPT Algorithms for Weighted Graphs Can be (Almost) as Efficient as for Unweighted

Abstract: We present a general framework for solving parameterized problems on weighted graphs. We use this framework to obtain efficient algorithms for such fundamental problems as Vertex Cover, 3-Hitting Set, Edge Dominating Set and $k$-Internal Out-Branching, on weighted graphs. For each of these problems, given an instance of size $n$ and a weight parameter $W\geq 1$, we seek a solution of weight at most (or at least) $W$. The best known algorithms for these problems, on weighted graphs, admit running times of the form $c^W n^{O(1)}$, for some constant $c>1$. We improve these running times to $c^s n^{O(1)}$, where $s\leq W$ is the minimum size of a solution of weight at most (at least) $W$. Clearly, $s$ can be substantially smaller than $W$. In particular, the running times of our algorithms are (almost) the same as the best known $O^*$ running times for the unweighted variants. Thus, we show that

* Weighted Vertex Cover can be solved in $1.381^s n^{O(1)}$ time and $n^{O(1)}$ space.
* Weighted 3-Hitting Set can be solved in $2.168^s n^{O(1)}$ time and $n^{O(1)}$ space.
* Weighted Edge Dominating Set is solvable in $2.315^s n^{O(1)}$ time and $n^{O(1)}$ space.
* Weighted Max Internal Out-Branching is solvable in $6.855^s n^{O(1)}$ time and space.

We further improve our results by showing that Weighted Vertex Cover and Weighted Edge Dominating Set admit fast algorithms whose running times are of the form $c^t n^{O(1)}$, where $t \leq s$ is the minimum size of a solution for the unweighted version.

Subjects: Data Structures and Algorithms (cs.DS)
Cite as: arXiv:1407.2033 [cs.DS] (or arXiv:1407.2033v1 [cs.DS] for this version)

## Submission history

From: Meirav Zehavi
[v1] Tue, 8 Jul 2014 11:00:00 UTC (50 KB)
[v2] Sun, 22 Feb 2015 11:48:16 UTC (2,002 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625317811965942, "perplexity": 652.5242999789467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347402885.41/warc/CC-MAIN-20200529085930-20200529115930-00323.warc.gz"}
https://arxiv.org/abs/1606.04550
# Title: Settling the complexity of computing approximate two-player Nash equilibria

Abstract: We prove that there exists a constant $\epsilon>0$ such that, assuming the Exponential Time Hypothesis for PPAD, computing an $\epsilon$-approximate Nash equilibrium in a two-player ($n \times n$) game requires quasi-polynomial time, $n^{\log^{1-o(1)} n}$. This matches (up to the o(1) term) the algorithm of Lipton, Markakis, and Mehta [LMM03]. Our proof relies on a variety of techniques from the study of probabilistically checkable proofs (PCP); this is the first time that such ideas are used for a reduction between problems inside PPAD. En route, we also prove new hardness results for computing Nash equilibria in games with many players. In particular, we show that computing an $\epsilon$-approximate Nash equilibrium in a game with n players requires $2^{\Omega(n)}$ oracle queries to the payoff tensors. This resolves an open problem posed by Hart and Nisan [HN13], Babichenko [Bab14], and Chen et al. [CCT15]. In fact, our results for n-player games are stronger: they hold with respect to the $(\epsilon,\delta)$-WeakNash relaxation recently introduced by Babichenko et al. [BPR16].

Subjects: Computational Complexity (cs.CC); Computer Science and Game Theory (cs.GT)
Cite as: arXiv:1606.04550 [cs.CC] (or arXiv:1606.04550v2 [cs.CC] for this version)

## Submission history

[v1] Tue, 14 Jun 2016 20:22:28 UTC (695 KB)
[v2] Mon, 29 Aug 2016 20:35:38 UTC (1,008 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8604891300201416, "perplexity": 1617.2370653248697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500426.22/warc/CC-MAIN-20200331084941-20200331114941-00283.warc.gz"}
https://export.arxiv.org/abs/2101.05780
math.PR

# Title: Explicit non-asymptotic bounds for the distance to the first-order Edgeworth expansion

Abstract: In this article, we study bounds on the uniform distance between the cumulative distribution function of a standardized sum of independent centered random variables with moments of order four and its first-order Edgeworth expansion. Existing bounds are sharpened in two frameworks: when the variables are independent but not identically distributed and in the case of independent and identically distributed random variables. Improvements of these bounds are derived if the third moment of the distribution is zero. We also provide adapted versions of these bounds under additional regularity constraints on the tail behavior of the characteristic function. We finally present an application of our results to the lack of validity of one-sided tests based on the normal approximation of the mean for a fixed sample size.

Comments: 41 pages, 3 figures
Subjects: Probability (math.PR); Econometrics (econ.EM); Statistics Theory (math.ST)
MSC classes: Primary: 62E17, Secondary: 60F05, 62F03
Cite as: arXiv:2101.05780 [math.PR] (or arXiv:2101.05780v1 [math.PR] for this version)

## Submission history

From: Alexis Derumigny
[v1] Thu, 14 Jan 2021 18:33:11 GMT (140kb,D)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8079615235328674, "perplexity": 820.7564049912578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00040.warc.gz"}
https://brilliant.org/problems/block-on-incline/
# Block on Incline

A block of mass $$M$$ rests on a fixed inclined plane of angle $$\theta$$. A horizontal force of magnitude $$Mg$$ is applied on the block. If the friction force is large enough to keep the block at rest, for what value of $$\theta$$ in degrees is the normal force exerted by the plane on the block at its maximum?
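A brief solution sketch (my addition; the original page states only the problem). Resolving forces perpendicular to the incline, the weight contributes $$Mg\cos\theta$$ and the horizontal force contributes $$Mg\sin\theta$$, so the normal force is

$$N = Mg\cos\theta + Mg\sin\theta = \sqrt{2}\,Mg\,\sin\left(\theta + 45^\circ\right),$$

which is maximized at $$\theta = 45^\circ$$.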
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9340019226074219, "perplexity": 159.66642428714687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00290-ip-10-171-10-70.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/7863/reaction-between-ironii-and-vanadiumv
# Reaction between iron(II) and vanadium(V)

When we reduce vanadium(V) using iron(II), why are we adding 85% phosphoric acid? After the above reduction, if we are going to titrate the mixture with potassium permanganate, why are we adding dipotassium persulfate to the mixture before the titration?

Here's the procedure: To 5 ml of vanadium(V) solution, add 2 ml of 6 M sulfuric acid, 1 ml of phosphoric acid and 0.20 g of hydrated ferrous sulfate to reduce vanadium(V). Then swirl the solution and allow it to stand for 3 minutes. Then add 0.23 g of dipotassium persulfate and let it stand for 5 minutes. Stir the contents vigorously and titrate with 0.02 standard potassium permanganate.

I think I realize (a part of) the answer, concerning the conditions of a redox titration of vanadium(V), for which a metavanadate, such as $\ce{NH4VO3}$, is typically used as starting material.

Under acidic conditions, i.e. in the presence of 85% $\ce{H3PO4}$, the colourless $\ce{VO2+}$ is formed.

$\ce{VO3- + 2 H+ -> VO2+ + H2O}$

Again under acidic conditions, the latter species is reduced by Fe(II) to the blue $\ce{VO^{2+}}$.

$\ce{VO2+ + Fe^{2+} + 2 H+ -> VO^{2+} + Fe^{3+} + H2O}$
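A plausible completion of the answer (my addition, not part of the original thread): the dipotassium persulfate is presumably added to oxidize the excess Fe(II) back to Fe(III) before the titration, so that the permanganate consumed measures only the vanadium(IV). The titration step would then be

$\ce{MnO4- + 5 VO^{2+} + H2O -> Mn^{2+} + 5 VO2+ + 2 H+}$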
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8306722044944763, "perplexity": 3466.7987267315625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672313.95/warc/CC-MAIN-20191123005913-20191123034913-00421.warc.gz"}
https://infoscience.epfl.ch/record/218092
## Investigations of the non-adiabatic photophysics of Cu(I)-phenanthroline complexes

Cu(I)-phenanthrolines are an important class of metal-organic molecules that exhibits much promise for solar energy harvesting and solar-driven catalysis applications. Although many experimental studies have been performed, calling for high-level simulations to elucidate their photophysics, a complete picture is still missing. This is the goal of the present thesis. On the ultrafast (femtosecond) timescale we studied the non-adiabatic relaxation of a prototypical Cu(I)-phenanthroline, [Cu(dmp)2]+, by performing excited state simulations using two approaches: quantum dynamics and trajectory surface hopping. These simulations help to identify several mechanisms occurring in the subpicosecond time scale: internal conversion, pseudo Jahn-Teller distortion, and intersystem crossing. Surprisingly, we have found that intersystem crossing does not take place between the lowest singlet and triplet excited states, as previously proposed, but between the lowest singlet and higher triplet states. Moreover, we observed the initial stages (< 100 fs) of the solvent reorganization due to the electronic density changes in the excited state. This leads to an energy stabilization of the excited states that is associated with an increase of the non-radiative decay rate. The quantum dynamics simulations allowed us to provide indications for performing additional spectroscopy measurements by using the recently developed X-ray Free Electron Lasers (X-FELs). This technology can monitor both electronic and structural changes with an unprecedented time resolution of tens of femtoseconds and, therefore, is capable of revealing the aforementioned processes. In addition, we questioned the feasibility of such experiments and calculated the signal strengths for XAS and XES transient spectra. Finally, we analyzed the luminescence quenching, which has been observed for all Cu(I)-phenanthroline complexes when they are dissolved in strongly donating solvents. By performing Molecular Dynamics calculations we showed that, in contrast with the previously accepted model based on the formation of an exciplex (a species formed by two molecules, one in the excited state and one in the ground state), no stable exciplex is formed and that quenching is due to electrostatic solute-solvent interactions. In addition, we investigated how the geometric configuration can affect the luminescence lifetime in these molecules. We found a correlation between rigidity of the copper complex - inhibition of the pseudo Jahn-Teller distortion - and lifetime of the emission. The more the metal complex retains the ground state structure (large substituents), the longer its lifetime. This effect is attributed to a higher energy gap (excited state minus ground state energy) due to the reduction of relaxation. Our research reveals important insights into the relaxation mechanism and the complex interplay between geometry and electronic structure in Cu(I)-phenanthroline. These results can be exploited for guiding the synthesis of complexes with the desired physical properties.

Chergui, Majed; Tavernelli, Ivano
Year: 2016
Publisher: Lausanne, EPFL
Other identifiers: urn:nbn:ch:bel-epfl-thesis6966-7
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8555746078491211, "perplexity": 1575.4663807535342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647612.53/warc/CC-MAIN-20180321102234-20180321122234-00217.warc.gz"}
https://www.12000.org/my_notes/faq/maple_faq/insu72.htm
#### arrays in group/transgroup (14.4.00)

##### Mike May, S.J.

Using Maple 6, I have parallel constructions that are being treated differently. Can anyone tell me why?

Motivational background: I am updating worksheets I use in teaching abstract algebra so that they will run under Maple 6. The worksheets on computing Galois groups need the most revision, since the galois function was rewritten between versions V4 and V5 and again with version 6. The worksheet walks the students through the computation, focusing on the technique rather than just on the answer. (By the way, I think it is a nice feature of Maple that you can gain access to the code used for a computation.) In the revision to version 6, information that was stored in the table group/transgrp is now scattered across a collection of tables group/transgroup/InfoDeg, where Info is one of {name, order, parity, generators, SnConjugates} and Deg is an integer from 1 to 11. At the same time the table galois/groups no longer contains group names that my students will recognize. I was able to rewrite the worksheets, but ran into the following interesting behavior:

Maple question: Why is Maple treating the list of SnConjugates differently from the other 4 lists in the code below? As the second block of code shows, I can make Maple behave in the desired fashion. I am curious as to why parallel constructions seem to be treated differently.

##### Robert Israel (20.4.00)

group/transgroup/names5 and group/transgroup/generators5 are arrays of sets, while group/transgroup/order5 and group/transgroup/parity5 are arrays of integers. So e.g. group/transgroup/names5[3] is a set, and this is subject to normal evaluation rules. However, group/transgroup/SnConjugates5 is an array of arrays, and arrays have last-name evaluation. Therefore group/transgroup/SnConjugates5[3] does not fully evaluate the array group/transgroup/SnConjugates5[3] unless you use "eval".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401677250862122, "perplexity": 1490.5412528956724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00321.warc.gz"}
https://programmatic.solutions/hinb1n/binary-vector-t-in-spans-over-mathbbz-q-mathbbz-for-all-prime-power
Theoretical Computer Science · reference-request · linear-algebra · finite-fields

# Binary vector $t$ in $\mathrm{span}(S)$ over $\mathbb{Z}/q\mathbb{Z}$ for all prime powers $q$ $\Rightarrow$ $t$ in $\mathrm{span}(S)$ over $\mathbb{Z}$?

I have a set of $n$ binary vectors $S = \{s_1, \ldots, s_n \} \subseteq \{0,1\}^k \setminus \{1^k\}$ and a target vector $t = 1^k$ which is the all-ones vector.

Conjecture: If $t$ can be written as a linear combination of elements of $S$ over $\mathbb{Z}/q\mathbb{Z}$ for all prime powers $q$, then $t$ can be written as a linear combination of $S$ over $\mathbb{Z}$, i.e., there is a linear combination with integer coefficients which sums to $t$ over $\mathbb{Z}$.

Is this true? Does it look familiar to anyone? I'm not even sure what keywords to use when searching for literature on this topic, so any input is appreciated.

Observe that the converse certainly holds: if $t = \sum_{i=1}^n \alpha_i s_i$ for integers $\alpha_i$, then evaluating the same sum mod $q$ for any modulus $q$ still gives equality; hence a linear combination with integer coefficients implies the existence of a linear combination for all moduli.

Edit 14-12-2017: The conjecture was initially stronger, asserting the existence of a linear combination over $\mathbb{Z}$ whenever $t$ is a linear combination mod $q$ for all primes $q$. This would have been easier to exploit in my algorithmic application, but turns out to be false. Here is a counter-example. $s_1, \ldots, s_n$ are given by the rows of this matrix:

$\left( \begin{array}{cccccc} 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \\ \end{array} \right)$

Mathematica verified that the vector $t = (1,1,1,1,1,1)$ is in the span of these vectors mod $q$ for the first 1000 primes, which I take as sufficient evidence that this is the case for all primes. However, there is no integer linear combination over $\mathbb{Z}$: the matrix above has full rank over $\mathbb{R}$ and the unique way to write $(1,1,1,1,1,1)$ as a linear combination of $(s_1, \ldots, s_6)$ over $\mathbb{R}$ is using coefficients $(1/2, 1/2, 1/2, -1/2, -1/2, 1/2)$. (You cannot write $t$ as a linear combination of these vectors mod $4$, though, so it does not contradict the updated form of the conjecture.)

## Solution

The revised conjecture is true, even under relaxed constraints on $S$ and $t$: they may be arbitrary integer vectors (as long as the set $S$ is finite). Notice that if we arrange the vectors from $S$ into a matrix, the question simply asks about the solvability of the linear system $$Sx=t$$ in the integers, hence I will formulate the problem as such below.

Proposition: Let $S\in\mathbb Z^{k\times n}$ and $t\in\mathbb Z^k$. Then the linear system $Sx=t$ is solvable in $\mathbb Z$ if and only if it is solvable in $\mathbb Z/q\mathbb Z$ for all prime powers $q$.

This can be proved in at least two ways.

Proof 1: For any prime $p$, the solvability of the system modulo each $p^m$ implies that it is solvable in the ring of $p$-adic integers $\mathbb Z_p$. (There is a minor problem in that the solutions are not unique, hence given solutions mod $p^m$ and mod $p^{m'}$ need not be compatible. This can be sorted out e.g. using the compactness of $\mathbb Z_p$, or using König's lemma.) Consequently, the system is also solvable in the product $$\hat{\mathbb Z}=\prod_{p\text{ prime}}\mathbb Z_p,$$ i.e., the ring of profinite integers.
I claim that this implies its solvability in $\mathbb Z$. Notice that solvability of the system (i.e., $\exists x\,Sx=t$) is expressible as a (primitive positive) first-order sentence in the language of abelian groups, augmented with a constant $1$ so that we can define $t$. Now, one can check that the complete first-order theory of the structure $(\mathbb Z,+,1)$ can be axiomatized as follows (it's an order-free version of Presburger's arithmetic, or rather, of the theory of $\mathbb Z$-groups):

1. the theory of torsion-free abelian groups,
2. the axioms $\forall x\,px\ne1$ for each prime $p$,
3. the axioms $\forall x\,\exists y\,(x=py\lor x=py+1\lor\dots\lor x=py+(p-1))$ for each prime $p$.

However, all these axioms hold in $\hat{\mathbb Z}$ as well. Thus, the structures $(\mathbb Z,+,1)$ and $(\hat{\mathbb Z},+,1)$ are elementarily equivalent, and the solvability of $Sx=t$ in $\hat{\mathbb Z}$ implies its solvability in $\mathbb Z$.

In fact, we do not actually need the full axiomatization of $(\mathbb Z,+,1)$ above: it is enough to observe that $\hat{\mathbb Z}$ satisfies the axioms 2., which means that $\mathbb Z$ is a pure subgroup of $\hat{\mathbb Z}$, and therefore a pure $\mathbb Z$-submodule.

Proof 2: There exist matrices $M\in\mathrm{GL}(k,\mathbb Z)$ and $N\in\mathrm{GL}(n,\mathbb Z)$ such that the matrix $S'=MSN$ is in the Smith normal form. Put $t'=Mt$. If $x$ is a solution of $Sx=t$, then $x'=N^{-1}x$ is a solution of $S'x'=t'$, and conversely, if $x'$ is a solution of $S'x'=t'$, then $x=Nx'$ is a solution of $Sx=t$. (This equivalence holds over any commutative ring, as $M,M^{-1},N,N^{-1}$ are integer matrices.) Thus, we may assume without loss of generality that $S$ is a diagonal matrix (meaning that the excess rows or columns are zero if $k\ne n$). Then the system $Sx=t$ is unsolvable in $\mathbb Z$ only if

1. for some nonzero diagonal entry $s_{ii}$ of $S$, the corresponding entry $t_i$ of $t$ is not divisible by $s_{ii}$, or
2. for some $i$, the $i$th row of $S$ is zero, but $t_i\ne0$.

Let $q$ be a prime power such that $q\nmid t_i$, and, in the first case, $q\mid s_{ii}$. Then the system $Sx=t$ is not solvable in $\mathbb Z/q\mathbb Z$.

• Ditto. Also, interestingly, the second solution shows that it suffices to consider only those primes that divide the elementary divisors of $S$ (to handle all the $s_{ii}$, in case (1)), as well as one sufficiently large number (to handle case (2)). — Dec 15, 2017 at 05:02
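For concreteness, here is a short verification of the counter-example above (a sketch of my own, not from the original post; it uses only the Python standard library and needs Python ≥ 3.8 for the three-argument `pow` modular inverse):

```python
# Verify: t = (1,...,1) lies in the row span of S modulo every prime tested,
# yet the unique rational solution has half-integer coefficients, so no
# integer linear combination exists.
from fractions import Fraction

S = [
    [1, 0, 0, 1, 1, 1],
    [0, 1, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [1, 1, 1, 0, 0, 1],
]
t = [1, 1, 1, 1, 1, 1]

# The unique solution over the rationals, as stated in the post.
coeffs = [Fraction(1, 2)] * 3 + [Fraction(-1, 2)] * 2 + [Fraction(1, 2)]
assert all(sum(c * row[j] for c, row in zip(coeffs, S)) == 1 for j in range(6))

def rank_mod_p(rows, p):
    """Rank of an integer matrix over Z/pZ, by Gaussian elimination."""
    m = [[x % p for x in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank = 0
    for col in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if m[r][col]), None)
        if pivot is None:
            continue  # no pivot in this column
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], -1, p)  # modular inverse of the pivot
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(n_rows):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# t is in the row span of S mod p  iff  appending t as a row keeps the rank.
for p in [2, 3, 5, 7, 11, 13, 101, 997]:
    assert rank_mod_p(S, p) == rank_mod_p(S + [t], p), f"fails mod {p}"
print("t in span(S) mod p for every prime tested; rational solution is half-integer")
```

(Mod 2 the matrix is singular but $s_1+s_2+s_3 \equiv t$; for odd $p$ the determinant $\pm 4$ is invertible, so the half-integer solution reduces mod $p$.)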
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812846779823303, "perplexity": 96.48628435027071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943483.86/warc/CC-MAIN-20230320114206-20230320144206-00593.warc.gz"}
http://math.stackexchange.com/questions/475137/graph-of-the-function-gx-fx-x-given-graph-of-fx
# Graph of the function g(x) = f(x)/x given the graph of f(x)

Since, from the given graph, it seems f(a) and f(b) are equal (or approximately equal, since there is no scaling given), f(a)/a > f(b)/b as a < b. The only graph that satisfies this is Fig. 4. But the answer is given to be (B). Where have I gone wrong?

Depending on the scale, $\frac{1}{x}$ could be close to constant on $[a,b]$, which would explain how Figure 2 could be correct. From $$g'(x)=\frac{xf'(x)-f(x)}{x^2}$$ you can see that at the $x$-value where $f$ attains its maximum, $g'$ should be negative: there $f'(x)=0$, so $g'(x)=-f(x)/x^2<0$ as long as $f(x)>0$. This seems to rule out Fig. 4. You've already ruled out Figs. 1 and 3, but the reasoning does not apply to Figure 2 because there may be a difference between $g(a)$ and $g(b)$ that is too small to see. You could also rule out Figures 1 and 3 by looking at $g'(b)$. The formula says it should be negative, but in Figures 1 and 3, it's positive.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968530893325806, "perplexity": 675.2580910904691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736682102.57/warc/CC-MAIN-20151001215802-00095-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/continuity-question.705974/
# Continuity question

1. Aug 17, 2013

### tylersmith7690

1. The problem statement, all variables and given/known data

For which values of a ∈ ℝ is the function given by the piecewise definition

f(x) = x^2 + 4x - 4, x < a
f(x) = cos((x-a)/2), x ≥ a

continuous at x = a?

2. Relevant equations

I'm getting stuck on the algebra part, to be honest.

3. The attempt at a solution

lim x→a f(x) = f(a) to be continuous

lim x→a- f(x) = x^2+4x-4 and this must be equal to lim x→a+ f(x) = cos((x-a)/2)

I think it's 1 because cos(1-1/2)=1 and 1+4-4=1. I'm just confused on the working-out part: how do I algebraically manipulate the equations to show this? Or am I completely wrong?

2. Aug 17, 2013

### vela

Staff Emeritus

You seem to have the basic idea. You need $$\lim_{x \to a^-} x^2 + 4x - 4 = \lim_{x \to a^+} \cos\frac{x-a}{2}.$$ So what are the two sides equal to?

3. Aug 17, 2013

### tylersmith7690

Thanks for the reply. I'm guessing the value of a in the equation has to be 1 for both sides of the equation to equal 1. I'm almost certain a=1 for the function to be continuous, just having a hard time showing the working out besides me just putting a=1 into the limit equation. Thanks again.

4. Aug 17, 2013

### vela

Staff Emeritus

Don't plug any value in for a. Just work out what the limits equal in terms of a.

5. Aug 17, 2013

### tylersmith7690

Thank you for your replies, but I literally have no idea of how to solve for a; I think it's the cosine function that is throwing me off. Any tips for trying to solve it?

6. Aug 17, 2013

### vela

Staff Emeritus

Yes, do what I've already suggested twice.

7. Aug 17, 2013

### pasmith

You are given that if $x \geq a$ then $f(x) = \cos((x - a)/2)$. Therefore $f(a) = \cos((a-a)/2) = \cos(0) = 1$.

8. Aug 17, 2013

### Staff: Mentor

Quoted for the record. Please remember to ALWAYS quote the OP's posts.
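For completeness, a worked finish (my addition; the thread stops before the final algebra). Continuity at $x=a$ requires

$$\lim_{x \to a^-} (x^2+4x-4) = a^2+4a-4 = f(a) = \cos\frac{a-a}{2} = \cos 0 = 1,$$

i.e. $a^2+4a-5 = (a-1)(a+5) = 0$, so $a = 1$ or $a = -5$ (the thread's guess $a=1$ is one of two solutions).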
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8980629444122314, "perplexity": 2384.749162119735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109893.47/warc/CC-MAIN-20170822031111-20170822051111-00026.warc.gz"}
https://www.lessonplanet.com/teachers/solar-system-sun
Solar System: Sun Students research information about the Sun, sunspots, and how the Sun generates energy through nuclear fusion. They investigate how the Sun affects Earth.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8826056122779846, "perplexity": 3518.9244163921085}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655880616.1/warc/CC-MAIN-20200706104839-20200706134839-00051.warc.gz"}
https://byjus.com/question-answer/sam-ranked-ninth-from-the-top-and-thirty-eighth-from-the-bottom-in-a-class/
Question

# Sam ranked ninth from the top and thirty-eighth from the bottom in a class. How many students are there in the class?

A 45
B 46
C 47
D 48

Solution

## The correct option is B: 46

Sam is ranked ninth from the top, so there are 8 students above Sam. Sam is 38th from the bottom, so there are 37 students below him. The total is therefore 8 + Sam + 37 = 46 students.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8728511333465576, "perplexity": 4693.782628096358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00647.warc.gz"}
https://www.arxiv-vanity.com/papers/1805.04501/
# The perturbed sublimation rim of the dust disk around the post-AGB binary IRAS08544-4431

Footnote: Based on observations performed with PIONIER mounted on the ESO Very Large Telescope Interferometer (programme: 094.D-0865).

J. Kluska¹, M. Hillen¹, H. Van Winckel¹, R. Manick¹, M. Min²·³, S. Regibo¹, P. Royer¹

¹ Instituut voor Sterrenkunde (IvS), KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
² SRON Netherlands Institute for Space Research, Sorbonnelaan 2, 3584 CA, Utrecht, The Netherlands
³ Astronomical Institute Anton Pannekoek, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands

###### Key Words.: Stars: AGB and post-AGB, binaries: general, circumstellar matter, Stars: individual: IRAS08544-4431, Radiative transfer

###### Abstract

Context: Post-Asymptotic Giant Branch (AGB) binaries are surrounded by stable dusty and gaseous disks similar to the ones around young stellar objects. Whereas significant effort is spent on modeling observations of disks around young stellar objects, the disks around post-AGB binaries receive significantly less attention, even though they pose significant constraints on theories of disk physics and binary evolution.

Aims: We want to examine the structure of and phenomena at play in circumbinary disks around post-AGB stars. We continue the analysis of our near-infrared interferometric image of the inner rim of the circumbinary disk around IRAS08544-4431. We want to understand the physics governing this inner disk rim.

Methods: We use a radiative transfer model of a dusty disk to reproduce simultaneously the photometry as well as the near-infrared interferometric dataset on IRAS08544-4431. The model assumes hydrostatic equilibrium and takes dust settling self-consistently into account.

Results: The best-fit radiative transfer model shows excellent agreement with the spectral energy distribution up to mm wavelengths as well as with the PIONIER visibility data. It requires a rounded inner rim structure, starting at a radius of 8.25 au. However, the model does not fully reproduce the detected over-resolved flux nor the azimuthal flux distribution of the inner rim. While the asymmetric inner disk rim structure is likely to be the consequence of disk-binary interactions, the origin of the additional over-resolved flux remains unclear.

Conclusions: As in young stellar objects, the disk inner rim of IRAS08544-4431 is ruled by dust sublimation physics. Additional observations are needed to understand the origin of the extended flux and the azimuthal perturbation at the inner rim of the disk.

## 1 Introduction

It is by now well established that some post-asymptotic giant branch (post-AGB) binaries are surrounded by stable circumbinary disks (e.g. vanwinckel03; vanwinckel17). Observational signatures are the presence of a flux excess at near-infrared (near-IR) wavelengths, indicating that this circumstellar dust must be close to the central star near sublimation temperatures, while the central stars are not in a dust-losing phase (e.g. deruyter06).
In the typical picture of a post-AGB binary, the unevolved star hosts an accretion disk and an outflow. The inner binary system is surrounded by a circumbinary disk that is possibly accreted onto the inner binary. Observational indicators for the longevity of the disk are the evidence of strong dust grain processing in the form of a high degree of crystallinity (e.g. gielen08; gielen11) and the presence of large grains (e.g. deruyter05; gielen11; 2015aahillen) in many systems. The strongest observational evidence comes from the objects in which the Keplerian velocity field is spatially resolved in CO using the ALMA and the Plateau de Bure interferometers (bujarrabal13a; bujarrabal15; bujarrabal17a). The single-dish survey of bujarrabal13b confirms that rotation must be widespread among these sources. As the infrared emission is compact, interferometric techniques are needed to resolve the inner dusty region. Applications of high-spatial-resolution techniques (deroo06; deroo07; 2013aahillen; 2014aahillen; 2015aahillen; hillen17) unveiled both the compact inner region as well as the strong similarity between these disks and protoplanetary disks in hydrostatic equilibrium. The infrared luminosity of these objects is significant, pointing to a large scale height of the disk. The near-IR excess originates specifically from the inner rim of the disk. Based on the spectral energy distribution (SED), new such objects can be efficiently identified. In recent searches for post-AGB stars in the Large and Small Magellanic Clouds (vanaarle11; kamath14a; kamath15), disk sources represent about half of the population of optically bright post-AGB stars. Disks also appear at lower luminosities, indicating that the central evolved star is a post-Red Giant Branch (post-RGB) star, rather than a post-AGB star (kamath16). In the Galaxy a sample of about 85 of these disk objects has been identified (deruyter06; gezer15). The orbits determined so far (e.g. vanwinckel09; manick17; oomen18) are too small to accommodate an AGB star. The evolved binary escaped the phase of strong interaction, when the primary was at giant dimensions on the AGB, on a surprisingly wide and often eccentric orbit. While the post-AGB stars are now within their Roche lobes, there is strong observational evidence for continuous interaction between the disk and the binary. Indeed, the post-AGB stars with disk-SEDs often display a chemical anomaly in which the refractory elements are depleted, while the volatile elements have higher abundances. The photospheric abundances scale with the condensation temperature of the chemical element (e.g. gezer15, and references therein). While this depletion is not yet well understood, the general picture as described by Waters1992 is generally acknowledged: circumstellar gas devoid of refractory elements (as these remain part of the dust in the disk) is re-accreted by the post-AGB star. The presence of a stable disk is a necessary but not a sufficient condition for dust-gas separation and re-accretion of cleaned gas to occur. The binaries should be seen as still interacting. Using orbital-phase-resolved spectroscopy, several systems are documented with observational evidence for a fast outflow originating around the companion (e.g. thomas13; gorlova15; bollen17). The accretion disk launches a fast outflow, which is seen in absorption at superior conjunction (i.e. when the secondary is seen in front of the primary).
The physical model is that continuum photons of the primary are scattered outside the line-of-sight when passing through the jet. The measured deprojected outflow velocities are indicative of the escape velocity of a main-sequence companion and not of a white dwarf (WD). Whether the circum-companion accretion disks are fed by the circumbinary dusty disk or by the evolved primary is not yet known. The stable circumbinary dusty disks are therefore thought to play a lead role in the final evolution of a large population of binary stars. However, their structure, dispersal and evolution remain elusive. We therefore started a large project to study the physical processes which govern the very inner region of these systems on the basis of dedicated multi-wavelength and high-spatial-resolution observations. In this contribution we report on a very specific system, IRAS08544-4431. In our earlier paper (Hillen2016, hereafter Paper I), we presented the results of our successful interferometric imaging experiment using the PIONIER instrument (2011aalebouquin) on the Very Large Telescope Interferometer (VLTI) of ESO. The inner rim of the disk is well resolved and the different components contributing to the flux in the H-band are: the central star, the accretion disk around the companion, the inner rim of the circumbinary disk and an over-resolved scattering component. While the near-IR is sensitive to the very inner region of the disk, which determines the energetics of the system, the outer region of the disk of IRAS08544-4431 was resolved in CO (3-2) using the ALMA array (Bujarrabal2018). A complex resolved profile emerged and different regions could be discriminated: the inner gaseous disk, likely with a molecule-free inner region, has a radius of around 600 au and was found to be in Keplerian rotation, while the extended disk, reaching a radius of about 1300 au, was found to be at sub-Keplerian velocities. On top of that, an even larger slow CO outflow was resolved. In this contribution we present the physical model of the circumbinary disk based on 2D radiative transfer modeling and we focus on the very inner parts. We also perform an image reconstruction of the best-fit model in order to compare it to the image from the actual interferometric dataset of IRAS 08544-4431. The paper is organised as follows: in Sect. 2 we describe the observations that we analyse in Sect. 3. Then, we discuss our findings in Sect. 4 and conclude in Sect. 5.

## 2 Observations

### 2.1 Photometry

To constrain the overall energetics of the object, we assembled broad-band photometric data to construct the full spectral energy distribution (SED). From published catalogs we collected measurements in several photometric bandpasses (see Appendix A). In addition, we include new SPIRE (spiregriffin) photometry taken with the Herschel satellite (2010aapilbratt). The observations were done simultaneously in three bands (250, 350 and 500 μm) on December 22nd 2012 (obs. id. 1342247249; program OT2_cgielen_4). The fluxes are extracted with the standard 'timeline extraction' method. The uncertainties on the fluxes are dominated by the absolute flux calibration (we assume the upper limit of 15%; spireswinyard). We also include a sub-mm flux measurement, acquired with the LABOCA (Siringo2009) instrument that is mounted on the APEX telescope (Gusten2006). This 295-bolometer total-power camera was used on October 23rd 2008 to observe the continuum emission at 870 μm (4 scans with 35 s integration time in OTF mode).
Standard procedures were applied for the data reduction and calibration.

### 2.2 Extended CORALIE dataset

We extended the spectroscopic time series of IRAS08544-4431 (2003aamaas) with more data from the Swiss 1.2m Telescope at La Silla, on which the CORALIE spectrograph (Queloz1999) is mounted (see Fig. 2). The extended time base of the radial velocities allows a more accurate determination of the spectroscopic orbital elements. IRAS08544-4431 displays complex light variations, with several low-amplitude pulsation modes being excited (2007mnraskiss). The radial velocities are not only affected by orbital motion, but also by these complex photospheric variations.

### 2.3 PIONIER

The interferometric dataset was taken with the PIONIER instrument (2011aalebouquin), which is a four-beam combiner mounted on the VLTI. This instrument operates in the near-infrared H-band (1.65 μm). The dataset was taken on the nights of 2015-01-21, 2015-01-24 and 2015-02-23 (prog. ID: 094.D-0865, PI: Hillen). Here we remind the reader of its main characteristics. It consists of 828 data points divided into 6 spectral channels across the H-band. They were recorded on the three auxiliary telescope configurations available at the VLTI, resulting in a uv-coverage with baselines ranging from 7 to 129 m (see Fig. 1). We refer to Paper I for a more extensive description of the data.

## 3 Analysis

### 3.1 Updated spectroscopic orbital elements of the central binary

To derive the spectroscopic orbital elements, the raw radial velocities were fit with a Keplerian model. We iteratively pre-whitened the pulsation signal with a Lomb-Scargle method. The dominant pulsation period was subtracted from the original data and the residuals were used to determine the new orbital elements, after which the iteration process was restarted. The fractional variance reduction was used as a stop criterion. We found two dominant pulsation periods in the radial velocity data, 69 d and 77 d, which are in accordance with the 68.9 d and 72.3 d periods found in the light curve (2007mnraskiss). We subtracted the pulsation model with the 69 d and 77 d periods from the original signal and derived the optimized orbital parameters, as listed in Table 2. The uncertainties were computed with a Monte Carlo simulation, assuming Gaussian-distributed noise and making 250 equivalent data sets. We found a period of 506.0 days, which is at 2.3σ from the one determined previously in 2003aamaas. The eccentricity ($e$) we derived is also higher, 0.22, at 4σ from the one derived previously (0.14). These discrepancies are likely due to the pulsations that interfere with the binary signal in the radial velocities and which were not taken into account in the earlier analysis.

### 3.2 Determination of all the orbital elements of the central binary

We now put limits on some physical parameters of the system, i.e. the primary's luminosity and radius, and the distance. Often a canonical post-AGB luminosity of 5000 $L_\odot$ is assumed, from which a distance of 0.8 kpc can be derived. The Gaia mission found a parallax of 0.86 ± 0.6 mas for IRAS08544-4431 (Gaia). This parallax is likely to be affected by the orbital motion, as the parallax value is similar to the binary angular separation that we detect in our data (assuming that the off-center point source in our model indicates the position of the secondary star). Here we use the detected angular separation ($\theta$) between the two stars (0.8 mas; Paper I) to estimate the distance.
This requires the projected linear separation ($r_{\rm proj}$) to be known: $\theta = r_{\rm proj}/d$, with $d$ being the distance to the target. We compute the projected linear separation at the time of the PIONIER observations with the Thiele-Innes constants (Hilditch2001):

$$r_{\rm proj}=a\sqrt{\left[\cos(\omega)X-\sin(\omega)Y\right]^2+\cos^2(i)\left[\sin(\omega)X+\cos(\omega)Y\right]^2} \quad (1)$$

with $X=\cos E-e$, $Y=\sqrt{1-e^2}\,\sin E$, and $E$ the eccentric anomaly in Kepler's equation. All these quantities are known when the spectroscopic and interferometric constraints are combined, except the mass ratio. Since the primary is a post-AGB star, its mass can only take a narrow range of values (typically 0.5–0.9 $M_\odot$) as the future WD is only surrounded by the thin remaining envelope. We therefore use the measured spectroscopic mass function:

$$f(m)=\frac{M_2^3\sin^3(i)}{(M_1+M_2)^2} \rightarrow M_1 \propto M_2 \quad (2)$$

to estimate a range for the mass ratio. Since the mass ratio is used in the interferometric fit to fix the position of the center of mass, we repeat the fit with different mass ratio values. We find the mass ratio to have a small influence on the fit, but we take a conservative approach and continue our analysis with the 2σ upper limit on the fitted binary angular separation (0.91 mas), as well as on the inclination (23°). The choice for an upper limit is also motivated by the fact that the best-fit separation (0.81 mas) from Paper I is significantly below the formal resolution limit of the observations (1.25 mas). One expects in this case a degeneracy between the best-fit separation and the flux contribution from the secondary (e.g. Willson2016), which was indeed observed in Paper I. Hence we derive 2σ lower bounds on the distance and luminosity of IRAS08544-4431, as listed in Table 3, which we will use in the rest of the paper. The mass of the central object is estimated from the relation between the (core) mass and the luminosity of a post-AGB star (Vassiliadis1994):

$$\frac{L_1}{L_\odot}=56694\left(\frac{M_1}{M_\odot}-0.5\right) \rightarrow L_1 \propto M_1 \quad (3)$$

and the measured mass function. This results in a total mass estimate that is compatible with the analysis of the ALMA maps of $^{12}$CO and $^{13}$CO lines that trace the Keplerian motion of the circumbinary disk (Bujarrabal2018).

### 3.3 Determination of fundamental stellar parameters

Because no direct flux contribution has been detected from the companion in the high-resolution optical spectra (2003aamaas), we adopt a single-star atmosphere model to fit the stellar part of the SED (< 1.5 μm). Assuming spectroscopically-derived photospheric parameters (2003aamaas), we fit ATLAS models (Castelli2003) to the SED to derive the angular diameter of the primary and the total line-of-sight reddening to the system (2004aspcfitzpatrick; 2013aahillen). To improve the accuracy of the fit, we also include the stellar part of the H-band flux as determined in Paper I (59.7% of the total flux). We use a grid-based method and a $\chi^2$ statistic for the parameter estimation (2004aspcfitzpatrick). The stellar parameters of the best-fit model are listed in Table 4.

#### 3.4.1 The radiative transfer code: MCMax

We want to reproduce our photometric and interferometric observations with a self-consistent physical model of the dusty circumbinary disk, to investigate the physical conditions at the inner rim. We assume the disk is in hydrostatic equilibrium and that its energetics are fully determined by the stellar irradiation. The code that we use to compute disk structures, MCMax (2009AAMin), is based on the Monte Carlo method. Photon packages are randomly emitted by a source at the origin of the coordinate system, which are then absorbed or scattered by dust that is distributed in an axisymmetric geometry.
The thermal structure is hence determined from the interaction of the dust with the stellar radiation (i.e., a passively heated disk in which the gas is in thermal equilibrium with the dust). The radial distribution of the gas and dust is a basic input of the model, but the vertical structure is obtained by solving the equation of hydrostatic equilibrium (i.e., the vertical component of the local gravitational force is balanced by the local gas pressure gradient). The temperature and density profiles in the disk are iteratively determined. The Monte Carlo process is very efficient for exploring a reasonable parameter space of disk and dust properties and for computing a collection of observables when the disk is axisymmetric with a single central heating source. The model therefore takes the binary into account only in terms of the central mass, and assumes the gravitational potential of a single disk-centred star, ignoring any gravitational perturbations produced by the binarity. We show that our observables, which probe the thermal and density structure of the innermost circumbinary material, are well reproduced with an axisymmetric disk model in hydrostatic equilibrium. We also keep this limitation in mind for further discussion. We summarize the main properties of our radiative transfer models:

• The vertical extension of the disk is set by the hydrostatic equilibrium,
• photon scattering by the dust is included in a full angle-dependent way,
• the composition of the dust is assumed to be an ISM-like mixture of silicates in the DHS (Distribution of Hollow Spheres) approximation (2007aamin),
• the size of the dust grains follows a power-law distribution,
• a grain-size-dependent settling of dust, counteracted by turbulence, is included self-consistently (2012aamulders),
• we adopt a double-power-law formalism to parameterize the surface density distribution (2015aahillen).

These properties translate into the following parameters. First, the double-power-law surface density distribution is parameterized with five parameters: the inner and outer radius ($R_{\rm in}$ and $R_{\rm out}$), the turnover radius ($R_{\rm mid}$), and the power-law exponents in the inner and the outer part ($p_{\rm in}$ and $p_{\rm out}$). The grain size distribution requires three parameters: the minimum grain size ($a_{\rm min}$), the maximum grain size ($a_{\rm max}$) and the grain-size power-law exponent. There is also a measure of the total dust mass ($M_{\rm dust}$) and the total gas mass (for the vertical structure computation; here in the form of a global gas/dust ratio). Finally, the turbulence is parameterized with the $\alpha$-prescription (Shakura1973), in which $\alpha$ is a scale parameter called the turbulent mixing strength (2012aamulders). We computed a grid of models and we refer to Table 5 for the ranges of the different parameters of our grid.

#### 3.4.2 Goodness-of-fit criteria

We evaluated the quality of our models by comparing the SED and the PIONIER squared visibilities with the corresponding synthetic observables. As our model does not include the binary properties, we only try to match the radial morphology of the detected near-IR emission in all channels, and not its detailed azimuthal profile. We therefore do not include the closure phase residuals in our merit function, because none of our models can successfully reproduce the detected asymmetry. This is not surprising because the structure of the radiative transfer models is axisymmetric. At moderate or high disk inclination the radiative transfer model can produce an asymmetric image, due to disk self-absorption effects, and hence non-zero closure phases.
However, in our case the disk is seen almost pole-on and these effects are negligible. We tested this by computing models with various inclinations, finding a preference for 20 ± 7.5°. On the other hand, our tests do indicate a difference with respect to the disk position angle that was derived with the parametric model in Paper I (by about -30 ± 15°). To fit the interferometric data we need to reproduce the integrated flux of each component (stars, disk) simultaneously with the scale from which this flux arises. The squared visibilities are very sensitive to the relative flux fractions. Here, we fix the stellar flux contributions on the basis of the values derived from the parametric model of Paper I. Similarly, the location of the companion, its temperature, and its flux contribution are fixed in these models. In other words, we neglect the influence of the duplicity of the central heating source in the computation of the disk structure (the luminosity of the secondary is negligible compared to that of the primary). However, in Paper I we detected that 6% of the H-band flux comes from the position of the secondary. The binary therefore has a direct influence on the interferometric data and on the SED in the H-band. We summarize the practical requirements for a model to be considered good: it needs to 1) contribute about 21% of the total flux at 1.65 μm in the form of thermal emission from the inner rim of the disk, as well as 15% in the form of scattering on larger scales, 2) fit the position and shape of the visibility minimum and secondary maximum, and 3) be in good agreement with the full SED.

#### 3.4.3 An issue with the over-resolved flux seen by PIONIER

Most models that match the mid- to far-IR SED produce too much thermal emission in the near-IR, because the inner rim is too hot and optically thick. The models that are thermally in the right range seem to underestimate the 15% over-resolved flux (typically 5-10% is reached). This discrepancy also has a color: more flux is lacking at long than at short wavelengths. We estimate color temperatures for the over-resolved flux of 3500 K, which is well above the 2400 K detected in the real data. To decide on the best-fit model, we select a subset of models among our grid in which the total 1.65 μm disk flux does not exceed the 36% derived parametrically (i.e., smaller than the sum of the ring and background flux). For each of these MCMax models, we arbitrarily add over-resolved flux (i.e., background flux like in the parametric case), until the sum of all flux components reaches 100% at 1.65 μm. The temperature of this added flux is determined by minimizing the gradient with respect to wavelength of the average of the visibility residuals. Finally, we select the model with the lowest visibility reduced chi-square ($\chi^2_r$).

#### 3.4.4 The best-fit model

The parameter values of our best model are listed in Table 5. For this model an additional component of 8.1% is needed at 1.65 μm, with a color temperature of 1000 ± 250 K. The synthetic visibilities are shown in Fig. 5 along with the residuals. The SED of our best-fit model is displayed in Fig. 4, along with an enlarged view of the near-IR wavelength region in order to highlight the various constituents. Our model matches the observed fluxes from sub-μm to sub-mm wavelengths. The pure MCMax+binary model (i.e., without the additional over-resolved flux) is shown as well.
A simple black body for the additional component extrapolates correctly over the whole near-IR wavelength range, but overestimates the observations slightly between 4 and 8 μm. In order to estimate the acceptable range of the model parameters, we selected the acceptable models, i.e. those whose visibility reduced chi-square is close to that of the best fit. The range of values of these parameters is shown in Table 5. We find that the inner radius and the surface density structure are well constrained by the interferometric dataset, which is not surprising as the spatially resolved data strongly constrain the disk structure. We also note that the minimum size of a dust particle ($a_{\rm min}$) can only be larger than the one of the best-fit model. The turbulent mixing strength ($\alpha$) is not well constrained by the interferometric dataset. Finally, $M_{\rm dust}$ is well constrained by the model. However, the models do not match the extended flux and additional dust would possibly be needed. The good constraint on the dust mass is therefore not exempt from systematic error.

### 3.5 Image reconstruction comparison

To compare the radiative transfer model with our data, we perform an image reconstruction on the synthetic data built from the best-fit radiative transfer model. To be able to compare the reconstruction from the original dataset to the one from the radiative transfer model, we use the same image reconstruction approach (SPARCO; 2014aakluska) as in Paper I, with the same image configuration (number and angular size of the pixels), regularization (type, weight) and parametric model parameters (the fluxes of the primary and of the secondary, the position of the secondary, and the spectral index of the environment). These parameters are summarized in Table 6. The two images are similar (see Fig. 7). The rings have the same radius and are very similar in brightness. This is not surprising, as the radiative transfer model was selected because it matches the squared visibilities. The flux located outside the ring is also similar in both images: it is the over-resolved flux and its distribution is mainly ruled by the uv-coverage. However, the ring azimuthal brightness distribution, which is encoded in the closure phases, looks different in the two images. We investigate the rim azimuthal brightness distribution by making polar plots (see Fig. 8). The polar plots are corrected for the disk orientations (19° and 6°). We can see that both images show that the ring emission is centred at a radius of 7 mas. Whereas the ring from the radiative transfer model looks smooth and continuous, the actual ring shows azimuthal discontinuities (for example at 140° or 355°). For both polar plots the light seems to be closer to the centre at an angle of 210°. The image from the actual data also displays variations in width and dips that are not seen in the image reconstruction of the radiative transfer model. To investigate the rim azimuthal brightness variations more quantitatively, we summed up the flux between 5 and 9 mas from the centre in 80 azimuthal segments (see Fig. 9). The profile from the actual data on IRAS08544-4431 indeed displays more variation than the one from the radiative transfer model. The change in brightness throughout the ring is also larger in the actual disk, with a ratio between maximum and minimum of 2.5, whereas the radiative transfer model image has a ratio of 1.8. The relative peak-to-peak variation is also larger for the actual image (82%) than for the model (59%). We report the maxima (at 100° and 240°) and the minima (or dips, at 140° and 355°) from the actual image in Fig. 9.
These extrema are not found in the radiative transfer model image at the same angles. This suggests strong azimuthal variations in the intrinsic object morphology. Lastly, we fit 1D Gaussians to the polar plot of both images to retrieve the width of the inner rim emission as a function of the azimuthal angle. In Fig. 10 we can see the width variations with azimuth. The widths corresponding to low fluxes (below half of the maximum flux, corresponding to the ring's dips) are hidden, as there the width has no astrophysical meaning. The variations in the width of the actual disk rim are larger (peak-to-peak of 0.85 mas) than in the one of the radiative transfer model (peak-to-peak of 0.53 mas). We remark that at the largest flux maximum (at 240°) the width is particularly large: 2.00 ± 0.02 mas.

## 4 Discussion

### 4.1 On the best fit RT model

The excellent agreement between our best model and the resolved observations gives strong support to our physical interpretation of the circumbinary material in this object: its structure is well reproduced by our model of a dusty settled disk. The best-fit grain size distribution shows that grain growth is significant. Sub-micron-sized particles hardly contribute to the total opacity in this disk, while the mm-sized grains are an important source of opacity and explain the sub-mm fluxes. Our double-power-law radiative transfer model fits both the photometric and the interferometric data well. Significant grain growth is a common property of the disks around post-AGB stars (deruyter05; gielen11; 2014aahillen; 2015aahillen) and can be used as a tracer of longevity. The best-fit model is gravitationally stable, as the Toomre criterion stays above 1 for axisymmetric disks, or 1.5 for disks with asymmetric structures (see Fig. 11; Toomre1964; Papaloizou1991; Mayer2004; Durisen2007). This reinforces our choice of modeling the disk in hydrostatic equilibrium. As the disk is more likely to be gravitationally unstable in the outer parts, the notion of disk stability can be estimated more reliably by combining optical/near-IR scattered-light imaging with resolved data of the (sub-)mm continuum as well as of the gas. This will provide better constraints on the disk extension, mass, scale height, and stratification. A detailed full model of the gas and the dust is beyond the scope of this paper. Such a model of the system needs to include, from wide to narrow, the slow gaseous outflow, the sub-Keplerian and Keplerian parts, the inner puffed-up rim, the circum-companion accretion disk as the origin of the fast outflow, and the evolved luminous primary.

### 4.2 Inner rim compatible with the dust sublimation radius

The best-fit inner rim radius is 8.25 au. Several processes can set the radius of the inner disk rim. One of them is dynamical disk truncation due to the inner binary. The instantaneous separation of the inner binary is about 1.2 au. With a mass ratio of 0.45, the ratio between the disk gap radius and the binary semi-major axis is 1.7 (Artymowicz94). In our case the disk truncation radius would then correspond to 2 au. Unless there is a third component in the system, dynamical truncation does not determine the location of the disk inner rim. A more likely possibility is that this rim corresponds to the dust sublimation radius. The inner rims of disks around young stellar objects (YSOs) are known to be ruled by dust sublimation physics and therefore by the luminosity of the central star.
Therefore their sizes are proportional to the square root of the luminosity of the central star (Monnier & Millan-Gabet 2002; Lazareff et al. 2017), following this equation:

$$R_\mathrm{rim} = 1.1\left(\frac{C_\mathrm{bw}}{\epsilon}\right)^{1/2}\left(\frac{L_*}{1000\,L_\odot}\right)^{1/2}\left(\frac{T_\mathrm{sub}}{1500\,\mathrm{K}}\right)^{-2}\,\mathrm{au},\qquad(4)$$

where $R_\mathrm{rim}$ is the rim radius, $C_\mathrm{bw}$ is the backwarming coefficient (see Kama et al. 2009 for more details on this coefficient), $\epsilon$ is the dust grain cooling efficiency, $L_*$ is the stellar luminosity and $T_\mathrm{sub}$ is the dust sublimation temperature. Applying this equation to IRAS08544-4431, and assuming classical values for inner rims of disks around YSOs ($T_\mathrm{sub}=1500$ K, $C_\mathrm{bw}=4$, $\epsilon=1$), we find a dust sublimation radius of $1.1\times\sqrt{4}\times\sqrt{14}\approx 8.2$ au. This value is remarkably close to the one we find in our observations. We can therefore conclude that the inner rim location is indeed likely ruled by dust sublimation due to the heating by the central star.

### 4.3 Origin of the asymmetry

In this paper we show that the inner disk rim around IRAS08544-4431 cannot be considered as fully axisymmetric. The inner binary is very likely at the origin of this asymmetry. Here we discuss the possible mechanisms by which the inner binary can disturb the inner disk rim.

A dust grain located at the inner disk rim receives a variable illumination from the central star and sits in a variable gravitational field over an orbital period. The star illuminating the disk is also the less massive one in the binary system. This means that over a full orbital period, a given part of the disk will see the primary star at different distances, varying between 7.0 au and 9.5 au. If we compute the distance averaged over one orbital period, it differs from the distance to a fixed star at the centre of the disk by less than 0.04 au, which changes the effective luminosity perceived by a point in the disk by less than 1%. At the closest and furthest points, however, the stellar flux seen by a dust grain at the inner rim varies by up to 18%. If the radiative timescale is shorter than the orbital period, this alone can already generate a disk scale-height perturbation because of hydrostatic equilibrium.

Let us compute the typical disk scale-height by solving the hydrostatic equilibrium for the extreme phases of the binary w.r.t. a dust grain at the inner disk rim. Assuming a very short radiative timescale at the inner rim, no variation in the disk radius, and a circular orbit, the temperature of a dust grain ($T_\mathrm{rim}$) ranges from 1400 to 1610 K. We can roughly estimate the inner rim scale-height ($H$) due to hydrostatic equilibrium in the case of a central binary with:

$$H=\sqrt{\frac{k\,T_\mathrm{rim}}{\mu_g\,m_p}\left[G\left(\frac{M_1}{d_1^3}+\frac{M_2}{d_2^3}\right)\right]^{-1}},\qquad(5)$$

where $k$ is the Boltzmann constant, $\mu_g$ is the mean molecular weight, $m_p$ is the proton mass, $G$ is the gravitational constant, $d_1$ is the distance to the primary and $d_2$ the distance to the secondary. We compute the rim scale-height for the two extreme orbital phases w.r.t. a dust particle at the inner rim: the phase where the primary is closest to the dust particle and the one where it is furthest. This translates into a variation of the scale-height of the inner rim of 2.4% (from 1.15 au to 1.18 au). This can explain only part of the brightness variations we see in the actual interferometric data: in the polar plot we see a variation of 91% in the disk brightness. Moreover, these disk scale-height variations due to hydrostatic equilibrium are expected to be smooth throughout the disk, whereas the image shows a steep decrease in brightness between the maximum at 195° and the minimum at 125°. We conclude that this kind of feature cannot be reproduced by a change in the local hydrostatic equilibrium alone.
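For readers who want to reproduce these numbers, the short Python sketch below implements Eqs. (4) and (5). It is an illustration, not part of the paper's analysis: the luminosity, temperatures and rim coefficients are the values quoted above, the binary masses and distances in the scale-height function are free inputs (only the resulting 1.15–1.18 au range is quoted in the text), and the mean molecular weight μ_g = 2.3 is an assumed value.

```python
import numpy as np

# Physical constants (SI)
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
K_B = 1.381e-23    # Boltzmann constant [J K^-1]
M_P = 1.673e-27    # proton mass [kg]
AU = 1.496e11      # astronomical unit [m]
M_SUN = 1.989e30   # solar mass [kg]

def r_rim_au(L_star, T_sub=1500.0, C_bw=4.0, eps=1.0):
    """Dust sublimation radius of Eq. (4), in au; L_star in solar units."""
    return (1.1 * np.sqrt(C_bw / eps) * np.sqrt(L_star / 1000.0)
            * (T_sub / 1500.0) ** -2)

def rim_scale_height_au(T_rim, M1, M2, d1, d2, mu_g=2.3):
    """Hydrostatic rim scale height of Eq. (5), in au.
    T_rim in K; M1, M2 in solar masses; d1, d2 in au; mu_g is assumed."""
    omega2 = G * (M1 * M_SUN / (d1 * AU)**3 + M2 * M_SUN / (d2 * AU)**3)
    cs2 = K_B * T_rim / (mu_g * M_P)   # isothermal sound speed squared
    return np.sqrt(cs2 / omega2) / AU  # H = c_s / Omega

print(f"R_rim = {r_rim_au(14000.0):.1f} au")  # -> R_rim = 8.2 au
```

With a total binary mass of order two solar masses at the ~8 au rim and T_rim near 1500 K, the scale-height function returns values of order 1 au, comparable to the 1.15–1.18 au range derived above.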
### 4.4 Origin of the extended flux

Our radiative transfer model alone could not account for the whole over-resolved flux (15.5% at 1.65 μm; Paper I), as we needed to artificially add 8.1% of over-resolved flux. This means that the model reproduces approximately half of the over-resolved flux by scattering the stellar flux on the disk surface. The problem of an unaccounted over-resolved flux contribution was also found when modelling 89 Her (Hillen et al. 2014), but there it was detected at visible wavelengths. The origin of this high level of over-resolved flux in our target is not clear.

One possibility is that it comes from a more complex disk structure. For example, a spiral or a gapped disk can produce more extended emission in the near-infrared (e.g. Fukagawa et al. 2010; Tatulli et al. 2011). In gapped disks around YSOs, the polycyclic aromatic hydrocarbons (PAHs) in the gap are directly exposed to high-energy photons from the central star. These PAHs are quantum heated and can radiate in the near-infrared continuum (Klarmann et al. 2017), explaining the over-resolved flux seen in near-infrared interferometric data of some young targets. However, IRAS08544-4431 is an oxygen-rich evolved star in which the circumstellar gas and dust originate from the star itself, and the disk is therefore not expected to show PAH features. Moreover, the central star does not emit a great amount of the high-energy photons necessary to heat the PAHs to near-infrared emitting temperatures.

Another possibility is that the extended component comes from the primary flux scattered by dust in the outflow of the secondary. This outflow is not modeled by our 2D dust disk model. The secondary is surrounded by an accretion disk and launches a wide wind (Maas et al. 2003). If this wind is loaded with dust, it can emit in the near-infrared continuum and be very extended (e.g. Bans & Königl 2012). One way to have dust in the wind is for it to be brought in by the accretion disk. However, it is unlikely that dust particles can survive inside the dust sublimation radius of 8 au. Another way is for the wind to lift dust from the disk inner regions. The Hα profiles in the CORALIE spectra always show a P-Cygni profile with an absorption feature, which means that the outflow opening angle is at least as large as the disk inclination (see Fig. 12). The opening angle should therefore be larger than 20°. An opening angle of 70° would reach the disk atmosphere 5 au above the disk midplane, at 15 au from the mass centre. At this distance from the midplane, the dust density in this upper disk layer is about 1000 times smaller than in the midplane. Whether or not this amount of dust is able to scatter the primary flux and account for the 8% of over-resolved emission has to be answered by further modeling work. The origin of the extended flux therefore remains unclear, and further observations, such as direct imaging with an adaptive-optics instrument and/or aperture-masking observations, are needed to characterize it.

### 4.5 Comparison to YSOs

We have successfully reproduced the radial structure of the disk with a radiative transfer model developed to reproduce disks around YSOs. For disks around YSOs, these models have trouble reproducing the near-infrared flux and interferometric data simultaneously: a protoplanetary disk in hydrostatic equilibrium will not produce enough near-infrared emission at the inner rim.
The inner rim emission is radially extended (Lazareff et al. 2017), probably due to dust segregation (Tannirkulam et al. 2007; Kama et al. 2009), disk accretion (Flock et al. 2016) and/or the presence of a thick gaseous disk inside the dust sublimation radius (Kraus et al. 2008). In contrast, a rounded inner rim in hydrostatic equilibrium is enough to reproduce both the SED and the interferometric observables of IRAS08544-4431. There is no need to add emission inside the dust sublimation radius. This disk rim is also radially sharp compared to disk rims around young stellar objects (e.g. Lazareff et al. 2017).

The photosphere of IRAS08544-4431 is affected by depletion (Maas et al. 2003), suggesting that accretion from the circumbinary disk occurred or is occurring. Gas/dust separation is expected for sufficiently low accretion rates (Waters et al. 1992). Flock et al. (2016) show that disks around YSOs display a radially extended rim down to the lowest accretion rates they considered. The radial extension of the rim seems to depend more on the luminosity of the central star, the inner rim being sharper at higher stellar luminosities. However, Flock et al. (2016) computed models for YSOs and did not extrapolate above a luminosity of 56 L⊙. For a luminosity of 14000 L⊙ the inner rim profile could therefore be more vertical even for moderate accretion rates.

## 5 Conclusions

We present in this paper a radiative transfer model of the circumbinary disk around the post-AGB binary IRAS08544-4431. We successfully reproduced the SED and the radial structure of the disk inner rim by reproducing the squared visibility measurements. We used a classical self-consistent model designed to reproduce disks around YSOs, in which the disk vertical scale-height is set by computing hydrostatic equilibrium. Whereas this model has only moderate success in reproducing disks around YSOs, it turns out to be successful in reproducing circumbinary disks around post-AGB binaries. The main results of this paper are:

• the inner disk rim is ruled by dust sublimation physics and is well reproduced by our model, which is in hydrostatic equilibrium.

• the inner disk rim is not axisymmetric. This asymmetry might be explained by the central binary orbit, but a detailed modeling of its effects is required to see whether it can truly explain the steep variations in azimuthal brightness at the disk rim.

• an over-resolved near-infrared component is present and cannot be reproduced by a pure disk model. Its origin remains unclear but is likely linked to the low-mass outflow which is also present in this system, as evidenced by the Hα profile.

Given the number of post-AGB binaries with circumbinary disks, it is clear that complex disk structures are indeed common around post-AGB binaries. To answer the questions of 1) the physics ruling the radius of the inner rim (dust sublimation? binary dynamical truncation?), 2) the physics of the inner rim perturbation (what is the influence of the inner binary?), 3) the systematic detection of an emission around the secondary star (are circum-companion accretion disks common, and how are they fed?), 4) the origin of the extended emission (disk structure? disk wind?) and 5) the physical relation between the compact dust disk as observed in the near-IR and the large gaseous disk as detected by ALMA, a systematic study of the near-infrared emission as well as of the extended emission around post-AGB binaries is needed. To study the disk–binary interaction, we are planning to monitor this system with time-resolved near-infrared long-baseline interferometric observations.
We conclude that in many ways the disks around luminous post-AGB binaries are scaled-up, more irradiated versions of protoplanetary disks around YSOs. ###### Acknowledgements. JK and HVW acknowledge support from the Research Council of the KU Leuven under grant number C14/17/082. RM acknowledges support from the Research Council of the KU Leuven under contract GOA/13/012 and the Belgian Science Policy Office under contract BR/143/A2/STARLAB. We used the following internet-based resources: NASA Astrophysics Data System for bibliographic services; Simbad; the VizieR online catalogues operated by CDS.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9418423771858215, "perplexity": 1551.988950440517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.66/warc/CC-MAIN-20230203091020-20230203121020-00296.warc.gz"}
https://math.stackexchange.com/questions/3423409/what-is-the-purely-synthetic-construction-for-producing-a-projective-collineatio
# What is the purely synthetic construction for producing a projective collineation, given two lines and three points on each line?

It is a well-known fact (sometimes called the fundamental theorem of projective geometry) that given two lines and three points on each of the two lines, there is a unique projectivity between the two lines. It also appears to be a known fact that this projectivity can be extended to a collineation of the entire real projective plane. I'm looking for a specific recipe (draw these lines here, construct the intersection, draw these other lines, etc.) for how to actually implement the collineation, without needing to pass to coordinates. It feels like there's almost such a recipe in various sources (e.g., Coxeter, partly Richter-Gebert), but I haven't been able to quite make it work. (Specifically, I want to be able to select a line, three points on the line, another line, three points on that line, and then any other 7th point, and be able to construct the image point via a tool in GeoGebra.)

Coxeter, in his book Projective Geometry, writes about one- and two-dimensional projective mappings. In the second edition, Theorem 4.12 is the fundamental theorem of one-dimensional projectivities, and is the one to which you refer (projectivities determined by three points on each of two lines). Theorem 6.13 is the fundamental theorem of (two-dimensional) projective collineations - basically that a projective transformation is determined by two complete quadrilaterals $DEFPQR$ and $D'E'F'P'Q'R'$. The correspondence between $DEF$ and $D'E'F'$ is like the correspondence that determines a 1D projectivity, but the extra correspondence between $DQR$ and $D'Q'R'$ adds more information. Altogether, it adds up to the usual determination of a projective transformation by specifying the images of four points. The diagram below, from Coxeter's book, summarizes the synthetic construction for a projective collineation that maps a line $a=XY$ to $a'=X'Y'$. Here the construction for the 1D projectivity is used twice, once for each of $X \rightarrow Y$ and $X' \rightarrow Y'$.

There are infinitely many collineations agreeing on 3 pairs of points. So you can pick any 8th point as the image of the 7th point and get a collineation that maps the 4 points to their images. You can determine a unique collineation when given 4 points and their images, in general position. I'm not sure if there's a purely synthetic way to determine this collineation though. The analytic way to do this is given in the answer to another question.

Note: A collineation induces a unique projectivity on a line and its image under the said collineation, but the reverse is not true. A projectivity does not induce a unique collineation on the plane.
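For completeness, here is a small numpy sketch of the analytic method referenced in the last answer (the function and the example points are illustrative choices, not taken from Coxeter or the linked answer): the unique collineation sending four source points to four image points, each quadruple in general position, is obtained by mapping the standard projective frame to each quadruple and composing.

```python
import numpy as np

def collineation(src, dst):
    """3x3 matrix H with H @ src[i] proportional to dst[i] for i = 0..3.
    src, dst: four points each, in homogeneous coordinates, with no
    three points of either quadruple collinear."""
    def frame(pts):
        P = np.column_stack(pts[:3])    # first three points as columns
        c = np.linalg.solve(P, pts[3])  # 4th point in that column basis
        return P * c                    # rescale columns: frame matrix
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    return frame(dst) @ np.linalg.inv(frame(src))

# Send the unit square to a generic quadrilateral, then map a 7th point.
H = collineation([(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
                 [(0, 0, 1), (2, 0, 1), (3, 2, 1), (0, 1, 1)])
q = H @ np.array([0.5, 0.5, 1.0])
print(q[:2] / q[2])  # affine coordinates of the image point
```

This is of course coordinate-based; the ruler-only counterpart is what Coxeter's quadrilateral construction described above encodes.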
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8163536190986633, "perplexity": 273.2009380296703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00382.warc.gz"}
https://www.physicsforums.com/threads/questions-about-gravitational-waves.858044/
1. Feb 18, 2016 ### Ontophobe 1. If the Michelson-Morley experiment were to be conducted with gravitational waves instead of light waves, would the results be any different? 2. Ought we to expect the existence of "gamma" gravitational waves and "radio" and "microwave," etc. gravitational waves? In principle, could there be a "gravitational microwave background?" 3. Corollary to #2: Ought we to expect gravitational waves to undergo Doppler effects? 4. Back when we thought light moves through an ether, we hypothesized (based on the nature of light) that the ether must be something akin to a really rigid glass. We no longer believe in the ether, but we do believe in a "spatio-temporal fabric" (tomayto/tomahto) through which gravitational waves move. Given the nature of gravitational waves, to what material substance would we compare the fabric of space-time? Glass? Rubber? Water? 2. Feb 18, 2016 ### Orodruin Staff Emeritus Yes, you will have a very hard time finding a gravitational beam splitter and a gravitational wave mirror. You simply cannot do the MM experiment with gravitational waves as they will go right through your experimental setup. Yes, gravitational waves can have different frequencies. Different frequencies of gravitational waves are predicted by different phenomena. Gravitational waves from the very early Universe would essentially probe the physics of inflation and not be due to a thermal background. No we don't. This is just a popularised figure of speech to describe what is really going on in the theory. The rest of this question therefore makes no sense. 3. Feb 19, 2016 ### Ontophobe So gravity waves aren't waves in space-time, in contrast to EM waves which are waves in a field of potentials in space-time? Space-time isn't actually a thing (albeit non-material)? With what should I replace this popularized fiction?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581250905990601, "perplexity": 1124.1263047362052}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591578.1/warc/CC-MAIN-20180720100114-20180720120114-00158.warc.gz"}
http://www2.math.binghamton.edu/p/seminars/arit/arit_spring2016
#### Spring 2016

• January 29 Title: Organizational Meeting Abstract: We will discuss the schedule and speakers for this semester.

• February 8 Speaker: Adrian Vasiu (Binghamton University) Title: Classification of Lie algebras with perfect Killing form Abstract: We will review basic properties of Lie algebras over arbitrary commutative rings. Then we will present the classification of Lie algebras over such rings whose Killing forms are perfect. This re-obtains and generalizes prior work of Curtis, Seligman, Mills, Block–Zassenhaus, and Brown, who worked over fields in the late sixties and seventies. This work will appear in the journal Algebra and Number Theory.

• February 15 Speaker: Adrian Vasiu (Binghamton University) Title: Classification of Lie algebras with perfect Killing form, Part II Abstract: We will go through a few details of the proof of the classification stated last time that involve universal enveloping algebras, Casimir elements, and cohomology. Then we will talk about the main motivation behind this classification, coming from extensions of group schemes.

• February 22 Speaker: Jaiung Jun (Binghamton University) Title: Berkovich Analytification, Tropicalization, and Hyperfields Abstract: I will review the basic notions of Berkovich analytification in connection to tropicalization. Then I will explain how some basic definitions of Berkovich analytification can be restated by using hyperfields. In particular, this view can be linked to my previous work on hyperstructures of affine algebraic group schemes.

• February 29 Speaker: Farbod Shokrieh (Cornell University) Title: Classes of compactified Jacobians in the Grothendieck ring Abstract: Let $C$ be a nodal curve over an algebraically closed field $k$. Denote by $\textrm{Pic}^0(C)$ the generalized Jacobian of $C$, which is the classifying space for line bundles on $C$ having degree zero on each irreducible component. If the dual graph of $C$ is not a tree, then $\textrm{Pic}^0(C)$ is not compact, but (many) nice compactifications of $\textrm{Pic}^0(C)$ are known. I will describe how one can use the combinatorics of the dual graph to compute the class of these compactifications in the Grothendieck ring of $k$-varieties. This is ongoing joint work with Alberto Bellardini. The talk should be accessible to graduate students.

• March 14 Speaker: Jaiung Jun (Binghamton University) Title: Matroid theory for Algebraic Geometers Abstract: This is an expository talk based on the survey paper “Matroid Theory for Algebraic Geometers” (by Eric Katz). We will introduce the basic definitions of matroid theory in connection to tropical linear spaces and explain the idea that tropical linear spaces and valuated matroids are the same things. We also review the recent paper “Matroids over hyperfields” (by Matt Baker) to see how various classes of matroids can be unified under one framework.

• March 21 Speaker: Changwei Zhou (Binghamton University) Title: Cohomology of Lie algebras Abstract: In this talk we review the basic definition of the cohomology of Lie algebras from an analytical point of view, by tracing back the analytical theory of Lie groups using de Rham's theorem. If we have extra time we shall discuss related topics like the Loday–Quillen–Tsygan theorem in cyclic homology, the unitary trick and some sample computations of groups.
Loday&Quillen's paper directly motivated the computation of Hochschild homology groups of differential operators, and much of later work on Hochschild homology on pseudo-differential operators is built up on it. Hopefully we can connect some of the dots in the talk to see a united picture. The sources are Samuel&Ellenberg's paper “Cohomology groups of Lie groups and Lie algebras”, Loday&Quillen's paper “Cyclic homology and the Lie algebra homology of matrices”, and Melrose&Nister's paper “Homology of pseudodifferential operators I. Manifolds with boundary”. • April 11 Speaker: Patrick Milano (Binghamton University) Title: Growth in Groups Abstract: Let $G$ be a group and let $A$ be a finite subset of $G$. Write $A^k=\{x_1x_2\dots x_k:x_i\in A\}$. We can ask how $|A^k|$ grows as $k$ grows. We will survey some results and techniques related to this question, focusing on the case when $G$ is a linear algebraic group. The material in this talk is taken from a course taught by Harald Helfgott at the 2016 Arizona Winter School. • April 18 Speaker: Thomas Price (Toronto) Title: Numerical Cohomology Abstract: This talk will be an overview of a preprint of the same title. A lattice (i.e. a discrete subgroup of a finite-dimensional inner product space) can be thought of as a vector bundle over the “completion” of Spec(Z). We can associate numbers to a lattice that act like dimensions of cohomology vector spaces. Unfortunately, these numbers can be arbitrary nonnegative real numbers, and therefore can't literally be interpreted as dimensions of vector spaces. To get around this, we can develop a numerical approach to cohomology, where vector spaces and linear maps are replaced by real numbers. • April 25 Speaker: Jaiung Jun (Binghamton University) Title: Analytic geometry over $\mathbb{F}_1$ as relative algebraic geometry Abstract: Several years ago, Berkovich introduced a notion of analytic geometry over $\mathbb{F}_1$ by directly generalizing his construction of an analytic space over a non-Archimedean field. On the other hand, recently, Ben-Bassat and Kremnizer took a functorial approach of Toen and Vaquie on algebraic geometry over a closed symmetric monoidal category and proved that the category of analytic spaces (in the sense of Berkovich) embeds fully faithfully into the category of relative schemes which they constructed. I will present some of these ideas. The aim of this talk is to provide backgrounds on the material and explain my research projects in this direction. • April 26 (CROSS LISTING WITH THE ALGEBRA SEMINAR; SPECIAL DAY TUESDAY and TIME 2:50pm) Speaker: An Huang (Harvard University) Title: Riemann-Hilbert problem for period integrals Abstract: Period integrals of an algebraic variety are transcendental objects that describe, among other things, deformations of the variety. They were originally studied by Euler, Gauss and Riemann, who inspired modern Hodge theory through the theory of periods. Period integrals also play a central role in mirror symmetry in recent years. In this talk, we will discuss a number of problems on period integrals that are crucial to understanding mirror symmetry for Calabi-Yau manifolds. We will see how the theory of D-modules have led us to solutions and deep insights into some of these problems. 
• May 2 (CROSS LISTING WITH THE COLLOQUIUM – Dean's Speaker Series in Geometry/Topology) Speaker: Melvyn Nathanson (CUNY) Title: Every Finite Set of Integers is an Asymptotic Approximate Group Abstract: A set $A$ is an $(r, l)$-approximate group in the additive abelian group $G$ if $A$ is a nonempty subset of $G$ and there exists a subset $X$ of $G$ such that $|X| ≤ l$ and $rA ⊆ X + A.$ The set $A$ is an asymptotic $(r, l)$-approximate group if the sumset $hA$ is an $(r, l)$-approximate group for all sufficiently large integers $h.$ It is proved that every finite set of integers is an asymptotic $(r, r + 1)$-approximate group for every integer $r ≥ 2.$ (A small computational illustration of this notion follows the listing below.)

• May 3 (CROSS LISTING WITH THE COLLOQUIUM – Dean's Speaker Series in Geometry/Topology; SPECIAL DAY TUESDAY and TIME 4:30pm): Speaker: Melvyn Nathanson (CUNY) Title: Every Finite Subset of an Abelian Group is an Asymptotic Approximate Group Abstract: If $A$ is a nonempty subset of an additive abelian group $G$, then the $h$-fold sumset is $hA = \{x_1 + \cdots + x_h : x_i \in A \text{ for } i=1,2,\ldots, h\}.$ We do not assume that $A$ contains the identity, nor that $A$ is symmetric, nor that $A$ is finite. The set $A$ is an $(r,\ell)$-approximate group in $G$ if there exists a subset $X$ of $G$ such that $|X| \leq \ell$ and $rA \subseteq X + A$. The set $A$ is an asymptotic $(r,\ell)$-approximate group if the sumset $hA$ is an $(r,\ell)$-approximate group for all sufficiently large $h.$ It is proved that every polytope in a real vector space is an asymptotic $(r,\ell)$-approximate group, that every finite set of lattice points is an asymptotic $(r,\ell)$-approximate group, and that every finite subset of every abelian group is an asymptotic $(r,\ell)$-approximate group.

• May 9 (DEAN'S SPEAKER SERIES IN GEOMETRY/TOPOLOGY) Speaker: Alexandru Buium (University of New Mexico) Title: Arithmetic analogue of Painlevé VI Abstract: The Painlevé VI equations are a family of differential equations appearing in a number of contexts in mathematics and theoretical physics. On the other hand, the theory of differential equations possesses an arithmetic analogue in which derivatives are replaced by Fermat quotients. The aim of the talk is to explain how one can set up an arithmetic analogue of the Painlevé VI equations. We prove that this arithmetic analogue has a “Hamiltonian structure” analogous to the classical one. The talk is based on joint work with Yuri I. Manin.

• May 10 (CROSS LISTING WITH THE COLLOQUIUM – Dean's Speaker Series in Geometry/Topology; SPECIAL DAY TUESDAY and TIME 4:30pm): Speaker: Alexandru Buium (University of New Mexico) Title: The differential geometry of Spec Z Abstract: The aim of this talk is to show how one can develop an arithmetic analogue of classical differential geometry. In this new geometry the ring of integers Z will play the role of a ring of functions on an infinite dimensional manifold. The role of coordinate functions on this manifold will be played by the prime numbers. The role of partial derivatives of functions with respect to the coordinates will be played by the Fermat quotients of integers with respect to the primes. The role of metrics (respectively 2-forms) will be played by symmetric (respectively antisymmetric) matrices with coefficients in Z. The role of connections (respectively curvature) attached to metrics or 2-forms will be played by certain adelic (respectively global) objects attached to matrices as above.
One of the main conclusions of our theory will be that Spec Z is “intrinsically curved”; the study of this curvature will then be one of the main tasks of the theory.
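As a concrete illustration of the approximate-group notion in the May 2 and May 3 abstracts, here is a small brute-force experiment in Python; the set $A$ and the parameters are arbitrary choices, and the search is naive, intended only to exhibit witnesses $X$ for small $h$.

```python
from itertools import combinations

def sumset(A, h):
    """h-fold sumset hA = {a_1 + ... + a_h : a_i in A}."""
    S = {0}
    for _ in range(h):
        S = {s + a for s in S for a in A}
    return S

def witness(A, h, r, ell):
    """Search for X with |X| <= ell and r(hA) = (rh)A contained in X + hA.
    Only differences t - s (t in (rh)A, s in hA) can usefully appear in X,
    so the search over candidate translates is finite."""
    S, T = sumset(A, h), sumset(A, r * h)
    candidates = sorted({t - s for t in T for s in S})
    for size in range(1, ell + 1):
        for X in combinations(candidates, size):
            if all(any(t - x in S for x in X) for t in T):
                return X
    return None

# By the May 2 theorem, hA should be a (2, 3)-approximate group
# once h is large enough; here A = {0, 2, 3}.
A = {0, 2, 3}
for h in (2, 4, 6):
    print(h, witness(A, h, r=2, ell=3))
```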
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8380157351493835, "perplexity": 578.1451117138764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162385.83/warc/CC-MAIN-20180925182856-20180925203256-00410.warc.gz"}
https://jeremykun.com/tag/projection/
# The Inner Product as a Decision Rule

The standard inner product of two vectors has some nice geometric properties. Given two vectors $x, y \in \mathbb{R}^n$, where by $x_i$ I mean the $i$-th coordinate of $x$, the standard inner product (which I will interchangeably call the dot product) is defined by the formula

$\displaystyle \langle x, y \rangle = x_1 y_1 + \dots + x_n y_n$

This formula, simple as it is, produces a lot of interesting geometry. An important such property, one which is discussed in machine learning circles more than pure math, is that it is a very convenient decision rule. In particular, say we're in the Euclidean plane, and we have a line $L$ passing through the origin, with $w$ being a unit vector perpendicular to $L$ ("the normal" to the line). If you take any vector $x$, then the dot product $\langle x, w \rangle$ is positive if $x$ is on the same side of $L$ as $w$, and negative otherwise. The dot product is zero if and only if $x$ is exactly on the line $L$, including when $x$ is the zero vector.

Left: the dot product of $w$ and $x$ is positive, meaning they are on the same side of $w$. Right: The dot product is negative, and they are on opposite sides.

Here is an interactive demonstration of this property: you can drag the vector arrowheads and see the decision rule change. The code for this demo is available in a github repository.

It's always curious, at first, that multiplying and summing produces such geometry. Why should this seemingly trivial arithmetic do anything useful at all? The core fact that makes it work, however, is that the dot product tells you how one vector projects onto another. When I say "projecting" a vector $x$ onto another vector $w$, I mean you take only the components of $x$ that point in the direction of $w$. The demo shows what the result looks like using the red (or green) vector. In two dimensions this is easy to see, as you can draw the triangle which has $x$ as the hypotenuse, with $w$ spanning one of the two legs of the triangle, as follows:

If we call $a$ the (vector) leg of the triangle parallel to $w$, while $b$ is the dotted line (as a vector, parallel to $L$), then as vectors $x = a + b$. The projection of $x$ onto $w$ is just $a$. Another way to think of this is that the projection is $x$, modified by removing any part of $x$ that is perpendicular to $w$. Using some colorful language: you put your hands on either side of $x$ and $w$, and then you squish $x$ onto $w$ along the line perpendicular to $w$ (i.e., along $b$).

And if $w$ is a unit vector, then the length of $a$ (that is, the length of the projection of $x$ onto $w$) is exactly the inner product $\langle x, w \rangle$. Moreover, if the angle between $x$ and $w$ is larger than 90 degrees, the projected vector will point in the opposite direction of $w$, so it's really a "signed" length.

Left: the projection points in the same direction as $w$. Right: the projection points in the opposite direction.

And this is precisely why the decision rule works. This 90-degree boundary is the line perpendicular to $w$. More technically said:

Let $x, y \in \mathbb{R}^n$ be two vectors, and $\langle x,y \rangle$ their dot product. Define by $\| y \|$ the length of $y$, specifically $\sqrt{\langle y, y \rangle}$. Define $\text{proj}_{y}(x)$ by first letting $y' = \frac{y}{\| y \|}$, and then setting $\text{proj}_{y}(x) = \langle x,y' \rangle y'$.
In words, you scale $y$ to a unit vector $y'$, use the result to compute the inner product, and then scale $y'$ so that its length is $\langle x, y' \rangle$. Then

Theorem: Geometrically, $\text{proj}_y(x)$ is the projection of $x$ onto the line spanned by $y$.

This theorem is true for any $n$-dimensional vector space, since if you have two vectors you can simply apply the reasoning for 2 dimensions to the 2-dimensional plane containing $x$ and $y$. In that case, the decision boundary for a positive/negative output is the entire $(n-1)$-dimensional hyperplane perpendicular to $y$ (the vector being projected onto).

In fact, the usual formula for the angle between two vectors, i.e. the formula $\langle x, y \rangle = \|x \| \cdot \| y \| \cos \theta$, is a restatement of the projection theorem in terms of trigonometry. The $\langle x, y' \rangle$ part of the projection formula (how much you scale the output) is equal to $\| x \| \cos \theta$. At the end of this post we have a proof of the cosine-angle formula above.

Part of why this decision rule property is so important is that it is a linear function, and linear functions can be optimized relatively easily. When I say that, I specifically mean that there are many known algorithms for optimizing linear functions, which don't have obscene runtime or space requirements. This is a big reason why mathematicians and statisticians start the mathematical modeling process with linear functions. They're inherently simpler.

In fact, there are many techniques in machine learning (a prominent one is the so-called Kernel Trick) that exist solely to take data that is not inherently linear in nature (cannot be fruitfully analyzed by linear methods) and transform it into a dataset that is. Using the Kernel Trick as an example to foreshadow some future posts on Support Vector Machines, the idea is to take data which cannot be separated by a line, and transform it (usually by adding new coordinates) so that it can. Then the decision rule, computed in the larger space, is just a dot product. Irene Papakonstantinou neatly demonstrates this with paper folding and scissors. The tradeoff is that the size of the ambient space increases, and it might increase so much that it makes computation intractable. Luckily, the Kernel Trick avoids this by remembering where the data came from, so that one can take advantage of the smaller space to compute what would be the inner product in the larger space.

Next time we'll see how this decision rule shows up in an optimization problem: finding the "best" hyperplane that separates an input set of red and blue points into monochromatic regions (provided that is possible). Finding this separator is a core subroutine of the Support Vector Machine technique, and therein lie interesting algorithms. After we see the core SVM algorithm, we'll see how the Kernel Trick fits into the method to allow nonlinear decision boundaries.

Proof of the cosine angle formula

Theorem: The inner product $\langle v, w \rangle$ is equal to $\| v \| \| w \| \cos(\theta)$, where $\theta$ is the angle between the two vectors. Note that this angle is computed in the 2-dimensional subspace spanned by $v, w$, viewed as a typical flat plane, and this is a 2-dimensional plane regardless of the dimension of $v, w$.

Proof. If either $v$ or $w$ is zero, then both sides of the equation are zero and the theorem is trivial, so we may assume both are nonzero. Label a triangle with sides $v,w$ and the third side $v-w$.
Now the length of each side is $\| v \|, \| w\|,$ and $\| v-w \|$, respectively. Assume for the moment that $\theta$ is not 0 or 180 degrees, so that this triangle is not degenerate. The law of cosines allows us to write

$\displaystyle \| v - w \|^2 = \| v \|^2 + \| w \|^2 - 2 \| v \| \| w \| \cos(\theta)$

Moreover, the left-hand side is the inner product of $v-w$ with itself, i.e. $\| v - w \|^2 = \langle v-w , v-w \rangle$. We'll expand $\langle v-w, v-w \rangle$ using two facts. The first, trivial from the formula, is that the inner product is symmetric: $\langle v,w \rangle = \langle w, v \rangle$. Second is that the inner product is linear in each input. In particular for the first input: $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ and $\langle cx, z \rangle = c \langle x, z \rangle$. The same holds for the second input by symmetry of the two inputs. Hence we can split up $\langle v-w, v-w \rangle$ as follows.

\displaystyle \begin{aligned} \langle v-w, v-w \rangle &= \langle v, v-w \rangle - \langle w, v-w \rangle \\ &= \langle v, v \rangle - \langle v, w \rangle - \langle w, v \rangle + \langle w, w \rangle \\ &= \| v \|^2 - 2 \langle v, w \rangle + \| w \|^2 \\ \end{aligned}

Combining our two displayed equations, we can subtract $\| v \|^2 + \| w \|^2$ from each side and get

$\displaystyle -2 \|v \| \|w \| \cos(\theta) = -2 \langle v, w \rangle,$

which, after dividing by $-2$, proves the theorem if $\theta \not \in \{0, 180 \}$.

Now if $\theta = 0$ or 180 degrees, the vectors are parallel, so we can write one as a scalar multiple of the other. Say $w = cv$ for $c \in \mathbb{R}$. In that case, $\langle v, cv \rangle = c \| v \| \| v \|$. Now $\| w \| = | c | \| v \|$, since a norm is a length and is hence non-negative (but $c$ can be negative). Indeed, if $v, w$ are parallel but pointing in opposite directions, then $c < 0$, so $\cos(\theta) = -1$, and $c \| v \| = - \| w \|$. Otherwise $c > 0$ and $\cos(\theta) = 1$. This allows us to write $c \| v \| \| v \| = \| w \| \| v \| \cos(\theta)$, and this completes the final case of the theorem. $\square$
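To make the geometry concrete, here is a short numpy companion (my own sketch, independent of the demo code linked above). It implements the decision rule and the projection, and numerically checks the facts used in the post: the residual $x - \text{proj}_y(x)$ is perpendicular to $y$, the inner-product expansion from the proof holds, and the projection has signed length $\langle x, y' \rangle$.

```python
import numpy as np

rng = np.random.default_rng(0)

def decision(x, w):
    """+1 if x is on w's side of the hyperplane through the origin
    perpendicular to w, -1 on the other side, 0 on the hyperplane."""
    return np.sign(np.dot(x, w))

def proj(x, y):
    """Orthogonal projection of x onto the line spanned by y (y nonzero)."""
    y_unit = y / np.linalg.norm(y)
    return np.dot(x, y_unit) * y_unit

for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    # Projection theorem: the residual is perpendicular to y.
    assert abs(np.dot(x - proj(x, y), y)) < 1e-9
    # Expansion used in the law-of-cosines proof.
    assert np.isclose(np.dot(x - y, x - y),
                      np.dot(x, x) + np.dot(y, y) - 2 * np.dot(x, y))
    # The (unsigned) length of the projection is |<x, y/|y|>|.
    assert np.isclose(np.linalg.norm(proj(x, y)),
                      abs(np.dot(x, y / np.linalg.norm(y))))
    # The decision rule flips sign when w is negated.
    assert decision(x, y) == -decision(x, -y)
```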
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 114, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9874783754348755, "perplexity": 199.41846649545303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687324.6/warc/CC-MAIN-20170920142244-20170920162244-00025.warc.gz"}